EPYC 2021 Benchmarks

Tests for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running:

    phoronix-test-suite benchmark 2102202-HA-EPYCB627828
The tests in this result file span the following categories:

AV1: 4 tests
Bioinformatics: 4 tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 5 tests
C++ Boost Tests: 6 tests
Chess Test Suite: 7 tests
Timed Code Compilation: 11 tests
C/C++ Compiler Tests: 38 tests
Compression Tests: 5 tests
CPU Massive: 63 tests
Creator Workloads: 47 tests
Cryptography: 7 tests
Database Test Suite: 8 tests
Encoding: 8 tests
Finance: 2 tests
Fortran Tests: 12 tests
Game Development: 7 tests
HPC - High Performance Computing: 46 tests
Imaging: 10 tests
Java: 2 tests
Common Kernel Benchmarks: 8 tests
LAPACK (Linear Algebra Pack) Tests: 2 tests
Linear Algebra: 2 tests
Machine Learning: 16 tests
Memory Test Suite: 3 tests
Molecular Dynamics: 11 tests
MPI Benchmarks: 11 tests
Multi-Core: 63 tests
NVIDIA GPU Compute: 11 tests
Intel oneAPI: 6 tests
OpenCL: 2 tests
OpenCV Tests: 2 tests
OpenMPI Tests: 20 tests
Programmer / Developer System Benchmarks: 19 tests
Python: 5 tests
Quantum Mechanics: 2 tests
Raytracing: 6 tests
Renderers: 15 tests
Scientific Computing: 24 tests
Server: 13 tests
Server CPU Tests: 38 tests
Single-Threaded: 15 tests
Speech: 3 tests
Telephony: 3 tests
Texture Compression: 3 tests
Video Encoding: 8 tests
Common Workstation Benchmarks: 8 tests
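The composite scores that the Phoronix Test Suite reports for suites like these are geometric means of the per-test results, which keeps one outsized result from dominating the category score. A minimal sketch of that calculation (the sample numbers below are made up for illustration, not taken from this result file):

```python
from math import prod

def geometric_mean(values):
    """Geometric mean: the nth root of the product of n positive values.
    Unlike the arithmetic mean, it is not dominated by a single large
    result, which is why benchmark suites use it for composite scores."""
    if not values or any(v <= 0 for v in values):
        raise ValueError("geometric mean needs positive values")
    return prod(values) ** (1.0 / len(values))

# Hypothetical normalized results (1.0 = baseline CPU) within one category:
category_results = [1.20, 0.95, 1.10, 1.05]
print(round(geometric_mean(category_results), 4))  # prints 1.0712
```

Note that the arithmetic mean of the same four numbers would be 1.075; the two diverge further as the spread between individual results grows.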

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
EPYC 7702
February 01 2021
  1 Day, 19 Hours, 45 Minutes
EPYC 7402P
February 03 2021
  1 Day, 4 Hours, 10 Minutes
EPYC 7302P
February 04 2021
  1 Day, 7 Hours, 33 Minutes
EPYC 7232P
February 06 2021
  1 Day, 8 Hours
EPYC 7552
February 07 2021
  1 Day, 3 Hours, 53 Minutes
EPYC 7272
February 08 2021
  1 Day, 5 Hours, 58 Minutes
EPYC 7662
February 10 2021
  1 Day, 4 Hours, 36 Minutes
EPYC 7502P
February 11 2021
  1 Day, 4 Hours, 50 Minutes
EPYC 7F52
February 12 2021
  1 Day, 6 Hours, 14 Minutes
EPYC 7542
February 13 2021
  1 Day, 3 Hours, 43 Minutes
EPYC 7282
February 15 2021
  1 Day, 7 Hours, 3 Minutes
EPYC 7F32
February 16 2021
  1 Day, 8 Hours, 25 Minutes
EPYC 7532
February 17 2021
  1 Day, 7 Hours, 4 Minutes
EPYC 7642
February 19 2021
  1 Day, 3 Hours, 38 Minutes
Invert Hiding All Results Option
  1 Day, 6 Hours, 47 Minutes
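As a sanity check, the average run time of 1 Day, 6 Hours, 47 Minutes follows from the fourteen per-run durations. A quick verification written for this article (not part of the result file):

```python
# Per-run test durations, as (days, hours, minutes).
durations = [
    (1, 19, 45), (1, 4, 10), (1, 7, 33), (1, 8, 0),
    (1, 3, 53), (1, 5, 58), (1, 4, 36), (1, 4, 50),
    (1, 6, 14), (1, 3, 43), (1, 7, 3), (1, 8, 25),
    (1, 7, 4), (1, 3, 38),
]

total_minutes = sum(d * 24 * 60 + h * 60 + m for d, h, m in durations)
avg = round(total_minutes / len(durations))  # average run, in minutes

days, rem = divmod(avg, 24 * 60)
hours, minutes = divmod(rem, 60)
print(f"{days} Day, {hours} Hours, {minutes} Minutes")
# prints: 1 Day, 6 Hours, 47 Minutes
```

The fourteen runs total roughly 430 machine-hours, i.e. about two and a half weeks of continuous benchmarking.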

System Configuration

Common to all runs:
  Motherboard: ASRockRack EPYCD8 (P2.40 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 8 x 16384 MB DDR4-3200MT/s 18ASF2G72PDZ-3G2E1 (7 x 16384 MB on the EPYC 7F52 run)
  Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
  Graphics: llvmpipe
  Monitor: VE228
  Network: 2 x Intel I350
  OS: Ubuntu 20.04
  Kernel: 5.11.0-051100rc6daily20210201-generic (x86_64) 20210131
  Desktop: GNOME Shell 3.36.4
  Display Server: X Server 1.20.8
  Display Driver: llvmpipe
  OpenGL: 4.5 Mesa 20.2.6 (LLVM 11.0.0, 256 bits)
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

Processors tested:
  AMD EPYC 7702 64-Core @ 2.00GHz (64 Cores / 128 Threads)
  AMD EPYC 7402P 24-Core @ 2.80GHz (24 Cores / 48 Threads)
  AMD EPYC 7302P 16-Core @ 3.00GHz (16 Cores / 32 Threads)
  AMD EPYC 7232P 8-Core @ 3.10GHz (8 Cores / 16 Threads)
  AMD EPYC 7552 48-Core @ 2.20GHz (48 Cores / 96 Threads)
  AMD EPYC 7272 12-Core @ 2.90GHz (12 Cores / 24 Threads)
  AMD EPYC 7662 64-Core @ 2.00GHz (64 Cores / 128 Threads)
  AMD EPYC 7502P 32-Core @ 2.50GHz (32 Cores / 64 Threads)
  AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads)
  AMD EPYC 7542 32-Core @ 2.90GHz (32 Cores / 64 Threads)
  AMD EPYC 7282 16-Core @ 2.80GHz (16 Cores / 32 Threads)
  AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads)
  AMD EPYC 7532 32-Core @ 2.40GHz (32 Cores / 64 Threads)
  AMD EPYC 7642 48-Core @ 2.30GHz (48 Cores / 96 Threads)

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: GCC configured with --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096

Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0x8301034

Java Details: the first seven runs (EPYC 7702 through EPYC 7662) used OpenJDK Runtime Environment build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04; the remaining seven runs (EPYC 7502P through EPYC 7642) used OpenJDK Runtime Environment build 11.0.10+9-Ubuntu-0ubuntu1.20.04.

Python Details: Python 3.8.5

Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[The Logarithmic Result Overview and Logarithmic Per Watt Result Overview charts, along with the detailed per-test result tables (several hundred individual benchmark results per CPU, spanning tests such as Cpuminer-Opt, John The Ripper, BRL-CAD, Blender, oneDNN, Timed Linux Kernel Compilation, and many more), are omitted from this text export. The complete interactive data set is available in the 2102202-HA-EPYCB627828 result file on OpenBenchmarking.org.]
7.74780724.16793.199233.513793.228792.5461.489814.3260947.369448.08153.4603923.50412439.2522.02231103985.91543184.617457.2739.1513.20619256.71.7419966.4826094923.9781.643873.4171193.6625.6562.2726.966210.6988093.7125512.6158.646374595.120955668.6317.3863232.981158.0852245.23206.95286615230.2334.008.90517.1873562.633558.0914603.3334731.722006.8168.2914732.7323.819394474.554139.5583.073256.563883735200190.1916.8811684471918.052604.14844286437.35212235.7884.794191.1496.97112098343.24.920136.86690296.543.06018.79397180.987848.18287.733412686732.7315.9686.7822.047187.069245116.1461.071.0625.22122.132233.4063624668632.091397475477319914.99783285816.191219239172.3717.54255.0425.8387.0201216.508.8112769933.581798328.2395417516.88954.8228.13234562624.916176819.98934.151294664.5303.2197.295502120.9018.843.1352.163.3619.5913497.554897.5924.7280.5220.567539020.5344.80219.2653.06867813320.783265.19210.1300.3277.9439.154342.4664091.7839781371184529.61944451437.791667135.897217362.99080.118025293169027.14643297.808624.7264273058.690672.44791745.19602203842118.22576.545723.32236.1970.149767954429.185135.235167.620286229.86441.259217.4499978091955.3317.5490.0009745595.5602.9463837.55931820252475185.728966.041209698.01007.2232.9261.3330.993293.8097300.4542.351107.87644571417246.6715640.134.012431200527.311039389.171328786.5055.945376241.115.7632.54302.424789.55570.671318852.010249.310170.710926.315616.8979376.869073.5658.6826.5222.0517.2920.2923.0144.3645477332.56450519507.951398.773.10012221577122731635056038.719411121.5918612.542380.2718514.602368.69237596.66437033.0314.781577273.713.4623612067.31147733.9487312.9628322.632608646164.59938779220.90711.85487391.7945.47712896969.643.7510.60780846847.0112698.230.896317136225963332844873.3826.0537.04112205.013708523.02876237.65182.7654860660.774391393380566141830.407162.2985.005505.043374.659.735.724.364.375.7319.075788.17167.7547911.747083.40.16461066730.62100634760.7517.14078.012.151948978841.9484517.6143.1281.073992337306617.9020.2519
4783610.574075080.03.565551.1492820.53763615.68516396.57114.33678.8153987.51144725.28460.7757.31576.19135061.510.99031661.4236.38815592743.712740.522738.4225935316.462136276.4993319313.97105.6663.6883326.513.14032.059.892902.259289.359919.607917.6141.593004.6955057.979437.88170.4843453.97043439.2529.342.1972778846.96456715.481361.7233.2516.3109.3916649.51.1820470.7045179429.1241.93510.39857.994932.7525.1369.4195.866182.4285462.84.5954.926612746.496246206.334.6115.2920320.20937.7344082.95189.65938313.2314759.4020.3514.32019.4052794.252791.5412840.6423578.091884.3174.1963582.5627.59831720.16467.427137.4422.92719.96253.096774304800152.159.3269098819867.972577.65184688732.0420229.9421.1485.729141.6479.936.98915087239.84.820128.92379342.945.44019.37886677.278638.620.8320.807903.635102618624.3215.8477.1722.420173.5334194.2620.840.8425.05823.112632.64523.773358842123.73195008024.1523.9724.1913.09391721272.04396400213.0413.171488214173.8957.49352.7445.9385.9628.411386.788.408.5516872832.919649808.0459475516.55354.5678.14928.6442796724.432176649.73529.111303437.6309.1086.233491628.4520.2908.6242.30831.602.23.4279.5919.9713295.806898.8423.65231.840.5219.968043050.5344.68215.15652.34447512820.689262.12410.0830.3377.1859.014336.8924139.9849671351173539.17487050609.036458133.234674368.69278.998691297816529.27817302.254637.3784390356.988487.56770845.77597361197120.06077.688721.7935.7340.148592734497.765137.344170.112337233.50940.532220.75710143421984.5312.8870.0009608293.8552.8993853.50223319753174085.024974.401262530.41022.4802.9291.3381.006298.7084410.4641.035105.2420.6925.0339121419600.1315097.134.112771180459.911045486.811387616.1310.0253.722413354.595.7632.8658.5958.62292.419755.00572.211258854.210184.510232.810968.615621.6419464.009078.87613.339.499.5510.574.6012.929.079.949.1110.0745.2343733334.18265466104.27883.874.24636117679886718240332649.35716629.069663.431410.599866.081407.62142756.35263451.4816.139294580.082.0018313692.3716393.8849032.7782635.920390434107.1522325073
2.5807.68456973.8493.4618225794.402.346.83463426654.4516399.746.5942109069173166719521109.1713.4522.5676923.392713225.87830356.90265.1330123871.14375938831303970380.630237.79125.53329.593.572223.2714.024.022.982.994.0422.831406.93109.9170154.568637.60.26338118439.89149918383.4524.183108.405.4960513461533.0289711.7571.9951.565031441232113.1414.24147528014.13271073915.846071.7159440.66041911.5218415.1976.15684.61699118.47785933.80146.3042.83949.01377944.980.48998344.8446.91417582013.241997.392013.7517258520.57171767.96040743410.66165.4073.9273720.002.31324.2913.53041985.31997.853343.386998.668999.1421.980356.5627690.523263.13226.8035966.91561140.9157.862.7574344891.69281938.876244.1726.3119.8617.986787.061.52143127.4463715233.9812.6978.97589.101651.3620.0084.2065.333173.501267433.7153090.1047.173087955.627322117.173.687.08925454.84671.0721714.65230.94206311.447779.4921.8415.04921.6321976.701980.566839.17062643.39934.0075.8532649.3535.42234318.66475.385140.6115.77818.79254.764643180925129.774.8976537566767.9731108.2669907759.8114569.1424.1475.225110.4269.046.58617572315.68.171122.44066901.146.54019.85672752.267011.920.1021.557899.726152663635.4714.5967.1021.434192.27867872.4290.790.8133.49924.418829.44821.623288934825.20167651622.7226.6526.2412.40332324205.54322741112.8615.831159197151.46010.20953.1495.5182.9217.421357.307.499.8715132743.3596910910.1372521216.20149.3357.45328.8038202021.621190378.86835.381438050.4348.6683.279437528.8917.6737.5436.44439.032.513.9073.8321.7611483.016773.8620.48038.780.6117.579400470.6252.02184.49444.83632311017.599223.7499.2400.3866.5227.739288.4734834.046071116999628.20792943527.278646114.906152429.14067.743120346505619.64441351.633739.7225096950.476410.10677152.82685188810139.70590.417612.8930.7530.129242855226.754159.603197.789051271.07034.812256.42111781402300.7269.9210.0008257080.9392.5004367.11856617461403973.8481062.321247037.61179.017207.633.3651.5421.156342.9318210.5340.5592.0770.7927.59408537.941635794.5715585.738.012481336546.
211174427.001514902.310.6950.59433091.735.9935.5660.2560.12299.326785.06871.672829055.110068.510099.610701.314958.1939461.098872.55911.468.058.018.993.8212.478.729.998.7410.1146.3936834230.31153383522.44524.318.6831317141339432442056066.787311234.6019922.012389.7619958.412375.48241722.09445963.2314.164827720.583.4617411976.91203386.7200632.9583021.372641529294.27341207619.54612.73894237.6515.95013858394.614.0411.36827162627.5812699.528.590117614827986673063267.9527.7840122762.963890422.23304223.08167.9955644330.715491474882806557740.382148.5079.11455.075.543595.039.086.134.704.746.1718.615788.17182.5945118.143977.90.15365491429.6294322955.3215.94372.272.069838406041.8962418.1563.3231.015822365246658.4821.5521368509.9284970622.93.470961.1349610.50052116.56417266.21114.43653.9161781.82517723.96861.2959.93975.44285661.461.00042661.5834.55416662721.872716.542718.6825875915.694144583.8720633214.94103.2963.4411827.673.31733.339.806890.477275.350886.876888.5351.472224.5490457.683459.14161.6351013.97699439.2529.492.0486679644.88466555.369367.1935.0615.6389.4316649.71.1640264.9235232527.8561.91510.06858.474937.4425.2166.5676.091188.4579313.64.7656.986618850.239547098.224.6015.2794319.87939.9044205.80187.60481313.0214977.2919.9414.31618.9352579.792572.3613099.6153276.521885.9871.4643271.2627.63916319.84467.510135.9732.83619.98250.407774329467152.788.8633368857227.949566.99286022531.4821079.8920.4383.830139.81890450.7229.9290238.6346.90017487057.04.713130.08679674.144.54818.65586737.782081.42878399.981529.05720.3520.617885.036412672423.5515.7237.0222.300173.16110286.7220.770.7724.30322.635132.87223.573304869824.52192336824.4923.4023.4512.96386121441.99598117713.0613.011534215165.6637.62552.1255.8384.8298.301317.278.118.2816254431.867699878.0475498616.36953.9278.03828.2741168124.197180479.60228.271206497.8311.9786.572493528.3820.1018.5741.66431.552.223.4880.0520.1913594.971908.4623.34731.640.5319.969839450.5445.13214.25951.62344312920.114257.57310.0740.3376.1648.861333.2834207.
9103431331137546.71756450076.274740132.234852374.52177.797414302406827.18194306.615645.8694393656.786946.61458345.90612218425121.92378.881705.15635.2650.148508124561.185139.252172.551553236.90839.921223.48510302952016.4308.9420.0009548592.7702.8583988.94264619853906083.896983.451162690.91035.225188.063.0041.3681.024302.9271150.4640.885103.7990.7125.23388634.221508082.4115872.034.382851189270.871067438.771388206.1610.0952.987422944.465.8433.5158.6858.59291.076739.70972.132698907.710292.510222.410993.215641.0919468.599117.75113.369.9610.0510.484.6112.949.289.979.2010.1549.1943408350.63264936242.21891.024.39585018739700715028726778.61895463.4510017.981156.269884.621155.88117080.86214541.5928.993923760.665.864586587.57586659.0982154.4728743.738312461378.78520815039.6556.21446017.2032.9326612368.671.985.65415838443.727358.1958.469190588135400015246137.7813.8919.6159431.032202241.68767432.92338.5230232391.45145748165603432290.729304.34157.16827.782.741862.5317.733.232.382.373.1645.463317.9384.5486277.684723.50.29034459856.571820153108.3430.506139.466.0633316391673.452909.8891.6771.913051364424294.4511.34102199017.65421292936.218151.7835890.8757499.6859134.7266.470561051151.85937941.24040.2735.38362.28633440.530.81682639.7257.90810422834.092835.352836.6816353526.26576796.0449199128.61177.9055.0832816.611.67520.3414.79224658.721882.64439.9961886.301884.703.446687.4007499.719295.80275.8058766.76242378.8137.253.5017549617.9225411321157.79125426239.842568321.7024.239253887.4110003.61.80647102.5283858642.8612.5968.02563.196644.2320.03103.7414.225142.371512353.7948665.9639.657880986.585829526.973.869.05154520.45712.2629776.81270.72317510.659697.8318.8418.18925.4592527.862446.026954.79643344.391422.0794.3453342.1346.07327818.35580.068173.7174.08118.93309.984455760633114.494.4524197757969.654655.44773538139.4215389.0420.4696.849132.39157372.7298.8357323.3808.10519655596.35.855135.67951093.155.22224.23355437.253581.26551044.253610.35120.4820.886706.229222560125.1718.1038.6324.90117
7.8496289.2040.790.7833.58726.789536.34922.943492861122.85204303722.8325.4925.5614.19367122574.80373997514.5714.521384239183.2189.61061.5396.6894.47910.181456.8210.2310.1615576343.131779400.7400404417.39158.6388.92329.5838869925.6481755910.22129.781211354.3296.9687.936528030.2221.3839.1344.37240.032.083.2367.2918.9213999.922858.5424.45939.490.5021.365476900.5142.25224.51754.76324313521.422274.30410.5390.3280.0469.481350.7583948.9987381421221513.59886552977.602865140.134277351.14082.147951283588722.34226287.932605.8254237559.192896.61718844.38571825580114.47474.059746.1837.5520.153417054293.869130.881162.074477222.35742.435210.7219681491887.0328.6430.0009921098.2383.0373756.95980920851122889.682928.431172021.1975.215197.762.8381.2930.969285.2536860.4442.783108.8580.6623.05388132.161414840.4114921.333.082671143594.63997652.461325778.669.0856.098420324.965.5732.0353.8453.75296.164736.22474.989548850.610110.910112.210898.315482.7359360.358945.24210.587.136.927.833.7410.406.958.876.947.8830.1233627229.82127932991.73429.515.7589251124897227936516323.91543306.394953.65705.214984.35704.6971366.17131830.9534.580032288.334.301306907.54356570.4067766.6820971.5791919119914.29812062264.9693.88328528.3441.8084265815.031.273.65246757532.398265.9992.15755695587470010044214.448.1612.538378.081420452.78843683.81508.2616036732.22949483317352133451.173454.49243.84317.241.841161.6827.162.161.641.662.1636.804004.0859.601360231332360.46421561760.502872227156.3145.468207.127.2376325929705.582606.7051.3453.01292581184035.117.9477688026.516120125810.90092.7962771.160176.4705638.5347.01104225.0879660.40332.1626.13134.03768332.270.79821732.4479.66410523345.673348.063345.1010084335.24554800.9923541795.73200.4136.4639812.061.34814.6722.2181745.54581.2981742.791744.753.5602012.4141130.975167.08381.3707119.15124380.7266.225.0323643172.51416062054112.17641573168.174167915.5632.633414226.5817351.22.79034172.3532413754.1494.1457.18359.544453.3610.20136.6233.18495.501779283.3027.164656182.257727284.69
3.2812.9560362.74496.1145336.89375.70367410.5012350.8716.3219.70728.5971836.511844.318720.88222399.191277.15113.2602409.0327.24515516.18778.658227.1303.62516.12416.273809844583109.553.01159562450212.788668.98558140733.8113019.6120.5498.522104.2559.599.79018589567.24.578119.44482399.759.56427.20289255.481753.020.7120.686573.924743003723.7518.8819.0429.607247.66026572.8910.780.7927.44528.100035.66925.814095982325.76177203725.1523.9824.3711.80331627135.63258623411.9011.881173221158.7919.35762.2366.2894.0077.641356.267.637.6014055834.9145911206.8480451817.43553.6488.07727.4334016823.001190368.59827.611279851.8345.3693.792454427.8117.7047.4736.48733.672.383.7363.5120.2211583.502967.3720.03033.600.6217.279398830.6351.95184.73944.92286611117.560223.7099.2930.3867.3027.745286.4274835.015067116998627.43590843548.631510114.949484429.40567.803915346733990.00480351.989740.5785067549.176027.97656252.79686395419139.83390.459613.54930.8390.128195455238.762159.787197.766733271.32034.835257.00811806822285.2269.6410.0008356380.8482.4954336.40557317162167273.7511039.601155871.81179.304183.953.4091.5551.164343.6068080.5336.98493.0760.7927.44372438.421601070.5016357.238.592621315900.461174520.461510395.939.7852.987424600.075.9635.4860.5760.25302.696775.67169.509479025.410561.910567.211210.015666.7009802.819215.23710.536.556.817.623.3010.556.429.606.587.1324.8527443124.357644.101769.46262.909.84410212953692031725055254.656910901.4716446.882333.5416443.732318.44232455.68428002.9710.885767105.731.4837418311.71123555.3010761.2515423.050583840264.71037866021.42911.53185303.3525.41512627261.933.7010.42744226996.9020585.331.565717124225340002840175.272535.71110766.793622318.06182243.58189.0952143020.790791330385505875480.426168.4087.55647.624.923284.549.995.684.194.205.6815.296455.40163.0049078.548084.70.17357834026.53103436761.2517.67078.942.205439218651.9880917.5253.2671.104862113618257.4720.14197732510.813177264.73.702341.2568820.55344416.26515568.24122.73653.6176989.01144425.88466.9056.528
128.29087865.300.64119764.8936.84517352230.352221.372214.7925368916.383135786.9376541413.6593.2363.6013025.353.25630.369.858813.120280.778812.987814.0271.615024.8425659.438426.52172.9145633.61268439.2409.942.2224286547.2813556443435.64813619349.991379832.7216.6981392610.0319645.41.2123085.5614761529.2282.10311.03849.266880.5723.7169.6245.958181.1584263.54.9154.074720934.255849235.924.8317.9677236.99892.4252022.76193.92387914.0018019.0121.939.91319.2662815.802814.1513938.2523757.351992.7674.4813754.8522.94452720.92474.585142.2003.00321.03260.463909532667146.299.8288718253748.055586.66880681433.27204310.4523.5986.698145.010104119.15610.56103789.9857.22014499248.24.821132.04190663.246.09819.94297912.594075.67089487.692368.40822.7523.418476.335182630524.6216.3327.3322.092186.26655696.8130.950.9525.26623.553433.43325.643497807025.94163609225.5824.8724.8213.06402620967.47929720813.1013.091475215175.8477.33854.7736.0588.4157.721403.877.727.8117250832.4687910253.3466486417.21355.5878.35729.3844197825.021173089.93829.661281490.5303.3391.315507029.4220.8618.7942.90433.812.153.3776.1819.5113497.372895.3324.09033.520.5120.465452110.5243.69217.32753.18175013120.790264.67910.3250.3278.0659.140344.2474089.1196511371180531.13832051428.891927135.334893363.09879.846201293437386.10939297.705627.0594327558.390157.28125044.69582684333118.29076.528724.55536.2430.149882934429.474135.006167.515731229.73041.187216.7879994421922.3318.5790.0009675395.4792.9453830.20265120253295785.792949.151197778.91006.923171.732.9371.3360.999294.2737150.4541.521107.4780.6924.06430332.691481804.0014786.933.062711186674.321059976.321319402.589.7055.571404313.155.7132.3554.9754.73305.057783.25171.561358829.910188.610118.010931.615503.0489434.318963.63814.1610.0010.0011.164.9514.6310.3810.6810.5111.5551.3939412325.84230135725.79807.023.62150721652513347050222986.053322.9922996.933310.34611194.211.1080521472.31.2201154177515.03916.230120520.6827.5965.1514.499.5823040.722.5643355766734.485054445179.18137.83713160
00.570481857129827756870.323121.2163.47266.676.816.197.795.675.677.8113.666615.81218.7345818.845436.60.1283545723.4393869145.7813.27061.931.930428307301.5391522.4424.1170.8631462155769427.1727193888.5682865104.43.198890.9026110.48421919.328130.1231170.48538993.9563.20595.3966500.81328030.07522031210.041211.541213.5833179113.712149967.7395286315.9479.9772.4038029.3735.258.434732.808243.915734.403735.8191.467754.3614546.532458.24153.9756453.54316432.91.9872699557.23503814.972416.9637.3919436.31.6675067.6585888925.2131.701889.3881070.0925.2361.8146.678203.7486140.659.5917.6850233.731116.83195.34316027.689.18517.6723059.053069.3814015.6604132.1369.4334138.6921.939827473.759138.0773.056254.877179.7213.9424655275418.074590.56451844634.90212528.9285.032166.1087.0634.82543.8178499.235282628927.8315.88722.006185.013289102.4501.021.0224.78222.554133.604722128.68158335020036.19438260015.3514032197.47987.2101372.738.5617381232.855809301.0431450617.02842987524.9331760910.04030.631295588.8303.7591.51220.8628.8442.9602.123.3319.4813597.993903.300.5220.50.52218.42353.19656213420.744265.91710.3370.3278.2299.145341.9334072.7598801381184529.604123135.089449361.79779.722485296.9074283658.4117.98776.217725.22936.2770.148682534425.915167.57310141.166318.6710.0009657995.3872.9523816.07870720352157286.360963.001215056.12.8941.3200.989292.5985200.4541.85107.6971437534.232631183717.451018402.691360852.66386824.545.7232.31792.74270.174906.8021.7116.7114.0615.9417.7839319346.63343637961.641147.363.194662230851783OpenBenchmarking.org

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
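Every result in this file is reported as an average over N runs together with a standard error of the mean (the "SE +/-" figures). A minimal sketch of that calculation, using hypothetical per-run hash rates:

```python
import statistics

def standard_error(samples):
    # standard error of the mean: sample standard deviation / sqrt(N)
    return statistics.stdev(samples) / len(samples) ** 0.5

# hypothetical per-run hash rates (kH/s) for a single CPU
runs = [570830.0, 631000.0, 649140.0]
avg = statistics.mean(runs)
se = standard_error(runs)
```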

Cpuminer-Opt 3.15.5, Algorithm: Skeincoin (kH/s, more is better):

Processor     Avg kH/s    SE +/-     N    Min      Max
EPYC 7662     631038.46   5351.86    13   570830   649140
EPYC 7702     605851.67   12582.41   12   470320   630960
EPYC 7642     470501.67   9107.04    12   373060   494480
EPYC 7552     468226.67   6158.44    12   404270   483970
EPYC 7542     324420.00   2411.69    3    319750   327800
EPYC 7532     317250.00   2127.92    3    313810   321140
EPYC 7502P    316350.00   4313.15    3    310740   324830
EPYC 7402P    210538.67   3944.26    15   187570   242310
EPYC 7F52     182403.33   335.77     3    181900   183040
EPYC 7302P    151030.00   981.65     3    149150   152460
EPYC 7282     150286.67   1328.09    3    148430   152860
EPYC 7272     105140.00   728.58     3    103990   106490
EPYC 7F32     79365.33    654.80     15   75330    84270
EPYC 7232P    63648.33    611.43     6    61790    66140

1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Sysbench

This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.
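Sysbench's CPU sub-test counts "events", each of which verifies the primality of every integer up to a configurable limit (its `--cpu-max-prime` option) by trial division, and reports events per second. A rough Python sketch of that workload (the real implementation is C; function names here are illustrative):

```python
import time

def is_prime(n):
    # trial division up to sqrt(n), roughly what the sysbench CPU test
    # does for each candidate number
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return n > 1

def cpu_event(max_prime=10000):
    # one "event": test every integer from 3 up to max_prime
    return sum(1 for n in range(3, max_prime + 1) if is_prime(n))

start = time.perf_counter()
events = 3
for _ in range(events):
    cpu_event()
events_per_second = events / (time.perf_counter() - start)
```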

Sysbench 2018-07-28, Test: CPU (Events Per Second, more is better):

Processor     Events/s    SE +/-   N   Min         Max
EPYC 7662     108892.22   52.65    5   108721.10   109052.86
EPYC 7702     106421.95   97.15    5   106076.46   106643.26
EPYC 7552     80743.89    23.00    5   80668.55    80805.58
EPYC 7542     56066.79    20.39    5   55989.07    56109.25
EPYC 7502P    56038.72    9.32     5   56017.18    56065.58
EPYC 7532     55254.66    7.32     5   55238.42    55276.83
EPYC 7402P    42008.11    12.89    5   41974.26    42044.00
EPYC 7F52     32649.36    7.33     5   32621.61    32663.30
EPYC 7302P    27632.19    9.93     5   27594.22    27651.61
EPYC 7282     26778.62    10.65    5   26746.06    26804.47
EPYC 7272     20091.96    5.53     5   20074.61    20104.41
EPYC 7F32     16323.92    2.64     5   16316.25    16331.80
EPYC 7232P    13398.10    1.09     5   13395.23    13400.85

1. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
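Stress-NG reports throughput in "bogo ops": the number of stressor loop iterations completed, divided by wall-clock time. The figure is only meaningful for relative comparison. A hedged sketch of the bookkeeping (the compute kernel below is a stand-in, not one of stress-ng's actual CPU methods):

```python
import time

def stress_kernel():
    # stand-in compute loop; stress-ng cycles through many such methods
    x = 0.0
    for i in range(1, 2000):
        x += i ** 0.5
    return x

def bogo_ops_per_second(duration=0.1):
    # count completed loop iterations ("bogo ops") in a wall-clock window
    deadline = time.perf_counter() + duration
    ops = 0
    while time.perf_counter() < deadline:
        stress_kernel()
        ops += 1
    return ops / duration
```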

Stress-NG 0.11.07, Test: CPU Stress (Bogo Ops/s, more is better):

Processor     Bogo Ops/s   SE +/-   N   Min        Max
EPYC 7662     20174.76     45.79    3   20083.31   20224.79
EPYC 7702     19926.61     24.42    3   19890.13   19972.98
EPYC 7552     15958.16     23.53    3   15919.41   16000.66
EPYC 7542     11234.60     39.53    3   11159.10   11292.66
EPYC 7502P    11121.59     8.85     3   11109.84   11138.93
EPYC 7532     10901.47     14.79    3   10874.37   10925.30
EPYC 7402P    8541.12      3.25     3   8535.73    8546.95
EPYC 7F52     6629.06      1.01     3   6628.02    6631.08
EPYC 7302P    5622.82      1.11     3   5620.68    5624.41
EPYC 7282     5463.45      4.51     3   5456.78    5472.04
EPYC 7272     4078.21      5.85     3   4067.37    4087.43
EPYC 7F32     3306.39      4.39     3   3301.92    3315.16
EPYC 7232P    2708.90      8.87     3   2691.46    2720.48

1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
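The FPS figures for these OpenVINO runs come from dividing completed inferences by wall-clock time while several inference requests are kept in flight. A minimal sketch of that arithmetic (the function names are illustrative, not OpenVINO APIs):

```python
def throughput_fps(completed_inferences, wall_seconds):
    # FPS as a throughput benchmark computes it:
    # completed inferences / wall-clock time
    return completed_inferences / wall_seconds

def fps_from_latency(avg_latency_seconds, inflight_requests):
    # with N requests kept in flight, steady-state throughput is
    # roughly N / average latency
    return inflight_requests / avg_latency_seconds
```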

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP16, Device: CPU (FPS, more is better):

Processor     FPS        SE +/-   N    Min        Max
EPYC 7662     28284.87   46.32    3    28192.24   28331.27
EPYC 7702     25067.04   28.57    3    25025.67   25121.85
EPYC 7552     23441.91   55.79    3    23347.44   23540.57
EPYC 7642     22986.05   31.70    3    22927.38   23036.21
EPYC 7542     19922.01   85.72    3    19751.03   20018.47
EPYC 7502P    18612.54   42.04    3    18567.53   18696.54
EPYC 7532     16446.88   22.86    3    16403.40   16480.83
EPYC 7402P    12800.59   95.61    3    12615.39   12934.43
EPYC 7282     10017.98   50.79    3    9917.01    10078.03
EPYC 7F52     9663.43    43.76    3    9575.98    9710.01
EPYC 7302P    9316.80    21.57    3    9293.16    9359.87
EPYC 7272     6689.95    89.67    3    6524.74    6832.97
EPYC 7F32     4953.65    52.69    3    4855.38    5035.74
EPYC 7232P    4012.40    41.71    15   3827.50    4339.41

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.
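The EP ("Embarrassingly Parallel") kernel measured here generates pairs of Gaussian random deviates via an acceptance-rejection scheme and reports millions of operations per second. A simplified Python sketch of that idea, not the reference Fortran implementation:

```python
import math
import random

def ep_gaussian_pairs(trials, seed=1):
    # sketch of the EP idea: draw uniform points in the square (-1,1)^2,
    # accept those inside the unit circle, and transform each accepted
    # point into a pair of Gaussian deviates (Marsaglia polar method)
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        x = 2.0 * rng.random() - 1.0
        y = 2.0 * rng.random() - 1.0
        t = x * x + y * y
        if 0.0 < t <= 1.0:
            factor = math.sqrt(-2.0 * math.log(t) / t)
            gx, gy = x * factor, y * factor  # two Gaussian deviates
            accepted += 1
    return accepted

def mops(operations, seconds):
    # NPB reports millions of operations per second (Mop/s)
    return operations / seconds / 1e6
```

The acceptance rate converges to pi/4, so roughly 78.5% of trials yield a Gaussian pair.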

NAS Parallel Benchmarks 3.4, Test/Class: EP.D (Total Mop/s, more is better):

Processor     Mop/s     SE +/-   N   Min       Max
EPYC 7662     4019.09   9.36     3   4001.46   4033.33
EPYC 7702     3989.79   9.06     3   3972.06   4001.86
EPYC 7642     3322.99   8.46     3   3306.10   3332.10
EPYC 7552     3292.84   6.05     3   3282.03   3302.95
EPYC 7542     2389.76   5.63     3   2378.57   2396.35
EPYC 7502P    2380.27   4.32     3   2371.63   2384.67
EPYC 7532     2333.54   1.11     3   2331.32   2334.74
EPYC 7402P    1813.29   1.11     3   1811.43   1815.27
EPYC 7F52     1410.59   0.05     3   1410.49   1410.66
EPYC 7302P    1190.48   3.28     3   1183.95   1194.25
EPYC 7282     1156.26   1.25     3   1153.76   1157.70
EPYC 7272     860.01    7.84     3   844.33    868.26
EPYC 7F32     705.21    0.03     3   705.16    705.27
EPYC 7232P    579.18    0.03     3   579.13    579.23

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

OpenVINO


OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32, Device: CPU (FPS, more is better):

Processor     FPS        SE +/-   N   Min        Max
EPYC 7662     28316.50   30.27    3   28279.61   28376.51
EPYC 7702     25012.98   48.27    3   24963.18   25109.50
EPYC 7552     23559.17   49.13    3   23467.34   23635.37
EPYC 7642     22996.93   4.60     3   22990.44   23005.82
EPYC 7542     19958.41   75.75    3   19820.95   20082.32
EPYC 7502P    18514.60   60.57    3   18416.86   18625.46
EPYC 7532     16443.73   24.60    3   16395.26   16475.33
EPYC 7402P    13220.95   157.24   3   12906.62   13386.45
EPYC 7282     9884.62    103.67   4   9618.02    10112.13
EPYC 7F52     9866.08    47.83    3   9816.80    9961.72
EPYC 7302P    9314.82    6.65     3   9307.85    9328.12
EPYC 7272     6803.09    21.76    3   6767.48    6842.56
EPYC 7F32     4984.35    6.26     3   4971.91    4991.80
EPYC 7232P    4115.44    46.05    4   4003.80    4226.83

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4, Test/Class: EP.C (Total Mop/s, more is better):

Processor     Mop/s     SE +/-   N    Min       Max
EPYC 7662     3967.28   10.02    10   3922.41   4021.83
EPYC 7702     3908.23   13.78    10   3840.34   3966.84
EPYC 7642     3310.34   6.29     9    3286.05   3336.05
EPYC 7552     3252.13   9.01     9    3210.93   3290.80
EPYC 7542     2375.48   4.60     8    2361.46   2394.12
EPYC 7502P    2368.69   3.24     8    2349.79   2379.63
EPYC 7532     2318.44   5.08     8    2296.55   2338.82
EPYC 7402P    1806.54   1.48     7    1799.65   1812.17
EPYC 7F52     1407.62   0.69     6    1404.92   1409.44
EPYC 7302P    1191.90   0.45     6    1190.44   1193.16
EPYC 7282     1155.88   0.78     6    1152.71   1157.48
EPYC 7272     867.81    0.17     5    867.17    868.12
EPYC 7F32     704.69    0.27     4    703.93    705.19
EPYC 7232P    578.12    0.60     4    576.33    578.84

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: Stress-NG 0.11.07 - Test: Vector Math (Bogo Ops/s, More Is Better)

EPYC 7662:  400773.38 (SE +/- 137.59, N = 3; min 400607.54 / max 401046.47)
EPYC 7702:  397326.45 (SE +/- 205.94, N = 3; min 397034.87 / max 397724.18)
EPYC 7552:  328019.70 (SE +/- 189.45, N = 3; min 327828.63 / max 328398.59)
EPYC 7542:  241722.09 (SE +/- 32.95, N = 3; min 241657.95 / max 241767.28)
EPYC 7502P: 237596.66 (SE +/- 66.32, N = 3; min 237474.07 / max 237701.83)
EPYC 7532:  232455.68 (SE +/- 47.74, N = 3; min 232394.78 / max 232549.81)
EPYC 7402P: 182716.39 (SE +/- 100.32, N = 3; min 182574.85 / max 182910.32)
EPYC 7F52:  142756.35 (SE +/- 3.54, N = 3; min 142752.27 / max 142763.4)
EPYC 7302P: 120826.81 (SE +/- 3.30, N = 3; min 120822.19 / max 120833.2)
EPYC 7282:  117080.86 (SE +/- 4.86, N = 3; min 117074.78 / max 117090.47)
EPYC 7272:  87851.58 (SE +/- 1.61, N = 3; min 87849.93 / max 87854.8)
EPYC 7F32:  71366.17 (SE +/- 4.12, N = 3; min 71360.29 / max 71374.11)
EPYC 7232P: 58583.38 (SE +/- 4.97, N = 3; min 58573.44 / max 58588.42)

1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

John The Ripper

This is a benchmark of John The Ripper, a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better)

EPYC 7662:  73578.67 (SE +/- 4.67, N = 3; min 73574 / max 73588)
EPYC 7702:  70033.33 (SE +/- 15.77, N = 3; min 70003 / max 70056)
EPYC 7642:  61119.00 (SE +/- 3.00, N = 3; min 61113 / max 61122)
EPYC 7552:  59963.33 (SE +/- 72.25, N = 3; min 59835 / max 60085)
EPYC 7542:  44596.00 (SE +/- 17.06, N = 3; min 44563 / max 44620)
EPYC 7502P: 43703.33 (SE +/- 8.37, N = 3; min 43689 / max 43718)
EPYC 7532:  42799.67 (SE +/- 9.84, N = 3; min 42782 / max 42816)
EPYC 7402P: 33755.00 (SE +/- 1.00, N = 3; min 33753 / max 33756)
EPYC 7F52:  26345.33 (SE +/- 3.33, N = 3; min 26342 / max 26352)
EPYC 7302P: 22314.00 (SE +/- 3.06, N = 3; min 22310 / max 22320)
EPYC 7282:  21453.67 (SE +/- 12.17, N = 3; min 21436 / max 21477)
EPYC 7272:  16223.67 (SE +/- 4.10, N = 3; min 16216 / max 16230)
EPYC 7F32:  13182.67 (SE +/- 1.67, N = 3; min 13181 / max 13186)
EPYC 7232P: 10825.00 (SE +/- 2.31, N = 3; min 10821 / max 10829)

1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: OSPray 1.8.5 - Demo: San Miguel - Renderer: Path Tracer (FPS, More Is Better)

EPYC 7662:  4.99 (SE +/- 0.01, N = 3; min 4.98 / max 5.00)
EPYC 7702:  4.92 (SE +/- 0.01, N = 3; min 4.90 / max 4.93)
EPYC 7642:  4.21 (SE +/- 0.01, N = 3; min 4.20 / max 4.22)
EPYC 7552:  4.17 (SE +/- 0.00, N = 3; min 4.17 / max 4.17)
EPYC 7542:  3.23 (SE +/- 0.00, N = 3; min 3.23 / max 3.23)
EPYC 7502P: 3.03 (SE +/- 0.00, N = 3; min 3.03 / max 3.03)
EPYC 7532:  2.97 (SE +/- 0.00, N = 3; min 2.97 / max 2.97)
EPYC 7402P: 2.45 (SE +/- 0.00, N = 3; min 2.44 / max 2.45)
EPYC 7302P: 1.65 (SE +/- 0.00, N = 3; min 1.65 / max 1.65)
EPYC 7282:  1.59 (SE +/- 0.00, N = 3; min 1.59 / max 1.59)
EPYC 7F52:  1.48 (SE +/- 0.00, N = 3; min 1.48 / max 1.48)
EPYC 7272:  1.22 (SE +/- 0.00, N = 3; min 1.22 / max 1.22)
EPYC 7F32:  0.95 (SE +/- 0.00, N = 3; min 0.95 / max 0.95)
EPYC 7232P: 0.76 (SE +/- 0.00, N = 3; min 0.76 / max 0.76)

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better)

EPYC 7662:  6.995321 (SE +/- 0.054901, N = 6; min 6.85 / max 7.20)
EPYC 7702:  7.033982 (SE +/- 0.058701, N = 6; min 6.86 / max 7.24)
EPYC 7552:  8.988538 (SE +/- 0.095577, N = 5; min 8.80 / max 9.33)
EPYC 7532:  10.885760 (SE +/- 0.014397, N = 5; min 10.85 / max 10.92)
EPYC 7542:  14.164820 (SE +/- 0.043495, N = 4; min 14.12 / max 14.30)
EPYC 7502P: 14.781570 (SE +/- 0.026648, N = 4; min 14.73 / max 14.84)
EPYC 7F52:  16.139290 (SE +/- 0.007758, N = 4; min 16.12 / max 16.16)
EPYC 7402P: 16.653750 (SE +/- 0.019214, N = 3; min 16.63 / max 16.69)
EPYC 7302P: 21.951880 (SE +/- 0.036573, N = 3; min 21.90 / max 22.02)
EPYC 7282:  28.993920 (SE +/- 0.019691, N = 3; min 28.96 / max 29.03)
EPYC 7272:  33.808520 (SE +/- 0.292336, N = 3; min 33.47 / max 34.39)
EPYC 7F32:  34.580030 (SE +/- 0.037928, N = 3; min 34.52 / max 34.65)
EPYC 7232P: 45.154600 (SE +/- 0.039126, N = 3; min 45.10 / max 45.23)

1. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: Stress-NG 0.11.07 - Test: Crypto (Bogo Ops/s, More Is Better)

EPYC 7662:  12091.56 (SE +/- 2.16, N = 3; min 12087.35 / max 12094.48)
EPYC 7702:  11996.59 (SE +/- 5.99, N = 3; min 11984.94 / max 12004.82)
EPYC 7552:  9992.15 (SE +/- 6.05, N = 3; min 9980.12 / max 9999.28)
EPYC 7542:  7720.58 (SE +/- 2.98, N = 3; min 7715.51 / max 7725.83)
EPYC 7502P: 7273.71 (SE +/- 6.32, N = 3; min 7261.15 / max 7281.24)
EPYC 7532:  7105.73 (SE +/- 2.43, N = 3; min 7100.89 / max 7108.61)
EPYC 7402P: 5847.70 (SE +/- 0.38, N = 3; min 5846.99 / max 5848.3)
EPYC 7F52:  4580.08 (SE +/- 2.53, N = 3; min 4577.39 / max 4585.14)
EPYC 7302P: 3871.57 (SE +/- 5.77, N = 3; min 3864.76 / max 3883.04)
EPYC 7282:  3760.66 (SE +/- 0.60, N = 3; min 3759.79 / max 3761.8)
EPYC 7272:  2821.19 (SE +/- 0.42, N = 3; min 2820.54 / max 2821.99)
EPYC 7F32:  2288.33 (SE +/- 2.21, N = 3; min 2285.96 / max 2292.74)
EPYC 7232P: 1878.74 (SE +/- 0.98, N = 3; min 1877.16 / max 1880.55)

1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)

EPYC 7662:  0.993732 (SE +/- 0.002146, N = 7; min 0.99 / max 1.00)
EPYC 7702:  1.078440 (SE +/- 0.002719, N = 7; min 1.07 / max 1.09)
EPYC 7642:  1.108050 (SE +/- 0.002747, N = 7; min 1.10 / max 1.12)
EPYC 7532:  1.483740 (SE +/- 0.001701, N = 7; min 1.47 / max 1.49)
EPYC 7552:  1.719670 (SE +/- 0.003838, N = 7; min 1.70 / max 1.73)
EPYC 7F52:  2.001830 (SE +/- 0.002179, N = 7; min 1.99 / max 2.01)
EPYC 7542:  3.461740 (SE +/- 0.011195, N = 7; min 3.44 / max 3.53)
EPYC 7502P: 3.462360 (SE +/- 0.008223, N = 7; min 3.43 / max 3.50)
EPYC 7402P: 3.543470 (SE +/- 0.005689, N = 7; min 3.52 / max 3.56)
EPYC 7302P: 3.706400 (SE +/- 0.006442, N = 7; min 3.67 / max 3.73)
EPYC 7F32:  4.301300 (SE +/- 0.007190, N = 7; min 4.26 / max 4.32)
EPYC 7282:  5.864580 (SE +/- 0.008434, N = 7; min 5.83 / max 5.90)
EPYC 7272:  5.953480 (SE +/- 0.005350, N = 7; min 5.94 / max 5.98)
EPYC 7232P: 6.376680 (SE +/- 0.004801, N = 7; min 6.36 / max 6.40)

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, More Is Better)

EPYC 7642:  21472.30 (SE +/- 197.00, N = 3; min 21275.3 / max 21866.3)
EPYC 7662:  20993.10 (SE +/- 121.22, N = 3; min 20783.8 / max 21203.7)
EPYC 7702:  20185.50 (SE +/- 112.07, N = 3; min 19992 / max 20380.2)
EPYC 7532:  18311.70 (SE +/- 215.40, N = 3; min 18096.3 / max 18742.5)
EPYC 7552:  16990.10 (SE +/- 61.33, N = 3; min 16928.8 / max 17112.8)
EPYC 7F52:  13692.30 (SE +/- 118.03, N = 3; min 13456.2 / max 13810.3)
EPYC 7402P: 12173.70 (SE +/- 63.07, N = 3; min 12110.6 / max 12299.8)
EPYC 7502P: 12067.30 (SE +/- 137.10, N = 3; min 11793.1 / max 12204.4)
EPYC 7542:  11976.90 (SE +/- 162.55, N = 3; min 11662 / max 12204.4)
EPYC 7302P: 10427.20 (SE +/- 68.60, N = 3; min 10290 / max 10495.8)
EPYC 7F32:  6907.54 (SE +/- 74.25, N = 4; min 6728.09 / max 7091.77)
EPYC 7282:  6587.57 (SE +/- 27.68, N = 3; min 6559.89 / max 6642.93)
EPYC 7272:  6100.18 (SE +/- 48.79, N = 9; min 5788.14 / max 6247.52)
EPYC 7232P: 3350.22 (SE +/- 28.76, N = 3; min 3321.46 / max 3407.74)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)

EPYC 7662:  1867023.46 (SE +/- 473.83, N = 3; min 1866161.25 / max 1867795.13)
EPYC 7702:  1845888.80 (SE +/- 1046.01, N = 3; min 1844712.66 / max 1847975.17)
EPYC 7552:  1530338.79 (SE +/- 10873.39, N = 3; min 1517351.75 / max 1551938.41)
EPYC 7542:  1203386.72 (SE +/- 1150.50, N = 3; min 1201201.2 / max 1205102.86)
EPYC 7502P: 1147733.95 (SE +/- 609.31, N = 3; min 1146516.78 / max 1148394.04)
EPYC 7532:  1123555.30 (SE +/- 707.15, N = 3; min 1122462.4 / max 1124879.16)
EPYC 7402P: 902070.15 (SE +/- 1542.09, N = 3; min 899666.14 / max 904945.39)
EPYC 7F52:  716393.88 (SE +/- 1470.27, N = 3; min 713469.52 / max 718122.78)
EPYC 7302P: 603381.33 (SE +/- 1815.81, N = 3; min 599995.31 / max 606211.3)
EPYC 7282:  586659.10 (SE +/- 538.09, N = 3; min 585752.04 / max 587614.19)
EPYC 7272:  440879.64 (SE +/- 270.65, N = 3; min 440356.87 / max 441262.65)
EPYC 7F32:  356570.41 (SE +/- 818.34, N = 3; min 355244.72 / max 358064.49)
EPYC 7232P: 293387.40 (SE +/- 102.44, N = 3; min 293258.49 / max 293589.77)

1. (CC) gcc options: -O2 -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)

EPYC 7662:  1.12502 (SE +/- 0.01221, N = 5; min 1.10 / max 1.17)
EPYC 7702:  1.15943 (SE +/- 0.00630, N = 5; min 1.14 / max 1.17)
EPYC 7642:  1.22011 (SE +/- 0.01041, N = 5; min 1.19 / max 1.25)
EPYC 7532:  1.25154 (SE +/- 0.00475, N = 5; min 1.24 / max 1.26)
EPYC 7552:  2.28611 (SE +/- 0.00795, N = 5; min 2.26 / max 2.30)
EPYC 7F52:  2.77826 (SE +/- 0.01324, N = 5; min 2.75 / max 2.83)
EPYC 7542:  2.95830 (SE +/- 0.00208, N = 5; min 2.95 / max 2.97)
EPYC 7502P: 2.96283 (SE +/- 0.00256, N = 5; min 2.96 / max 2.97)
EPYC 7302P: 3.76395 (SE +/- 0.00471, N = 5; min 3.75 / max 3.78)
EPYC 7402P: 4.13382 (SE +/- 0.01609, N = 5; min 4.08 / max 4.18)
EPYC 7282:  4.47287 (SE +/- 0.00189, N = 5; min 4.47 / max 4.48)
EPYC 7272:  4.97512 (SE +/- 0.00853, N = 5; min 4.94 / max 4.99)
EPYC 7F32:  6.68209 (SE +/- 0.00869, N = 5; min 6.66 / max 6.71)
EPYC 7232P: 7.15030 (SE +/- 0.01715, N = 5; min 7.11 / max 7.19)

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: m-queens 1.2 - Time To Solve (Seconds, Fewer Is Better)

EPYC 7662:  13.75 (SE +/- 0.03, N = 4; min 13.72 / max 13.84)
EPYC 7702:  13.84 (SE +/- 0.03, N = 4; min 13.77 / max 13.89)
EPYC 7552:  16.61 (SE +/- 0.06, N = 4; min 16.48 / max 16.78)
EPYC 7542:  21.37 (SE +/- 0.05, N = 3; min 21.31 / max 21.46)
EPYC 7502P: 22.63 (SE +/- 0.05, N = 3; min 22.57 / max 22.73)
EPYC 7532:  23.05 (SE +/- 0.03, N = 3; min 23.01 / max 23.09)
EPYC 7402P: 28.19 (SE +/- 0.08, N = 3; min 28.07 / max 28.33)
EPYC 7F52:  35.92 (SE +/- 0.01, N = 3; min 35.90 / max 35.93)
EPYC 7302P: 42.38 (SE +/- 0.02, N = 3; min 42.35 / max 42.41)
EPYC 7282:  43.74 (SE +/- 0.04, N = 3; min 43.69 / max 43.81)
EPYC 7272:  58.22 (SE +/- 0.02, N = 3; min 58.19 / max 58.26)
EPYC 7F32:  71.58 (SE +/- 0.01, N = 3; min 71.55 / max 71.60)
EPYC 7232P: 87.19 (SE +/- 0.01, N = 3; min 87.19 / max 87.20)

1. (CXX) g++ options: -fopenmp -O2 -march=native

Stockfish

This is a test of Stockfish, an advanced C++ chess engine benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: Stockfish 12 - Total Time (Nodes Per Second, More Is Better)

EPYC 7702:  100908453 (SE +/- 597478.91, N = 3; min 99825755 / max 101887714)
EPYC 7662:  98847468.5 (SE +/- 1212155.43, N = 4; min 95322228 / max 100858323)
EPYC 7552:  82397132 (SE +/- 82765.80, N = 3; min 82260216 / max 82546157)
EPYC 7542:  64152928.67 (SE +/- 88880.09, N = 3; min 63992140 / max 64298968)
EPYC 7502P: 60864616.13 (SE +/- 436034.06, N = 15; min 57227030 / max 63501145)
EPYC 7532:  58384025.75 (SE +/- 708563.50, N = 4; min 56735799 / max 60090226)
EPYC 7402P: 48804732.33 (SE +/- 304616.08, N = 3; min 48420286 / max 49406252)
EPYC 7F52:  39043410.33 (SE +/- 416432.88, N = 3; min 38530677 / max 39868176)
EPYC 7302P: 32973748.83 (SE +/- 332801.18, N = 6; min 32141990 / max 34537511)
EPYC 7282:  31246136.8 (SE +/- 236966.05, N = 10; min 30342075 / max 32525396)
EPYC 7272:  24314443.33 (SE +/- 71263.56, N = 3; min 24195092 / max 24441587)
EPYC 7F32:  19191199.33 (SE +/- 79798.13, N = 3; min 19064701 / max 19338721)
EPYC 7232P: 16034994 (SE +/- 150250.14, N = 3; min 15735361 / max 16204568)

1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

N-Queens

This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: N-Queens 1.0 - Elapsed Time (Seconds, Fewer Is Better)

EPYC 7662:  2.787 (SE +/- 0.001, N = 10; min 2.78 / max 2.79)
EPYC 7702:  2.802 (SE +/- 0.001, N = 10; min 2.80 / max 2.81)
EPYC 7552:  3.352 (SE +/- 0.001, N = 9; min 3.35 / max 3.36)
EPYC 7542:  4.273 (SE +/- 0.000, N = 8; min 4.27 / max 4.27)
EPYC 7502P: 4.599 (SE +/- 0.001, N = 8; min 4.60 / max 4.60)
EPYC 7532:  4.710 (SE +/- 0.001, N = 8; min 4.71 / max 4.71)
EPYC 7402P: 5.645 (SE +/- 0.001, N = 7; min 5.64 / max 5.65)
EPYC 7F52:  7.152 (SE +/- 0.000, N = 6; min 7.15 / max 7.15)
EPYC 7302P: 8.449 (SE +/- 0.000, N = 5; min 8.45 / max 8.45)
EPYC 7282:  8.785 (SE +/- 0.004, N = 5; min 8.78 / max 8.80)
EPYC 7272:  11.618 (SE +/- 0.000, N = 4; min 11.62 / max 11.62)
EPYC 7F32:  14.298 (SE +/- 0.000, N = 4; min 14.30 / max 14.30)
EPYC 7232P: 17.421 (SE +/- 0.001, N = 3; min 17.42 / max 17.42)

1. (CC) gcc options: -static -fopenmp -O3 -march=native

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: BRL-CAD 7.30.8 - VGR Performance Metric (More Is Better)

EPYC 7702:  660079
EPYC 7662:  652602
EPYC 7642:  541775
EPYC 7552:  540558
EPYC 7542:  412076
EPYC 7502P: 387792
EPYC 7532:  378660
EPYC 7402P: 317661
EPYC 7F52:  232507
EPYC 7302P: 219055
EPYC 7282:  208150
EPYC 7272:  155347
EPYC 7F32:  120622
EPYC 7232P: 106073

1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core) and shoots multiple rays per pixel for anti-aliasing; this run was configured for a 4K image at 16 rays per pixel. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)

EPYC 7662:  12.82 (SE +/- 0.02, N = 4; min 12.79 / max 12.87)
EPYC 7702:  12.85 (SE +/- 0.01, N = 4; min 12.83 / max 12.87)
EPYC 7642:  15.04 (SE +/- 0.01, N = 4; min 15.01 / max 15.06)
EPYC 7552:  15.21 (SE +/- 0.02, N = 4; min 15.17 / max 15.24)
EPYC 7542:  19.55 (SE +/- 0.03, N = 3; min 19.49 / max 19.60)
EPYC 7502P: 20.91 (SE +/- 0.02, N = 3; min 20.89 / max 20.94)
EPYC 7532:  21.43 (SE +/- 0.01, N = 3; min 21.40 / max 21.45)
EPYC 7402P: 25.68 (SE +/- 0.01, N = 3; min 25.67 / max 25.70)
EPYC 7F52:  32.58 (SE +/- 0.02, N = 3; min 32.55 / max 32.61)
EPYC 7302P: 38.49 (SE +/- 0.01, N = 3; min 38.48 / max 38.51)
EPYC 7282:  39.66 (SE +/- 0.03, N = 3; min 39.61 / max 39.70)
EPYC 7272:  52.81 (SE +/- 0.01, N = 3; min 52.80 / max 52.82)
EPYC 7F32:  64.97 (SE +/- 0.02, N = 3; min 64.93 / max 64.99)
EPYC 7232P: 79.17 (SE +/- 0.03, N = 3; min 79.13 / max 79.22)

1. (CC) gcc options: -lm -lpthread -O3

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)

EPYC 7702:  19.095 (SE +/- 0.043, N = 3; min 19.02 / max 19.16)
EPYC 7662:  19.062 (SE +/- 0.063, N = 3; min 18.94 / max 19.14)
EPYC 7642:  16.230 (SE +/- 0.035, N = 3; min 16.18 / max 16.30)
EPYC 7552:  15.984 (SE +/- 0.038, N = 3; min 15.93 / max 16.06)
EPYC 7542:  12.738 (SE +/- 0.008, N = 3; min 12.72 / max 12.75)
EPYC 7502P: 11.854 (SE +/- 0.031, N = 3; min 11.80 / max 11.90)
EPYC 7532:  11.531 (SE +/- 0.016, N = 3; min 11.52 / max 11.56)
EPYC 7402P: 9.630 (SE +/- 0.014, N = 3; min 9.60 / max 9.65)
EPYC 7F52:  7.684 (SE +/- 0.013, N = 3; min 7.67 / max 7.71)
EPYC 7302P: 6.575 (SE +/- 0.006, N = 3; min 6.57 / max 6.59)
EPYC 7282:  6.214 (SE +/- 0.005, N = 3; min 6.21 / max 6.22)
EPYC 7272:  4.855 (SE +/- 0.010, N = 3; min 4.84 / max 4.87)
EPYC 7F32:  3.883 (SE +/- 0.002, N = 3; min 3.88 / max 3.89)
EPYC 7232P: 3.117 (SE +/- 0.002, N = 3; min 3.11 / max 3.12)

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: Aircrack-ng 1.5.2 (k/s, More Is Better)

EPYC 7662:  143292.79 (SE +/- 32.87, N = 3; min 143233.45 / max 143346.95)
EPYC 7702:  143169.12 (SE +/- 31.81, N = 3; min 143136.28 / max 143232.73)
EPYC 7642:  120520.68 (SE +/- 45.55, N = 3; min 120433.36 / max 120586.84)
EPYC 7552:  119683.65 (SE +/- 87.07, N = 3; min 119543.75 / max 119843.41)
EPYC 7542:  94237.65 (SE +/- 28.55, N = 3; min 94189.9 / max 94288.64)
EPYC 7502P: 87391.79 (SE +/- 15.49, N = 3; min 87370.61 / max 87421.95)
EPYC 7532:  85303.35 (SE +/- 7.45, N = 3; min 85294.1 / max 85318.1)
EPYC 7402P: 71372.79 (SE +/- 16.32, N = 3; min 71350.26 / max 71404.51)
EPYC 7F52:  56973.85 (SE +/- 28.27, N = 3; min 56938.61 / max 57029.76)
EPYC 7302P: 48284.83 (SE +/- 34.38, N = 3; min 48218.5 / max 48333.69)
EPYC 7282:  46017.20 (SE +/- 30.64, N = 3; min 45958.39 / max 46061.5)
EPYC 7272:  35086.95 (SE +/- 28.65, N = 3; min 35057.87 / max 35144.26)
EPYC 7F32:  28528.34 (SE +/- 28.30, N = 3; min 28484.88 / max 28581.47)
EPYC 7232P: 23406.40 (SE +/- 6.91, N = 3; min 23398.23 / max 23420.13)

1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)

EPYC 7662:  8.898 (SE +/- 0.013, N = 3; min 8.88 / max 8.93)
EPYC 7702:  8.811 (SE +/- 0.006, N = 3; min 8.80 / max 8.82)
EPYC 7642:  7.596 (SE +/- 0.020, N = 3; min 7.57 / max 7.64)
EPYC 7552:  7.555 (SE +/- 0.007, N = 3; min 7.54 / max 7.57)
EPYC 7542:  5.950 (SE +/- 0.002, N = 3; min 5.95 / max 5.96)
EPYC 7502P: 5.477 (SE +/- 0.006, N = 3; min 5.47 / max 5.49)
EPYC 7532:  5.415 (SE +/- 0.008, N = 3; min 5.40 / max 5.43)
EPYC 7402P: 4.494 (SE +/- 0.009, N = 3; min 4.48 / max 4.51)
EPYC 7F52:  3.461 (SE +/- 0.004, N = 3; min 3.46 / max 3.47)
EPYC 7302P: 3.129 (SE +/- 0.006, N = 3; min 3.12 / max 3.14)
EPYC 7282:  2.932 (SE +/- 0.001, N = 3; min 2.93 / max 2.93)
EPYC 7272:  2.289 (SE +/- 0.001, N = 3; min 2.29 / max 2.29)
EPYC 7F32:  1.808 (SE +/- 0.002, N = 3; min 1.80 / max 1.81)
EPYC 7232P: 1.456 (SE +/- 0.002, N = 3; min 1.45 / max 1.46)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: Stress-NG 0.11.07 - Test: Context Switching (Bogo Ops/s, More Is Better)

EPYC 7662:  20981193.88 (SE +/- 163606.09, N = 3; min 20708784.06 / max 21274387.73)
EPYC 7702:  20917619.76 (SE +/- 235328.27, N = 3; min 20467620.04 / max 21262048.98)
EPYC 7552:  17727598.84 (SE +/- 29075.41, N = 3; min 17694612.55 / max 17785565.68)
EPYC 7542:  13858394.61 (SE +/- 65922.96, N = 3; min 13729117.41 / max 13945462.29)
EPYC 7502P: 12896969.64 (SE +/- 19374.10, N = 3; min 12861868.91 / max 12928733.42)
EPYC 7532:  12627261.93 (SE +/- 55451.20, N = 3; min 12516504.31 / max 12687546.78)
EPYC 7402P: 10443329.12 (SE +/- 28222.94, N = 3; min 10405966.38 / max 10498652.3)
EPYC 7F52:  8225794.40 (SE +/- 30731.73, N = 3; min 8193174.36 / max 8287218.36)
EPYC 7302P: 7003110.18 (SE +/- 35625.01, N = 3; min 6942183.09 / max 7065563.44)
EPYC 7282:  6612368.67 (SE +/- 32954.83, N = 3; min 6549932.25 / max 6661871.66)
EPYC 7272:  5095560.16 (SE +/- 15626.15, N = 3; min 5064455.01 / max 5113736.06)
EPYC 7F32:  4265815.03 (SE +/- 6120.44, N = 3; min 4253702.33 / max 4273401.47)
EPYC 7232P: 3438510.35 (SE +/- 9437.39, N = 3; min 3425266.43 / max 3456778.84)

1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result: OSPray 1.8.5 - Demo: XFrog Forest - Renderer: Path Tracer (FPS, More Is Better)

EPYC 7662:  6.00 (SE +/- 0.01, N = 3; min 5.99 / max 6.02)
EPYC 7702:  5.92 (SE +/- 0.00, N = 3; min 5.92 / max 5.92)
EPYC 7642:  5.15 (SE +/- 0.01, N = 3; min 5.13 / max 5.15)
EPYC 7552:  5.10 (SE +/- 0.00, N = 3; min 5.10 / max 5.10)
EPYC 7542:  4.04 (SE +/- 0.01, N = 3; min 4.03 / max 4.05)
EPYC 7502P: 3.75 (SE +/- 0.00, N = 3; min 3.75 / max 3.75)
EPYC 7532:  3.70 (SE +/- 0.00, N = 3; min 3.70 / max 3.70)
EPYC 7402P: 3.07 (SE +/- 0.01, N = 3; min 3.06 / max 3.08)
EPYC 7F52:  2.34 (SE +/- 0.00, N = 3; min 2.33 / max 2.35)
EPYC 7302P: 2.15 (SE +/- 0.00, N = 3; min 2.15 / max 2.15)
EPYC 7282:  1.98 (SE +/- 0.00, N = 3; min 1.98 / max 1.98)
EPYC 7272:  1.55 (SE +/- 0.00, N = 3; min 1.55 / max 1.55)
EPYC 7F32:  1.27 (SE +/- 0.00, N = 3; min 1.27 / max 1.27)
EPYC 7232P: 0.99 (SE +/- 0.00, N = 3; min 0.99 / max 0.99)

OpenBenchmarking.org result: OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: Path Tracer (FPS, More Is Better)

EPYC 7662:  16.95 (SE +/- 0.00, N = 4; min 16.95 / max 16.95)
EPYC 7702:  16.67 (SE +/- 0.00, N = 4; min 16.67 / max 16.67)
EPYC 7642:  14.49 (SE +/- 0.00, N = 3; min 14.49 / max 14.49)
EPYC 7552:  14.29 (SE +/- 0.00, N = 3; min 14.29 / max 14.29)
EPYC 7542:  11.36 (SE +/- 0.00, N = 3; min 11.36 / max 11.36)
EPYC 7502P: 10.60 (SE +/- 0.04, N = 3; min 10.53 / max 10.64)
EPYC 7532:  10.42 (SE +/- 0.00, N = 3; min 10.42 / max 10.42)
EPYC 7402P: 8.77 (SE +/- 0.00, N = 3; min 8.77 / max 8.77)
EPYC 7F52:  6.83 (SE +/- 0.02, N = 3; min 6.80 / max 6.85)
EPYC 7302P: 6.07 (SE +/- 0.01, N = 3; min 6.06 / max 6.10)
EPYC 7282:  5.65 (SE +/- 0.00, N = 3; min 5.65 / max 5.65)
EPYC 7272:  4.39 (SE +/- 0.00, N = 3; min 4.39 / max 4.39)
EPYC 7F32:  3.65 (SE +/- 0.00, N = 3; min 3.64 / max 3.65)
EPYC 7232P: 2.82 (SE +/- 0.00, N = 3; min 2.82 / max 2.83)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, more is better): EPYC 7662 126290077; 7702 122849461; 7552 105304578; 7542 82716262; 7502P 78084684; 7532 74422699; 7402P 62581775; 7F52 46342665; 7302P 42112219; 7282 41583844; 7272 31791813; 7F32 24675753; 7232P 21027795.
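As a rough frame of reference for the asmFish results above, it can help to look at per-core throughput rather than the raw totals. A minimal sketch follows; the nodes/second figures come from the results above, while the core counts (64 for the EPYC 7662, 8 for the EPYC 7232P) are assumptions taken from AMD's published EPYC 7002-series specifications, not from the result file itself.

```python
# Per-core throughput sketch for the asmFish results.
# Core counts are assumed from AMD's EPYC 7002 specs (not in the result file).
results = {
    "EPYC 7662": (126_290_077, 64),
    "EPYC 7232P": (21_027_795, 8),
}
for cpu, (nodes_per_sec, cores) in results.items():
    per_core = nodes_per_sec / cores / 1e6
    print(f"{cpu}: {per_core:.2f}M nodes/s per core")
```

Under those assumed core counts, the small 7232P actually delivers more nodes per second per core, which is consistent with its higher sustained clocks at low core counts.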

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualization. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: XFrog Forest - Renderer: SciVis (FPS, more is better): EPYC 7662 11.24; 7702 11.07; 7642 9.58; 7552 9.52; 7542 7.58; 7502P 7.01; 7532 6.90; 7402P 5.76; 7F52 4.45; 7302P 4.02; 7282 3.72; 7272 2.93; 7F32 2.39; 7232P 1.89.

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: tConvolve MPI - Gridding (Mpix/sec, more is better): EPYC 7642 23040.70; 7662 20991.70; 7532 20585.30; 7702 20382.20; 7552 17300.80; 7F52 16399.70; 7402P 13892.50; 7542 12699.50; 7502P 12698.20; 7302P 12204.40; 7F32 8265.99; 7282 7358.19; 7272 7130.89; 7232P 3879.10. Compiled with: g++ -O3 -fstrict-aliasing -fopenmp.

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6, Total Time (Seconds, fewer is better): EPYC 7662 19.08; 7702 19.12; 7642 22.56; 7552 22.81; 7542 28.59; 7502P 30.90; 7532 31.57; 7402P 37.73; 7F52 46.59; 7302P 54.84; 7282 58.47; 7272 75.04; 7F32 92.16; 7232P 112.39. Compiled with: gcc -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread.
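Since Tachyon reports total render time rather than a rate, relative performance between two parts is the inverse ratio of their times. A small sketch using the fastest and slowest results above:

```python
# Tachyon is time-based (fewer seconds is better), so relative speed
# is the slower time divided by the faster time.
times = {"EPYC 7662": 19.08, "EPYC 7232P": 112.39}
speedup = times["EPYC 7232P"] / times["EPYC 7662"]
print(f"EPYC 7662 renders the scene {speedup:.2f}x faster than EPYC 7232P")
```

That works out to roughly a 5.9x spread between the top and bottom of this Rome lineup on a well-threaded ray tracer.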

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02, Compress Speed Test (MIPS, more is better): EPYC 7662 270140; 7702 264908; 7552 229747; 7542 176148; 7502P 171362; 7532 171242; 7402P 138575; 7F52 109069; 7302P 95137; 7282 90588; 7272 69655; 7F32 56955; 7232P 45941. Compiled with: g++ -pipe -lpthread.

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1, Test: MD5 (Real C/S, more is better): EPYC 7662 4195333; 7702 4183000; 7642 3557667; 7552 3528667; 7542 2798667; 7502P 2596333; 7532 2534000; 7402P 2127333; 7F52 1731667; 7302P 1460333; 7282 1354000; 7272 1072333; 7F32 874700; 7232P 717175. Compiled with: gcc -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2.

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5, Mode: CPU (vsamples, more is better): EPYC 7702 45292; 7662 44658; 7552 37843; 7542 30632; 7502P 28448; 7532 28401; 7F52 19521; 7282 15246; 7272 12198; 7F32 10044; 7232P 7864.

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Exhaustive (Seconds, fewer is better): EPYC 7702 45.80; 7662 45.84; 7552 54.34; 7542 67.95; 7502P 73.38; 7532 75.27; 7402P 89.07; 7F52 109.17; 7302P 129.26; 7282 137.78; 7272 176.04; 7F32 214.44; 7232P 262.14. Compiled with: g++ -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread.

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualization. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: Magnetic Reconnection - Renderer: SciVis (FPS, more is better): EPYC 7662 40.00; 7702 40.00; 7642 34.48; 7552 34.48; 7542 27.78; 7502P 26.05; 7532 25.00; 7402P 20.83; 7302P 14.08; 7282 13.89; 7F52 13.45; 7272 10.42; 7F32 8.16; 7232P 6.99.

OSPray 1.8.5, Demo: San Miguel - Renderer: SciVis (FPS, more is better): EPYC 7662 58.82; 7702 55.56; 7642 50.00; 7552 50.00; 7542 40.00; 7502P 37.04; 7532 35.71; 7402P 30.30; 7F52 22.56; 7302P 20.83; 7282 19.61; 7272 15.63; 7F32 12.50; 7232P 10.31.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Matrix Math (Bogo Ops/s, more is better): EPYC 7662 179496.78; 7702 179226.24; 7552 152001.95; 7542 122762.96; 7502P 112205.01; 7532 110766.79; 7402P 92207.07; 7F52 76923.39; 7302P 63930.72; 7282 59431.03; 7272 46780.94; 7F32 38378.08; 7232P 31690.60. Compiled with: gcc -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc.

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07, Mode: CPU (Ksamples, more is better): EPYC 7702 62754; 7662 62295; 7642 54445; 7552 53570; 7542 38904; 7502P 37085; 7532 36223; 7402P 33184; 7F52 27132; 7302P 23486; 7282 22022; 7272 17550; 7F32 14204; 7232P 11136.

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1, Test: sedovbig (Hydro Cycle Time - Seconds, fewer is better): EPYC 7662 11.80; 7702 11.94; 7552 15.35; 7532 18.06; 7542 22.23; 7502P 23.03; 7F52 25.88; 7402P 25.89; 7302P 34.03; 7282 41.69; 7272 49.11; 7F32 52.79; 7232P 66.42. Compiled with: g++ -fopenmp -pthread -lmpi_cxx -lmpi.

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90, Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better): EPYC 7662 154.23; 7702 154.42; 7642 179.18; 7552 180.94; 7542 223.08; 7502P 237.65; 7532 243.58; 7402P 291.52; 7F52 356.90; 7302P 411.56; 7282 432.92; 7272 554.64; 7F32 683.81; 7232P 866.85.

Blender 2.90, Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better): EPYC 7662 118.26; 7702 119.17; 7642 137.83; 7552 139.21; 7542 167.99; 7502P 182.76; 7532 189.09; 7402P 219.75; 7F52 265.13; 7302P 317.27; 7282 338.52; 7272 426.59; 7F32 508.26; 7232P 664.29.

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Read While Writing (Op/s, more is better): EPYC 7702 8232474; 7662 8175383; 7552 7183579; 7642 7131600; 7542 5564433; 7502P 5486066; 7532 5214302; 7402P 4397695; 7302P 3086237; 7282 3023239; 7F52 3012387; 7272 2357928; 7F32 1603673; 7232P 1470621. Compiled with: g++ -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread.

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): EPYC 7662 0.48908; 7702 0.49264; 7642 0.57048; 7552 0.57484; 7542 0.71549; 7502P 0.77439; 7532 0.79079; 7402P 0.93705; 7F52 1.14375; 7302P 1.35058; 7282 1.45145; 7272 1.83895; 7F32 2.22949; 7232P 2.72553.
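NAMD's days/ns metric (lower is better) is simply the inverse of the ns/day throughput figure that molecular dynamics results are more commonly quoted in. A quick conversion sketch using the two extremes from the results above:

```python
# Convert NAMD's days/ns (lower is better) to ns/day (higher is better).
days_per_ns = {"EPYC 7662": 0.48908, "EPYC 7232P": 2.72553}
for cpu, d in days_per_ns.items():
    ns_per_day = 1.0 / d
    print(f"{cpu}: {ns_per_day:.2f} ns/day")
```

So the 64-core 7662 manages about 2.04 ns of simulated time per day on this ATPase system versus roughly 0.37 ns/day for the 7232P.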

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Random Read (Op/s, more is better): EPYC 7702 218469824; 7662 214717760; 7642 185712982; 7552 185095008; 7542 147488280; 7502P 139338056; 7532 133038550; 7402P 114391434; 7F52 93883130; 7302P 76972090; 7282 74816560; 7272 56928967; 7F32 48331735; 7232P 39206156. Compiled with: g++ -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread.

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better): EPYC 7662 939750; 7702 899992; 7552 791456; 7642 775687; 7542 655774; 7502P 614183; 7532 587548; 7402P 521653; 7F52 397038; 7302P 368449; 7282 343229; 7272 275677; 7F32 213345; 7232P 169832. Compiled with: gcc -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better; average of 3 runs):

  EPYC 7662    0.267
  EPYC 7702    0.279
  EPYC 7552    0.316
  EPYC 7642    0.323
  EPYC 7542    0.382
  EPYC 7502P   0.407
  EPYC 7532    0.426
  EPYC 7402P   0.480
  EPYC 7F52    0.630
  EPYC 7302P   0.679
  EPYC 7282    0.729
  EPYC 7272    0.908
  EPYC 7F32    1.173
  EPYC 7232P   1.473

Compiler flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
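The throughput and latency results are two views of the same run: for a closed-loop benchmark like pgbench, Little's law puts average latency at roughly the number of in-flight clients divided by TPS. A minimal sanity-check sketch using the EPYC 7662 and 7232P figures reported above (exact agreement is not expected, since the reported latency is measured rather than derived):

```python
def approx_latency_ms(clients: int, tps: float) -> float:
    # Little's law for a closed-loop load: latency ~= in-flight clients / throughput.
    return clients / tps * 1000.0

# Scaling factor 100, 250 read-only clients (values from the results above):
print(f"EPYC 7662:  {approx_latency_ms(250, 939750.48):.3f} ms (reported: 0.267 ms)")
print(f"EPYC 7232P: {approx_latency_ms(250, 169832.30):.3f} ms (reported: 1.473 ms)")
```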

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better; average of 3 runs):

  EPYC 7662    101.68
  EPYC 7702    103.55
  EPYC 7642    121.21
  EPYC 7552    122.03
  EPYC 7542    148.50
  EPYC 7502P   162.29
  EPYC 7532    168.40
  EPYC 7402P   196.70
  EPYC 7F52    237.79
  EPYC 7302P   287.81
  EPYC 7282    304.34
  EPYC 7272    387.23
  EPYC 7F32    454.49
  EPYC 7232P   558.87
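Seconds-based results like these invert when comparing parts: the slower time divided by the faster time gives the relative speedup. A quick sketch using two of the Classroom render times reported above (the chip pairing is chosen only for illustration):

```python
def speedup(slower_s: float, faster_s: float) -> float:
    # For lower-is-better timings, speedup is the ratio of the slower time to the faster one.
    return slower_s / faster_s

# EPYC 7232P (558.87 s) vs EPYC 7662 (101.68 s) on the Classroom scene:
print(f"{speedup(558.87, 101.68):.2f}x faster")
```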

Rodinia

Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better; average of 3 runs):

  EPYC 7662     54.65
  EPYC 7702     54.71
  EPYC 7642     63.47
  EPYC 7552     64.21
  EPYC 7542     79.11
  EPYC 7502P    85.01
  EPYC 7532     87.56
  EPYC 7402P   103.00
  EPYC 7F52    125.53
  EPYC 7302P   147.38
  EPYC 7282    157.17
  EPYC 7272    200.35
  EPYC 7F32    243.84
  EPYC 7232P   296.69

Compiler flags: (CXX) g++ options: -O2 -lOpenCL

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components and is part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: SciVis (FPS, More Is Better; average of 3 to 7 runs):

  EPYC 7662    76.92
  EPYC 7702    76.92
  EPYC 7642    66.67
  EPYC 7552    66.67
  EPYC 7542    55.07
  EPYC 7502P   50.00
  EPYC 7532    47.62
  EPYC 7402P   41.67
  EPYC 7F52    29.59
  EPYC 7302P   29.41
  EPYC 7282    27.78
  EPYC 7272    21.74
  EPYC 7F32    17.24
  EPYC 7232P   14.29
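The OSPray figures appear to be reciprocals of whole-millisecond frame times (76.92 FPS corresponds to 13 ms per frame, 14.29 FPS to 70 ms), which would explain the identical values for several chips. A small conversion sketch under that assumption:

```python
def fps_from_frame_ms(frame_ms: float) -> float:
    # Frames per second implied by a per-frame wall time in milliseconds.
    return 1000.0 / frame_ms

for ms in (13, 15, 70):
    print(f"{ms} ms/frame -> {fps_from_frame_ms(ms):.2f} FPS")
```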

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, More Is Better; average of 3 runs):

  EPYC 7662    7.92
  EPYC 7702    7.82
  EPYC 7642    6.81
  EPYC 7552    6.75
  EPYC 7542    5.54
  EPYC 7502P   5.04
  EPYC 7532    4.92
  EPYC 7402P   4.17
  EPYC 7F52    3.57
  EPYC 7302P   2.91
  EPYC 7282    2.74
  EPYC 7272    2.14
  EPYC 7F32    1.84
  EPYC 7232P   1.50

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better; no result was recorded for the EPYC 7642):

  EPYC 7662    504
  EPYC 7702    492
  EPYC 7552    441
  EPYC 7542    359
  EPYC 7502P   337
  EPYC 7532    328
  EPYC 7402P   279
  EPYC 7F52    222
  EPYC 7302P   192
  EPYC 7282    186
  EPYC 7272    142
  EPYC 7F32    116
  EPYC 7232P    96

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: DLSC (M samples/sec, More Is Better; average of 3 runs):

  EPYC 7662    7.21
  EPYC 7702    7.04
  EPYC 7642    6.19
  EPYC 7552    6.13
  EPYC 7542    5.03
  EPYC 7502P   4.65
  EPYC 7532    4.54
  EPYC 7402P   3.83
  EPYC 7F52    3.27
  EPYC 7302P   2.68
  EPYC 7282    2.53
  EPYC 7272    1.98
  EPYC 7F32    1.68
  EPYC 7232P   1.38

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better; average of 3 to 4 runs; no result was recorded for the EPYC 7642):

  EPYC 7702     6.35
  EPYC 7662     6.38
  EPYC 7552     7.42
  EPYC 7542     9.08
  EPYC 7502P    9.73
  EPYC 7532     9.99
  EPYC 7402P   11.71
  EPYC 7F52    14.02
  EPYC 7302P   16.68
  EPYC 7282    17.73
  EPYC 7272    22.40
  EPYC 7F32    27.16
  EPYC 7232P   33.15

Compiler flags: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
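For context on what astcenc produces: every ASTC block occupies 128 bits regardless of its footprint, so the bitrate follows directly from the block dimensions. A sketch of that relationship (the block footprints below are examples, not the specific configuration of this test profile):

```python
def astc_bits_per_pixel(block_w: int, block_h: int) -> float:
    # ASTC always stores one 128-bit block; density depends only on the footprint.
    return 128.0 / (block_w * block_h)

for w, h in ((4, 4), (6, 6), (8, 8)):
    print(f"{w}x{h} blocks: {astc_bits_per_pixel(w, h):.2f} bits per pixel")
```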

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, More Is Better; average of 3 runs):

  EPYC 7662    8.98
  EPYC 7702    7.88
  EPYC 7642    7.79
  EPYC 7552    7.22
  EPYC 7542    6.13
  EPYC 7502P   5.72
  EPYC 7532    5.68
  EPYC 7402P   4.73
  EPYC 7F52    4.02
  EPYC 7302P   3.32
  EPYC 7282    3.23
  EPYC 7272    2.47
  EPYC 7F32    2.16
  EPYC 7232P   1.74
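Throughput figures this low are easier to reason about as time per frame, which is just the reciprocal. A sketch using two of the Face Detection FP32 results above (this is the aggregate time per frame implied by throughput, not a measured per-request latency):

```python
def ms_per_frame(fps: float) -> float:
    # Average wall time per processed frame implied by a throughput figure.
    return 1000.0 / fps

print(f"EPYC 7662:  {ms_per_frame(8.98):.0f} ms/frame")
print(f"EPYC 7232P: {ms_per_frame(1.74):.0f} ms/frame")
```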

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (FPS, More Is Better; average of 3 runs):

  EPYC 7662    6.60
  EPYC 7702    6.05
  EPYC 7642    5.67
  EPYC 7552    5.53
  EPYC 7542    4.70
  EPYC 7502P   4.36
  EPYC 7532    4.19
  EPYC 7402P   3.51
  EPYC 7F52    2.98
  EPYC 7302P   2.51
  EPYC 7282    2.38
  EPYC 7272    1.81
  EPYC 7F32    1.64
  EPYC 7232P   1.28

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (FPS, More Is Better; average of 3 runs):

  EPYC 7662    6.62
  EPYC 7702    6.02
  EPYC 7642    5.67
  EPYC 7552    5.53
  EPYC 7542    4.74
  EPYC 7502P   4.37
  EPYC 7532    4.20
  EPYC 7402P   3.53
  EPYC 7F52    2.99
  EPYC 7302P   2.51
  EPYC 7282    2.37
  EPYC 7272    1.81
  EPYC 7F32    1.66
  EPYC 7232P   1.29

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (FPS, More Is Better; average of 3 runs):

  EPYC 7662    8.97
  EPYC 7702    7.88
  EPYC 7642    7.81
  EPYC 7552    7.20
  EPYC 7542    6.17
  EPYC 7502P   5.73
  EPYC 7532    5.68
  EPYC 7402P   4.76
  EPYC 7F52    4.04
  EPYC 7302P   3.31
  EPYC 7282    3.16
  EPYC 7272    2.47
  EPYC 7F32    2.16
  EPYC 7232P   1.75

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf - Lagrangian-Eulerian Hydrodynamics (Seconds, Fewer Is Better; average of 3 to 4 runs):

  EPYC 7642    13.66
  EPYC 7662    14.03
  EPYC 7702    14.70
  EPYC 7552    15.28
  EPYC 7532    15.29
  EPYC 7542    18.61
  EPYC 7502P   19.07
  EPYC 7402P   20.35
  EPYC 7F52    22.83
  EPYC 7302P   25.21
  EPYC 7F32    36.80
  EPYC 7282    45.46
  EPYC 7272    48.29
  EPYC 7232P   69.91

Compiler flags: (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
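A rough core-scaling check on the CloverLeaf numbers, assuming AMD's published core counts of 48 for the EPYC 7642 and 8 for the EPYC 7232P (clock speeds and memory configuration also differ between the parts, so this is indicative only):

```python
def scaling_efficiency(t_small: float, cores_small: int,
                       t_big: float, cores_big: int) -> float:
    # Observed speedup divided by the ideal (core-count ratio) speedup.
    return (t_small / t_big) / (cores_big / cores_small)

eff = scaling_efficiency(69.91, 8, 13.66, 48)  # EPYC 7232P vs EPYC 7642
print(f"~{eff:.0%} of ideal core scaling")
```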

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, More Is Better; average of 2 to 4 runs):

  EPYC 7702    7148.77
  EPYC 7662    6699.07
  EPYC 7642    6615.81
  EPYC 7532    6455.40
  EPYC 7552    6302.57
  EPYC 7542    5788.17
  EPYC 7502P   5788.17
  EPYC 7402P   5726.60
  EPYC 7302P   5509.30
  EPYC 7F32    4004.08
  EPYC 7282    3317.93
  EPYC 7272    3257.04
  EPYC 7232P   2616.81
  EPYC 7F52    1406.93

Compiler flags: (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

rays1bench

This is a test of rays1bench, a simple path tracer that supports SSE and AVX instructions, multi-threading, and other features. This test profile is measuring the performance of the "large scene" in rays1bench. Learn more via the OpenBenchmarking.org test page.

rays1bench 2020-01-09 - Large Scene (mrays/s, More Is Better; average of 3 to 7 runs):

  EPYC 7702    243.57
  EPYC 7662    243.25
  EPYC 7642    218.73
  EPYC 7552    217.70
  EPYC 7542    182.59
  EPYC 7502P   167.75
  EPYC 7532    163.00
  EPYC 7402P   134.19
  EPYC 7F52    109.91
  EPYC 7302P    90.37
  EPYC 7282     84.54
  EPYC 7272     68.51
  EPYC 7F32     59.60
  EPYC 7232P    48.61

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better; average of 3 runs):

  EPYC 7662     33024.0
  EPYC 7702     36468.8
  EPYC 7542     45118.1
  EPYC 7642     45818.8
  EPYC 7502P    47911.7
  EPYC 7532     49078.5
  EPYC 7552     50401.8
  EPYC 7402P    62904.1
  EPYC 7F52     70154.5
  EPYC 7302P    82490.4
  EPYC 7282     86277.6
  EPYC 7272    120119.0
  EPYC 7F32    136023.0
  EPYC 7232P   165426.0
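Average inference times in microseconds convert to a throughput rate by taking the reciprocal; a sketch using the fastest and slowest Mobilenet Quant results above:

```python
def inferences_per_second(avg_us: float) -> float:
    # Convert an average per-inference time in microseconds to a rate.
    return 1_000_000 / avg_us

print(f"EPYC 7662:  {inferences_per_second(33024.0):.1f} inf/s")
print(f"EPYC 7232P: {inferences_per_second(165426.0):.2f} inf/s")
```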

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better; average of 3 runs):

  EPYC 7662     32085.5
  EPYC 7702     35037.4
  EPYC 7542     43977.9
  EPYC 7642     45436.6
  EPYC 7502P    47083.4
  EPYC 7532     48084.7
  EPYC 7552     48507.4
  EPYC 7402P    61419.9
  EPYC 7F52     68637.6
  EPYC 7302P    80320.5
  EPYC 7282     84723.5
  EPYC 7272    117060.0
  EPYC 7F32    133236.0
  EPYC 7232P   160713.0

PostgreSQL pgbench

This is a benchmark of PostgreSQL using its bundled pgbench tool to drive the database workload. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better; average of 3 runs):

  EPYC 7662    0.109
  EPYC 7702    0.112
  EPYC 7552    0.120
  EPYC 7642    0.120
  EPYC 7542    0.153
  EPYC 7502P   0.164
  EPYC 7532    0.173
  EPYC 7402P   0.201
  EPYC 7F52    0.263
  EPYC 7302P   0.280
  EPYC 7282    0.290
  EPYC 7272    0.359
  EPYC 7F32    0.464
  EPYC 7232P   0.543

Compiler flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, More Is Better; average of 3 runs):

  EPYC 7662    915522.69
  EPYC 7702    897453.24
  EPYC 7642    835456.65
  EPYC 7552    832508.89
  EPYC 7542    654914.40
  EPYC 7502P   610666.61
  EPYC 7532    578339.82
  EPYC 7402P   499138.43
  EPYC 7F52    381183.76
  EPYC 7302P   357781.31
  EPYC 7282    344597.63
  EPYC 7272    278509.56
  EPYC 7F32    215616.98
  EPYC 7232P   184251.30

Compiler flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
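Normalizing throughput by core count (as the "Perf Per Core" display option does) changes the ordering: for this pair, the 8-core 7232P sustains more TPS per core than the 64-core 7662. A sketch assuming AMD's published core counts of 64 for the EPYC 7662 and 8 for the EPYC 7232P:

```python
def tps_per_core(tps: float, cores: int) -> float:
    # Per-core normalization of an aggregate throughput figure.
    return tps / cores

print(f"EPYC 7662:  {tps_per_core(915522.69, 64):,.0f} TPS/core")
print(f"EPYC 7232P: {tps_per_core(184251.30, 8):,.0f} TPS/core")
```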

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (Seconds, Fewer Is Better; average of 3 runs):

  EPYC 7662     22.83
  EPYC 7702     23.35
  EPYC 7642     23.43
  EPYC 7552     25.17
  EPYC 7532     26.53
  EPYC 7542     29.62
  EPYC 7502P    30.62
  EPYC 7402P    33.81
  EPYC 7F52     39.89
  EPYC 7302P    40.87
  EPYC 7282     56.57
  EPYC 7F32     60.50
  EPYC 7272     67.23
  EPYC 7232P   113.16

Compiler flags: (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)

CPU           Avg           SE         N   Min        Max
EPYC 7662     700833.33     3136.75    3   695393     706259
EPYC 7702     764269.00     6069.08    3   752152     770948
EPYC 7642     938691.00     1781.71    3   935608     941780
EPYC 7542     943229.00     425.74     3   942378     943679
EPYC 7552     978828.33     907.34     3   977917     980643
EPYC 7502P    1006346.67    487.90     3   1005560    1007240
EPYC 7532     1034366.67    143.80     3   1034080    1034530
EPYC 7402P    1366410.00    1233.46    3   1365020    1368870
EPYC 7F52     1499183.33    192.21     3   1498800    1499400
EPYC 7302P    1742890.00    272.21     3   1742350    1743220
EPYC 7282     1820153.33    565.25     3   1819400    1821260
EPYC 7272     2493746.67    436.36     3   2492910    2494380
EPYC 7F32     2872226.67    227.03     3   2871920    2872670
EPYC 7232P    3472940.00    1129.29    3   3471470    3475160

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)

CPU           Avg       SE     N   Min       Max
EPYC 7702     40.07     0.12   3   39.84     40.23
EPYC 7662     40.33     0.27   3   39.99     40.86
EPYC 7642     45.78     0.19   3   45.39     45.99
EPYC 7552     46.59     0.42   3   45.95     47.39
EPYC 7542     55.32     0.06   3   55.22     55.43
EPYC 7502P    60.75     0.37   3   60.00     61.17
EPYC 7532     61.25     0.21   3   60.93     61.64
EPYC 7402P    69.84     0.08   3   69.76     69.99
EPYC 7F52     83.45     0.27   3   83.10     83.98
EPYC 7302P    102.29    0.95   3   100.63    103.93
EPYC 7282     108.34    0.34   3   107.74    108.92
EPYC 7272     136.20    0.34   3   135.60    136.76
EPYC 7F32     156.31    0.25   3   155.93    156.78
EPYC 7232P    198.45    1.01   3   196.87    200.33
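For a fewer-is-better metric like render time, relative speedups come from dividing the slower time by the faster one. A small sketch using three of the BMW27 averages above (CPU names and seconds taken from these results):

```python
# Render times in seconds from the Blender BMW27 (CPU-Only) results above.
times = {"EPYC 7702": 40.07, "EPYC 7402P": 69.84, "EPYC 7232P": 198.45}

baseline = times["EPYC 7232P"]  # slowest chip in this comparison
for cpu, seconds in times.items():
    speedup = baseline / seconds  # slower time / faster time
    print(f"{cpu}: {speedup:.2f}x the EPYC 7232P")
```

Here the EPYC 7702 works out to roughly 4.95x the render throughput of the EPYC 7232P on this scene.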

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, Fewer Is Better)

CPU           Avg      SE     N   Min      Max
EPYC 7702     11.50    0.02   4   11.45    11.55
EPYC 7662     11.50    0.03   4   11.40    11.56
EPYC 7642     13.27    0.04   4   13.19    13.35
EPYC 7552     13.27    0.04   4   13.17    13.35
EPYC 7542     15.94    0.01   3   15.92    15.96
EPYC 7502P    17.14    0.01   3   17.12    17.17
EPYC 7532     17.67    0.02   3   17.65    17.70
EPYC 7402P    20.46    0.05   3   20.41    20.56
EPYC 7F52     24.18    0.02   3   24.15    24.23
EPYC 7302P    28.70    0.01   3   28.69    28.72
EPYC 7282     30.51    0.05   3   30.42    30.57
EPYC 7272     38.41    0.04   3   38.36    38.49
EPYC 7F32     45.47    0.02   3   45.44    45.50
EPYC 7232P    55.78    0.24   3   55.44    56.24

(CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)

CPU           Avg       SE     N   Min       Max
EPYC 7702     55.40     0.11   3   55.20     55.58
EPYC 7662     55.82     0.03   3   55.77     55.85
EPYC 7642     61.93     0.14   3   61.67     62.15
EPYC 7552     62.72     0.22   3   62.45     63.15
EPYC 7542     72.27     0.12   3   72.03     72.43
EPYC 7502P    78.01     0.04   3   77.95     78.08
EPYC 7532     78.94     0.30   3   78.44     79.47
EPYC 7402P    91.81     0.46   3   90.99     92.58
EPYC 7F52     108.40    0.20   3   108.03    108.72
EPYC 7302P    129.81    0.08   3   129.68    129.95
EPYC 7282     139.46    0.24   3   139.16    139.93
EPYC 7272     177.65    0.81   3   176.74    179.26
EPYC 7F32     207.12    0.04   3   207.06    207.19
EPYC 7232P    267.24    0.21   3   267.03    267.66

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)

CPU           Avg        SE        N   Min    Max    MIN
EPYC 7662     1.79874    0.00798   3   1.78   1.81   1.69
EPYC 7702     1.88842    0.00645   3   1.88   1.90   1.80
EPYC 7642     1.93042    0.00608   3   1.92   1.94   1.82
EPYC 7552     1.99707    0.00201   3   1.99   2.00   1.91
EPYC 7542     2.06983    0.00531   3   2.06   2.08   2.00
EPYC 7502P    2.15194    0.00464   3   2.15   2.16   2.02
EPYC 7532     2.20543    0.00309   3   2.20   2.21   2.05
EPYC 7272     5.25331    0.00535   3   5.25   5.26   5.07
EPYC 7F52     5.49605    0.02104   3   5.46   5.54   5.32
EPYC 7402P    5.92204    0.02587   3   5.89   5.97   5.79
EPYC 7302P    6.03497    0.00946   3   6.02   6.04   5.92
EPYC 7282     6.06333    0.02527   3   6.02   6.10   5.87
EPYC 7F32     7.23763    0.00611   3   7.23   7.25   7.01
EPYC 7232P    8.67154    0.02294   3   8.63   8.71   8.41

MIN values are as reported in the raw result output. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)

CPU           Avg           SE         N   Min        Max
EPYC 7662     655278.67     4181.60    3   649807     663492
EPYC 7702     702607.00     6234.46    7   690630     737249
EPYC 7642     830730.33     837.49     3   829366     832254
EPYC 7542     840604.00     149.07     3   840370     840881
EPYC 7552     893516.67     1818.31    3   889933     895844
EPYC 7502P    897884.00     466.83     3   897006     898598
EPYC 7532     921864.67     386.91     3   921416     922635
EPYC 7402P    1206790.00    1259.14    3   1205130    1209260
EPYC 7F52     1346153.33    1646.01    3   1343550    1349200
EPYC 7302P    1567070.00    366.38     3   1566650    1567800
EPYC 7282     1639166.67    1027.95    3   1637410    1640970
EPYC 7272     2242950.00    518.11     3   2241960    2243710
EPYC 7F32     2592970.00    314.32     3   2592570    2593590
EPYC 7232P    3136920.00    326.24     3   3136350    3137480

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)

CPU           Avg        SE        N   Min    Max    MIN
EPYC 7662     1.43124    0.00834   9   1.41   1.48   1.26
EPYC 7702     1.49446    0.00637   9   1.48   1.53   1.36
EPYC 7642     1.53915    0.00625   9   1.52   1.57   1.37
EPYC 7552     1.55494    0.00157   9   1.55   1.56   1.42
EPYC 7542     1.89624    0.00215   9   1.89   1.91   1.77
EPYC 7502P    1.94845    0.00338   9   1.94   1.97   1.82
EPYC 7532     1.98809    0.00104   9   1.98   1.99   1.83
EPYC 7402P    2.38306    0.00168   9   2.38   2.39   2.33
EPYC 7F52     3.02897    0.00628   9   2.99   3.05   2.92
EPYC 7302P    3.34663    0.00304   9   3.33   3.36   3.25
EPYC 7282     3.45290    0.00324   9   3.44   3.47   3.34
EPYC 7272     4.52758    0.00336   9   4.51   4.54   4.47
EPYC 7F32     5.58260    0.00176   9   5.57   5.59   5.52
EPYC 7232P    6.79445    0.00235   9   6.79   6.81   6.75

MIN values are as reported in the raw result output. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, More Is Better)

CPU           Avg       SE      N   Min      Max
EPYC 7662     25.206    0.056   3   25.11    25.31
EPYC 7702     24.818    0.024   3   24.78    24.86
EPYC 7642     22.442    0.029   3   22.40    22.50
EPYC 7552     22.046    0.112   3   21.83    22.19
EPYC 7542     18.156    0.062   3   18.03    18.22
EPYC 7502P    17.614    0.059   3   17.51    17.71
EPYC 7532     17.525    0.016   3   17.50    17.55
EPYC 7402P    14.907    0.010   3   14.89    14.93
EPYC 7F52     11.757    0.053   3   11.66    11.85
EPYC 7302P    10.602    0.009   3   10.58    10.62
EPYC 7282     9.889     0.021   3   9.85     9.92
EPYC 7272     7.737     0.023   3   7.71     7.78
EPYC 7F32     6.705     0.004   3   6.70     6.71
EPYC 7232P    5.406     0.007   3   5.39     5.41

(CXX) g++ options: -O3 -pthread -lm

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, More Is Better)

CPU           Avg      SE      N   Min    Max
EPYC 7662     4.541    0.008   3   4.53   4.55
EPYC 7702     4.373    0.003   3   4.37   4.38
EPYC 7642     4.117    0.013   3   4.09   4.13
EPYC 7552     3.863    0.004   3   3.86   3.87
EPYC 7542     3.323    0.002   3   3.32   3.33
EPYC 7532     3.267    0.008   3   3.25   3.28
EPYC 7502P    3.128    0.011   3   3.11   3.14
EPYC 7402P    2.741    0.001   3   2.74   2.74
EPYC 7302P    2.014    0.004   3   2.01   2.02
EPYC 7F52     1.995    0.005   3   1.99   2.01
EPYC 7282     1.677    0.001   3   1.68   1.68
EPYC 7272     1.409    0.001   3   1.41   1.41
EPYC 7F32     1.345    0.003   3   1.34   1.35
EPYC 7232P    0.985    0.003   3   0.98   0.99

(CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
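The result viewer's "Show Perf Per Core/Thread Calculation Graphs" option divides each result by the processor's core count. A sketch of that calculation for three of the GROMACS results above; the core counts are assumptions taken from AMD's published EPYC 7002-series specifications, not from this result file:

```python
# GROMACS Water Benchmark results (Ns Per Day) from above.
ns_per_day = {"EPYC 7662": 4.541, "EPYC 7402P": 2.741, "EPYC 7232P": 0.985}
# Core counts assumed from AMD's public EPYC 7002-series specifications.
cores = {"EPYC 7662": 64, "EPYC 7402P": 24, "EPYC 7232P": 8}

for cpu, total in ns_per_day.items():
    per_core = total / cores[cpu]
    print(f"{cpu}: {total:.3f} ns/day total, {per_core:.4f} ns/day per core")
```

By this measure the 8-core EPYC 7232P leads per core (about 0.123 ns/day versus about 0.071 for the 64-core EPYC 7662), the usual pattern when many cores share memory bandwidth.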

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)

CPU           Avg         SE         N    Min    Max    MIN
EPYC 7662     0.780803    0.000537   4    0.78   0.78   0.70
EPYC 7702     0.800643    0.002127   4    0.79   0.80   0.76
EPYC 7642     0.863146    0.001659   4    0.86   0.87   0.77
EPYC 7552     0.868024    0.000788   4    0.87   0.87   0.81
EPYC 7542     1.015820    0.005244   4    1.00   1.03   0.98
EPYC 7502P    1.073990    0.003967   4    1.07   1.08   0.98
EPYC 7532     1.104860    0.001948   4    1.10   1.11   0.99
EPYC 7402P    1.295990    0.009261   15   1.27   1.37   1.25
EPYC 7F52     1.565030    0.001361   4    1.56   1.57   1.54
EPYC 7302P    1.841780    0.001219   4    1.84   1.85   1.82
EPYC 7282     1.913050    0.001269   4    1.91   1.92   1.87
EPYC 7272     2.493720    0.001298   4    2.49   2.50   2.45
EPYC 7F32     3.012920    0.000640   4    3.01   3.01   2.73
EPYC 7232P    3.583630    0.001594   4    3.58   3.59   3.54

MIN values are as reported in the raw result output. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 3.11.4 - Test: Writes (Op/s, More Is Better)

CPU           Avg       SE        N    Min       Max
EPYC 7542     236524    2207.46   15   227162    255532
EPYC 7502P    233730    2790.65   3    228668    238297
EPYC 7552     230871    1126.54   3    229505    233106
EPYC 7702     227564    1230.73   3    225103    228816
EPYC 7662     219380    2776.27   3    214454    224062
EPYC 7642     215576    1690.31   3    212562    218409
EPYC 7532     211361    2221.55   3    208632    215762
EPYC 7402P    200705    2693.44   3    197409    206043
EPYC 7F52     144123    1803.14   3    141196    147411
EPYC 7282     136442    936.51    3    134572    137469
EPYC 7302P    135260    1793.27   3    131775    137737
EPYC 7272     93155     388.59    3    92380     93596
EPYC 7F32     58118     224.51    3    57842     58563
EPYC 7232P    51734     244.34    3    51248     52019

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better)

CPU           Avg        SE      N   Min        Max
EPYC 7662     9509.14    0.00    4   9509.14    9509.14
EPYC 7702     9509.14    0.00    4   9509.14    9509.14
EPYC 7642     9427.17    81.97   4   9181.24    9509.14
EPYC 7552     8454.70    77.48   4   8320.50    8588.90
EPYC 7532     8257.47    63.04   4   8068.36    8320.50
EPYC 7542     6658.48    67.99   4   6494.05    6827.08
EPYC 7502P    6617.90    79.54   4   6494.05    6827.08
EPYC 7402P    6379.88    72.46   4   6192.00    6494.05
EPYC 7302P    5503.59    57.68   5   5325.12    5665.02
EPYC 7282     4294.45    0.00    4   4294.45    4294.45
EPYC 7F32     4035.11    35.30   4   3973.97    4096.25
EPYC 7272     4004.08    17.38   4   3973.97    4034.18
EPYC 7232P    2840.13    7.61    4   2832.51    2862.97
EPYC 7F52     2113.14    0.00    4   2113.14    2113.14

(CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better)

CPU           Avg      SE     N   Min      Max
EPYC 7662     29.41    0.04   6   29.29    29.54
EPYC 7702     27.67    0.03   6   27.55    27.73
EPYC 7552     26.13    0.02   6   26.05    26.17
EPYC 7542     21.55    0.01   5   21.52    21.60
EPYC 7502P    20.25    0.01   5   20.23    20.28
EPYC 7532     20.14    0.01   5   20.11    20.18
EPYC 7402P    17.17    0.01   4   17.16    17.19
EPYC 7F52     14.24    0.02   4   14.20    14.27
EPYC 7302P    12.18    0.12   6   11.56    12.32
EPYC 7282     11.34    0.01   3   11.34    11.36
EPYC 7272     9.03     0.00   3   9.02     9.03
EPYC 7F32     7.94     0.01   3   7.92     7.96
EPYC 7232P    6.55     0.00   3   6.54     6.55

ebizzy

This is a test of ebizzy, a program that generates workloads resembling those of a web server. Learn more via the OpenBenchmarking.org test page.

ebizzy 0.3 (Records/s, More Is Better)

CPU           Avg        SE         N    Min        Max
EPYC 7662     2762647    34594.95   12   2508914    2926207
EPYC 7642     2719388    37169.94   3    2645052    2757214
EPYC 7702     2701767    25794.01   15   2510210    2845248
EPYC 7552     2456511    24033.50   3    2420178    2501931
EPYC 7542     2136850    8453.72    3    2127912    2153748
EPYC 7532     1977325    15669.57   3    1951917    2005917
EPYC 7502P    1947836    10663.27   3    1926759    1961193
EPYC 7402P    1721854    15686.47   3    1691610    1744199
EPYC 7F52     1475280    7117.75    3    1461618    1485575
EPYC 7302P    1208466    8057.83    3    1192406    1217652
EPYC 7282     1021990    8956.45    3    1009030    1039179
EPYC 7272     883965     906.56     3    882270     885370
EPYC 7F32     776880     6904.71    7    747575     796470
EPYC 7232P    623272     4933.88    3    616705     632934

(CC) gcc options: -pthread -lpthread -O3 -march=native

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Hair (Seconds, Fewer Is Better)

CPU           Avg         SE        N   Min      Max
EPYC 7702     7.50706     0.02278   6   7.46     7.59
EPYC 7662     7.55093     0.02077   6   7.50     7.63
EPYC 7642     8.56828     0.03455   5   8.43     8.61
EPYC 7552     8.71953     0.04134   5   8.63     8.86
EPYC 7542     9.92849     0.02895   5   9.86     10.01
EPYC 7502P    10.57400    0.00983   5   10.56    10.61
EPYC 7532     10.81310    0.03618   5   10.71    10.91
EPYC 7402P    12.26230    0.02622   4   12.21    12.33
EPYC 7F52     14.13270    0.03705   4   14.08    14.23
EPYC 7302P    16.61750    0.02745   3   16.56    16.65
EPYC 7282     17.65420    0.03890   3   17.60    17.73
EPYC 7272     21.99580    0.03918   3   21.93    22.07
EPYC 7F32     26.51610    0.01111   3   26.50    26.54
EPYC 7232P    32.53880    0.03115   3   32.48    32.59

(CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)

CPU           Avg         SE       N   Min        Max
EPYC 7662     56195.5     174.48   3   55903.9    56507.3
EPYC 7702     61679.6     39.50    3   61602.1    61731.7
EPYC 7642     65104.4     111.10   3   64904.3    65288.1
EPYC 7552     68430.4     74.34    3   68340.5    68577.9
EPYC 7542     70622.9     77.46    3   70510.4    70771.4
EPYC 7502P    75080.0     126.18   3   74862.2    75299.3
EPYC 7532     77264.7     39.58    3   77193.3    77330.0
EPYC 7402P    93404.1     6.99     3   93391.7    93415.9
EPYC 7F52     107391.0    132.22   3   107127     107540
EPYC 7302P    123947.0    25.85    3   123895     123974
EPYC 7282     129293.0    38.26    3   129222     129353
EPYC 7272     173807.0    82.39    3   173701     173969
EPYC 7F32     201258.0    44.74    3   201180     201335
EPYC 7232P    242480.0    202.36   3   242075     242688

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Non-Exponential (Seconds, Fewer Is Better)

CPU           Avg         SE        N    Min      Max
EPYC 7662     2.80142     0.01113   9    2.77     2.88
EPYC 7702     2.88175     0.00885   9    2.85     2.93
EPYC 7642     3.19889     0.06689   15   2.55     3.35
EPYC 7552     3.31262     0.01129   9    3.27     3.37
EPYC 7542     3.47096     0.00498   9    3.44     3.49
EPYC 7502P    3.56555     0.00918   9    3.50     3.61
EPYC 7532     3.70234     0.00744   8    3.66     3.73
EPYC 7402P    5.71426     0.01244   7    5.67     5.76
EPYC 7F52     5.84607     0.01335   7    5.80     5.89
EPYC 7282     6.21815     0.01670   7    6.17     6.29
EPYC 7302P    6.36513     0.00686   6    6.35     6.39
EPYC 7272     9.83106     0.01455   5    9.79     9.88
EPYC 7F32     10.90090    0.02852   5    10.84    11.00
EPYC 7232P    11.96540    0.02902   4    11.89    12.03

(CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

Parboil

The Parboil Benchmarks from the IMPACT Research Group at University of Illinois are a set of throughput computing applications for looking at computing architecture and compilers. Parboil test-cases support OpenMP, OpenCL, and CUDA multi-processing environments. However, at this time the test profile is just making use of the OpenMP and OpenCL test workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5 - Test: OpenMP CUTCP (Seconds, Fewer Is Better)

CPU           Avg         SE         N    Min     Max
EPYC 7662     0.762238    0.001641   13   0.75    0.77
EPYC 7702     0.773474    0.002056   13   0.76    0.79
EPYC 7552     0.890726    0.001391   13   0.88    0.90
EPYC 7642     0.902611    0.001666   13   0.89    0.91
EPYC 7542     1.134961    0.005531   12   1.11    1.17
EPYC 7502P    1.149282    0.006410   12   1.12    1.18
EPYC 7532     1.256882    0.006800   12   1.22    1.30
EPYC 7402P    1.561688    0.003909   12   1.54    1.59
EPYC 7F52     1.715944    0.007208   11   1.66    1.74
EPYC 7282     1.783589    0.005522   11   1.76    1.82
EPYC 7302P    1.873804    0.010317   11   1.83    1.93
EPYC 7F32     2.796277    0.019546   10   2.71    2.88
EPYC 7272     2.965136    0.006375   9    2.92    2.99
EPYC 7232P    3.227103    0.016438   9    3.17    3.29

(CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
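oneDNN itself is exercised through benchdnn here, but the shape of the "Matrix Multiply Batch Shapes" workload is easy to picture: many small dense matrix multiplies timed as a batch. A dependency-free sketch (the sizes and names are illustrative, not oneDNN's API):

```python
import time

def matmul(a, b):
    # naive dense matrix multiply: (n x k) @ (k x m)
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

# hypothetical batch of small transformer-like matrix shapes
batch = [([[1.0] * 16 for _ in range(16)], [[1.0] * 16 for _ in range(16)])
         for _ in range(8)]
start = time.perf_counter()
results = [matmul(a, b) for a, b in batch]
elapsed_ms = (time.perf_counter() - start) * 1000.0
```

A tuned library wins on exactly this pattern by blocking for cache and using vector units, which is what the per-CPU spread below reflects.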

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer, Data Type: f32, Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg       SE        N    Min    Max    MIN*
  EPYC 7642    0.484219  0.003528  4    0.48   0.49   0.46
  EPYC 7542    0.500521  0.000738  4    0.50   0.50   0.48
  EPYC 7662    0.517023  0.005359  5    0.50   0.53   0.47
  EPYC 7552    0.533961  0.004962  15   0.51   0.57   0.49
  EPYC 7502P   0.537636  0.000902  4    0.54   0.54   0.51
  EPYC 7532    0.553444  0.005528  6    0.54   0.58   0.49
  EPYC 7702    0.587443  0.002887  4    0.58   0.60   0.54
  EPYC 7402P   0.592308  0.000660  4    0.59   0.59   0.56
  EPYC 7F52    0.660419  0.001140  4    0.66   0.66   0.62
  EPYC 7302P   0.802272  0.000863  4    0.80   0.80   0.72
  EPYC 7282    0.875749  0.000258  4    0.88   0.88   0.74
  EPYC 7272    1.093440  0.000447  4    1.09   1.09   0.96
  EPYC 7F32    1.160170  0.001580  4    1.16   1.16   1.13
  EPYC 7232P   2.018380  0.005714  4    2.00   2.03   1.95

* MIN as reported by the test harness itself.
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
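The ns/day metric below is simulated time advanced per wall-clock day. Assuming a 2 fs timestep (typical for protein runs like rhodopsin, though not stated in this file), the conversion is simple arithmetic:

```python
def ns_per_day(timestep_fs, steps_per_second):
    # fs/step * steps/s gives simulated fs per wall-clock second;
    # 1 ns = 1e6 fs and there are 86400 seconds in a day
    return timestep_fs * steps_per_second * 86400 / 1e6

# hypothetical: a 2 fs timestep sustained at 126 timesteps per second
rate = ns_per_day(2.0, 126.0)
```

Under those assumed numbers the result lands near 21.8 ns/day, i.e. the ballpark of the fastest parts in the table below.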

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (OpenBenchmarking.org; ns/day, More Is Better)

  CPU          Avg     SE     N    Min     Max
  EPYC 7662    21.763  0.301  15   20.67   24.47
  EPYC 7702    19.997  0.257  15   19.07   22.21
  EPYC 7642    19.328  0.291  15   18.26   21.71
  EPYC 7552    18.506  0.286  15   17.37   20.70
  EPYC 7542    16.564  0.252  15   14.32   18.32
  EPYC 7532    16.265  0.178  15   14.97   17.08
  EPYC 7502P   15.685  0.286  15   13.34   17.35
  EPYC 7402P   14.038  0.089  15   13.39   14.46
  EPYC 7F52    11.521  0.028  11   11.40   11.65
  EPYC 7302P   10.374  0.024  10   10.19   10.46
  EPYC 7282    9.685   0.026  10   9.59    9.82
  EPYC 7272    7.582   0.008  9    7.55    7.61
  EPYC 7F32    6.470   0.003  9    6.45    6.49
  EPYC 7232P   5.237   0.002  8    5.23    5.25

1. (CXX) g++ options: -O3 -pthread -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
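The Socket Activity stressor measured below hammers local sockets and reports bogo (bogus) operations per second. A toy analogue of that loop, not stress-ng's actual implementation:

```python
import socket
import time

def socket_ops_per_second(duration=0.2):
    # bounce a small message over a local socket pair and count
    # round trips per second, loosely mimicking a socket stressor
    a, b = socket.socketpair()
    ops = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        a.sendall(b"x" * 64)
        b.recv(64)
        ops += 1
    a.close()
    b.close()
    return ops / duration
```

As with stress-ng's bogo ops, the absolute number only means anything when compared across machines running the identical loop.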

Stress-NG 0.11.07, Test: Socket Activity (OpenBenchmarking.org; Bogo Ops/s, More Is Better)

  CPU          Avg       SE      N   Min       Max
  EPYC 7662    19990.75  64.33   3   19911.11  20118.07
  EPYC 7702    19577.03  57.10   3   19482.90  19680.11
  EPYC 7552    18192.32  24.65   3   18145.03  18228.03
  EPYC 7542    17266.21  32.93   3   17206.64  17320.33
  EPYC 7502P   16396.57  30.09   3   16363.28  16456.63
  EPYC 7532    15568.24  125.44  9   14571.02  15743.07
  EPYC 7402P   13570.83  16.45   3   13539.78  13595.76
  EPYC 7302P   9499.33   6.53    3   9488.78   9511.28
  EPYC 7282    9134.72   16.90   3   9104.00   9162.29
  EPYC 7F52    8415.19   17.97   3   8394.92   8451.03
  EPYC 7272    7332.82   8.24    3   7316.76   7344.03
  EPYC 7F32    5638.53   13.57   3   5624.62   5665.66
  EPYC 7232P   4965.96   24.34   3   4920.56   5003.89

1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
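The MB/s figures are simply input bytes divided by compression time. A sketch of that measurement, using the stdlib zlib module as a stand-in since Python ships no zstd bindings (the actual test compresses an Ubuntu ISO with zstd at level 19):

```python
import time
import zlib

def compress_throughput(data, level):
    # time one compression pass and return MB/s of input consumed
    start = time.perf_counter()
    zlib.compress(data, level)  # stand-in codec; the PTS test uses zstd
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6

sample = b"benchmark" * 100_000  # hypothetical sample, not the Ubuntu ISO
mbps = compress_throughput(sample, 9)
```

Level 19 is near the expensive end of zstd's range, which is why even 64-core parts stay under 150 MB/s here.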

Zstd Compression 1.4.5, Compression Level: 19 (OpenBenchmarking.org; MB/s, More Is Better)

  CPU          Avg    SE    N   Min    Max
  EPYC 7662    149.6  0.55  3   148.7  150.6
  EPYC 7702    147.5  1.22  3   145.1  148.8
  EPYC 7642    130.1  0.12  3   129.9  130.3
  EPYC 7552    125.7  0.15  3   125.4  125.9
  EPYC 7532    122.7  0.00  3   122.7  122.7
  EPYC 7542    114.4  0.15  3   114.2  114.7
  EPYC 7502P   114.3  0.09  3   114.2  114.5
  EPYC 7402P   96.9   0.09  3   96.8   97.1
  EPYC 7F52    76.1   0.79  3   74.9   77.6
  EPYC 7302P   73.8   0.03  3   73.8   73.9
  EPYC 7282    66.4   0.12  3   66.2   66.6
  EPYC 7272    55.2   0.06  3   55.1   55.3
  EPYC 7F32    47.0   0.03  3   47.0   47.1
  EPYC 7232P   37.3   0.06  3   37.2   37.4

1. (CC) gcc options: -O3 -pthread -lz -llzma

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2, Input: C240 Buckyball (OpenBenchmarking.org; Seconds, Fewer Is Better)

  CPU          Seconds
  EPYC 7662    2220.7
  EPYC 7702    2247.0
  EPYC 7552    2716.7
  EPYC 7532    3653.6
  EPYC 7542    3653.9
  EPYC 7502P   3678.8
  EPYC 7402P   4709.5
  EPYC 7F52    5684.6
  EPYC 7302P   6676.4
  EPYC 7282    7056.0
  EPYC 7272    8844.1

1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: Eigen (OpenBenchmarking.org; Nodes Per Second, More Is Better)

  CPU          Avg   SE     N   Min   Max
  EPYC 7702    2686  26.03  3   2634  2717
  EPYC 7662    2408  29.49  9   2310  2551
  EPYC 7642    2311  37.70  9   2112  2527
  EPYC 7552    1927  31.31  9   1773  2050
  EPYC 7532    1769  20.90  3   1729  1799
  EPYC 7F52    1699  17.43  5   1639  1747
  EPYC 7542    1617  9.84   3   1597  1627
  EPYC 7502P   1539  14.82  9   1454  1593
  EPYC 7402P   1439  5.90   3   1432  1451
  EPYC 7302P   1233  12.27  9   1194  1292
  EPYC 7F32    1104  16.37  9   1024  1176
  EPYC 7282    1051  11.49  9   1008  1107
  EPYC 7272    925   11.79  9   870   993
  EPYC 7232P   681   8.88   3   663   691

1. (CXX) g++ options: -flto -pthread

Appleseed

Appleseed is an open-source production renderer, a physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Disney Material (OpenBenchmarking.org; Seconds, Fewer Is Better)

  CPU          Seconds
  EPYC 7662    67.33
  EPYC 7702    67.74
  EPYC 7642    70.49
  EPYC 7552    73.33
  EPYC 7542    81.83
  EPYC 7502P   87.51
  EPYC 7532    89.01
  EPYC 7402P   105.69
  EPYC 7F52    118.48
  EPYC 7302P   149.34
  EPYC 7282    151.86
  EPYC 7272    195.08
  EPYC 7F32    225.09
  EPYC 7232P   265.54

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 3 (OpenBenchmarking.org; Seconds, Fewer Is Better)

  CPU          Avg    SE    N   Min    Max
  EPYC 7662    18.66  0.00  3   18.65  18.66
  EPYC 7702    18.87  0.00  3   18.86  18.88
  EPYC 7552    20.80  0.01  3   20.77  20.82
  EPYC 7542    23.97  0.01  3   23.96  23.98
  EPYC 7502P   25.28  0.01  3   25.27  25.31
  EPYC 7532    25.88  0.01  3   25.87  25.91
  EPYC 7402P   29.16  0.00  3   29.15  29.17
  EPYC 7F52    33.80  0.01  3   33.79  33.81
  EPYC 7302P   39.75  0.00  3   39.75  39.76
  EPYC 7282    41.24  0.03  3   41.20  41.30
  EPYC 7272    51.77  0.01  3   51.75  51.78
  EPYC 7F32    60.40  0.01  3   60.39  60.41
  EPYC 7232P   73.43  0.01  3   73.43  73.44

1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU, Model: regnety_400m (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg     SE    N    Min     Max     MIN*    MAX*
  EPYC 7232P   31.16   0.30  3    30.61   31.62   30.33   33.90
  EPYC 7F32    32.16   0.20  3    31.76   32.37   31.34   32.97
  EPYC 7272    34.24   0.04  3    34.18   34.32   33.82   47.56
  EPYC 7282    40.27   0.34  3    39.89   40.94   38.44   167.34
  EPYC 7302P   41.35   0.51  3    40.41   42.16   40.02   44.02
  EPYC 7F52    46.30   0.05  3    46.20   46.36   45.45   121.75
  EPYC 7402P   51.09   0.47  3    50.14   51.59   49.40   53.35
  EPYC 7502P   60.77   0.89  3    59.06   62.06   58.26   140.56
  EPYC 7542    61.29   1.20  3    60.00   63.69   59.46   72.41
  EPYC 7532    66.90   0.60  11   64.25   69.82   63.42   212.32
  EPYC 7552    91.90   2.91  3    88.94   97.72   86.84   102.42
  EPYC 7642    93.95   1.44  12   87.38   100.86  86.48   1086.99
  EPYC 7662    115.48  1.58  9    105.72  120.70  104.38  272.84
  EPYC 7702    119.33  0.92  9    113.13  122.46  110.56  268.86

* MIN/MAX as reported by the test harness itself.
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8, Encoder Mode: Enc Mode 8, Input: 1080p (OpenBenchmarking.org; Frames Per Second, More Is Better)

  CPU          Avg    SE    N   Min    Max
  EPYC 7662    83.40  0.32  5   82.20  84.06
  EPYC 7702    79.82  0.27  5   79.13  80.50
  EPYC 7642    63.21  0.16  5   62.71  63.66
  EPYC 7552    62.40  0.20  5   61.90  62.94
  EPYC 7542    59.94  0.18  6   59.31  60.53
  EPYC 7502P   57.32  0.25  6   56.32  57.97
  EPYC 7532    56.53  0.17  6   56.08  57.07
  EPYC 7402P   54.87  0.03  6   54.76  54.97
  EPYC 7F52    42.84  0.21  5   42.28  43.39
  EPYC 7302P   37.04  0.17  5   36.48  37.50
  EPYC 7282    35.38  0.09  5   35.07  35.57
  EPYC 7272    30.38  0.11  4   30.14  30.60
  EPYC 7F32    26.13  0.03  4   26.07  26.22
  EPYC 7232P   21.76  0.07  4   21.59  21.91

1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Parboil


Parboil 2.5, Test: OpenMP MRI Gridding (OpenBenchmarking.org; Seconds, Fewer Is Better)

  CPU          Avg     SE    N   Min     Max
  EPYC 7F32    34.04   0.15  3   33.82   34.32
  EPYC 7F52    49.01   0.24  3   48.58   49.42
  EPYC 7272    51.65   0.24  3   51.36   52.13
  EPYC 7282    62.29   0.21  3   61.91   62.65
  EPYC 7402P   65.42   0.03  3   65.38   65.48
  EPYC 7232P   75.43   0.88  3   73.67   76.37
  EPYC 7542    75.44   0.29  3   75.01   76.00
  EPYC 7502P   76.19   0.22  3   75.76   76.51
  EPYC 7552    93.48   0.29  3   93.04   94.02
  EPYC 7642    95.40   0.83  3   93.75   96.31
  EPYC 7302P   95.77   0.11  3   95.59   95.96
  EPYC 7662    109.30  0.63  3   108.54  110.55
  EPYC 7702    110.47  0.77  3   109.25  111.89
  EPYC 7532    128.29  0.28  3   127.98  128.85

1. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

NCNN


NCNN 20201218, Target: CPU-v3-v3-v3, Model: regnety_400m (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg     SE    N    Min     Max     MIN*    MAX*
  EPYC 7F32    32.27   0.30  3    31.90   32.86   31.53   34.46
  EPYC 7282    40.53   0.09  3    40.35   40.64   38.30   142.23
  EPYC 7F52    44.98   0.14  3    44.71   45.19   44.41   120.73
  EPYC 7542    61.46   0.96  3    60.29   63.37   59.76   66.21
  EPYC 7502P   61.51   0.78  3    59.96   62.44   59.23   65.07
  EPYC 7532    65.30   1.18  3    63.08   67.09   62.09   107.90
  EPYC 7702    117.29  0.90  12   111.85  122.59  110.16  380.72

* MIN/MAX as reported by the test harness itself.
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 2.0, Harness: IP Shapes 3D, Data Type: u8s8f32, Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg       SE        N   Min    Max    MIN*
  EPYC 7F52    0.489983  0.002832  5   0.48   0.50   0.45
  EPYC 7302P   0.601288  0.001293  5   0.60   0.60   0.56
  EPYC 7532    0.641197  0.002578  5   0.63   0.65   0.58
  EPYC 7402P   0.711317  0.003852  5   0.70   0.72   0.66
  EPYC 7F32    0.798217  0.006135  5   0.79   0.82   0.75
  EPYC 7642    0.813280  0.002582  5   0.81   0.82   0.76
  EPYC 7282    0.816826  0.003097  5   0.81   0.83   0.74
  EPYC 7502P   0.990316  0.001432  5   0.99   1.00   0.95
  EPYC 7542    1.000426  0.003216  5   0.99   1.01   0.94
  EPYC 7272    1.038950  0.003339  5   1.03   1.05   0.95
  EPYC 7552    1.132370  0.005510  5   1.11   1.14   1.07
  EPYC 7662    1.146010  0.000841  5   1.14   1.15   1.09
  EPYC 7702    1.162730  0.001522  5   1.16   1.17   1.10
  EPYC 7232P   1.779470  0.002555  5   1.78   1.79   1.60

* MIN as reported by the test harness itself.
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN


NCNN 20201218, Target: CPU-v2-v2-v2, Model: regnety_400m (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg     SE    N    Min     Max     MIN*    MAX*
  EPYC 7F32    32.44   0.25  3    32.00   32.88   31.57   33.32
  EPYC 7282    39.72   0.22  3    39.42   40.14   38.43   83.52
  EPYC 7F52    44.84   0.54  3    43.86   45.71   43.55   46.35
  EPYC 7502P   61.42   1.51  3    59.66   64.42   58.39   205.71
  EPYC 7542    61.58   1.36  3    59.98   64.28   59.20   77.53
  EPYC 7532    64.89   0.38  3    64.17   65.48   63.26   68.33
  EPYC 7702    117.56  0.90  12   113.15  124.26  110.38  376.56

* MIN/MAX as reported by the test harness itself.
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
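At heart the test just wall-clocks a parallel `make` over a default-config kernel tree. A minimal sketch of that timing wrapper (the stand-in command is hypothetical; the real test invokes make):

```python
import subprocess
import sys
import time

def time_command(cmd):
    # wall-clock an external build command, discarding its output
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# hypothetical stand-in; the real test runs something like `make -j <threads>`
elapsed = time_command([sys.executable, "-c", "pass"])
```

Compile throughput tracks core count almost linearly here, which is why the 64-core parts finish in under a third of the 8-core part's time.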

Timed Linux Kernel Compilation 5.4, Time To Compile (OpenBenchmarking.org; Seconds, Fewer Is Better)

  CPU          Avg     SE    N   Min     Max
  EPYC 7662    27.98   0.28  6   27.59   29.37
  EPYC 7702    28.13   0.26  7   27.79   29.66
  EPYC 7642    30.08   0.30  6   29.67   31.58
  EPYC 7552    30.21   0.30  6   29.87   31.71
  EPYC 7542    34.55   0.41  4   34.07   35.78
  EPYC 7502P   36.39   0.42  4   35.93   37.65
  EPYC 7532    36.85   0.42  4   36.38   38.10
  EPYC 7402P   41.50   0.47  4   40.71   42.85
  EPYC 7F52    46.91   0.43  3   46.21   47.70
  EPYC 7302P   53.40   0.65  4   52.39   55.30
  EPYC 7282    57.91   0.75  3   56.64   59.25
  EPYC 7272    69.61   0.59  3   68.52   70.53
  EPYC 7F32    79.66   0.46  3   79.06   80.57
  EPYC 7232P   101.09  0.42  3   100.61  101.92

LeelaChessZero


LeelaChessZero 0.26, Backend: BLAS (OpenBenchmarking.org; Nodes Per Second, More Is Better)

  CPU          Avg   SE     N   Min   Max
  EPYC 7702    2699  14.38  3   2683  2728
  EPYC 7662    2376  41.44  9   2146  2558
  EPYC 7642    2203  26.91  3   2155  2248
  EPYC 7552    1969  27.18  3   1916  2006
  EPYC 7F52    1758  31.35  9   1652  1943
  EPYC 7532    1735  22.27  9   1624  1815
  EPYC 7542    1666  20.09  3   1626  1691
  EPYC 7502P   1559  23.71  9   1445  1649
  EPYC 7402P   1521  20.26  3   1492  1560
  EPYC 7302P   1253  13.76  9   1190  1313
  EPYC 7F32    1052  22.57  9   958   1142
  EPYC 7282    1042  3.71   3   1035  1047
  EPYC 7272    946   7.54   3   931   956
  EPYC 7232P   747   4.36   3   739   754

1. (CXX) g++ options: -flto -pthread

oneDNN


oneDNN 2.0, Harness: Recurrent Neural Network Training, Data Type: u8s8f32, Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg      SE     N   Min      Max      MIN*
  EPYC 7642    1210.04  2.03   3   1206.71  1213.71  1190.59
  EPYC 7552    1352.62  1.92   3   1350.20  1356.42  1333.53
  EPYC 7402P   1674.66  1.08   3   1672.87  1676.60  1660.31
  EPYC 7F52    2013.24  6.65   3   2003.33  2025.88  1992.69
  EPYC 7662    2230.18  11.35  3   2212.51  2251.36  2194.78
  EPYC 7532    2230.35  5.23   3   2221.20  2239.31  2212.66
  EPYC 7702    2296.53  3.86   3   2288.81  2300.61  2270.01
  EPYC 7302P   2410.23  0.38   3   2409.50  2410.77  2395.31
  EPYC 7542    2721.87  2.43   3   2718.51  2726.60  2708.11
  EPYC 7502P   2743.71  2.18   3   2741.09  2748.05  2727.14
  EPYC 7282    2834.09  2.25   3   2829.71  2837.17  2799.34
  EPYC 7272    3057.08  3.17   3   3051.79  3062.75  3044.36
  EPYC 7F32    3345.67  0.89   3   3344.19  3347.26  3327.00
  EPYC 7232P   4347.66  1.97   3   4343.96  4350.69  4308.62

* MIN as reported by the test harness itself.
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Training, Data Type: bf16bf16bf16, Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg      SE    N   Min      Max      MIN*
  EPYC 7642    1211.54  3.66  3   1207.18  1218.81  1187.15
  EPYC 7552    1350.13  1.28  3   1347.63  1351.85  1333.75
  EPYC 7402P   1673.31  0.94  3   1671.47  1674.53  1660.31
  EPYC 7F52    1997.39  0.63  3   1996.27  1998.44  1989.10
  EPYC 7662    2203.16  0.34  3   2202.51  2203.65  2183.71
  EPYC 7532    2221.37  5.96  3   2213.13  2232.96  2198.48
  EPYC 7702    2314.36  9.75  3   2296.67  2330.29  2276.78
  EPYC 7302P   2407.93  3.71  3   2401.58  2414.43  2392.45
  EPYC 7542    2716.54  2.51  3   2713.68  2721.54  2705.25
  EPYC 7502P   2740.52  4.79  3   2731.74  2748.21  2722.51
  EPYC 7282    2835.35  1.77  3   2831.82  2837.39  2800.15
  EPYC 7272    3059.19  0.81  3   3057.83  3060.62  3051.11
  EPYC 7F32    3348.06  2.55  3   3342.99  3351.09  3331.21
  EPYC 7232P   4348.88  4.35  3   4341.28  4356.33  4311.94

* MIN as reported by the test harness itself.
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Training, Data Type: f32, Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)

  CPU          Avg      SE    N   Min      Max      MIN*
  EPYC 7642    1213.58  0.57  3   1212.83  1214.69  1190.99
  EPYC 7552    1350.55  1.32  3   1347.94  1352.20  1331.05
  EPYC 7402P   1672.24  0.54  3   1671.40  1673.25  1658.48
  EPYC 7F52    2013.75  2.05  3   2010.58  2017.58  1995.38
  EPYC 7532    2214.79  2.63  3   2209.90  2218.93  2191.84
  EPYC 7662    2221.33  8.27  3   2206.05  2234.44  2189.19
  EPYC 7702    2306.33  5.75  3   2294.83  2312.48  2274.33
  EPYC 7302P   2412.79  2.99  3   2408.58  2418.57  2397.68
  EPYC 7542    2718.68  1.42  3   2716.43  2721.31  2705.41
  EPYC 7502P   2738.42  5.60  3   2731.04  2749.41  2722.76
  EPYC 7282    2836.68  1.43  3   2834.78  2839.49  2798.53
  EPYC 7272    3056.05  1.59  3   3053.73  3059.09  3041.35
  EPYC 7F32    3345.10  0.86  3   3343.83  3346.73  3322.56
  EPYC 7232P   4345.12  5.10  3   4339.98  4355.32  4298.14

* MIN as reported by the test harness itself.
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
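The Random Fill Sync workload writes keys in random order with a sync after every write, so the Op/s numbers below are dominated by fsync latency rather than raw CPU speed. A toy analogue of that pattern (not db_bench or RocksDB code):

```python
import os
import tempfile
import time

def random_fill_sync_ops(n=200):
    # write n small records, forcing each one to stable storage with
    # fsync, and report operations per second
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for i in range(n):
            f.write(f"key{i:08d}=value\n".encode())
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    return n / elapsed
```

Because each operation waits on the storage device, results on identical drives mostly separate on how well the engine batches and overlaps those syncs across threads.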

Facebook RocksDB 6.3.6, Test: Random Fill Sync (OpenBenchmarking.org; Op/s, More Is Better)

  CPU          Avg     SE      N   Min     Max
  EPYC 7662    333412  735.95  3   332027  334536
  EPYC 7642    331791  768.33  3   330767  333295
  EPYC 7552    331300  749.32  3   329808  332168
  EPYC 7702    287115  166.32  3   286867  287431
  EPYC 7502P   259353  219.22  3   259071  259785
  EPYC 7542    258759  344.84  3   258263  259422
  EPYC 7532    253689  622.13  3   252478  254542
  EPYC 7402P   216427  607.97  3   215402  217506
  EPYC 7F52    172585  436.15  3   171773  173267
  EPYC 7282    163535  203.74  3   163163  163865
  EPYC 7302P   163248  (no SE / min / max reported for this entry)
  EPYC 7272    131583  223.01  3   131216  131986
  EPYC 7F32    100843  175.05  3   100631  101190
  EPYC 7232P   94371   204.78  3   93963   94606

1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 — Time To Compile (Seconds, fewer is better)

CPU          Avg      SE       N    Min      Max
EPYC 7662    12.96    ±0.03    4    12.87    13.01
EPYC 7702    12.99    ±0.04    4    12.91    13.09
EPYC 7642    13.71    ±0.04    4    13.64    13.81
EPYC 7552    13.82    ±0.03    4    13.75    13.87
EPYC 7542    15.69    ±0.07    4    15.60    15.90
EPYC 7532    16.38    ±0.02    3    16.35    16.42
EPYC 7502P   16.46    ±0.03    3    16.42    16.52
EPYC 7402P   18.59    ±0.05    3    18.50    18.68
EPYC 7F52    20.57    ±0.07    3    20.43    20.67
EPYC 7302P   24.06    ±0.01    3    24.05    24.07
EPYC 7282    26.27    ±0.02    3    26.23    26.29
EPYC 7272    31.14    ±0.03    3    31.10    31.21
EPYC 7F32    35.25    ±0.02    3    35.21    35.28
EPYC 7232P   45.59    ±0.03    3    45.54    45.62

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
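The length restriction means FFTE only handles transform sizes whose prime factors are 2, 3, and 5. A small check for that property (the `is_ffte_length` helper is illustrative, not from the FFTE package):

```python
def is_ffte_length(n):
    """True if n factors as (2^p)*(3^q)*(5^r), the lengths FFTE supports."""
    if n < 1:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1  # nothing left once all 2s, 3s and 5s are divided out

# N=256 (= 2^8), as used by this test profile, qualifies; 7 does not.
print(is_ffte_length(256), is_ffte_length(360), is_ffte_length(7))  # True True False
```
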

FFTE 7.0 — N=256, 3D Complex FFT Routine (MFLOPS, more is better)

CPU          Avg          SE          N     Min          Max
EPYC 7662    155375.98    ±1366.04    15    151616.85    173605.93
EPYC 7642    149967.74    ±641.51     9     148789.93    154980.74
EPYC 7702    145868.29    ±1033.87    15    143378.84    159924.00
EPYC 7542    144583.87    ±143.74     9     143957.47    145193.47
EPYC 7502P   136276.50    ±160.95     8     135695.57    136949.63
EPYC 7552    135838.12    ±375.36     8     133822.40    137193.41
EPYC 7532    135786.94    ±154.09     8     135207.86    136435.92
EPYC 7402P   111563.84    ±134.09     10    110544.87    112037.30
EPYC 7302P   84316.23     ±80.44      9     83921.86     84643.19
EPYC 7282    76796.04     ±71.59      9     76508.11     77062.73
EPYC 7F52    71767.96     ±308.97     9     70448.73     73371.09
EPYC 7272    61838.52     ±22.37      11    61730.30     61950.09
EPYC 7F32    54800.99     ±37.94      10    54543.54     54955.54
EPYC 7232P   44242.98     ±139.55     9     43253.52     44735.98

1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
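Charts like the one below are easiest to read as relative scaling between parts. For instance, from the Bosphorus 4K / Medium figures in this file, the EPYC 7662 at 16.44 FPS delivers roughly 3.5x the throughput of the EPYC 7232P at 4.72 FPS (the `speedup` helper is just the obvious ratio, shown for clarity):

```python
def speedup(fps_a, fps_b):
    """Relative throughput of configuration A over configuration B (both in FPS)."""
    return fps_a / fps_b

# Averages taken from the Bosphorus 4K / Medium results in this file
print(round(speedup(16.44, 4.72), 2))  # EPYC 7662 vs. EPYC 7232P -> 3.48
```
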

Kvazaar 2.0 — Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better)

CPU          Avg FPS    SE       Min      Max
EPYC 7662    16.44      ±0.02    16.42    16.48
EPYC 7642    15.94      ±0.02    15.92    15.97
EPYC 7552    15.50      ±0.01    15.49    15.51
EPYC 7542    14.94      ±0.01    14.93    14.95
EPYC 7702    14.94      ±0.01    14.93    14.97
EPYC 7502P   13.97      ±0.01    13.95    13.98
EPYC 7532    13.65      ±0.02    13.61    13.67
EPYC 7402P   11.22      ±0.02    11.18    11.24
EPYC 7F52    10.66      ±0.01    10.64    10.69
EPYC 7302P   9.15       ±0.01    9.13     9.17
EPYC 7282    8.61       ±0.02    8.58     8.63
EPYC 7272    6.49       ±0.00    6.48     6.49
EPYC 7F32    5.73       ±0.01    5.71     5.74
EPYC 7232P   4.72       ±0.00    4.72     4.73

(N = 3 runs per CPU.)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1 — Input: Carbon Nanotube (Seconds, fewer is better)

CPU          Avg       SE       Min       Max
EPYC 7662    78.39     ±0.05    78.29     78.44
EPYC 7642    79.98     ±0.26    79.56     80.47
EPYC 7702    81.06     ±0.15    80.78     81.29
EPYC 7552    86.74     ±0.11    86.54     86.91
EPYC 7532    93.24     ±0.02    93.21     93.28
EPYC 7542    103.30    ±0.27    102.87    103.79
EPYC 7502P   105.67    ±0.20    105.35    106.04
EPYC 7402P   116.41    ±0.46    115.76    117.30
EPYC 7302P   139.15    ±0.19    138.85    139.52
EPYC 7F52    165.41    ±0.87    163.66    166.30
EPYC 7282    177.91    ±0.09    177.80    178.09
EPYC 7F32    200.41    ±0.20    200.15    200.81
EPYC 7272    205.89    ±0.37    205.17    206.37
EPYC 7232P   271.16    ±0.67    270.41    272.49

(N = 3 runs per CPU.)
1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 — Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)

CPU          Avg        SE          N     Run Min    Run Max    MIN
EPYC 7642    2.40380    ±0.01930    15    2.29       2.52       2.23
EPYC 7662    2.52363    ±0.02912    15    2.43       2.85       2.25
EPYC 7552    2.55335    ±0.00950    9     2.52       2.62       2.41
EPYC 7702    2.82015    ±0.03302    15    2.71       3.12       2.49
EPYC 7402P   3.34633    ±0.00636    9     3.33       3.39       3.15
EPYC 7542    3.44118    ±0.00582    9     3.42       3.47       3.03
EPYC 7532    3.60130    ±0.00881    9     3.57       3.64       3.22
EPYC 7502P   3.68833    ±0.00414    9     3.67       3.70       3.26
EPYC 7F52    3.92737    ±0.00511    9     3.89       3.94       3.81
EPYC 7302P   4.72608    ±0.00474    9     4.71       4.74       4.52
EPYC 7282    5.08328    ±0.00482    9     5.06       5.11       4.73
EPYC 7272    6.29135    ±0.00724    9     6.26       6.32       6.07
EPYC 7F32    6.46398    ±0.00118    9     6.46       6.47       6.38
EPYC 7232P   8.27184    ±0.00735    9     8.23       8.29       8.00

(MIN is the minimum time reported by oneDNN within a run; Run Min/Max span the N runs.)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML — FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, more is better)

CPU          Avg FPS    SE       Min      Max
EPYC 7662    31.84      ±0.26    31.48    32.35
EPYC 7642    29.37      ±0.12    29.23    29.61
EPYC 7542    27.67      ±0.14    27.53    27.96
EPYC 7552    27.50      ±0.02    27.47    27.53
EPYC 7502P   26.51      ±0.21    26.11    26.82
EPYC 7702    26.49      ±0.05    26.42    26.59
EPYC 7532    25.35      ±0.04    25.31    25.42
EPYC 7402P   23.34      ±0.12    23.11    23.52
EPYC 7F52    20.00      ±0.12    19.79    20.19
EPYC 7302P   17.39      ±0.06    17.26    17.46
EPYC 7282    16.61      ±0.06    16.52    16.72
EPYC 7272    13.67      ±0.12    13.44    13.83
EPYC 7F32    12.06      ±0.06    11.98    12.18
EPYC 7232P   9.30       ±0.06    9.23     9.41

(N = 3 runs per CPU.)

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.
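GROMACS reports throughput as nanoseconds of simulated time per day of wall clock, which can be inverted to get wall-clock cost per simulated nanosecond (the `hours_per_ns` helper is an illustrative conversion, not a GROMACS tool):

```python
def hours_per_ns(ns_per_day):
    """Convert GROMACS ns/day throughput into wall-clock hours per simulated ns."""
    return 24.0 / ns_per_day

# EPYC 7662 result from the water_GMX50_bare chart in this file
print(round(hours_per_ns(4.533), 2))  # about 5.29 hours per simulated ns
```
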

GROMACS 2021 — Input: water_GMX50_bare (Ns Per Day, more is better)

CPU          Avg      SE        Min     Max
EPYC 7662    4.533    ±0.005    4.52    4.54
EPYC 7702    4.371    ±0.001    4.37    4.37
EPYC 7542    3.317    ±0.004    3.31    3.33
EPYC 7532    3.256    ±0.007    3.24    3.27
EPYC 7502P   3.140    ±0.002    3.14    3.14
EPYC 7F52    2.313    ±0.006    2.31    2.33
EPYC 7282    1.675    ±0.001    1.67    1.68
EPYC 7272    1.407    ±0.001    1.41    1.41
EPYC 7F32    1.348    ±0.001    1.35    1.35

(N = 3 runs per CPU.)
1. (CXX) g++ options: -O3 -pthread

PlaidML


PlaidML — FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, more is better)

CPU          Avg FPS    SE       N    Min      Max
EPYC 7662    37.38      ±0.46    4    36.33    38.57
EPYC 7642    35.25      ±0.45    3    34.41    35.93
EPYC 7542    33.33      ±0.37    3    32.65    33.94
EPYC 7552    32.68      ±0.16    3    32.38    32.94
EPYC 7502P   32.05      ±0.19    3    31.69    32.30
EPYC 7702    31.89      ±0.28    3    31.34    32.28
EPYC 7532    30.36      ±0.11    3    30.21    30.58
EPYC 7402P   27.81      ±0.12    3    27.57    27.96
EPYC 7F52    24.29      ±0.09    3    24.18    24.46
EPYC 7302P   21.24      ±0.10    3    21.13    21.44
EPYC 7282    20.34      ±0.05    3    20.24    20.43
EPYC 7272    16.79      ±0.22    3    16.50    17.22
EPYC 7F32    14.67      ±0.07    3    14.60    14.82
EPYC 7232P   11.41      ±0.15    3    11.13    11.63

Rodinia

Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 — Test: OpenMP CFD Solver (Seconds, fewer is better)

CPU          Avg       SE        N     Min      Max
EPYC 7662    7.747     ±0.051    15    7.37     8.02
EPYC 7702    7.819     ±0.066    8     7.45     8.05
EPYC 7552    8.405     ±0.083    6     8.14     8.71
EPYC 7642    8.434     ±0.073    5     8.16     8.59
EPYC 7542    9.806     ±0.005    5     9.79     9.82
EPYC 7532    9.858     ±0.057    5     9.76     10.08
EPYC 7502P   9.892     ±0.030    5     9.81     9.96
EPYC 7402P   11.577    ±0.007    4     11.57    11.60
EPYC 7F52    13.530    ±0.022    4     13.49    13.59
EPYC 7282    14.792    ±0.009    4     14.76    14.80
EPYC 7302P   15.445    ±0.022    4     15.41    15.51
EPYC 7272    18.134    ±0.065    3     18.02    18.24
EPYC 7F32    22.218    ±0.078    3     22.12    22.37
EPYC 7232P   25.374    ±0.030    3     25.33    25.43

1. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB, and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
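NPB test identifiers such as "SP.B" below combine a kernel name with a problem class (larger classes mean larger problem sizes). A hypothetical helper for decoding them — the kernel names are the standard NPB ones, while the `parse_npb` function itself is not part of the benchmark:

```python
# Standard NPB kernel names, keyed by their two-letter identifiers.
NPB_KERNELS = {
    "EP": "Embarrassingly Parallel",
    "MG": "Multi-Grid",
    "CG": "Conjugate Gradient",
    "FT": "discrete 3D FFT",
    "IS": "Integer Sort",
    "LU": "Lower-Upper Gauss-Seidel solver",
    "SP": "Scalar Penta-diagonal solver",
    "BT": "Block Tri-diagonal solver",
}

def parse_npb(name):
    """Split an identifier like 'SP.B' into (kernel description, problem class)."""
    kernel, cls = name.split(".")
    return NPB_KERNELS[kernel], cls

print(parse_npb("SP.B"))  # ('Scalar Penta-diagonal solver', 'B')
```
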

NAS Parallel Benchmarks 3.4 — Test / Class: SP.B (Total Mop/s, more is better)

CPU          Avg         SE         N     Min         Max
EPYC 7662    80724.16    ±539.59    13    76668.19    83637.30
EPYC 7702    79909.29    ±330.58    7     78561.87    80939.53
EPYC 7F52    41985.31    ±79.26     5     41819.98    42184.66
EPYC 7302P   39529.21    ±122.30    5     39188.63    39948.23
EPYC 7282    24658.72    ±30.77     4     24598.85    24732.26

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

oneDNN


oneDNN 2.0 — Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)

CPU          Avg        SE       Run Min    Run Max    MIN
EPYC 7642    732.81     ±1.28    731.06     735.31     712.90
EPYC 7552    787.43     ±2.70    782.55     791.88     769.99
EPYC 7662    793.20     ±1.17    790.85     794.46     776.66
EPYC 7532    813.12     ±1.51    810.10     814.70     799.88
EPYC 7702    874.44     ±3.54    868.68     880.89     847.07
EPYC 7542    890.48     ±4.31    882.05     896.25     867.03
EPYC 7502P   902.26     ±5.72    890.83     908.50     882.80
EPYC 7402P   972.93     ±0.91    971.59     974.66     960.02
EPYC 7F52    997.85     ±1.61    995.25     1000.79    987.15
EPYC 7302P   1229.69    ±0.19    1229.46    1230.06    1221.64
EPYC 7F32    1745.54    ±0.51    1744.59    1746.31    1733.89
EPYC 7282    1882.64    ±3.12    1876.41    1885.98    1853.21
EPYC 7272    2022.45    ±2.69    2018.73    2027.68    2012.67
EPYC 7232P   2397.56    ±2.26    2394.02    2401.75    2367.09

(N = 3 runs per CPU; MIN is the minimum time reported by oneDNN within a run.)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 — Time To Compile (Seconds, fewer is better)

CPU          Avg       SE       N    Min       Max
EPYC 7662    233.51    ±0.89    3    232.12    235.17
EPYC 7702    234.21    ±1.73    3    230.90    236.76
EPYC 7642    243.92    ±1.29    3    241.53    245.95
EPYC 7552    245.40    ±1.44    3    242.68    247.61
EPYC 7542    275.35    ±1.12    3    273.43    277.30
EPYC 7532    280.78    ±1.56    3    278.05    283.44
EPYC 7502P   289.36    ±2.29    3    286.12    293.79
EPYC 7402P   312.70    ±3.16    9    297.71    321.97
EPYC 7F52    343.39    ±2.67    3    340.62    348.73
EPYC 7302P   404.33    ±5.04    4    391.86    414.08
EPYC 7282    440.00    ±3.68    3    432.97    445.42
EPYC 7272    529.62    ±0.47    3    528.70    530.26
EPYC 7F32    581.30    ±0.89    3    579.68    582.76
EPYC 7232P   763.56    ±2.05    3    759.57    766.35

oneDNN


oneDNN 2.0 — Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

CPU          Avg        SE       Run Min    Run Max    MIN
EPYC 7642    734.40     ±3.28    728.04     738.94     716.82
EPYC 7552    784.82     ±2.09    780.67     787.40     771.05
EPYC 7662    793.23     ±1.88    791.29     796.99     777.03
EPYC 7532    812.99     ±1.32    810.37     814.64     799.51
EPYC 7702    878.20     ±2.81    873.24     882.97     851.72
EPYC 7542    886.88     ±1.35    884.18     888.33     865.50
EPYC 7502P   919.61     ±2.58    916.98     924.77     906.32
EPYC 7402P   974.12     ±1.57    971.07     976.32     962.55
EPYC 7F52    998.67     ±3.97    991.29     1004.91    987.09
EPYC 7302P   1227.56    ±0.54    1226.72    1228.57    1218.44
EPYC 7F32    1742.79    ±0.95    1741.68    1744.69    1728.82
EPYC 7282    1886.30    ±5.33    1876.60    1894.99    1851.55
EPYC 7272    2032.88    ±4.61    2023.66    2037.71    2016.92
EPYC 7232P   2397.22    ±3.61    2392.08    2404.17    2370.41

(N = 3 runs per CPU; MIN is the minimum time reported by oneDNN within a run.)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 — Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

CPU          Avg        SE       Run Min    Run Max    MIN
EPYC 7642    735.82     ±2.51    731.42     740.12     715.63
EPYC 7552    782.65     ±1.66    779.51     785.17     769.36
EPYC 7662    792.55     ±0.96    791.43     794.46     773.96
EPYC 7532    814.03     ±2.11    810.97     818.07     801.22
EPYC 7702    875.71     ±3.99    869.12     882.90     846.67
EPYC 7542    888.54     ±1.82    885.37     891.69     872.96
EPYC 7502P   917.61     ±5.94    907.10     927.67     892.04
EPYC 7402P   972.77     ±0.65    971.90     974.05     962.66
EPYC 7F52    999.14     ±0.99    997.98     1001.10    991.16
EPYC 7302P   1228.50    ±1.03    1226.65    1230.20    1219.05
EPYC 7F32    1744.75    ±2.16    1742.54    1749.07    1731.94
EPYC 7282    1884.70    ±4.32    1878.26    1892.91    1853.90
EPYC 7272    2029.28    ±1.28    2026.92    2031.33    2020.38
EPYC 7232P   2398.98    ±4.09    2391.80    2405.96    2367.55

(N = 3 runs per CPU; MIN is the minimum time reported by oneDNN within a run.)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 — Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)

CPU          Avg        SE          Run Min    Run Max    MIN
EPYC 7642    1.46775    ±0.00906    1.45       1.49       1.39
EPYC 7542    1.47222    ±0.00341    1.46       1.48       1.41
EPYC 7662    1.48981    ±0.01406    1.45       1.52       1.33
EPYC 7502P   1.59300    ±0.00185    1.59       1.60       1.51
EPYC 7552    1.59938    ±0.00439    1.59       1.61       1.51
EPYC 7532    1.61502    ±0.00891    1.60       1.64       1.50
EPYC 7702    1.70360    ±0.00178    1.70       1.71       1.48
EPYC 7402P   1.82951    ±0.00090    1.83       1.83       1.74
EPYC 7F52    1.98035    ±0.01053    1.96       2.01       1.86
EPYC 7302P   2.41082    ±0.00144    2.41       2.41       2.20
EPYC 7282    3.44668    ±0.00488    3.43       3.45       2.92
EPYC 7F32    3.56020    ±0.00382    3.55       3.57       3.44
EPYC 7272    3.80955    ±0.00316    3.80       3.81       3.44
EPYC 7232P   4.73866    ±0.00331    4.73       4.74       4.55

(N = 4 runs per CPU; MIN is the minimum time reported by oneDNN within a run.)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 — Scene: Volumetric Caustic (Seconds, fewer is better)

CPU          Avg         SE          N    Min      Max
EPYC 7662    4.32609     ±0.00245    8    4.32     4.34
EPYC 7642    4.36145     ±0.00169    8    4.35     4.37
EPYC 7552    4.36372     ±0.00349    8    4.35     4.38
EPYC 7702    4.46261     ±0.00214    8    4.46     4.47
EPYC 7542    4.54904     ±0.00749    8    4.51     4.57
EPYC 7502P   4.69550     ±0.00220    8    4.68     4.70
EPYC 7532    4.84256     ±0.00562    7    4.83     4.87
EPYC 7402P   5.27499     ±0.01885    7    5.21     5.33
EPYC 7F52    6.56276     ±0.05079    6    6.40     6.75
EPYC 7282    7.40074     ±0.04707    6    7.21     7.56
EPYC 7302P   7.46871     ±0.03030    6    7.35     7.56
EPYC 7272    9.81098     ±0.06509    5    9.70     10.01
EPYC 7F32    12.41410    ±0.08977    4    12.18    12.60
EPYC 7232P   13.92660    ±0.01019    4    13.91    13.96

1. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

Rodinia


Rodinia 3.1 — Test: OpenMP Leukocyte (Seconds, fewer is better)

CPU          Avg       SE       N     Min       Max
EPYC 7642    46.53     ±0.47    15    44.25     49.64
EPYC 7662    47.37     ±0.47    3     46.63     48.23
EPYC 7552    47.43     ±0.82    15    44.96     54.58
EPYC 7702    48.72     ±0.61    14    46.16     52.95
EPYC 7542    57.68     ±0.28    3     57.12     57.99
EPYC 7502P   57.98     ±0.26    3     57.58     58.48
EPYC 7532    59.44     ±0.42    3     58.61     59.95
EPYC 7402P   59.65     ±0.25    3     59.34     60.14
EPYC 7F52    90.52     ±0.52    3     89.70     91.49
EPYC 7302P   98.19     ±1.14    3     96.70     100.44
EPYC 7282    99.72     ±0.76    10    96.18     103.91
EPYC 7272    107.97    ±0.53    3     106.91    108.52
EPYC 7F32    130.98    ±0.11    3     130.76    131.14
EPYC 7232P   148.20    ±0.41    3     147.55    148.96

1. (CXX) g++ options: -O2 -lOpenCL

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 — Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)

CPU          Avg FPS    SE       N     Min       Max
EPYC 7542    459.14     ±0.74    11    454.55    462.25
EPYC 7642    458.24     ±0.61    10    456.27    462.61
EPYC 7662    448.08     ±0.96    10    442.15    452.83
EPYC 7552    445.92     ±0.97    10    441.83    451.13
EPYC 7502P   437.88     ±0.76    11    434.47    443.13
EPYC 7532    426.52     ±1.03    11    419.29    430.73
EPYC 7402P   413.09     ±0.86    11    407.89    416.67
EPYC 7702    409.36     ±3.60    15    379.51    426.74
EPYC 7302P   316.49     ±0.29    10    315.62    317.97
EPYC 7282    295.80     ±0.35    10    293.54    297.47
EPYC 7F52    263.13     ±0.36    10    260.87    264.67
EPYC 7272    239.47     ±0.45    9     237.06    241.16
EPYC 7F32    167.08     ±0.26    8     165.84    168.07
EPYC 7232P   144.32     ±0.15    8     143.85    145.10

1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta — Scene: Emily (Seconds, fewer is better)

CPU          Seconds
EPYC 7662    153.46
EPYC 7642    153.98
EPYC 7702    154.44
EPYC 7552    154.73
EPYC 7542    161.64
EPYC 7502P   170.48
EPYC 7532    172.91
EPYC 7402P   194.53
EPYC 7F52    226.80
EPYC 7302P   260.98
EPYC 7282    275.81
EPYC 7272    333.25
EPYC 7F32    381.37
EPYC 7232P   487.37

oneDNN


oneDNN 2.0 — Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

CPU          Avg         SE          Run Min    Run Max    MIN
EPYC 7662    3.50412     ±0.01107    3.47       3.54       3.39
EPYC 7642    3.54316     ±0.00965    3.52       3.59       3.43
EPYC 7702    3.55892     ±0.00551    3.54       3.58       3.47
EPYC 7532    3.61268     ±0.01167    3.55       3.64       3.49
EPYC 7552    3.84018     ±0.00723    3.81       3.86       3.75
EPYC 7502P   3.97043     ±0.01750    3.92       4.03       3.86
EPYC 7542    3.97699     ±0.00813    3.94       4.00       3.87
EPYC 7402P   4.25111     ±0.00525    4.23       4.27       4.06
EPYC 7302P   5.45400     ±0.00742    5.41       5.47       5.12
EPYC 7282    6.76242     ±0.03193    6.64       6.85       6.41
EPYC 7F52    6.91561     ±0.03514    6.79       7.05       6.67
EPYC 7272    7.60505     ±0.00984    7.56       7.64       7.15
EPYC 7F32    9.15124     ±0.04127    9.06       9.39       8.56
EPYC 7232P   10.95820    ±0.04608    10.75      11.12      10.24

(N = 7 runs per CPU; MIN is the minimum time reported by oneDNN within a run.)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
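The Hogbom CLEAN algorithm that tHogbomClean times is an iterative deconvolution loop: find the brightest residual pixel, subtract a scaled copy of the point spread function (PSF) centred there, and record the subtracted flux as a model component. A minimal 1-D Python sketch, for illustration only (the real benchmark works on 2-D images, and all names here are hypothetical):

```python
def hogbom_clean(dirty, psf, gain=0.1, iterations=200):
    residual = list(dirty)
    model = [0.0] * len(dirty)
    psf_peak = psf.index(max(psf))  # centre of the point spread function
    for _ in range(iterations):
        # locate the brightest residual sample
        peak = max(range(len(residual)), key=lambda i: abs(residual[i]))
        flux = gain * residual[peak]
        model[peak] += flux
        # subtract the scaled, shifted PSF from the residual
        for j, p in enumerate(psf):
            k = peak + (j - psf_peak)
            if 0 <= k < len(residual):
                residual[k] -= flux * p
    return model, residual
```

The loop is dominated by the peak search and the PSF subtraction, which is why it parallelises well with OpenMP in the benchmark.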

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
EPYC 7542:  439.25 (SE +/- 1.71, N = 3; Min 436.68 / Max 442.48)
EPYC 7502P: 439.25 (SE +/- 1.71, N = 3; Min 436.68 / Max 442.48)
EPYC 7662:  439.25 (SE +/- 1.71, N = 3; Min 436.68 / Max 442.48)
EPYC 7532:  439.24 (SE +/- 0.64, N = 3; Min 438.60 / Max 440.53)
EPYC 7702:  432.92 (SE +/- 2.16, N = 3; Min 429.19 / Max 436.68)
EPYC 7642:  432.90 (SE +/- 0.00, N = 3; Min 432.90 / Max 432.90)
EPYC 7402P: 432.33 (SE +/- 3.44, N = 3; Min 425.53 / Max 436.68)
EPYC 7302P: 432.28 (SE +/- 0.62, N = 3; Min 431.03 / Max 432.90)
EPYC 7552:  411.53 (SE +/- 0.98, N = 3; Min 409.84 / Max 413.22)
EPYC 7F32:  380.73 (SE +/- 1.74, N = 3; Min 377.36 / Max 383.14)
EPYC 7282:  378.81 (SE +/- 2.18, N = 3; Min 374.53 / Max 381.68)
EPYC 7272:  370.39 (SE +/- 2.09, N = 3; Min 366.30 / Max 373.13)
EPYC 7232P: 351.71 (SE +/- 1.09, N = 3; Min 349.65 / Max 353.36)
EPYC 7F52:  140.92 (SE +/- 0.57, N = 3; Min 140.25 / Max 142.05)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3-v3 - Model: mnasnet (ms, Fewer Is Better)
EPYC 7F32:  6.22 (SE +/- 0.03, N = 3; Min 6.19 / Max 6.27; MIN: 6.06 / MAX: 8.58)
EPYC 7282:  7.25 (SE +/- 0.05, N = 3; Min 7.16 / Max 7.35; MIN: 6.73 / MAX: 57.32)
EPYC 7F52:  7.86 (SE +/- 0.01, N = 2; Min 7.85 / Max 7.86; MIN: 7.64 / MAX: 8.58)
EPYC 7502P: 9.34 (SE +/- 0.32, N = 3; Min 8.93 / Max 9.96; MIN: 8.75 / MAX: 11.53)
EPYC 7542:  9.49 (SE +/- 0.55, N = 3; Min 8.93 / Max 10.59; MIN: 8.73 / MAX: 13.47)
EPYC 7532:  9.94 (SE +/- 0.17, N = 3; Min 9.76 / Max 10.28; MIN: 9.35 / MAX: 24.23)
EPYC 7702:  19.28 (SE +/- 0.87, N = 12; Min 14.89 / Max 26.06; MIN: 14.29 / MAX: 168.92)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7642:  1.98726 (SE +/- 0.02012, N = 3; Min 1.95 / Max 2.02; MIN: 1.87)
EPYC 7662:  2.02231 (SE +/- 0.01627, N = 15; Min 1.93 / Max 2.15; MIN: 1.84)
EPYC 7542:  2.04866 (SE +/- 0.00152, N = 3; Min 2.05 / Max 2.05; MIN: 1.99)
EPYC 7552:  2.15794 (SE +/- 0.01668, N = 15; Min 2.07 / Max 2.28; MIN: 1.99)
EPYC 7502P: 2.19727 (SE +/- 0.01343, N = 3; Min 2.18 / Max 2.22; MIN: 2.12)
EPYC 7532:  2.22242 (SE +/- 0.01295, N = 3; Min 2.21 / Max 2.25; MIN: 2.06)
EPYC 7702:  2.33519 (SE +/- 0.00813, N = 3; Min 2.32 / Max 2.35; MIN: 2.08)
EPYC 7402P: 2.48977 (SE +/- 0.00832, N = 3; Min 2.48 / Max 2.51; MIN: 2.40)
EPYC 7F52:  2.75743 (SE +/- 0.00961, N = 3; Min 2.74 / Max 2.77; MIN: 2.63)
EPYC 7302P: 3.28377 (SE +/- 0.01740, N = 3; Min 3.25 / Max 3.30; MIN: 3.08)
EPYC 7282:  3.50175 (SE +/- 0.01071, N = 3; Min 3.49 / Max 3.52; MIN: 3.10)
EPYC 7272:  4.44077 (SE +/- 0.00923, N = 3; Min 4.43 / Max 4.46; MIN: 4.20)
EPYC 7F32:  5.03236 (SE +/- 0.06606, N = 3; Min 4.94 / Max 5.16; MIN: 4.86)
EPYC 7232P: 6.12261 (SE +/- 0.02020, N = 3; Min 6.10 / Max 6.16; MIN: 5.93)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
EPYC 7662:  103985.91 (SE +/- 298.03, N = 3; Min 103567.00 / Max 104562.59)
EPYC 7702:  102548.95 (SE +/- 34.41, N = 3; Min 102490.22 / Max 102609.37)
EPYC 7642:  99557.23 (SE +/- 281.17, N = 3; Min 98994.91 / Max 99842.55)
EPYC 7552:  95079.41 (SE +/- 82.31, N = 3; Min 94922.44 / Max 95200.87)
EPYC 7532:  86547.28 (SE +/- 26.26, N = 3; Min 86498.25 / Max 86588.09)
EPYC 7542:  79644.88 (SE +/- 207.08, N = 3; Min 79399.38 / Max 80056.49)
EPYC 7502P: 78846.96 (SE +/- 106.91, N = 3; Min 78710.27 / Max 79057.70)
EPYC 7402P: 74421.61 (SE +/- 81.46, N = 3; Min 74315.00 / Max 74581.60)
EPYC 7302P: 61041.27 (SE +/- 3.68, N = 3; Min 61034.85 / Max 61047.59)
EPYC 7282:  49617.92 (SE +/- 22.56, N = 3; Min 49572.87 / Max 49642.69)
EPYC 7F52:  44891.69 (SE +/- 82.34, N = 3; Min 44754.21 / Max 45038.95)
EPYC 7272:  43321.38 (SE +/- 24.66, N = 3; Min 43285.83 / Max 43368.77)
EPYC 7F32:  43172.51 (SE +/- 31.62, N = 3; Min 43122.35 / Max 43230.93)
EPYC 7232P: 33816.48 (SE +/- 12.95, N = 3; Min 33792.21 / Max 33836.43)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threading Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.
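The escape-time iteration that toyBrot parallelises across those backends can be sketched as follows (a hypothetical Python version, row-parallel with a thread pool; toyBrot itself is C++):

```python
from concurrent.futures import ThreadPoolExecutor

def escape_time(c, max_iter=100):
    # iterate z -> z^2 + c until |z| exceeds 2 or the budget runs out
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n          # point escaped; it is outside the set
    return max_iter           # assumed inside the set

def render(width, height, max_iter=100):
    # each row is an independent work item, which is what makes the
    # workload embarrassingly parallel across threads/tasks/OpenMP/TBB
    def row(y):
        im = -1.2 + 2.4 * y / height
        return [escape_time(complex(-2.0 + 3.0 * x / width, im), max_iter)
                for x in range(width)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(row, range(height)))
```

Because every pixel is independent, the benchmark primarily measures how well each threading backend scales across cores.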

toyBrot Fractal Generator 2020-11-18 - Implementation: TBB (ms, Fewer Is Better)
EPYC 7532: 13555.50 (SE +/- 60.14, N = 4; Min 13434 / Max 13696)
EPYC 7282: 25411.33 (SE +/- 162.21, N = 3; Min 25214 / Max 25733)
EPYC 7F32: 41606.00 (SE +/- 233.59, N = 3; Min 41361 / Max 42073)
1. (CXX) g++ options: -O3 -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
EPYC 7662:  54318.48 (SE +/- 734.82, N = 3; Min 53010.60 / Max 55552.89)
EPYC 7702:  52181.25 (SE +/- 365.68, N = 3; Min 51458.66 / Max 52640.41)
EPYC 7552:  51444.21 (SE +/- 725.42, N = 3; Min 50134.59 / Max 52639.74)
EPYC 7642:  50381.39 (SE +/- 308.59, N = 3; Min 49982.78 / Max 50988.76)
EPYC 7542:  46655.09 (SE +/- 320.52, N = 13; Min 46038.91 / Max 50394.39)
EPYC 7502P: 45670.73 (SE +/- 468.05, N = 3; Min 45084.17 / Max 46595.80)
EPYC 7532:  44342.61 (SE +/- 489.37, N = 5; Min 42733.56 / Max 45416.75)
EPYC 7402P: 41272.74 (SE +/- 510.25, N = 4; Min 40219.85 / Max 42149.61)
EPYC 7282:  32115.45 (SE +/- 44.96, N = 3; Min 32033.65 / Max 32188.70)
EPYC 7302P: 30313.35 (SE +/- 306.40, N = 6; Min 29631.74 / Max 31648.97)
EPYC 7F52:  28193.29 (SE +/- 92.29, N = 3; Min 28026.73 / Max 28345.44)
EPYC 7272:  25170.22 (SE +/- 98.97, N = 3; Min 25002.83 / Max 25345.42)
EPYC 7F32:  20541.36 (SE +/- 32.87, N = 3; Min 20498.68 / Max 20606.00)
EPYC 7232P: 17709.27 (SE +/- 5.29, N = 3; Min 17702.56 / Max 17719.71)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EPYC 7662:  4.617 (SE +/- 0.063, N = 3; Min 4.51 / Max 4.73)
EPYC 7702:  4.804 (SE +/- 0.034, N = 3; Min 4.76 / Max 4.87)
EPYC 7552:  4.872 (SE +/- 0.069, N = 3; Min 4.76 / Max 5.00)
EPYC 7642:  4.972 (SE +/- 0.031, N = 3; Min 4.91 / Max 5.01)
EPYC 7542:  5.369 (SE +/- 0.034, N = 13; Min 4.97 / Max 5.44)
EPYC 7502P: 5.481 (SE +/- 0.056, N = 3; Min 5.37 / Max 5.55)
EPYC 7532:  5.648 (SE +/- 0.063, N = 5; Min 5.51 / Max 5.86)
EPYC 7402P: 6.067 (SE +/- 0.075, N = 4; Min 5.94 / Max 6.22)
EPYC 7282:  7.791 (SE +/- 0.011, N = 3; Min 7.77 / Max 7.81)
EPYC 7302P: 8.258 (SE +/- 0.082, N = 6; Min 7.91 / Max 8.44)
EPYC 7F52:  8.876 (SE +/- 0.029, N = 3; Min 8.83 / Max 8.93)
EPYC 7272:  9.939 (SE +/- 0.040, N = 3; Min 9.87 / Max 10.01)
EPYC 7F32:  12.176 (SE +/- 0.020, N = 3; Min 12.14 / Max 12.20)
EPYC 7232P: 14.123 (SE +/- 0.004, N = 3; Min 14.12 / Max 14.13)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
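As a sanity check, the reported average latencies line up with Little's law for a closed workload: average latency is approximately the client count divided by throughput. A quick check against the EPYC 7662 figures above (approximate, since pgbench also spends time outside transactions):

```python
def expected_latency_ms(clients, tps):
    # Little's law for a closed loop: latency ~= concurrency / throughput
    return clients / tps * 1000.0

# EPYC 7662: ~54318 TPS at 250 clients, reported average latency 4.617 ms
print(expected_latency_ms(250, 54318))   # ~4.60 ms
```

The same relationship holds across the table, which is expected since the latency graph is derived from the same runs as the TPS graph.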

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threading Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Threads (ms, Fewer Is Better)
EPYC 7532: 13618.50 (SE +/- 29.85, N = 4; Min 13567 / Max 13689)
EPYC 7282: 25426.33 (SE +/- 18.66, N = 3; Min 25402 / Max 25463)
EPYC 7F32: 41572.67 (SE +/- 8.41, N = 3; Min 41561 / Max 41589)
1. (CXX) g++ options: -O3 -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, More Is Better)
EPYC 7662:  457.27 (SE +/- 0.74, N = 3; Min 455.84 / Max 458.30; MIN: 225.33 / MAX: 494.73)
EPYC 7702:  437.31 (SE +/- 0.34, N = 3; Min 436.66 / Max 437.79; MIN: 217.67 / MAX: 473.18)
EPYC 7642:  416.96 (SE +/- 0.17, N = 3; Min 416.77 / Max 417.29; MIN: 251.44 / MAX: 465.00)
EPYC 7552:  406.93 (SE +/- 0.66, N = 3; Min 405.70 / Max 407.97; MIN: 257.69 / MAX: 456.39)
EPYC 7542:  367.19 (SE +/- 0.75, N = 3; Min 365.83 / Max 368.43; MIN: 288.65 / MAX: 424.55)
EPYC 7502P: 361.72 (SE +/- 0.16, N = 3; Min 361.40 / Max 361.91; MIN: 287.95 / MAX: 416.23)
EPYC 7532:  349.99 (SE +/- 0.29, N = 3; Min 349.42 / Max 350.39; MIN: 266.02 / MAX: 399.72)
EPYC 7402P: 328.54 (SE +/- 0.61, N = 3; Min 327.42 / Max 329.53; MIN: 249.05 / MAX: 376.65)
EPYC 7302P: 245.60 (SE +/- 0.23, N = 3; Min 245.22 / Max 246.00; MIN: 210.68 / MAX: 281.84)
EPYC 7F52:  244.17 (SE +/- 0.46, N = 3; Min 243.51 / Max 245.05; MIN: 200.55 / MAX: 270.42)
EPYC 7282:  239.84 (SE +/- 0.22, N = 3; Min 239.57 / Max 240.27; MIN: 211.00 / MAX: 275.93)
EPYC 7272:  203.40 (SE +/- 0.16, N = 3; Min 203.10 / Max 203.66; MIN: 187.66 / MAX: 232.21)
EPYC 7F32:  168.17 (SE +/- 0.27, N = 3; Min 167.67 / Max 168.60; MIN: 157.91 / MAX: 191.26)
EPYC 7232P: 150.43 (SE +/- 0.14, N = 3; Min 150.28 / Max 150.70; MIN: 141.09 / MAX: 170.16)
1. (CC) gcc options: -pthread

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threading Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Tasks (ms, Fewer Is Better)
EPYC 7532: 13798.00 (SE +/- 28.15, N = 4; Min 13745 / Max 13860)
EPYC 7282: 25683.00 (SE +/- 97.58, N = 3; Min 25513 / Max 25851)
EPYC 7F32: 41679.33 (SE +/- 11.72, N = 3; Min 41656 / Max 41693)
1. (CXX) g++ options: -O3 -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in C and optimized in assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
EPYC 7662:  39.15 (SE +/- 0.01, N = 4; Min 39.12 / Max 39.19)
EPYC 7642:  37.39 (SE +/- 0.03, N = 4; Min 37.31 / Max 37.44)
EPYC 7702:  36.07 (SE +/- 0.04, N = 3; Min 36.00 / Max 36.11)
EPYC 7552:  35.81 (SE +/- 0.02, N = 3; Min 35.77 / Max 35.84)
EPYC 7542:  35.06 (SE +/- 0.01, N = 3; Min 35.04 / Max 35.09)
EPYC 7502P: 33.25 (SE +/- 0.01, N = 3; Min 33.24 / Max 33.26)
EPYC 7532:  32.72 (SE +/- 0.01, N = 3; Min 32.69 / Max 32.73)
EPYC 7402P: 26.61 (SE +/- 0.05, N = 3; Min 26.56 / Max 26.70)
EPYC 7F52:  26.31 (SE +/- 0.01, N = 3; Min 26.29 / Max 26.33)
EPYC 7302P: 22.94 (SE +/- 0.01, N = 3; Min 22.92 / Max 22.97)
EPYC 7282:  21.70 (SE +/- 0.03, N = 3; Min 21.66 / Max 21.75)
EPYC 7272:  17.83 (SE +/- 0.01, N = 3; Min 17.82 / Max 17.84)
EPYC 7F32:  15.56 (SE +/- 0.02, N = 3; Min 15.54 / Max 15.61)
EPYC 7232P: 12.98 (SE +/- 0.01, N = 3; Min 12.97 / Max 12.99)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
EPYC 7662:  13.21 (SE +/- 0.00, N = 4; Min 13.20 / Max 13.22)
EPYC 7702:  13.30 (SE +/- 0.01, N = 4; Min 13.28 / Max 13.33)
EPYC 7552:  14.24 (SE +/- 0.01, N = 4; Min 14.23 / Max 14.25)
EPYC 7542:  15.64 (SE +/- 0.00, N = 4; Min 15.63 / Max 15.64)
EPYC 7502P: 16.31 (SE +/- 0.02, N = 4; Min 16.28 / Max 16.38)
EPYC 7532:  16.70 (SE +/- 0.01, N = 3; Min 16.69 / Max 16.71)
EPYC 7402P: 18.17 (SE +/- 0.01, N = 3; Min 18.16 / Max 18.17)
EPYC 7F52:  19.86 (SE +/- 0.00, N = 3; Min 19.86 / Max 19.87)
EPYC 7302P: 23.34 (SE +/- 0.02, N = 3; Min 23.32 / Max 23.38)
EPYC 7282:  24.24 (SE +/- 0.02, N = 3; Min 24.19 / Max 24.27)
EPYC 7272:  29.24 (SE +/- 0.01, N = 3; Min 29.23 / Max 29.27)
EPYC 7F32:  32.63 (SE +/- 0.00, N = 3; Min 32.63 / Max 32.64)
EPYC 7232P: 39.67 (SE +/- 0.00, N = 3; Min 39.66 / Max 39.67)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threading Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: OpenMP (ms, Fewer Is Better)
EPYC 7532: 13925.75 (SE +/- 17.05, N = 4; Min 13904 / Max 13976)
EPYC 7282: 25387.67 (SE +/- 4.48, N = 3; Min 25379 / Max 25394)
EPYC 7F32: 41421.67 (SE +/- 1.20, N = 3; Min 41420 / Max 41424)
1. (CXX) g++ options: -O3 -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3-v3-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
EPYC 7F32:  6.58 (SE +/- 0.04, N = 3; Min 6.53 / Max 6.66; MIN: 6.06 / MAX: 7.03)
EPYC 7282:  7.41 (SE +/- 0.15, N = 3; Min 7.22 / Max 7.71; MIN: 6.82 / MAX: 32.46)
EPYC 7F52:  7.98 (SE +/- 0.02, N = 3; Min 7.94 / Max 8.02; MIN: 7.73 / MAX: 11.22)
EPYC 7502P: 9.39 (SE +/- 0.20, N = 3; Min 9.15 / Max 9.79; MIN: 8.87 / MAX: 13.81)
EPYC 7542:  9.43 (SE +/- 0.33, N = 3; Min 9.07 / Max 10.09; MIN: 8.79 / MAX: 12.92)
EPYC 7532:  10.03 (SE +/- 0.17, N = 3; Min 9.82 / Max 10.38; MIN: 9.45 / MAX: 13.20)
EPYC 7702:  19.11 (SE +/- 0.67, N = 12; Min 14.68 / Max 23.42; MIN: 14.30 / MAX: 154.86)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

miniFE

MiniFE Finite Element is a mini-application representative of unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.
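The "CG Mflops" metric below refers to the conjugate gradient solve at the heart of miniFE. A dense, pure-Python sketch of the CG kernel (miniFE itself assembles a sparse system and runs with OpenMP/MPI; this is illustrative only):

```python
def cg(A, b, iters=50, tol=1e-10):
    # solve A x = b for symmetric positive-definite A (dense lists of lists)
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A*x, with x = 0
    p = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        for i in range(n):
            x[i] += alpha * p[i]
            r[i] -= alpha * Ap[i]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

The matrix-vector product inside the loop dominates the flop count, which is what the Mflops figure effectively measures.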

miniFE 2.2 - Problem Size: Small (CG Mflops, More Is Better)
EPYC 7532:  19645.40 (SE +/- 8.80, N = 3; Min 19632.20 / Max 19662.10)
EPYC 7642:  19436.30 (SE +/- 29.81, N = 3; Min 19380.50 / Max 19482.40)
EPYC 7662:  19256.70 (SE +/- 5.01, N = 3; Min 19250.60 / Max 19266.60)
EPYC 7702:  19155.40 (SE +/- 34.52, N = 3; Min 19088.20 / Max 19202.80)
EPYC 7552:  18424.10 (SE +/- 68.31, N = 3; Min 18336.10 / Max 18558.60)
EPYC 7F32:  17351.20 (SE +/- 8.04, N = 3; Min 17335.90 / Max 17363.20)
EPYC 7302P: 17082.40 (SE +/- 24.99, N = 3; Min 17033.70 / Max 17116.40)
EPYC 7402P: 16743.60 (SE +/- 5.98, N = 3; Min 16735.00 / Max 16755.10)
EPYC 7542:  16649.70 (SE +/- 5.12, N = 3; Min 16643.80 / Max 16659.90)
EPYC 7502P: 16649.50 (SE +/- 22.39, N = 3; Min 16604.70 / Max 16672.40)
EPYC 7232P: 10156.60 (SE +/- 10.40, N = 3; Min 10136.20 / Max 10170.30)
EPYC 7272:  10066.60 (SE +/- 16.20, N = 3; Min 10034.40 / Max 10085.60)
EPYC 7282:  10003.60 (SE +/- 1.40, N = 3; Min 10000.90 / Max 10005.50)
EPYC 7F52:  6787.06 (SE +/- 66.01, N = 14; Min 5976.74 / Max 6965.65)
1. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7542:  1.16402 (SE +/- 0.00302, N = 4; Min 1.16 / Max 1.17; MIN: 1.12)
EPYC 7502P: 1.18204 (SE +/- 0.00119, N = 4; Min 1.18 / Max 1.19; MIN: 1.13)
EPYC 7532:  1.21230 (SE +/- 0.00172, N = 4; Min 1.21 / Max 1.22; MIN: 1.14)
EPYC 7402P: 1.43487 (SE +/- 0.00089, N = 4; Min 1.43 / Max 1.44; MIN: 1.40)
EPYC 7F52:  1.52143 (SE +/- 0.00643, N = 4; Min 1.51 / Max 1.54; MIN: 1.48)
EPYC 7642:  1.66750 (SE +/- 0.00473, N = 4; Min 1.65 / Max 1.67; MIN: 1.40)
EPYC 7552:  1.73007 (SE +/- 0.01048, N = 4; Min 1.72 / Max 1.76; MIN: 1.50)
EPYC 7662:  1.74199 (SE +/- 0.01452, N = 4; Min 1.72 / Max 1.78; MIN: 1.45)
EPYC 7302P: 1.74975 (SE +/- 0.00055, N = 4; Min 1.75 / Max 1.75; MIN: 1.72)
EPYC 7702:  1.80598 (SE +/- 0.01081, N = 4; Min 1.78 / Max 1.83; MIN: 1.54)
EPYC 7282:  1.80647 (SE +/- 0.00098, N = 4; Min 1.80 / Max 1.81; MIN: 1.75)
EPYC 7272:  2.43760 (SE +/- 0.00157, N = 4; Min 2.43 / Max 2.44; MIN: 2.40)
EPYC 7F32:  2.79034 (SE +/- 0.00055, N = 4; Min 2.79 / Max 2.79; MIN: 2.71)
EPYC 7232P: 3.36727 (SE +/- 0.00108, N = 4; Min 3.37 / Max 3.37; MIN: 3.30)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
EPYC 7552:  64.88 (SE +/- 0.43, N = 14; Min 61.70 / Max 67.09)
EPYC 7542:  64.92 (SE +/- 0.43, N = 3; Min 64.15 / Max 65.62)
EPYC 7662:  66.48 (SE +/- 0.74, N = 3; Min 65.27 / Max 67.84)
EPYC 7702:  67.25 (SE +/- 0.72, N = 15; Min 61.80 / Max 71.05)
EPYC 7642:  67.66 (SE +/- 0.60, N = 15; Min 64.70 / Max 71.97)
EPYC 7502P: 70.70 (SE +/- 0.38, N = 3; Min 69.95 / Max 71.18)
EPYC 7402P: 80.50 (SE +/- 1.05, N = 3; Min 79.10 / Max 82.54)
EPYC 7532:  85.56 (SE +/- 0.21, N = 3; Min 85.16 / Max 85.86)
EPYC 7282:  102.53 (SE +/- 1.06, N = 3; Min 100.59 / Max 104.24)
EPYC 7302P: 110.94 (SE +/- 1.37, N = 3; Min 108.85 / Max 113.51)
EPYC 7F52:  127.45 (SE +/- 1.73, N = 3; Min 124.00 / Max 129.28)
EPYC 7272:  131.03 (SE +/- 0.80, N = 3; Min 129.53 / Max 132.24)
EPYC 7F32:  172.35 (SE +/- 0.41, N = 3; Min 171.56 / Max 172.91)
EPYC 7232P: 186.18 (SE +/- 0.97, N = 3; Min 184.70 / Max 188.01)
1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
EPYC 7662:  60949.34 (SE +/- 229.52, N = 3; Min 60636.54 / Max 61396.70)
EPYC 7642:  58888.92 (SE +/- 539.78, N = 3; Min 57853.09 / Max 59670.20)
EPYC 7552:  56944.60 (SE +/- 709.54, N = 3; Min 55594.88 / Max 57999.00)
EPYC 7702:  55699.33 (SE +/- 486.09, N = 3; Min 54882.69 / Max 56564.47)
EPYC 7542:  52325.08 (SE +/- 676.58, N = 3; Min 51433.97 / Max 53652.51)
EPYC 7502P: 51793.61 (SE +/- 406.89, N = 15; Min 49652.94 / Max 54520.15)
EPYC 7402P: 47661.51 (SE +/- 387.48, N = 3; Min 47265.69 / Max 48436.42)
EPYC 7532:  47614.55 (SE +/- 174.06, N = 3; Min 47424.20 / Max 47962.14)
EPYC 7282:  38586.13 (SE +/- 309.38, N = 15; Min 36728.54 / Max 40215.78)
EPYC 7302P: 38457.92 (SE +/- 447.90, N = 12; Min 36956.38 / Max 41415.74)
EPYC 7F52:  37152.01 (SE +/- 337.73, N = 15; Min 35001.56 / Max 39986.56)
EPYC 7272:  31895.88 (SE +/- 91.21, N = 3; Min 31714.82 / Max 32005.63)
EPYC 7F32:  24137.33 (SE +/- 63.04, N = 3; Min 24025.48 / Max 24243.66)
EPYC 7232P: 21333.63 (SE +/- 18.98, N = 3; Min 21301.63 / Max 21367.31)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7662:  23.98 (SE +/- 0.02, N = 3; Min 23.93 / Max 24.01)
EPYC 7702:  24.03 (SE +/- 0.06, N = 3; Min 23.95 / Max 24.14)
EPYC 7642:  25.21 (SE +/- 0.02, N = 3; Min 25.18 / Max 25.25)
EPYC 7552:  25.50 (SE +/- 0.03, N = 3; Min 25.47 / Max 25.56)
EPYC 7542:  27.86 (SE +/- 0.02, N = 3; Min 27.84 / Max 27.89)
EPYC 7502P: 29.12 (SE +/- 0.03, N = 3; Min 29.07 / Max 29.19)
EPYC 7532:  29.23 (SE +/- 0.07, N = 3; Min 29.13 / Max 29.37)
EPYC 7402P: 32.07 (SE +/- 0.03, N = 3; Min 32.01 / Max 32.10)
EPYC 7F52:  33.98 (SE +/- 0.04, N = 3; Min 33.92 / Max 34.07)
EPYC 7302P: 39.74 (SE +/- 0.06, N = 3; Min 39.64 / Max 39.84)
EPYC 7282:  42.86 (SE +/- 0.06, N = 3; Min 42.78 / Max 42.98)
EPYC 7272:  49.92 (SE +/- 0.10, N = 3; Min 49.75 / Max 50.11)
EPYC 7F32:  54.15 (SE +/- 0.01, N = 3; Min 54.13 / Max 54.16)
EPYC 7232P: 68.50 (SE +/- 0.06, N = 3; Min 68.42 / Max 68.63)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EPYC 7662:  1.643 (SE +/- 0.006, N = 3; Min 1.63 / Max 1.65)
EPYC 7642:  1.701 (SE +/- 0.016, N = 3; Min 1.68 / Max 1.73)
EPYC 7552:  1.759 (SE +/- 0.022, N = 3; Min 1.73 / Max 1.80)
EPYC 7702:  1.798 (SE +/- 0.016, N = 3; Min 1.77 / Max 1.82)
EPYC 7542:  1.915 (SE +/- 0.024, N = 3; Min 1.87 / Max 1.95)
EPYC 7502P: 1.935 (SE +/- 0.015, N = 15; Min 1.84 / Max 2.02)
EPYC 7402P: 2.101 (SE +/- 0.017, N = 3; Min 2.07 / Max 2.12)
EPYC 7532:  2.103 (SE +/- 0.008, N = 3; Min 2.09 / Max 2.11)
EPYC 7282:  2.596 (SE +/- 0.021, N = 15; Min 2.49 / Max 2.73)
EPYC 7302P: 2.606 (SE +/- 0.029, N = 12; Min 2.42 / Max 2.71)
EPYC 7F52:  2.697 (SE +/- 0.024, N = 15; Min 2.50 / Max 2.86)
EPYC 7272:  3.137 (SE +/- 0.009, N = 3; Min 3.13 / Max 3.16)
EPYC 7F32:  4.145 (SE +/- 0.011, N = 3; Min 4.13 / Max 4.16)
EPYC 7232P: 4.690 (SE +/- 0.004, N = 3; Min 4.68 / Max 4.70)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3-v3-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
EPYC 7F32:  7.18 (SE +/- 0.02, N = 3; Min 7.16 / Max 7.21; MIN: 6.95 / MAX: 7.82)
EPYC 7282:  8.02 (SE +/- 0.17, N = 3; Min 7.78 / Max 8.34; MIN: 7.51 / MAX: 76.17)
EPYC 7F52:  8.97 (SE +/- 0.13, N = 3; Min 8.78 / Max 9.22; MIN: 8.51 / MAX: 12.91)
EPYC 7542:  10.06 (SE +/- 0.18, N = 3; Min 9.84 / Max 10.42; MIN: 9.44 / MAX: 14.10)
EPYC 7502P: 10.39 (SE +/- 0.25, N = 3; Min 10.00 / Max 10.85; MIN: 9.62 / MAX: 14.89)
EPYC 7532:  11.03 (SE +/- 0.39, N = 3; Min 10.61 / Max 11.81; MIN: 10.18 / MAX: 15.94)
EPYC 7702:  20.46 (SE +/- 0.88, N = 12; Min 16.20 / Max 27.09; MIN: 15.14 / MAX: 154.59)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, more is better) - avg (SE, N; run min - max):
  EPYC 7552:  933.04 (SE +/- 6.72, N = 12; 896.62 - 961.45)
  EPYC 7642:  889.39 (SE +/- 12.71, N = 3; 864.60 - 906.70)
  EPYC 7662:  873.42 (SE +/- 3.57, N = 3; 866.70 - 878.89)
  EPYC 7542:  858.47 (SE +/- 1.67, N = 3; 855.29 - 860.96)
  EPYC 7502P: 857.99 (SE +/- 2.79, N = 3; 853.39 - 863.04)
  EPYC 7532:  849.27 (SE +/- 3.72, N = 3; 843.53 - 856.24)
  EPYC 7702:  792.50 (SE +/- 4.91, N = 3; 782.72 - 798.09)
  EPYC 7402P: 721.30 (SE +/- 2.84, N = 3; 716.79 - 726.53)
  EPYC 7F52:  589.10 (SE +/- 0.21, N = 3; 588.69 - 589.38)
  EPYC 7282:  563.20 (SE +/- 1.34, N = 3; 561.80 - 565.87)
  EPYC 7302P: 551.97 (SE +/- 1.71, N = 3; 548.55 - 553.80)
  EPYC 7272:  457.13 (SE +/- 0.85, N = 3; 455.79 - 458.72)
  EPYC 7F32:  359.54 (SE +/- 0.88, N = 3; 357.91 - 360.93)
  EPYC 7232P: 327.45 (SE +/- 0.73, N = 3; 326.51 - 328.88)
1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -msse -mrecip -mfpmath=sse -msse2 -mssse3 -lSDL -fopenmp -fwhole-program -lstdc++

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 1080p (FPS, more is better) - avg (SE, N; run min - max; sample min - max):
  EPYC 7662:  1193.66 (SE +/- 3.75, N = 3; runs 1186.16 - 1197.47; samples 500.62 - 1329.43)
  EPYC 7642:  1070.09 (SE +/- 1.10, N = 3; runs 1068.71 - 1072.27; samples 574.58 - 1186.80)
  EPYC 7702:  1050.99 (SE +/- 1.60, N = 3; runs 1048.15 - 1053.70; samples 478.51 - 1166.67)
  EPYC 7552:  1044.25 (SE +/- 3.10, N = 3; runs 1038.12 - 1048.12; samples 554.63 - 1157.48)
  EPYC 7542:  937.44 (SE +/- 0.47, N = 3; runs 936.52 - 938.02; samples 659.51 - 1052.41)
  EPYC 7502P: 932.75 (SE +/- 1.87, N = 3; runs 929.58 - 936.06; samples 659.65 - 1047.61)
  EPYC 7532:  880.57 (SE +/- 1.15, N = 3; runs 879.23 - 882.86; samples 567.79 - 985.57)
  EPYC 7402P: 847.40 (SE +/- 2.68, N = 3; runs 842.07 - 850.64; samples 566.75 - 939.62)
  EPYC 7F52:  651.36 (SE +/- 1.44, N = 3; runs 648.52 - 653.25; samples 480.87 - 706.63)
  EPYC 7282:  644.23 (SE +/- 0.46, N = 3; runs 643.31 - 644.70; samples 503.48 - 704.10)
  EPYC 7302P: 634.51 (SE +/- 0.73, N = 3; runs 633.06 - 635.27; samples 482.67 - 693.38)
  EPYC 7272:  555.11 (SE +/- 0.90, N = 3; runs 553.68 - 556.76; samples 463.40 - 605.59)
  EPYC 7F32:  453.36 (SE +/- 0.28, N = 3; runs 452.94 - 453.90; samples 400.76 - 488.59)
  EPYC 7232P: 419.19 (SE +/- 0.82, N = 3; runs 417.79 - 420.62; samples 370.70 - 456.84)
1. (CC) gcc options: -pthread

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options for measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better) - avg (SE, N; run min - max):
  EPYC 7662:  25.65 (SE +/- 0.21, N = 3; 25.41 - 26.06)
  EPYC 7552:  25.36 (SE +/- 0.06, N = 3; 25.30 - 25.48)
  EPYC 7642:  25.23 (SE +/- 0.08, N = 3; 25.14 - 25.39)
  EPYC 7542:  25.21 (SE +/- 0.07, N = 3; 25.09 - 25.34)
  EPYC 7502P: 25.13 (SE +/- 0.03, N = 3; 25.08 - 25.16)
  EPYC 7702:  25.04 (SE +/- 0.08, N = 3; 24.94 - 25.19)
  EPYC 7402P: 23.84 (SE +/- 0.08, N = 3; 23.72 - 23.98)
  EPYC 7532:  23.71 (SE +/- 0.05, N = 3; 23.64 - 23.81)
  EPYC 7302P: 20.44 (SE +/- 0.09, N = 3; 20.34 - 20.61)
  EPYC 7282:  20.03 (SE +/- 0.04, N = 3; 19.97 - 20.10)
  EPYC 7F52:  20.00 (SE +/- 0.12, N = 3; 19.78 - 20.20)
  EPYC 7272:  17.09 (SE +/- 0.07, N = 3; 16.96 - 17.21)
  EPYC 7F32:  10.20 (SE +/- 0.13, N = 4; 9.85 - 10.45)
  EPYC 7232P: 9.20 (SE +/- 0.09, N = 3; 9.07 - 9.37)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine; it is built here using the SCons build system and targets the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better) - avg (SE, N; run min - max):
  EPYC 7642:  61.81 (SE +/- 0.07, N = 3; 61.69 - 61.90)
  EPYC 7662:  62.27 (SE +/- 0.17, N = 3; 61.99 - 62.58)
  EPYC 7552:  62.48 (SE +/- 0.17, N = 3; 62.24 - 62.81)
  EPYC 7702:  62.56 (SE +/- 0.31, N = 3; 62.22 - 63.19)
  EPYC 7542:  66.57 (SE +/- 0.10, N = 3; 66.45 - 66.76)
  EPYC 7502P: 69.42 (SE +/- 0.17, N = 3; 69.16 - 69.74)
  EPYC 7532:  69.62 (SE +/- 0.10, N = 3; 69.45 - 69.79)
  EPYC 7402P: 76.16 (SE +/- 0.12, N = 3; 75.93 - 76.36)
  EPYC 7F52:  84.21 (SE +/- 0.32, N = 3; 83.58 - 84.54)
  EPYC 7302P: 96.77 (SE +/- 0.07, N = 3; 96.64 - 96.88)
  EPYC 7282:  103.74 (SE +/- 0.26, N = 3; 103.26 - 104.13)
  EPYC 7272:  122.41 (SE +/- 0.14, N = 3; 122.19 - 122.67)
  EPYC 7F32:  136.62 (SE +/- 0.21, N = 3; 136.37 - 137.03)
  EPYC 7232P: 172.13 (SE +/- 0.30, N = 3; 171.71 - 172.70)
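For a fewer-is-better result like compile time, the relative speedup between two parts is just the ratio of their times. A quick sketch using the fastest and slowest results above:

```python
# Compile times (seconds) taken from the Godot results above
epyc_7642_s = 61.81    # fastest part in this test
epyc_7232p_s = 172.13  # slowest part in this test

# Ratio of times = how many times faster the 7642 finishes
speedup = epyc_7232p_s / epyc_7642_s
print(f"EPYC 7642 compiles Godot ~{speedup:.2f}x faster than the 7232P")
```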

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, more is better) - avg (SE, N; run min - max):
  EPYC 7662:  6.966 (SE +/- 0.060, N = 3; 6.88 - 7.08)
  EPYC 7702:  6.752 (SE +/- 0.003, N = 3; 6.75 - 6.76)
  EPYC 7642:  6.678 (SE +/- 0.039, N = 4; 6.57 - 6.75)
  EPYC 7552:  6.494 (SE +/- 0.012, N = 3; 6.47 - 6.51)
  EPYC 7542:  6.091 (SE +/- 0.032, N = 4; 6.04 - 6.18)
  EPYC 7532:  5.958 (SE +/- 0.048, N = 4; 5.88 - 6.09)
  EPYC 7502P: 5.866 (SE +/- 0.032, N = 3; 5.80 - 5.91)
  EPYC 7402P: 5.563 (SE +/- 0.043, N = 3; 5.48 - 5.61)
  EPYC 7F52:  5.333 (SE +/- 0.035, N = 4; 5.23 - 5.39)
  EPYC 7302P: 4.506 (SE +/- 0.017, N = 3; 4.47 - 4.53)
  EPYC 7282:  4.225 (SE +/- 0.006, N = 3; 4.22 - 4.24)
  EPYC 7272:  3.593 (SE +/- 0.008, N = 3; 3.58 - 3.61)
  EPYC 7F32:  3.184 (SE +/- 0.023, N = 3; 3.14 - 3.21)
  EPYC 7232P: 2.536 (SE +/- 0.008, N = 3; 2.52 - 2.55)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, more is better) - avg (SE, N; run min - max):
  EPYC 7662:  210.69 (SE +/- 1.83, N = 15; 187.36 - 218.31)
  EPYC 7642:  203.74 (SE +/- 1.24, N = 15; 187.17 - 207.43)
  EPYC 7702:  198.58 (SE +/- 1.78, N = 15; 179.11 - 206.01)
  EPYC 7552:  198.00 (SE +/- 1.18, N = 15; 182.68 - 201.65)
  EPYC 7542:  188.45 (SE +/- 0.83, N = 9; 183.08 - 190.79)
  EPYC 7502P: 182.42 (SE +/- 1.00, N = 9; 175.43 - 186.12)
  EPYC 7532:  181.15 (SE +/- 0.75, N = 9; 175.43 - 183.15)
  EPYC 7402P: 174.63 (SE +/- 0.75, N = 9; 169.05 - 176.72)
  EPYC 7F52:  173.50 (SE +/- 0.76, N = 9; 167.66 - 175.10)
  EPYC 7302P: 150.45 (SE +/- 0.59, N = 8; 146.52 - 151.90)
  EPYC 7282:  142.37 (SE +/- 0.71, N = 8; 137.77 - 143.62)
  EPYC 7272:  117.14 (SE +/- 0.60, N = 7; 113.91 - 119.03)
  EPYC 7F32:  95.50 (SE +/- 0.33, N = 7; 94.18 - 96.44)
  EPYC 7232P: 78.21 (SE +/- 0.37, N = 6; 76.63 - 79.16)
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize
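The result viewer can also derive perf-per-core graphs from results like this one. A sketch of that calculation for the two ends of the x264 table; note the core counts here are assumed from AMD's published EPYC specifications, not taken from the result file itself:

```python
# FPS values from the x264 results above; core counts are an
# assumption based on AMD's EPYC 7002-series spec sheets.
cores = {"EPYC 7662": 64, "EPYC 7232P": 8}
fps = {"EPYC 7662": 210.69, "EPYC 7232P": 78.21}

# FPS divided by core count approximates the "Perf Per Core" view
per_core = {cpu: fps[cpu] / cores[cpu] for cpu in fps}
for cpu, value in per_core.items():
    print(f"{cpu}: {value:.2f} FPS/core")
```

The low-core-count parts look much stronger per core, which is the usual trade-off against total throughput.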

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. Its current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, fewer is better) - avg (SE, N; run min - max):
  EPYC 7542:  79313.6 (SE +/- 115.93, N = 3; 79189.4 - 79545.3)
  EPYC 7532:  84263.5 (SE +/- 115.78, N = 3; 84034.6 - 84408.4)
  EPYC 7502P: 85462.8 (SE +/- 797.17, N = 15; 82675.3 - 91289.3)
  EPYC 7642:  86140.6 (SE +/- 34.51, N = 3; 86071.8 - 86180.1)
  EPYC 7662:  88093.7 (SE +/- 833.99, N = 6; 85520.1 - 91636.7)
  EPYC 7552:  92153.2 (SE +/- 275.99, N = 3; 91707.3 - 92657.9)
  EPYC 7702:  95873.7 (SE +/- 718.53, N = 15; 91993.6 - 100262)
  EPYC 7402P: 105046.0 (SE +/- 414.30, N = 3; 104457 - 105845)
  EPYC 7F52:  126743.0 (SE +/- 244.85, N = 3; 126253 - 126993)
  EPYC 7302P: 144445.0 (SE +/- 652.17, N = 3; 143270 - 145523)
  EPYC 7282:  151235.0 (SE +/- 183.23, N = 3; 150900 - 151531)
  EPYC 7272:  177540.0 (SE +/- 219.52, N = 3; 177299 - 177978)
  EPYC 7F32:  177928.0 (SE +/- 54.60, N = 3; 177860 - 178036)
  EPYC 7232P: 212253.0 (SE +/- 42.10, N = 3; 212188 - 212332)
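Since this test reports average inference time in microseconds, throughput is simply the reciprocal. A small sketch converting the fastest result above into inferences per second:

```python
def inferences_per_second(avg_us):
    # Average inference time in microseconds -> inferences/second
    return 1_000_000 / avg_us

# 79313.6 us is the EPYC 7542 NASNet Mobile result above
ips = inferences_per_second(79313.6)
print(f"~{ips:.2f} inferences/second")
```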

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2-v2 - Model: blazeface (ms, fewer is better) - avg (SE, N; run min - max; sample min - max):
  EPYC 7F32:  3.30 (SE +/- 0.01, N = 3; runs 3.29 - 3.31; samples 3.23 - 3.45)
  EPYC 7F52:  3.71 (SE +/- 0.03, N = 3; runs 3.65 - 3.74; samples 3.57 - 4.18)
  EPYC 7282:  3.79 (SE +/- 0.02, N = 3; runs 3.77 - 3.82; samples 3.64 - 17.29)
  EPYC 7502P: 4.59 (SE +/- 0.06, N = 3; runs 4.48 - 4.67; samples 4.36 - 9.16)
  EPYC 7542:  4.76 (SE +/- 0.23, N = 3; runs 4.50 - 5.22; samples 4.40 - 5.79)
  EPYC 7532:  4.91 (SE +/- 0.15, N = 3; runs 4.72 - 5.21; samples 4.58 - 84.54)
  EPYC 7702:  8.57 (SE +/- 0.25, N = 12; runs 7.62 - 10.25; samples 7.33 - 16.60)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, more is better) - avg (SE, N; run min - max):
  EPYC 7662:  125512.61 (SE +/- 193.68, N = 3; 125128.68 - 125749.05)
  EPYC 7702:  123271.00 (SE +/- 254.66, N = 3; 122859.63 - 123736.74)
  EPYC 7302P: 65066.60 (SE +/- 63.83, N = 3; 64941.07 - 65149.47)
  EPYC 7F52:  53090.10 (SE +/- 134.14, N = 3; 52922.84 - 53355.38)
  EPYC 7282:  48665.96 (SE +/- 1.88, N = 3; 48662.33 - 48668.63)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better) - avg (SE, N; run min - max):
  EPYC 7642:  59.59 (SE +/- 0.16, N = 5; 59.13 - 59.95)
  EPYC 7662:  58.64 (SE +/- 0.44, N = 5; 57.87 - 60.27)
  EPYC 7542:  56.98 (SE +/- 0.05, N = 5; 56.85 - 57.12)
  EPYC 7552:  55.99 (SE +/- 0.48, N = 5; 55.17 - 57.85)
  EPYC 7502P: 54.92 (SE +/- 0.05, N = 5; 54.78 - 55.03)
  EPYC 7532:  54.07 (SE +/- 0.03, N = 5; 54.00 - 54.14)
  EPYC 7702:  53.52 (SE +/- 0.54, N = 5; 52.03 - 54.90)
  EPYC 7402P: 48.33 (SE +/- 0.05, N = 4; 48.24 - 48.45)
  EPYC 7F52:  47.17 (SE +/- 0.02, N = 4; 47.11 - 47.21)
  EPYC 7302P: 41.52 (SE +/- 0.03, N = 4; 41.47 - 41.57)
  EPYC 7282:  39.65 (SE +/- 0.03, N = 4; 39.58 - 39.69)
  EPYC 7272:  31.39 (SE +/- 0.05, N = 3; 31.34 - 31.49)
  EPYC 7F32:  27.16 (SE +/- 0.04, N = 3; 27.11 - 27.23)
  EPYC 7232P: 23.11 (SE +/- 0.02, N = 3; 23.08 - 23.14)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Sysbench

This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.

Sysbench 2018-07-28 - Test: Memory (Events Per Second, more is better) - avg (SE, N; run min - max):
  EPYC 7282:  7880986.59 (SE +/- 11575.58, N = 5; 7856776.88 - 7920212.37)
  EPYC 7272:  7232205.30 (SE +/- 1918.40, N = 5; 7225906.31 - 7236184.18)
  EPYC 7542:  6618850.24 (SE +/- 8561.16, N = 5; 6607713.80 - 6652865.09)
  EPYC 7502P: 6612746.50 (SE +/- 5458.69, N = 5; 6604337.31 - 6634214.25)
  EPYC 7552:  6380400.35 (SE +/- 14871.56, N = 5; 6349678.61 - 6428305.03)
  EPYC 7662:  6374595.12 (SE +/- 35042.66, N = 5; 6319346.34 - 6505001.22)
  EPYC 7702:  6302136.65 (SE +/- 11992.02, N = 5; 6269344.44 - 6328806.52)
  EPYC 7232P: 5942474.37 (SE +/- 12382.78, N = 5; 5895950.69 - 5962905.57)
  EPYC 7402P: 5614019.49 (SE +/- 6709.34, N = 5; 5602311.72 - 5638809.43)
  EPYC 7532:  4720934.26 (SE +/- 61191.25, N = 15; 4407088.34 - 5255661.79)
  EPYC 7F32:  4656182.26 (SE +/- 717.93, N = 5; 4654731.01 - 4658759.57)
  EPYC 7302P: 4502591.45 (SE +/- 780.60, N = 5; 4500022.71 - 4504708.71)
  EPYC 7F52:  3087955.63 (SE +/- 17904.29, N = 5; 3067897.69 - 3159400.31)
1. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, more is better) - avg (SE, N; run min - max):
  EPYC 7662:  55668.63 (SE +/- 50.60, N = 5; 55491.76 - 55799.49)
  EPYC 7702:  55136.50 (SE +/- 40.27, N = 5; 55002.69 - 55211.78)
  EPYC 7532:  49235.92 (SE +/- 17.78, N = 5; 49168.13 - 49272.59)
  EPYC 7542:  47098.22 (SE +/- 17.91, N = 5; 47048.85 - 47153.11)
  EPYC 7502P: 46206.33 (SE +/- 36.50, N = 5; 46062.44 - 46260.89)
  EPYC 7302P: 37175.20 (SE +/- 19.15, N = 4; 37140.10 - 37216.01)
  EPYC 7282:  29526.97 (SE +/- 23.10, N = 4; 29462.38 - 29571.00)
  EPYC 7F32:  27284.69 (SE +/- 10.84, N = 4; 27272.21 - 27317.11)
  EPYC 7F52:  22117.17 (SE +/- 44.41, N = 3; 22040.65 - 22194.48)
  EPYC 7232P: 21822.06 (SE +/- 23.87, N = 3; 21777.28 - 21858.76)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3-v3 - Model: blazeface (ms, fewer is better) - avg (SE, N; run min - max; sample min - max):
  EPYC 7F32:  3.28 (SE +/- 0.01, N = 3; runs 3.27 - 3.31; samples 3.19 - 3.52)
  EPYC 7F52:  3.68 (SE +/- 0.01, N = 3; runs 3.67 - 3.69; samples 3.61 - 3.81)
  EPYC 7282:  3.86 (SE +/- 0.03, N = 3; runs 3.83 - 3.92; samples 3.62 - 18.81)
  EPYC 7542:  4.60 (SE +/- 0.03, N = 3; runs 4.56 - 4.66; samples 4.47 - 4.78)
  EPYC 7502P: 4.61 (SE +/- 0.03, N = 3; runs 4.55 - 4.64; samples 4.42 - 6.48)
  EPYC 7532:  4.83 (SE +/- 0.05, N = 3; runs 4.73 - 4.91; samples 4.58 - 5.04)
  EPYC 7702:  8.36 (SE +/- 0.25, N = 12; runs 7.40 - 10.15; samples 7.23 - 12.64)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with more modern, real-world workloads than HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better) - avg (SE, N; run min - max):
  EPYC 7532:  17.96770 (SE +/- 0.00249, N = 3; 17.96 - 17.97)
  EPYC 7642:  17.68500 (SE +/- 0.00479, N = 3; 17.68 - 17.69)
  EPYC 7662:  17.38630 (SE +/- 0.00378, N = 3; 17.38 - 17.39)
  EPYC 7702:  17.30750 (SE +/- 0.03096, N = 3; 17.25 - 17.34)
  EPYC 7552:  16.64940 (SE +/- 0.00667, N = 3; 16.64 - 16.66)
  EPYC 7302P: 15.63290 (SE +/- 0.07483, N = 3; 15.51 - 15.77)
  EPYC 7402P: 15.56290 (SE +/- 0.01008, N = 3; 15.55 - 15.58)
  EPYC 7502P: 15.29200 (SE +/- 0.00324, N = 3; 15.29 - 15.30)
  EPYC 7542:  15.27940 (SE +/- 0.00275, N = 3; 15.28 - 15.28)
  EPYC 7F32:  12.95600 (SE +/- 0.06375, N = 3; 12.86 - 13.08)
  EPYC 7272:  9.09743 (SE +/- 0.00448, N = 3; 9.09 - 9.10)
  EPYC 7282:  9.05154 (SE +/- 0.00263, N = 3; 9.05 - 9.05)
  EPYC 7232P: 8.64661 (SE +/- 0.00542, N = 3; 8.64 - 8.66)
  EPYC 7F52:  7.08925 (SE +/- 0.05299, N = 11; 6.64 - 7.27)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
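The Statistics options for this result file include an overall geometric mean across tests. A minimal sketch of that calculation, here applied to a few of the GFLOP/s values above:

```python
import math

def geometric_mean(values):
    # Geometric mean via log-space averaging (numerically stable
    # for long lists of results with very different magnitudes)
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Three HPCG results from the table above (EPYC 7532, 7302P, 7F52)
g = geometric_mean([17.9677, 15.6329, 7.08925])
print(f"geometric mean: {g:.2f} GFLOP/s")
```

The geometric mean is what OpenBenchmarking.org uses for cross-test summaries because it weighs relative, not absolute, differences.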

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 60M (Seconds, fewer is better) - avg (SE, N; run min - max):
  EPYC 7662:  232.98 (SE +/- 0.24, N = 3; 232.55 - 233.38)
  EPYC 7702:  233.34 (SE +/- 0.04, N = 3; 233.29 - 233.42)
  EPYC 7642:  233.73 (SE +/- 0.31, N = 3; 233.28 - 234.33)
  EPYC 7532:  236.99 (SE +/- 0.57, N = 3; 235.98 - 237.94)
  EPYC 7552:  260.78 (SE +/- 0.13, N = 3; 260.58 - 261.02)
  EPYC 7302P: 313.64 (SE +/- 0.15, N = 3; 313.39 - 313.91)
  EPYC 7402P: 314.06 (SE +/- 0.21, N = 3; 313.84 - 314.48)
  EPYC 7542:  319.87 (SE +/- 0.59, N = 3; 319.10 - 321.04)
  EPYC 7502P: 320.20 (SE +/- 0.44, N = 3; 319.73 - 321.08)
  EPYC 7F32:  362.74 (SE +/- 0.36, N = 3; 362.09 - 363.35)
  EPYC 7F52:  454.84 (SE +/- 1.71, N = 3; 452.49 - 458.16)
  EPYC 7282:  520.45 (SE +/- 0.84, N = 3; 518.85 - 521.68)
  EPYC 7272:  530.10 (SE +/- 1.13, N = 3; 528.05 - 531.94)
  EPYC 7232P: 573.04 (SE +/- 0.29, N = 3; 572.49 - 573.47)
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p (FPS, more is better) - avg (SE, N; run min - max; sample min - max):
  EPYC 7662:  1158.08 (SE +/- 2.82, N = 3; runs 1153.84 - 1163.42; samples 644.06 - 1489.95)
  EPYC 7642:  1116.83 (SE +/- 0.45, N = 3; runs 1116.14 - 1117.68; samples 672.49 - 1434.66)
  EPYC 7552:  1095.40 (SE +/- 2.26, N = 3; runs 1091.59 - 1099.40; samples 671.87 - 1406.03)
  EPYC 7702:  983.61 (SE +/- 3.47, N = 3; runs 977.12 - 988.98; samples 616.84 - 1260.07)
  EPYC 7542:  939.90 (SE +/- 0.52, N = 3; runs 938.86 - 940.47; samples 689.36 - 1207.51)
  EPYC 7502P: 937.73 (SE +/- 0.95, N = 3; runs 935.96 - 939.19; samples 687.68 - 1200.87)
  EPYC 7532:  892.42 (SE +/- 2.20, N = 3; runs 888.17 - 895.56; samples 641.23 - 1144.96)
  EPYC 7402P: 839.06 (SE +/- 1.22, N = 3; runs 837.50 - 841.47; samples 646.27 - 1067.76)
  EPYC 7282:  712.26 (SE +/- 0.66, N = 3; runs 711.00 - 713.24; samples 546.18 - 898.36)
  EPYC 7302P: 698.08 (SE +/- 0.72, N = 3; runs 697.03 - 699.47; samples 539.13 - 873.36)
  EPYC 7F52:  671.07 (SE +/- 1.86, N = 3; runs 667.49 - 673.75; samples 524.48 - 829.64)
  EPYC 7272:  624.79 (SE +/- 1.45, N = 3; runs 622.12 - 627.09; samples 483.27 - 784.91)
  EPYC 7F32:  496.11 (SE +/- 0.38, N = 3; runs 495.72 - 496.87; samples 390.79 - 704.31)
  EPYC 7232P: 473.82 (SE +/- 0.34, N = 3; runs 473.15 - 474.25; samples 365.77 - 674.59)
1. (CC) gcc options: -pthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better) - avg (SE, N; run min - max):
  EPYC 7662:  52245.23 (SE +/- 122.43, N = 8; 51776.39 - 52860.72)
  EPYC 7532:  52022.76 (SE +/- 218.93, N = 8; 50688.93 - 52586.47)
  EPYC 7702:  51795.61 (SE +/- 224.56, N = 8; 50825.97 - 52696.05)
  EPYC 7302P: 47349.29 (SE +/- 17.34, N = 8; 47278.30 - 47425.88)
  EPYC 7F32:  45336.89 (SE +/- 20.16, N = 8; 45233.69 - 45424.87)
  EPYC 7542:  44205.80 (SE +/- 86.13, N = 8; 43844.26 - 44502.94)
  EPYC 7502P: 44082.95 (SE +/- 76.06, N = 8; 43772.91 - 44328.66)
  EPYC 7232P: 30311.94 (SE +/- 37.32, N = 7; 30136.45 - 30404.24)
  EPYC 7282:  29776.81 (SE +/- 23.57, N = 6; 29684.13 - 29837.83)
  EPYC 7F52:  21714.65 (SE +/- 80.46, N = 5; 21536.77 - 22018.31)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as you need. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, fewer is better) - avg (SE, N; run min - max):
  EPYC 7542:  187.60 (SE +/- 0.39, N = 3; 186.85 - 188.17)
  EPYC 7502P: 189.66 (SE +/- 0.52, N = 3; 188.81 - 190.61)
  EPYC 7532:  193.92 (SE +/- 0.57, N = 3; 192.99 - 194.97)
  EPYC 7642:  195.34 (SE +/- 0.73, N = 3; 194.23 - 196.73)
  EPYC 7402P: 200.21 (SE +/- 0.20, N = 3; 199.84 - 200.55)
  EPYC 7552:  206.58 (SE +/- 1.22, N = 3; 204.79 - 208.92)
  EPYC 7662:  206.95 (SE +/- 0.75, N = 3; 206.04 - 208.44)
  EPYC 7702:  224.62 (SE +/- 0.50, N = 3; 224.11 - 225.62)
  EPYC 7F52:  230.94 (SE +/- 0.68, N = 3; 229.71 - 232.08)
  EPYC 7302P: 262.63 (SE +/- 0.25, N = 3; 262.32 - 263.13)
  EPYC 7282:  270.72 (SE +/- 0.47, N = 3; 270.13 - 271.65)
  EPYC 7272:  325.84 (SE +/- 0.37, N = 3; 325.11 - 326.32)
  EPYC 7F32:  375.70 (SE +/- 0.12, N = 3; 375.47 - 375.88)
  EPYC 7232P: 450.85 (SE +/- 1.04, N = 3; 449.53 - 452.91)
1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3-v3 - Model: efficientnet-b0 (ms, fewer is better) - avg (SE, N; run min - max; sample min - max):
  EPYC 7F32:  10.50 (SE +/- 0.03, N = 3; runs 10.47 - 10.56; samples 10.31 - 10.70)
  EPYC 7282:  10.65 (SE +/- 0.20, N = 3; runs 10.28 - 10.95; samples 9.93 - 24.56)
  EPYC 7F52:  11.44 (SE +/- 0.02, N = 3; runs 11.41 - 11.48; samples 11.21 - 12.05)
  EPYC 7542:  13.02 (SE +/- 0.22, N = 3; runs 12.75 - 13.45; samples 12.47 - 25.94)
  EPYC 7502P: 13.23 (SE +/- 0.16, N = 3; runs 12.97 - 13.51; samples 12.57 - 15.52)
  EPYC 7532:  14.00 (SE +/- 0.13, N = 3; runs 13.75 - 14.15; samples 13.26 - 16.53)
  EPYC 7702:  24.96 (SE +/- 0.60, N = 12; runs 22.16 - 29.66; samples 19.33 - 169.85)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, more is better)

  CPU          Avg        SE +/-  N   Run Min    Run Max
  EPYC 7532    18019.01   25.61   5   17953.88   18084.24
  EPYC 7302P   15688.57   15.81   5   15654.67   15744.56
  EPYC 7662    15230.23   27.38   5   15146.06   15316.01
  EPYC 7702    15079.08   15.75   5   15027.23   15122.17
  EPYC 7542    14977.29   30.70   5   14873.66   15062.42
  EPYC 7502P   14759.40   69.28   5   14524.99   14889.39
  EPYC 7F32    12350.87   18.04   4   12309.98   12395.51
  EPYC 7232P   9844.95    22.89   4   9819.42    9913.54
  EPYC 7282    9697.83    18.07   4   9644.91    9725.86
  EPYC 7F52    7779.49    43.06   3   7733.94    7865.57

  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3
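NPB's CG kernel exercises the conjugate gradient method on a large sparse matrix, stressing irregular memory access and communication. The core iteration it measures looks roughly like the following single-threaded sketch (a dense toy matrix for brevity, not NPB's actual random sparse generator):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # the expensive step NPB times: sparse matrix-vector product
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # squared residual norm below tolerance
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # small SPD example system
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

In the MPI version that this test profile runs, the matrix-vector product and the dot products are partitioned across ranks, which is why CG.C rewards both core count and memory bandwidth.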

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better)

  CPU          Avg     SE +/-  N    Run Min  Run Max
  EPYC 7F32    16.32   0.10    3    16.12    16.47
  EPYC 7302P   16.83   0.01    3    16.81    16.84
  EPYC 7272    17.58   0.28    3    17.28    18.13
  EPYC 7402P   17.64   0.02    3    17.62    17.68
  EPYC 7232P   17.87   0.03    3    17.81    17.93
  EPYC 7282    18.84   0.48    3    18.11    19.74
  EPYC 7542    19.94   0.12    3    19.70    20.06
  EPYC 7502P   20.35   0.33    3    19.90    20.99
  EPYC 7F52    21.84   0.27    3    21.32    22.24
  EPYC 7532    21.93   0.29    11   20.81    23.79
  EPYC 7552    23.97   0.14    3    23.70    24.12
  EPYC 7642    27.68   0.86    12   23.94    33.93
  EPYC 7662    34.00   1.29    9    29.81    38.69
  EPYC 7702    36.76   1.15    9    32.10    42.47

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Rodinia is a benchmark suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, fewer is better)

  CPU          Avg      SE +/-  N    Run Min  Run Max
  EPYC 7662    8.905    0.030   5    8.85     9.02
  EPYC 7702    8.928    0.017   5    8.90     8.99
  EPYC 7642    9.185    0.074   15   9.06     10.22
  EPYC 7532    9.913    0.016   5    9.88     9.97
  EPYC 7552    12.027   0.119   4    11.69    12.26
  EPYC 7542    14.316   0.022   4    14.25    14.36
  EPYC 7502P   14.320   0.007   4    14.31    14.34
  EPYC 7402P   14.761   0.042   4    14.65    14.83
  EPYC 7F52    15.049   0.049   4    14.94    15.15
  EPYC 7232P   15.540   0.040   4    15.47    15.65
  EPYC 7272    16.895   0.079   3    16.76    17.03
  EPYC 7302P   17.729   0.034   3    17.66    17.77
  EPYC 7282    18.189   0.159   8    17.99    19.30
  EPYC 7F32    19.707   0.017   3    19.68    19.74

  1. (CXX) g++ options: -O2 -lOpenCL

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Timed ImageMagick Compilation 6.9.0 - Time To Compile (Seconds, fewer is better)

  CPU          Avg     SE +/-  N   Run Min  Run Max
  EPYC 7702    17.12   0.08    3   16.99    17.27
  EPYC 7662    17.19   0.10    3   16.99    17.31
  EPYC 7642    17.67   0.09    3   17.49    17.79
  EPYC 7552    18.06   0.09    3   17.88    18.19
  EPYC 7542    18.94   0.05    3   18.83    19.02
  EPYC 7532    19.27   0.06    3   19.16    19.35
  EPYC 7502P   19.41   0.02    3   19.38    19.45
  EPYC 7402P   20.43   0.07    3   20.30    20.52
  EPYC 7F52    21.63   0.15    3   21.40    21.91
  EPYC 7302P   23.55   0.04    3   23.47    23.60
  EPYC 7282    25.46   0.16    3   25.14    25.64
  EPYC 7272    28.22   0.02    3   28.17    28.26
  EPYC 7F32    28.60   0.10    3   28.39    28.73
  EPYC 7232P   37.72   0.17    3   37.40    38.01

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better)

  CPU          Avg       SE +/-  N   Run Min   Run Max
  EPYC 7F32    1836.51   19.31   3   1797.95   1857.52
  EPYC 7F52    1976.70   3.08    3   1971.22   1981.88
  EPYC 7232P   2298.63   14.48   3   2271.79   2321.46
  EPYC 7302P   2414.74   1.52    3   2412.09   2417.36
  EPYC 7272    2434.17   9.26    3   2418.87   2450.86
  EPYC 7402P   2507.58   12.54   3   2482.49   2520.14
  EPYC 7282    2527.86   7.78    3   2515.93   2542.47
  EPYC 7542    2579.79   3.65    3   2575.39   2587.03
  EPYC 7502P   2794.25   1.04    3   2792.40   2795.99
  EPYC 7532    2815.80   1.33    3   2813.23   2817.70
  EPYC 7642    3059.05   2.50    3   3056.48   3064.04
  EPYC 7552    3325.99   2.44    3   3322.37   3330.62
  EPYC 7662    3562.63   0.31    3   3562.19   3563.22
  EPYC 7702    4029.98   12.73   3   4015.57   4055.36

OpenBenchmarking.org - OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better)

  CPU          Avg       SE +/-  N   Run Min   Run Max
  EPYC 7F32    1844.31   14.55   3   1815.75   1863.40
  EPYC 7F52    1980.56   1.86    3   1978.38   1984.26
  EPYC 7232P   2314.66   17.95   3   2283.58   2345.75
  EPYC 7272    2406.22   34.12   3   2355.50   2471.13
  EPYC 7302P   2413.45   2.67    3   2410.34   2418.77
  EPYC 7282    2446.02   13.37   3   2426.14   2471.46
  EPYC 7402P   2522.61   3.50    3   2516.20   2528.27
  EPYC 7542    2572.36   2.29    3   2567.84   2575.20
  EPYC 7502P   2791.54   0.81    3   2789.93   2792.44
  EPYC 7532    2814.15   2.24    3   2809.91   2817.55
  EPYC 7642    3069.38   1.92    3   3065.61   3071.93
  EPYC 7552    3327.50   3.41    3   3322.21   3333.86
  EPYC 7662    3558.09   0.45    3   3557.24   3558.76
  EPYC 7702    4030.53   8.92    3   4020.18   4048.29

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics proxy application. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LULESH 2.0.3 (z/s, more is better)

  CPU          Avg        SE +/-   N    Run Min    Run Max
  EPYC 7702    14612.21   146.93   12   13024.64   14895.69
  EPYC 7662    14603.33   142.13   3    14340.08   14827.83
  EPYC 7642    14015.66   58.13    4    13885.31   14137.76
  EPYC 7532    13938.25   43.36    4    13879.03   14067.14
  EPYC 7552    13435.30   60.62    4    13262.26   13545.43
  EPYC 7542    13099.62   157.59   4    12632.05   13300.62
  EPYC 7502P   12840.64   131.70   5    12361.80   13103.71
  EPYC 7F32    8720.88    12.34    6    8666.52    8753.04
  EPYC 7402P   8024.37    9.05     6    7986.30    8053.74
  EPYC 7302P   7954.15    8.39     6    7941.34    7995.56
  EPYC 7282    6954.80    15.52    6    6907.71    7005.82
  EPYC 7272    6871.93    38.61    6    6730.47    6957.51
  EPYC 7F52    6839.17    40.17    6    6733.72    6954.28
  EPYC 7232P   6757.99    11.79    6    6718.89    6804.97

  1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better)

  CPU          Avg       SE +/-  N   Run Min   Run Max
  EPYC 7F32    2399.19   4.62    3   2393.60   2408.36
  EPYC 7F52    2643.39   1.31    3   2641.61   2645.95
  EPYC 7232P   3071.91   3.19    3   3065.98   3076.90
  EPYC 7302P   3145.89   5.57    3   3138.42   3156.77
  EPYC 7272    3264.49   9.03    3   3253.32   3282.36
  EPYC 7542    3276.52   2.78    3   3273.17   3282.04
  EPYC 7402P   3310.91   14.72   3   3286.85   3337.63
  EPYC 7282    3344.39   2.79    3   3340.81   3349.89
  EPYC 7502P   3578.09   1.53    3   3576.10   3581.09
  EPYC 7532    3757.35   10.25   3   3741.86   3776.73
  EPYC 7642    4132.13   4.94    3   4122.78   4139.56
  EPYC 7552    4197.60   5.33    3   4187.64   4205.86
  EPYC 7662    4731.72   6.15    3   4723.88   4743.85
  EPYC 7702    5170.99   6.77    3   5162.09   5184.28

NAS Parallel Benchmarks

NPB (NAS Parallel Benchmarks) is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.
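The IS kernel measured below ranks a large array of integer keys, which is essentially a parallel bucket/counting sort. A single-threaded sketch of the idea (the real benchmark distributes key buckets across MPI ranks):

```python
def counting_sort(keys, max_key):
    """Sort small integer keys in O(n + k) time -- the core of NPB's IS kernel."""
    counts = [0] * (max_key + 1)
    for k in keys:                # histogram the keys
        counts[k] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)   # emit each key in rank order
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 9))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Because the work is dominated by scattered memory accesses rather than arithmetic, IS.D tends to track memory bandwidth more than raw core count.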

OpenBenchmarking.org - NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, more is better)

  CPU          Avg       SE +/-  N   Run Min   Run Max
  EPYC 7662    2006.81   11.89   3   1983.12   2020.44
  EPYC 7532    1992.76   1.95    3   1988.95   1995.40
  EPYC 7702    1971.50   4.04    3   1965.15   1979.00
  EPYC 7542    1885.98   5.74    3   1876.61   1896.40
  EPYC 7502P   1884.31   3.36    3   1880.37   1890.99
  EPYC 7302P   1647.78   2.26    3   1643.54   1651.27
  EPYC 7282    1422.07   1.26    3   1420.09   1424.40
  EPYC 7F32    1277.15   5.70    3   1269.25   1288.21
  EPYC 7232P   1083.36   2.89    3   1077.67   1087.08
  EPYC 7F52    934.00    9.24    3   920.59    951.73

  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Build2 0.13 - Time To Compile (Seconds, fewer is better)

  CPU          Avg      SE +/-  N   Run Min  Run Max
  EPYC 7702    67.97    0.22    3   67.65    68.39
  EPYC 7662    68.29    0.09    3   68.11    68.39
  EPYC 7642    69.43    0.07    3   69.34    69.57
  EPYC 7552    69.86    0.16    3   69.58    70.15
  EPYC 7542    71.46    0.32    3   70.83    71.85
  EPYC 7502P   74.20    0.07    3   74.09    74.34
  EPYC 7532    74.48    0.16    3   74.18    74.74
  EPYC 7F52    75.85    0.30    3   75.26    76.19
  EPYC 7402P   76.65    0.10    3   76.50    76.84
  EPYC 7302P   87.66    0.14    3   87.37    87.82
  EPYC 7282    94.35    0.07    3   94.22    94.43
  EPYC 7272    105.71   0.23    3   105.37   106.15
  EPYC 7F32    113.26   0.15    3   113.02   113.53
  EPYC 7232P   145.44   0.29    3   145.08   146.02

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms, fewer is better)

  CPU          Avg       SE +/-  N   Run Min   Run Max
  EPYC 7F32    2409.03   1.90    3   2405.41   2411.84
  EPYC 7F52    2649.35   4.58    3   2640.20   2654.02
  EPYC 7232P   3090.58   10.38   3   3072.59   3108.55
  EPYC 7302P   3155.85   5.33    3   3145.96   3164.23
  EPYC 7272    3265.93   8.60    3   3248.92   3276.67
  EPYC 7542    3271.26   5.09    3   3264.96   3281.33
  EPYC 7402P   3315.63   4.18    3   3309.61   3323.66
  EPYC 7282    3342.13   1.17    3   3339.82   3343.55
  EPYC 7502P   3582.56   1.62    3   3579.40   3584.73
  EPYC 7532    3754.85   3.83    3   3748.56   3761.77
  EPYC 7642    4138.69   14.85   3   4119.13   4167.82
  EPYC 7552    4203.81   8.65    3   4186.54   4213.33
  EPYC 7662    4732.73   4.63    3   4726.94   4741.89
  EPYC 7702    5153.99   6.37    3   5142.04   5163.81

Parboil

The Parboil Benchmarks from the IMPACT Research Group at the University of Illinois are a set of throughput computing applications for studying computing architectures and compilers. Parboil test cases support OpenMP, OpenCL, and CUDA multi-processing environments; at this time, however, the test profile only makes use of the OpenMP and OpenCL workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Parboil 2.5 - Test: OpenMP LBM (Seconds, fewer is better)

  CPU          Avg     SE +/-  N   Run Min  Run Max
  EPYC 7642    21.94   0.03    3   21.88    21.99
  EPYC 7532    22.94   0.11    3   22.73    23.10
  EPYC 7702    23.32   0.07    3   23.22    23.46
  EPYC 7662    23.82   0.26    5   23.20    24.72
  EPYC 7302P   24.92   0.08    3   24.77    25.05
  EPYC 7552    26.47   0.26    3   26.02    26.92
  EPYC 7F32    27.25   0.03    3   27.20    27.28
  EPYC 7502P   27.60   0.02    3   27.55    27.63
  EPYC 7542    27.64   0.05    3   27.53    27.70
  EPYC 7402P   29.02   0.07    3   28.89    29.09
  EPYC 7F52    35.42   0.03    3   35.36    35.47
  EPYC 7272    42.95   0.07    3   42.88    43.08
  EPYC 7232P   45.76   0.14    3   45.56    46.03
  EPYC 7282    46.07   0.02    3   46.05    46.10

  1. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU-v2-v2-v2 - Model: googlenet (ms, fewer is better)

  CPU          Avg     SE +/-  N    Run Min  Run Max
  EPYC 7F32    16.18   0.04    3    16.13    16.25
  EPYC 7282    18.35   0.07    3    18.26    18.50
  EPYC 7F52    18.66   0.06    3    18.56    18.76
  EPYC 7542    19.84   0.02    3    19.82    19.87
  EPYC 7502P   20.16   0.06    3    20.05    20.23
  EPYC 7532    20.92   0.12    3    20.77    21.16
  EPYC 7702    33.33   0.53    12   31.11    36.52

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as, ultimately, the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Lossless Compression (Seconds, fewer is better)

  CPU          Avg      SE +/-  N   Run Min  Run Max
  EPYC 7502P   467.43   0.20    3   467.12   467.79
  EPYC 7542    467.51   0.20    3   467.15   467.82
  EPYC 7402P   467.70   0.25    3   467.23   468.09
  EPYC 7642    473.76   0.22    3   473.32   474.03
  EPYC 7662    474.55   0.41    3   473.79   475.18
  EPYC 7532    474.59   0.13    3   474.43   474.85
  EPYC 7F52    475.39   0.50    3   474.62   476.33
  EPYC 7552    481.50   0.09    3   481.32   481.60
  EPYC 7702    481.95   2.26    3   477.74   485.47
  EPYC 7302P   562.07   1.69    3   558.73   564.17
  EPYC 7282    580.07   0.30    3   579.69   580.65
  EPYC 7272    680.90   0.80    3   679.66   682.40
  EPYC 7F32    778.66   0.43    3   777.80   779.18
  EPYC 7232P   954.80   0.56    3   953.70   955.56

  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Quality 75, Compression Effort 7 (Seconds, fewer is better)

  CPU          Avg      SE +/-  N   Run Min  Run Max
  EPYC 7542    135.97   0.04    3   135.90   136.04
  EPYC 7502P   137.44   0.05    3   137.39   137.53
  EPYC 7402P   138.04   0.09    3   137.91   138.21
  EPYC 7642    138.08   0.03    3   138.03   138.13
  EPYC 7702    139.36   0.47    3   138.86   140.29
  EPYC 7662    139.56   0.04    3   139.49   139.60
  EPYC 7552    140.18   0.13    3   140.00   140.43
  EPYC 7F52    140.61   0.46    3   140.07   141.52
  EPYC 7532    142.20   0.09    3   142.07   142.38
  EPYC 7302P   164.81   1.62    3   161.80   167.37
  EPYC 7282    173.72   1.08    3   172.62   175.87
  EPYC 7272    199.54   0.65    3   198.32   200.54
  EPYC 7F32    227.13   0.90    3   225.67   228.77
  EPYC 7232P   277.67   0.87    3   275.95   278.78

  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Mobile Neural Network 1.1.1 - Model: mobilenet-v1-1.0 (ms, fewer is better)

  CPU          Avg     SE +/-  N    Run Min  Run Max
  EPYC 7542    2.836   0.005   15   2.81     2.88
  EPYC 7502P   2.927   0.010   3    2.92     2.95
  EPYC 7532    3.003   0.008   3    2.99     3.02
  EPYC 7552    3.022   0.004   15   3.01     3.05
  EPYC 7642    3.056   0.024   3    3.02     3.10
  EPYC 7662    3.073   0.006   3    3.06     3.09
  EPYC 7702    3.187   0.015   3    3.17     3.22
  EPYC 7402P   3.213   0.012   3    3.19     3.23
  EPYC 7F32    3.625   0.002   14   3.61     3.64
  EPYC 7302P   3.856   0.005   15   3.82     3.88
  EPYC 7232P   3.917   0.007   11   3.86     3.95
  EPYC 7282    4.081   0.060   3    3.98     4.18
  EPYC 7272    5.322   0.015   4    5.30     5.36
  EPYC 7F52    5.778   0.401   3    5.00     6.34

  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU-v3-v3-v3 - Model: googlenet (ms, fewer is better)

  CPU          Avg     SE +/-  N    Run Min  Run Max
  EPYC 7F32    16.12   0.04    3    16.07    16.19
  EPYC 7F52    18.79   0.04    3    18.75    18.86
  EPYC 7282    18.93   0.39    3    18.26    19.62
  EPYC 7502P   19.96   0.16    3    19.69    20.23
  EPYC 7542    19.98   0.17    3    19.78    20.31
  EPYC 7532    21.03   0.22    3    20.67    21.44
  EPYC 7702    32.82   1.10    12   28.85    42.26

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as, ultimately, the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Quality 95, Compression Effort 7 (Seconds, fewer is better)

  CPU          Avg      SE +/-  N   Run Min  Run Max
  EPYC 7542    250.41   0.19    3   250.08   250.74
  EPYC 7502P   253.10   0.13    3   252.84   253.26
  EPYC 7402P   254.24   0.27    3   253.90   254.77
  EPYC 7F52    254.76   1.43    3   252.29   257.23
  EPYC 7642    254.88   0.24    3   254.57   255.36
  EPYC 7662    256.56   0.17    3   256.37   256.90
  EPYC 7702    257.01   0.12    3   256.86   257.24
  EPYC 7552    258.37   0.12    3   258.22   258.61
  EPYC 7532    260.46   0.16    3   260.18   260.72
  EPYC 7302P   301.52   0.98    3   300.37   303.47
  EPYC 7282    309.98   2.55    3   305.77   314.59
  EPYC 7272    361.48   2.81    3   356.31   365.98
  EPYC 7F32    416.27   0.26    3   415.89   416.76
  EPYC 7232P   507.74   0.90    3   506.68   509.53

  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
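Multigrid solvers such as AMG alternate coarse-grid corrections with a cheap smoother like weighted Jacobi on each level. A toy smoother sketch follows (a dense 1D Poisson-like system for brevity; this is not AMG's actual BoomerAMG-style setup, just the building block the description refers to):

```python
def jacobi_sweep(A, b, x, omega=2.0 / 3.0):
    """One weighted-Jacobi sweep for A x = b (dense A for simplicity)."""
    n = len(b)
    x_new = []
    for i in range(n):
        # off-diagonal contribution with the current iterate
        sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x_new.append((1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i])
    return x_new

# small 1D Poisson-like system; exact solution is x = [1, 1, 1]
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = [0.0, 0.0, 0.0]
for _ in range(50):
    x = jacobi_sweep(A, b, x)
```

Smoothing alone converges slowly on large grids; the multigrid hierarchy exists precisely to kill the low-frequency error components that Jacobi leaves behind, which is why AMG's figure of merit scales so strongly with memory bandwidth across these CPUs.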

OpenBenchmarking.org - Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better)

  CPU          Avg         SE +/-      N   Run Min     Run Max
  EPYC 7532    909532667   422202.49   3   908914700   910340000
  EPYC 7662    883735200   229126.62   3   883287900   884045100
  EPYC 7702    878375333   507394.80   3   877829000   879389100
  EPYC 7552    856893400   989060.38   3   854937100   858125300
  EPYC 7F32    809844583   322962.48   6   808702900   810642700
  EPYC 7302P   788266875   405072.19   4   787578900   789313600
  EPYC 7402P   778459433   643499.89   3   777565100   779708100
  EPYC 7542    774329467   142607.92   3   774119000   774601400
  EPYC 7502P   774304800   998206.04   3   772483400   775923400
  EPYC 7F52    643180925   3149876.45  4   634035200   647428500
  EPYC 7272    457855350   252656.85   4   457151100   458299300
  EPYC 7282    455760633   314598.13   3   455133300   456116200
  EPYC 7232P   449916460   295157.17   5   448913300   450629100

  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - dav1d 0.8.1 - Video Input: Chimera 1080p 10-bit (FPS, more is better)

  CPU          Avg      SE +/-  N   Run Min  Run Max
  EPYC 7662    190.19   0.56    3   189.10   190.97
  EPYC 7702    185.31   0.31    3   184.71   185.73
  EPYC 7642    179.72   0.28    3   179.44   180.27
  EPYC 7552    178.70   0.12    3   178.47   178.84
  EPYC 7542    152.78   0.16    3   152.47   153.01
  EPYC 7502P   152.15   0.08    3   152.02   152.28
  EPYC 7532    146.29   0.13    3   146.03   146.46
  EPYC 7402P   135.69   0.20    3   135.36   136.06
  EPYC 7F52    129.77   0.17    3   129.43   129.97
  EPYC 7302P   116.68   0.09    3   116.54   116.86
  EPYC 7282    114.49   0.38    3   113.88   115.20
  EPYC 7F32    109.55   0.13    3   109.41   109.80
  EPYC 7272    107.25   0.17    3   106.92   107.46
  EPYC 7232P   94.36    0.13    3   94.18    94.61

  1. (CC) gcc options: -pthread

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0, Sustained Floating-Point Rate (OpenBenchmarking.org; GFLOP/s, more is better):

  EPYC 7662    16.881168
  EPYC 7702    15.614812
  EPYC 7642    13.942465
  EPYC 7552    12.735031
  EPYC 7532     9.828871
  EPYC 7502P    9.326909
  EPYC 7542     8.863336
  EPYC 7402P    7.208235
  EPYC 7302P    4.963715
  EPYC 7F52     4.897653
  EPYC 7282     4.452419
  EPYC 7272     3.464003
  EPYC 7F32     3.011595
  EPYC 7232P    1.583704

1. (CC) gcc options: -O3 -march=native -fopenmp

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Sequential Fill (OpenBenchmarking.org; Op/s, more is better):

  EPYC 7542    885722
  EPYC 7502P   881986
  EPYC 7402P   838224
  EPYC 7532    825374
  EPYC 7282    775796
  EPYC 7F52    756676
  EPYC 7302P   756268
  EPYC 7272    705855
  EPYC 7F32    624502
  EPYC 7232P   611302
  EPYC 7552    536242
  EPYC 7642    527541
  EPYC 7702    451003
  EPYC 7662    447191

1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library using the WebP2 image encode utility with a sample 6000x4000 pixel JPEG image as input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as, ultimately, the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Compression Effort 5 (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7542     7.949
  EPYC 7502P    7.972
  EPYC 7F52     7.973
  EPYC 7402P    7.974
  EPYC 7662     8.052
  EPYC 7532     8.055
  EPYC 7642     8.074
  EPYC 7702     8.125
  EPYC 7552     8.162
  EPYC 7302P    9.354
  EPYC 7282     9.654
  EPYC 7272    11.131
  EPYC 7F32    12.788
  EPYC 7232P   15.547

1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3, Benchmark: P3B1 (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7542     566.99
  EPYC 7302P    574.77
  EPYC 7402P    576.98
  EPYC 7502P    577.65
  EPYC 7552     584.61
  EPYC 7532     586.67
  EPYC 7642     590.56
  EPYC 7662     604.15
  EPYC 7272     623.78
  EPYC 7702     649.31
  EPYC 7282     655.45
  EPYC 7F32     668.99
  EPYC 7232P    715.21
  EPYC 7F52    1108.26

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Random Fill (OpenBenchmarking.org; Op/s, more is better):

  EPYC 7542    860225
  EPYC 7502P   846887
  EPYC 7532    806814
  EPYC 7402P   805066
  EPYC 7282    735381
  EPYC 7F52    699077
  EPYC 7302P   692044
  EPYC 7272    656055
  EPYC 7F32    581407
  EPYC 7232P   545816
  EPYC 7552    521910
  EPYC 7642    518446
  EPYC 7702    451547
  EPYC 7662    442864

1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU, Model: vgg16 (OpenBenchmarking.org; ms, fewer is better):

  EPYC 7302P   30.84
  EPYC 7402P   31.46
  EPYC 7542    31.48
  EPYC 7502P   32.04
  EPYC 7532    33.27
  EPYC 7F32    33.81
  EPYC 7552    34.20
  EPYC 7642    34.90
  EPYC 7662    37.35
  EPYC 7702    38.51
  EPYC 7282    39.42
  EPYC 7272    40.01
  EPYC 7232P   40.67
  EPYC 7F52    59.81

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Inference Score (OpenBenchmarking.org; Score, more is better):

  EPYC 7642    2125
  EPYC 7662    2122
  EPYC 7542    2107
  EPYC 7552    2049
  EPYC 7532    2043
  EPYC 7502P   2022
  EPYC 7702    1965
  EPYC 7402P   1926
  EPYC 7302P   1660
  EPYC 7282    1538
  EPYC 7F52    1456
  EPYC 7272    1372
  EPYC 7F32    1301
  EPYC 7232P   1112

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2-v2, Model: shufflenet-v2 (OpenBenchmarking.org; ms, fewer is better):

  EPYC 7282     9.04
  EPYC 7F52     9.14
  EPYC 7F32     9.61
  EPYC 7542     9.89
  EPYC 7502P    9.94
  EPYC 7532    10.45
  EPYC 7702    17.18

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU, Model: mobilenet (OpenBenchmarking.org; ms, fewer is better):

  EPYC 7402P   19.52
  EPYC 7302P   19.70
  EPYC 7272    20.06
  EPYC 7542    20.43
  EPYC 7282    20.46
  EPYC 7F32    20.54
  EPYC 7502P   21.14
  EPYC 7232P   22.33
  EPYC 7532    23.59
  EPYC 7F52    24.14
  EPYC 7552    26.50
  EPYC 7642    28.92
  EPYC 7702    35.58
  EPYC 7662    35.78

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Earthgecko Skyline (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7F52     75.23
  EPYC 7542     83.83
  EPYC 7662     84.79
  EPYC 7702     84.83
  EPYC 7642     85.03
  EPYC 7502P    85.73
  EPYC 7402P    85.76
  EPYC 7552     86.06
  EPYC 7532     86.70
  EPYC 7302P    88.32
  EPYC 7282     96.85
  EPYC 7F32     98.52
  EPYC 7272    101.59
  EPYC 7232P   141.25

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7F32    104.26
  EPYC 7F52    110.43
  EPYC 7232P   126.21
  EPYC 7302P   128.93
  EPYC 7272    129.44
  EPYC 7282    132.39
  EPYC 7402P   133.76
  EPYC 7542    139.82
  EPYC 7502P   141.65
  EPYC 7532    145.01
  EPYC 7552    165.55
  EPYC 7642    166.11
  EPYC 7662    191.15
  EPYC 7702    191.23

1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Stream-Dynamic

This is an open-source, AMD-modified copy of the Stream memory benchmark geared toward running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC), among other default optimizations, aiming for easy and standardized deployment. This test profile will attempt to fall back to GCC/Clang on systems lacking AOCC; otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

Stream-Dynamic 1.0, Triad (OpenBenchmarking.org; MB/s, more is better):

  EPYC 7532   104119.16
  EPYC 7542    90450.72
  EPYC 7282    57372.73

1. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3-v3, Model: shufflenet-v2 (OpenBenchmarking.org; ms, fewer is better):

  EPYC 7282     8.83
  EPYC 7F52     9.04
  EPYC 7F32     9.59
  EPYC 7542     9.92
  EPYC 7502P    9.93
  EPYC 7532    10.56
  EPYC 7702    15.99

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stream-Dynamic

This is an open-source, AMD-modified copy of the Stream memory benchmark geared toward running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC), among other default optimizations, aiming for easy and standardized deployment. This test profile will attempt to fall back to GCC/Clang on systems lacking AOCC; otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

Stream-Dynamic 1.0, Add (OpenBenchmarking.org; MB/s, more is better):

  EPYC 7532   103789.99
  EPYC 7542    90238.63
  EPYC 7282    57323.38

1. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Windowed Gaussian (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7F52     6.586
  EPYC 7542     6.900
  EPYC 7702     6.930
  EPYC 7662     6.971
  EPYC 7502P    6.989
  EPYC 7552     7.033
  EPYC 7642     7.063
  EPYC 7402P    7.121
  EPYC 7532     7.220
  EPYC 7302P    7.645
  EPYC 7282     8.105
  EPYC 7272     8.798
  EPYC 7F32     9.790
  EPYC 7232P   11.912

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.

ctx_clock, Context Switch Time (OpenBenchmarking.org; Clocks, fewer is better):

  EPYC 7702    120
  EPYC 7662    120
  EPYC 7552    132
  EPYC 7532    144
  EPYC 7502P   150
  EPYC 7402P   168
  EPYC 7542    174
  EPYC 7F52    175
  EPYC 7302P   180
  EPYC 7F32    185
  EPYC 7282    196
  EPYC 7272    203
  EPYC 7232P   217

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

Stream 2013-01-17, Type: Triad (OpenBenchmarking.org; MB/s, more is better):

  EPYC 7532    99248.2
  EPYC 7662    98343.2
  EPYC 7702    98034.8
  EPYC 7552    96497.1
  EPYC 7F32    89567.2
  EPYC 7302P   88105.7
  EPYC 7402P   87308.4
  EPYC 7502P   87239.8
  EPYC 7542    87057.0
  EPYC 7F52    72315.6
  EPYC 7232P   56788.6
  EPYC 7272    56066.6
  EPYC 7282    55596.3

1. (CC) gcc options: -O3 -march=native -fopenmp

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: MobileNetV2_224 (OpenBenchmarking.org; ms, fewer is better):

  EPYC 7F32    4.578
  EPYC 7542    4.713
  EPYC 7502P   4.820
  EPYC 7532    4.821
  EPYC 7642    4.825
  EPYC 7552    4.852
  EPYC 7662    4.920
  EPYC 7232P   4.944
  EPYC 7702    5.067
  EPYC 7402P   5.270
  EPYC 7302P   5.772
  EPYC 7282    5.855
  EPYC 7272    5.930
  EPYC 7F52    8.171

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7F32    119.44
  EPYC 7F52    122.44
  EPYC 7502P   128.92
  EPYC 7542    130.09
  EPYC 7552    131.53
  EPYC 7532    132.04
  EPYC 7702    132.62
  EPYC 7282    135.68
  EPYC 7662    136.87
  EPYC 7272    137.59
  EPYC 7232P   212.89

1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

Stream 2013-01-17, Type: Copy (OpenBenchmarking.org; MB/s, more is better):

  EPYC 7532    90663.2
  EPYC 7702    90511.8
  EPYC 7662    90296.5
  EPYC 7552    88717.2
  EPYC 7F32    82399.7
  EPYC 7302P   80140.1
  EPYC 7402P   79677.2
  EPYC 7542    79674.1
  EPYC 7502P   79342.9
  EPYC 7F52    66901.1
  EPYC 7232P   52390.0
  EPYC 7272    51714.4
  EPYC 7282    51093.1

1. (CC) gcc options: -O3 -march=native -fopenmp

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2, Time To Compile (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7702    42.56
  EPYC 7662    43.06
  EPYC 7642    43.82
  EPYC 7552    43.84
  EPYC 7542    44.55
  EPYC 7502P   45.44
  EPYC 7532    46.10
  EPYC 7F52    46.54
  EPYC 7402P   47.41
  EPYC 7302P   52.75
  EPYC 7282    55.22
  EPYC 7F32    59.56
  EPYC 7272    60.20
  EPYC 7232P   75.33

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg, Processing 60 Page PDF Document (OpenBenchmarking.org; Seconds, fewer is better):

  EPYC 7542    18.66
  EPYC 7702    18.73
  EPYC 7662    18.79
  EPYC 7552    18.86
  EPYC 7502P   19.38
  EPYC 7F52    19.86
  EPYC 7532    19.94
  EPYC 7402P   20.87
  EPYC 7302P   23.39
  EPYC 7282    24.23
  EPYC 7272    26.90
  EPYC 7F32    27.20
  EPYC 7232P   33.02

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

Stream 2013-01-17, Type: Add (MB/s, More Is Better):

  EPYC 7532    97912.5
  EPYC 7702    97206.3
  EPYC 7662    97180.9
  EPYC 7552    95807.0
  EPYC 7F32    89255.4
  EPYC 7302P   87568.8
  EPYC 7402P   86805.6
  EPYC 7542    86737.7
  EPYC 7502P   86677.2
  EPYC 7F52    72752.2
  EPYC 7232P   56795.1
  EPYC 7272    55911.6
  EPYC 7282    55437.2

  1. (CC) gcc options: -O3 -march=native -fopenmp
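The STREAM "Add" kernel computes c[i] = a[i] + b[i] and reports bandwidth as total bytes touched divided by wall time (three 8-byte doubles per element). The real benchmark is C with OpenMP; the pure-Python sketch below only illustrates how the MB/s figure is derived, and its absolute numbers are far below the hardware figures above.

```python
# Sketch of the STREAM "Add" kernel: c[i] = a[i] + b[i].
# Illustrative only -- the actual benchmark is compiled C with OpenMP.
import array
import time

N = 1_000_000                      # elements per array
a = array.array("d", [1.0] * N)
b = array.array("d", [2.0] * N)
c = array.array("d", [0.0] * N)

start = time.perf_counter()
for i in range(N):
    c[i] = a[i] + b[i]
elapsed = time.perf_counter() - start

# Read a, read b, write c: three arrays of 8-byte doubles per pass.
bytes_moved = 3 * N * 8
print(f"Add: {bytes_moved / elapsed / 1e6:.1f} MB/s")
```

The same accounting (bytes per element times element count, over elapsed time) underlies the Copy, Scale, and Triad kernels as well.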

Stream-Dynamic

This is an open-source, AMD-modified copy of the STREAM memory benchmark geared toward running on systems with the AMD Optimizing C/C++ Compiler (AOCC), with optimizations enabled by default for easy and standardized deployment. The test profile will attempt to fall back to GCC / Clang on systems lacking AOCC; the existing "stream" test profile also remains available. Learn more via the OpenBenchmarking.org test page.

Stream-Dynamic 1.0, Copy (MB/s, More Is Better):

  EPYC 7532    94075.67
  EPYC 7542    82081.43
  EPYC 7282    53581.27

  1. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

Stream 2013-01-17, Type: Scale (MB/s, More Is Better):

  EPYC 7532    89487.6
  EPYC 7662    87848.1
  EPYC 7702    87714.6
  EPYC 7552    86805.6
  EPYC 7F32    81753.0
  EPYC 7302P   79703.5
  EPYC 7402P   79147.3
  EPYC 7502P   78638.6
  EPYC 7542    78399.9
  EPYC 7F52    67011.9
  EPYC 7232P   52630.1
  EPYC 7272    51978.2
  EPYC 7282    51044.2

  1. (CC) gcc options: -O3 -march=native -fopenmp

Stream-Dynamic

This is an open-source, AMD-modified copy of the STREAM memory benchmark geared toward running on systems with the AMD Optimizing C/C++ Compiler (AOCC), with optimizations enabled by default for easy and standardized deployment. The test profile will attempt to fall back to GCC / Clang on systems lacking AOCC; the existing "stream" test profile also remains available. Learn more via the OpenBenchmarking.org test page.

Stream-Dynamic 1.0, Scale (MB/s, More Is Better):

  EPYC 7532    92368.41
  EPYC 7542    81529.06
  EPYC 7282    53610.35

  1. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: mobilenet (ms, Fewer Is Better):

  EPYC 7F52    20.10
  EPYC 7542    20.35
  EPYC 7282    20.48
  EPYC 7F32    20.71
  EPYC 7502P   20.83
  EPYC 7532    22.75
  EPYC 7702    34.51

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3-v3 - Model: mobilenet (ms, Fewer Is Better):

  EPYC 7542    20.61
  EPYC 7F32    20.68
  EPYC 7502P   20.80
  EPYC 7282    20.88
  EPYC 7F52    21.55
  EPYC 7532    23.41
  EPYC 7702    35.12

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the speed of compressing a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 3 (MB/s, More Is Better):

  EPYC 7642    8499.2
  EPYC 7532    8476.3
  EPYC 7662    8287.7
  EPYC 7702    8248.3
  EPYC 7552    8172.0
  EPYC 7402P   8033.0
  EPYC 7502P   7903.6
  EPYC 7F52    7899.7
  EPYC 7542    7885.0
  EPYC 7302P   7699.4
  EPYC 7282    6706.2
  EPYC 7F32    6573.9
  EPYC 7272    6498.4
  EPYC 7232P   5123.2

  1. (CC) gcc options: -O3 -pthread -lz -llzma
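A throughput figure like the MB/s above is simply uncompressed input size divided by compression wall time. Python's standard library has no zstd binding, so the sketch below uses zlib purely as a stand-in to show the measurement; its absolute numbers are not comparable to the Zstd results.

```python
# How a compression-throughput (MB/s) figure is derived:
# MB/s = uncompressed size / wall time. zlib stands in for zstd here
# (no zstd module in the stdlib); illustrative only.
import time
import zlib

data = bytes(range(256)) * 40_000          # ~10 MB of mildly repetitive input

start = time.perf_counter()
compressed = zlib.compress(data, level=3)  # level 3, matching the test above
elapsed = time.perf_counter() - start

print(f"{len(data) / elapsed / 1e6:.1f} MB/s "
      f"(ratio {len(data) / len(compressed):.2f}x)")
```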

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device AI Score (Score, More Is Better):

  EPYC 7542    3641
  EPYC 7642    3528
  EPYC 7532    3518
  EPYC 7502P   3510
  EPYC 7402P   3472
  EPYC 7552    3397
  EPYC 7662    3341
  EPYC 7302P   3119
  EPYC 7702    3089
  EPYC 7282    2922
  EPYC 7272    2679
  EPYC 7F52    2615
  EPYC 7F32    2474
  EPYC 7232P   2206

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6, Build: Float + SSE - Size: 2D FFT Size 2048 (Mflops, More Is Better):

  EPYC 7F32    30037
  EPYC 7662    26867
  EPYC 7402P   26769
  EPYC 7542    26724
  EPYC 7F52    26636
  EPYC 7532    26305
  EPYC 7552    26303
  EPYC 7642    26289
  EPYC 7502P   26186
  EPYC 7702    26143
  EPYC 7302P   26045
  EPYC 7282    25601
  EPYC 7272    24876
  EPYC 7232P   18378

  1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm
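What FFTW computes is the discrete Fourier transform, X[k] = Σ_n x[n]·exp(−2πikn/N). FFTW evaluates it in O(N log N); the direct O(N²) form below is a pure-Python illustration of the definition only, not of the performance being benchmarked. A pure cosine of frequency 1 should place all of its energy in bins 1 and N−1.

```python
# Direct O(N^2) DFT, illustrating the transform FFTW computes quickly.
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

N = 8
signal = [math.cos(2 * math.pi * n / N) for n in range(N)]
spectrum = dft(signal)
# Energy of a frequency-1 cosine lands in bins 1 and N-1 (magnitude N/2).
print([round(abs(v), 6) for v in spectrum])
```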

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better):

  EPYC 7302P   21.88
  EPYC 7402P   22.79
  EPYC 7542    23.55
  EPYC 7F32    23.75
  EPYC 7502P   24.32
  EPYC 7532    24.62
  EPYC 7282    25.17
  EPYC 7272    26.83
  EPYC 7642    27.83
  EPYC 7232P   28.19
  EPYC 7552    28.57
  EPYC 7662    32.73
  EPYC 7702    33.98
  EPYC 7F52    35.47

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Relative Entropy (Seconds, Fewer Is Better):

  EPYC 7F52    14.60
  EPYC 7542    15.72
  EPYC 7502P   15.85
  EPYC 7642    15.89
  EPYC 7702    15.94
  EPYC 7662    15.97
  EPYC 7552    15.97
  EPYC 7402P   16.12
  EPYC 7532    16.33
  EPYC 7302P   17.18
  EPYC 7282    18.10
  EPYC 7F32    18.88
  EPYC 7272    19.11
  EPYC 7232P   23.56
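The idea behind a relative-entropy detector is to score a recent window of data by the Kullback-Leibler divergence between its value distribution and that of the history. NAB's actual detector is considerably more involved; the toy sketch below (with hypothetical data and binning of my own choosing) only shows the core quantity being computed.

```python
# Toy relative-entropy anomaly score: KL divergence between the value
# distribution of a recent window and that of the history.
# NAB's real detector is more sophisticated; illustrative only.
import math
from collections import Counter

def distribution(values, bins):
    """Histogram of values in [0, 1) as smoothed probabilities."""
    counts = Counter(min(int(v * bins), bins - 1) for v in values)
    total = len(values)
    # Laplace smoothing keeps the KL divergence finite.
    return [(counts.get(b, 0) + 1) / (total + bins) for b in range(bins)]

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

history = [0.5 + 0.01 * ((i * 7) % 5) for i in range(200)]  # steady values
window = [0.9 + 0.01 * (i % 3) for i in range(20)]          # sudden shift

p = distribution(window, bins=10)
q = distribution(history, bins=10)
print(f"anomaly score: {kl_divergence(p, q):.3f}")
```

A window drawn from the same regime as the history scores near zero; the shifted window above scores high, flagging an anomaly.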

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Medium (Seconds, Fewer Is Better):

  EPYC 7232P   5.82
  EPYC 7702    6.68
  EPYC 7662    6.78
  EPYC 7552    6.93
  EPYC 7542    7.02
  EPYC 7F52    7.10
  EPYC 7502P   7.17
  EPYC 7532    7.33
  EPYC 7402P   7.47
  EPYC 7302P   8.30
  EPYC 7282    8.63
  EPYC 7F32    9.04
  EPYC 7272    9.38

  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.

XZ Compression 5.2.4, Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better):

  EPYC 7F52    21.43
  EPYC 7642    22.01
  EPYC 7702    22.05
  EPYC 7662    22.05
  EPYC 7532    22.09
  EPYC 7542    22.30
  EPYC 7402P   22.40
  EPYC 7502P   22.42
  EPYC 7552    22.54
  EPYC 7302P   23.15
  EPYC 7282    24.90
  EPYC 7272    25.66
  EPYC 7F32    29.61
  EPYC 7232P   34.51

  1. (CC) gcc options: -pthread -fvisibility=hidden -O2
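The xz test times level-9 compression of a disk image. Python's lzma module wraps the same liblzma library that xz uses, so the measurement can be sketched directly; the tiny, highly repetitive input here makes the timing illustrative only, not comparable to the disk-image results above.

```python
# Timing LZMA compression at preset 9, as the xz test above does.
# Python's lzma module wraps the same liblzma library; tiny input,
# so the timing is illustrative only.
import lzma
import time

data = b"phoronix-test-suite " * 50_000    # ~1 MB of compressible input

start = time.perf_counter()
compressed = lzma.compress(data, preset=9)  # level 9, as in the benchmark
elapsed = time.perf_counter() - start

print(f"compressed {len(data)} -> {len(compressed)} bytes "
      f"in {elapsed:.2f}s")
```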

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Material Tester (Seconds, Fewer Is Better):

  EPYC 7402P   173.05
  EPYC 7542    173.16
  EPYC 7502P   173.53
  EPYC 7282    177.85
  EPYC 7552    182.38
  EPYC 7642    185.01
  EPYC 7532    186.27
  EPYC 7662    187.07
  EPYC 7702    188.84
  EPYC 7302P   191.51
  EPYC 7F52    192.28
  EPYC 7272    210.16
  EPYC 7F32    247.66
  EPYC 7232P   278.32

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, Fewer Is Better):

  EPYC 7F52    72.43
  EPYC 7F32    72.89
  EPYC 7302P   85.93
  EPYC 7402P   86.00
  EPYC 7542    86.72
  EPYC 7272    87.60
  EPYC 7232P   88.67
  EPYC 7282    89.20
  EPYC 7502P   94.26
  EPYC 7532    96.81
  EPYC 7642    102.45
  EPYC 7552    104.52
  EPYC 7702    115.27
  EPYC 7662    116.15

  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, Fewer Is Better):

  EPYC 7542    0.77
  EPYC 7F32    0.78
  EPYC 7F52    0.79
  EPYC 7282    0.79
  EPYC 7302P   0.84
  EPYC 7502P   0.84
  EPYC 7272    0.87
  EPYC 7402P   0.88
  EPYC 7532    0.95
  EPYC 7232P   0.96
  EPYC 7552    0.99
  EPYC 7642    1.02
  EPYC 7662    1.07
  EPYC 7702    1.22

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better):

  EPYC 7542    0.77
  EPYC 7282    0.78
  EPYC 7F32    0.79
  EPYC 7F52    0.81
  EPYC 7302P   0.84
  EPYC 7502P   0.84
  EPYC 7272    0.89
  EPYC 7402P   0.92
  EPYC 7532    0.95
  EPYC 7232P   0.98
  EPYC 7552    0.99
  EPYC 7642    1.02
  EPYC 7662    1.06
  EPYC 7702    1.21

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: resnet-v2-50 (ms, Fewer Is Better):

  EPYC 7542    24.30
  EPYC 7642    24.78
  EPYC 7502P   25.06
  EPYC 7662    25.22
  EPYC 7532    25.27
  EPYC 7552    25.44
  EPYC 7702    26.92
  EPYC 7F32    27.45
  EPYC 7402P   29.02
  EPYC 7232P   31.44
  EPYC 7302P   31.54
  EPYC 7F52    33.50
  EPYC 7282    33.59
  EPYC 7272    38.10

  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2, Scene: Water Caustic (Seconds, Fewer Is Better):

  EPYC 7702    21.97
  EPYC 7662    22.13
  EPYC 7642    22.55
  EPYC 7542    22.64
  EPYC 7552    22.65
  EPYC 7502P   23.11
  EPYC 7532    23.55
  EPYC 7402P   23.87
  EPYC 7F52    24.42
  EPYC 7302P   26.02
  EPYC 7282    26.79
  EPYC 7F32    28.10
  EPYC 7272    29.95
  EPYC 7232P   33.92

  1. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Bayesian Changepoint (Seconds, Fewer Is Better):

  EPYC 7F52    29.45
  EPYC 7502P   32.65
  EPYC 7542    32.87
  EPYC 7702    33.27
  EPYC 7662    33.41
  EPYC 7532    33.43
  EPYC 7642    33.60
  EPYC 7552    33.96
  EPYC 7402P   34.31
  EPYC 7302P   34.84
  EPYC 7F32    35.67
  EPYC 7282    36.35
  EPYC 7272    38.07
  EPYC 7232P   45.21

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: squeezenet_ssd (ms, Fewer Is Better):

  EPYC 7F52    21.62
  EPYC 7282    22.94
  EPYC 7542    23.57
  EPYC 7502P   23.77
  EPYC 7532    25.64
  EPYC 7F32    25.81
  EPYC 7702    33.14

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradesoap (msec, Fewer Is Better):

  EPYC 7F52    3288
  EPYC 7542    3304
  EPYC 7402P   3310
  EPYC 7502P   3358
  EPYC 7282    3492
  EPYC 7532    3497
  EPYC 7302P   3526
  EPYC 7552    3550
  EPYC 7662    3624
  EPYC 7702    3626
  EPYC 7272    3757
  EPYC 7F32    4095
  EPYC 7232P   5021

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  EPYC 7F32    9823  (SE +/- 45.76, N = 3; runs 9743 - 9901.5)
  EPYC 7F52    9348  (SE +/- 112.30, N = 4; runs 9037.5 - 9565.5)
  EPYC 7302P   8975  (SE +/- 1.83, N = 3; runs 8973 - 8978.5)
  EPYC 7232P   8836  (SE +/- 15.37, N = 3; runs 8814.5 - 8866)
  EPYC 7402P   8810  (SE +/- 18.32, N = 3; runs 8780.5 - 8843.5)
  EPYC 7272    8721  (SE +/- 2.92, N = 3; runs 8717.5 - 8726.5)
  EPYC 7542    8698  (SE +/- 29.60, N = 3; runs 8644 - 8746)
  EPYC 7282    8611  (SE +/- 45.05, N = 3; runs 8522 - 8667)
  EPYC 7502P   8421  (SE +/- 63.96, N = 12; runs 7942 - 8675.5)
  EPYC 7532    8070  (SE +/- 105.59, N = 12; runs 7492 - 8524)
  EPYC 7552    7373  (SE +/- 70.17, N = 12; runs 6932 - 7736)
  EPYC 7642    7221  (SE +/- 73.86, N = 12; runs 6810.5 - 7710.5)
  EPYC 7662    6686  (SE +/- 46.88, N = 3; runs 6618 - 6776)
  EPYC 7702    6478  (SE +/- 53.19, N = 12; runs 6037.5 - 6804)
  Built with: g++ -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
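The ONNX Runtime results are throughput figures (inferences per minute), while most of the inference tables in this article report per-inference latency in milliseconds. The two are simple reciprocals of each other; a tiny sketch of the conversion:

```python
def ipm_to_ms(inferences_per_minute):
    """Convert throughput (inferences per minute) to an average
    per-inference latency in milliseconds."""
    return 60_000.0 / inferences_per_minute

def ms_to_ipm(ms_per_inference):
    """Inverse: per-inference latency (ms) to inferences per minute."""
    return 60_000.0 / ms_per_inference

# e.g. roughly 9823 inferences/minute is about 6.11 ms per inference
print(round(ipm_to_ms(9823), 2))
```

This only gives an average latency implied by sustained throughput; it says nothing about per-inference variance.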

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
  EPYC 7302P   22.45  (SE +/- 0.08, N = 3; runs 22.32 - 22.58)
  EPYC 7272    22.62  (SE +/- 0.09, N = 3; runs 22.44 - 22.74)
  EPYC 7282    22.85  (SE +/- 0.08, N = 3; runs 22.70 - 22.98)
  EPYC 7402P   23.02  (SE +/- 0.05, N = 3; runs 22.92 - 23.09)
  EPYC 7502P   23.73  (SE +/- 0.14, N = 3; runs 23.45 - 23.93)
  EPYC 7542    24.52  (SE +/- 0.07, N = 3; runs 24.41 - 24.65)
  EPYC 7F52    25.20  (SE +/- 0.15, N = 3; runs 24.91 - 25.41)
  EPYC 7F32    25.76  (SE +/- 0.05, N = 3; runs 25.69 - 25.86)
  EPYC 7532    25.94  (SE +/- 0.05, N = 11; runs 25.63 - 26.21)
  EPYC 7232P   27.97  (SE +/- 0.01, N = 3; runs 27.95 - 27.99)
  EPYC 7642    28.68  (SE +/- 0.09, N = 12; runs 28.28 - 29.56)
  EPYC 7552    28.94  (SE +/- 0.03, N = 3; runs 28.89 - 28.98)
  EPYC 7662    32.09  (SE +/- 0.15, N = 9; runs 31.56 - 32.70)
  EPYC 7702    33.16  (SE +/- 0.19, N = 9; runs 32.47 - 34.44)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread
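With fourteen processors in a single lower-is-better table, a quick way to read the spread is the ratio between the slowest and fastest entries. A trivial helper (the two values used here are the extremes from the squeezenet_ssd table above):

```python
def spread(values):
    """For a lower-is-better metric, return (best, worst, worst/best)."""
    best, worst = min(values), max(values)
    return best, worst, worst / best

best, worst, ratio = spread([22.45, 33.16])
print(f"{worst:.2f} ms is {ratio:.2f}x the fastest result ({best:.2f} ms)")
```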

BlogBench

BlogBench is designed to replicate the load of a real-world busy file server by stressing the file-system with multiple threads of random reads, writes, and rewrites. It mimics the behavior of a blog by creating blogs with content and pictures, modifying blog posts, adding comments to these blogs, and then reading the content of the blogs. All of the blogs generated are created locally with fake content and pictures. Learn more via the OpenBenchmarking.org test page.

BlogBench 1.1, Test: Read (Final Score, more is better):
  EPYC 7282    2043037  (SE +/- 12175.94, N = 3; runs 2022967 - 2065016)
  EPYC 7272    2023180  (SE +/- 20232.46, N = 3; runs 1992185 - 2061206)
  EPYC 7402P   1959615  (SE +/- 7805.03, N = 3; runs 1944836 - 1971357)
  EPYC 7502P   1950080  (SE +/- 7760.40, N = 3; runs 1935120 - 1961140)
  EPYC 7232P   1944496  (SE +/- 16960.13, N = 3; runs 1916141 - 1974796)
  EPYC 7542    1923368  (SE +/- 7106.01, N = 3; runs 1909275 - 1932004)
  EPYC 7302P   1898428  (SE +/- 7235.14, N = 3; runs 1890301 - 1912860)
  EPYC 7F32    1772037  (SE +/- 14389.61, N = 3; runs 1743950 - 1791514)
  EPYC 7552    1771624  (SE +/- 4883.16, N = 3; runs 1764285 - 1780874)
  EPYC 7F52    1676516  (SE +/- 13461.20, N = 3; runs 1650220 - 1694665)
  EPYC 7532    1636092  (SE +/- 13194.50, N = 3; runs 1610844 - 1655363)
  EPYC 7642    1583350  (SE +/- 14031.09, N = 3; runs 1565457 - 1611018)
  EPYC 7662    1397475  (SE +/- 9874.65, N = 3; runs 1384215 - 1416780)
  EPYC 7702    1372955  (SE +/- 10548.76, N = 3; runs 1360224 - 1393890)
  Built with: gcc -O2 -pthread
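BlogBench generates its blog-like load internally, but the access pattern described above (many threads mixing creates, rewrites, and reads over a shared set of files) can be sketched with the standard library alone. This toy version is only an illustration of the pattern, not the benchmark itself; every name and parameter here is invented:

```python
import os
import random
import shutil
import tempfile
import threading

def blogbench_toy(n_files=32, n_threads=4, ops_per_thread=200):
    """Toy imitation of a BlogBench-style mixed file-system load: each
    thread randomly creates ("posts"), rewrites ("edits"), or reads
    ("views") small files under one shared directory."""
    root = tempfile.mkdtemp(prefix="blogtoy-")
    counts = {"write": 0, "read": 0}
    lock = threading.Lock()

    def worker(seed):
        rng = random.Random(seed)  # per-thread RNG, deterministic per seed
        for _ in range(ops_per_thread):
            path = os.path.join(root, f"post-{rng.randrange(n_files)}.txt")
            if rng.random() < 0.3 or not os.path.exists(path):
                with open(path, "wb") as f:   # create or rewrite a "post"
                    f.write(rng.randbytes(256))
                op = "write"
            else:
                with open(path, "rb") as f:   # read a "post" back
                    f.read()
                op = "read"
            with lock:
                counts[op] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    shutil.rmtree(root)
    return counts

print(blogbench_toy())
```

The real benchmark additionally sizes files like blog posts with pictures and reports separate read and write scores; this sketch only reproduces the threading and mixed read/write shape of the load.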

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3-v3 - Model: squeezenet_ssd (ms, fewer is better):
  EPYC 7F52    22.72  (SE +/- 0.04, N = 3; runs 22.68 - 22.80)
  EPYC 7282    22.83  (SE +/- 0.04, N = 3; runs 22.75 - 22.89)
  EPYC 7502P   24.15  (SE +/- 0.28, N = 3; runs 23.77 - 24.70)
  EPYC 7542    24.49  (SE +/- 0.04, N = 3; runs 24.42 - 24.57)
  EPYC 7F32    25.15  (SE +/- 0.03, N = 3; runs 25.10 - 25.19)
  EPYC 7532    25.58  (SE +/- 0.12, N = 3; runs 25.36 - 25.76)
  EPYC 7702    33.61  (SE +/- 0.19, N = 12; runs 32.80 - 34.77)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3-v3 - Model: resnet50 (ms, fewer is better):
  EPYC 7542    23.40  (SE +/- 0.10, N = 3; runs 23.26 - 23.59)
  EPYC 7502P   23.97  (SE +/- 0.06, N = 3; runs 23.86 - 24.08)
  EPYC 7F32    23.98  (SE +/- 0.17, N = 3; runs 23.78 - 24.31)
  EPYC 7532    24.87  (SE +/- 0.12, N = 3; runs 24.66 - 25.07)
  EPYC 7282    25.49  (SE +/- 0.07, N = 3; runs 25.36 - 25.57)
  EPYC 7F52    26.65  (SE +/- 0.12, N = 3; runs 26.49 - 26.89)
  EPYC 7702    34.44  (SE +/- 0.12, N = 12; runs 33.59 - 34.95)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better):
  EPYC 7302P   13564.17  (SE +/- 109.57, N = 9; runs 12862.78 - 13941.72)
  EPYC 7402P   14033.70  (SE +/- 112.86, N = 15; runs 13276.43 - 14919.35)
  EPYC 7702    19829.41  (SE +/- 118.92, N = 5; runs 19533.50 - 20169.52)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: resnet50 (ms, fewer is better):
  EPYC 7542    23.45  (SE +/- 0.07, N = 3; runs 23.35 - 23.59)
  EPYC 7502P   24.19  (SE +/- 0.19, N = 3; runs 23.94 - 24.56)
  EPYC 7F32    24.37  (SE +/- 0.60, N = 3; runs 23.76 - 25.58)
  EPYC 7532    24.82  (SE +/- 0.09, N = 3; runs 24.66 - 24.98)
  EPYC 7282    25.56  (SE +/- 0.23, N = 3; runs 25.18 - 25.97)
  EPYC 7F52    26.24  (SE +/- 0.21, N = 3; runs 25.94 - 26.65)
  EPYC 7702    34.07  (SE +/- 0.17, N = 12; runs 32.80 - 34.97)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: resnet18 (ms, fewer is better):
  EPYC 7F32    11.80  (SE +/- 0.01, N = 3; runs 11.79 - 11.82)
  EPYC 7F52    12.40  (SE +/- 0.05, N = 3; runs 12.35 - 12.50)
  EPYC 7542    12.96  (SE +/- 0.03, N = 3; runs 12.92 - 13.02)
  EPYC 7532    13.06  (SE +/- 0.03, N = 3; runs 13.00 - 13.10)
  EPYC 7502P   13.09  (SE +/- 0.03, N = 3; runs 13.03 - 13.15)
  EPYC 7282    14.19  (SE +/- 0.06, N = 3; runs 14.07 - 14.29)
  EPYC 7702    17.09  (SE +/- 0.21, N = 12; runs 16.27 - 18.62)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec, fewer is better):
  EPYC 7F32    3316  (SE +/- 31.43, N = 6; runs 3220 - 3444)
  EPYC 7F52    3323  (SE +/- 31.15, N = 7; runs 3208 - 3442)
  EPYC 7272    3552  (SE +/- 22.48, N = 6; runs 3511 - 3657)
  EPYC 7302P   3633  (SE +/- 38.29, N = 5; runs 3534 - 3722)
  EPYC 7282    3671  (SE +/- 31.79, N = 5; runs 3564 - 3763)
  EPYC 7402P   3778  (SE +/- 13.74, N = 5; runs 3743 - 3810)
  EPYC 7232P   3793  (SE +/- 29.37, N = 5; runs 3701 - 3886)
  EPYC 7542    3861  (SE +/- 15.11, N = 5; runs 3817 - 3906)
  EPYC 7502P   3917  (SE +/- 22.77, N = 5; runs 3832 - 3957)
  EPYC 7532    4026  (SE +/- 44.06, N = 5; runs 3901 - 4157)
  EPYC 7552    4463  (SE +/- 34.34, N = 5; runs 4370 - 4566)
  EPYC 7702    4686  (SE +/- 47.69, N = 5; runs 4536 - 4799)
  EPYC 7662    4773  (SE +/- 39.23, N = 5; runs 4691 - 4896)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for automotive workloads used to evaluate programming models in the context of autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, more is better):
  EPYC 7F32    27135.63  (SE +/- 296.05, N = 4; runs 26266.60 - 27598.01)
  EPYC 7F52    24205.54  (SE +/- 205.94, N = 13; runs 22569.97 - 25313.47)
  EPYC 7272    23807.67  (SE +/- 251.88, N = 3; runs 23305.32 - 24091.45)
  EPYC 7232P   23230.19  (SE +/- 130.62, N = 3; runs 22989.39 - 23438.32)
  EPYC 7282    22574.80  (SE +/- 162.08, N = 13; runs 20715.30 - 23030.24)
  EPYC 7302P   22339.85  (SE +/- 278.67, N = 4; runs 21516.10 - 22741.40)
  EPYC 7402P   21653.91  (SE +/- 228.79, N = 5; runs 20776.27 - 22118.67)
  EPYC 7542    21442.00  (SE +/- 300.22, N = 3; runs 20857.09 - 21851.95)
  EPYC 7502P   21272.04  (SE +/- 303.68, N = 3; runs 20665.78 - 21606.83)
  EPYC 7532    20967.48  (SE +/- 233.41, N = 5; runs 20100.44 - 21472.74)
  EPYC 7552    20634.20  (SE +/- 149.83, N = 11; runs 19629.33 - 21282.63)
  EPYC 7642    20036.19  (SE +/- 208.12, N = 5; runs 19216.66 - 20355.61)
  EPYC 7662    19915.00  (SE +/- 203.19, N = 6; runs 18959.86 - 20292.89)
  EPYC 7702    19062.92  (SE +/- 189.69, N = 6; runs 18118.90 - 19320.87)
  Built with: g++ -O3 -std=c++11 -fopenmp

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3-v3 - Model: resnet18 (ms, fewer is better):
  EPYC 7F32    11.90  (SE +/- 0.09, N = 3; runs 11.80 - 12.07)
  EPYC 7F52    12.86  (SE +/- 0.17, N = 3; runs 12.57 - 13.15)
  EPYC 7502P   13.04  (SE +/- 0.05, N = 3; runs 12.95 - 13.10)
  EPYC 7542    13.06  (SE +/- 0.08, N = 3; runs 12.98 - 13.22)
  EPYC 7532    13.10  (SE +/- 0.06, N = 3; runs 13.01 - 13.21)
  EPYC 7282    14.57  (SE +/- 0.22, N = 3; runs 14.32 - 15.01)
  EPYC 7702    16.88  (SE +/- 0.05, N = 12; runs 16.54 - 17.25)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet18 (ms, fewer is better):
  EPYC 7F32    11.88  (SE +/- 0.06, N = 3; runs 11.76 - 11.97)
  EPYC 7302P   12.02  (SE +/- 0.02, N = 3; runs 11.98 - 12.05)
  EPYC 7402P   12.53  (SE +/- 0.02, N = 3; runs 12.49 - 12.57)
  EPYC 7542    13.01  (SE +/- 0.09, N = 3; runs 12.88 - 13.19)
  EPYC 7532    13.09  (SE +/- 0.03, N = 11; runs 12.94 - 13.29)
  EPYC 7502P   13.17  (SE +/- 0.22, N = 3; runs 12.85 - 13.59)
  EPYC 7232P   14.14  (SE +/- 0.01, N = 3; runs 14.12 - 14.15)
  EPYC 7272    14.50  (SE +/- 0.16, N = 3; runs 14.19 - 14.66)
  EPYC 7282    14.52  (SE +/- 0.23, N = 3; runs 14.26 - 14.98)
  EPYC 7552    14.95  (SE +/- 0.09, N = 3; runs 14.77 - 15.05)
  EPYC 7642    15.35  (SE +/- 0.50, N = 12; runs 14.54 - 20.77)
  EPYC 7F52    15.83  (SE +/- 0.02, N = 3; runs 15.79 - 15.87)
  EPYC 7662    16.19  (SE +/- 0.08, N = 9; runs 15.80 - 16.66)
  EPYC 7702    16.80  (SE +/- 0.13, N = 9; runs 16.19 - 17.25)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms; it relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Training Score (Score, more is better):
  EPYC 7402P   1546
  EPYC 7542    1534
  EPYC 7502P   1488
  EPYC 7532    1475
  EPYC 7302P   1459
  EPYC 7642    1403
  EPYC 7282    1384
  EPYC 7552    1348
  EPYC 7272    1307
  EPYC 7662    1219
  EPYC 7F32    1173
  EPYC 7F52    1159
  EPYC 7702    1124
  EPYC 7232P   1094

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Twitter HTTP Requests (ms, fewer is better):
  EPYC 7402P   2698.59  (SE +/- 19.25, N = 5; runs 2627.95 - 2739.55)
  EPYC 7702    3811.02  (SE +/- 17.47, N = 5; runs 3758.54 - 3848.59)

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (Seconds, fewer is better):
  EPYC 7F52    197
  EPYC 7502P   214
  EPYC 7542    215
  EPYC 7532    215
  EPYC 7402P   217
  EPYC 7642    219
  EPYC 7F32    221
  EPYC 7552    223
  EPYC 7302P   228
  EPYC 7662    239
  EPYC 7282    239
  EPYC 7702    242
  EPYC 7272    250
  EPYC 7232P   276
  Three results had multiple runs (SE +/- 0.33, N = 3 each): 216 - 217 (avg 216.67), 242 - 243 (avg 242.33), and 249 - 250 (avg 249.67), matching the EPYC 7402P, 7702, and 7272 entries respectively.
  Built with: gfortran -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Ngspice

Ngspice is an open-source SPICE circuit simulator, originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile makes use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C2670 (Seconds, fewer is better):
  EPYC 7F52    151.46  (SE +/- 2.33, N = 9; runs 142.00 - 162.30)
  EPYC 7F32    158.79  (SE +/- 1.05, N = 3; runs 157.45 - 160.86)
  EPYC 7542    165.66  (SE +/- 1.90, N = 3; runs 162.62 - 169.15)
  EPYC 7552    170.22  (SE +/- 1.89, N = 3; runs 168.07 - 173.98)
  EPYC 7662    172.37  (SE +/- 1.33, N = 3; runs 170.30 - 174.84)
  EPYC 7502P   173.90  (SE +/- 1.93, N = 5; runs 168.11 - 179.12)
  EPYC 7272    174.20  (SE +/- 1.59, N = 3; runs 171.02 - 175.94)
  EPYC 7532    175.85  (SE +/- 1.85, N = 3; runs 172.42 - 178.76)
  EPYC 7702    177.62  (SE +/- 1.88, N = 4; runs 172.48 - 181.39)
  EPYC 7282    183.22  (SE +/- 1.23, N = 3; runs 181.83 - 185.67)
  EPYC 7232P   211.49  (SE +/- 2.54, N = 3; runs 206.50 - 214.78)
  Built with: gcc -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms, fewer is better):
  EPYC 7532    7.338   (SE +/- 0.016, N = 3; runs 7.32 - 7.37)
  EPYC 7642    7.479   (SE +/- 0.003, N = 3; runs 7.47 - 7.48)
  EPYC 7502P   7.493   (SE +/- 0.048, N = 3; runs 7.44 - 7.59)
  EPYC 7662    7.542   (SE +/- 0.018, N = 3; runs 7.51 - 7.56)
  EPYC 7542    7.625   (SE +/- 0.252, N = 15; runs 7.17 - 10.86)
  EPYC 7552    8.142   (SE +/- 0.271, N = 15; runs 7.63 - 11.51)
  EPYC 7702    8.375   (SE +/- 0.014, N = 3; runs 8.35 - 8.40)
  EPYC 7402P   8.978   (SE +/- 0.114, N = 3; runs 8.84 - 9.20)
  EPYC 7F32    9.357   (SE +/- 0.062, N = 14; runs 9.16 - 10.00)
  EPYC 7302P   9.504   (SE +/- 0.061, N = 15; runs 9.26 - 10.27)
  EPYC 7282    9.610   (SE +/- 0.027, N = 3; runs 9.58 - 9.66)
  EPYC 7232P   9.650   (SE +/- 0.071, N = 11; runs 9.30 - 10.14)
  EPYC 7272    9.768   (SE +/- 0.115, N = 4; runs 9.49 - 10.05)
  EPYC 7F52    10.209  (SE +/- 0.064, N = 3; runs 10.14 - 10.34)
  Built with: g++ -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee, Total Benchmark Time (Seconds, fewer is better):
  EPYC 7542    52.13  (SE +/- 0.04, N = 3; runs 52.04 - 52.19)
  EPYC 7502P   52.74  (SE +/- 0.03, N = 3; runs 52.69 - 52.79)
  EPYC 7F52    53.15  (SE +/- 0.12, N = 3; runs 52.90 - 53.28)
  EPYC 7402P   53.93  (SE +/- 0.04, N = 3; runs 53.88 - 54.00)
  EPYC 7552    54.49  (SE +/- 0.08, N = 3; runs 54.34 - 54.59)
  EPYC 7532    54.77  (SE +/- 0.06, N = 3; runs 54.71 - 54.89)
  EPYC 7662    55.04  (SE +/- 0.01, N = 3; runs 55.03 - 55.06)
  EPYC 7702    56.03  (SE +/- 0.04, N = 3; runs 55.95 - 56.09)
  EPYC 7302P   58.51  (SE +/- 0.01, N = 3; runs 58.49 - 58.52)
  EPYC 7282    61.54  (SE +/- 0.59, N = 3; runs 60.92 - 62.73)
  EPYC 7F32    62.24  (SE +/- 0.08, N = 3; runs 62.13 - 62.40)
  EPYC 7272    62.76  (SE +/- 0.05, N = 3; runs 62.66 - 62.82)
  EPYC 7232P   72.47  (SE +/- 0.02, N = 3; runs 72.44 - 72.51)
  Tested: RawTherapee 5.8, command line.

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Fast (Seconds, fewer is better):
  EPYC 7F52    5.51  (SE +/- 0.01, N = 7; runs 5.49 - 5.54)
  EPYC 7702    5.76  (SE +/- 0.01, N = 7; runs 5.75 - 5.80)
  EPYC 7662    5.83  (SE +/- 0.00, N = 7; runs 5.82 - 5.84)
  EPYC 7542    5.83  (SE +/- 0.01, N = 7; runs 5.81 - 5.86)
  EPYC 7552    5.89  (SE +/- 0.01, N = 7; runs 5.87 - 5.91)
  EPYC 7502P   5.93  (SE +/- 0.01, N = 7; runs 5.92 - 5.96)
  EPYC 7532    6.05  (SE +/- 0.01, N = 7; runs 6.01 - 6.07)
  EPYC 7402P   6.06  (SE +/- 0.00, N = 7; runs 6.04 - 6.07)
  EPYC 7F32    6.28  (SE +/- 0.01, N = 7; runs 6.24 - 6.31)
  EPYC 7302P   6.47  (SE +/- 0.01, N = 6; runs 6.45 - 6.50)
  EPYC 7282    6.68  (SE +/- 0.00, N = 6; runs 6.66 - 6.69)
  EPYC 7272    6.97  (SE +/- 0.01, N = 6; runs 6.93 - 7.01)
  EPYC 7232P   7.65  (SE +/- 0.01, N = 6; runs 7.60 - 7.69)
  Built with: g++ -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1, Time To Compile (Seconds, fewer is better):
  EPYC 7F52    82.92   (SE +/- 0.08, N = 3; runs 82.80 - 83.07)
  EPYC 7542    84.83   (SE +/- 0.07, N = 3; runs 84.74 - 84.97)
  EPYC 7502P   85.96   (SE +/- 0.03, N = 3; runs 85.93 - 86.02)
  EPYC 7702    86.37   (SE +/- 0.07, N = 3; runs 86.23 - 86.48)
  EPYC 7552    86.95   (SE +/- 0.01, N = 3; runs 86.95 - 86.96)
  EPYC 7402P   86.97   (SE +/- 0.03, N = 3; runs 86.94 - 87.03)
  EPYC 7662    87.02   (SE +/- 0.04, N = 3; runs 86.98 - 87.09)
  EPYC 7642    87.21   (SE +/- 0.01, N = 3; runs 87.20 - 87.22)
  EPYC 7532    88.42   (SE +/- 0.11, N = 3; runs 88.21 - 88.60)
  EPYC 7302P   92.26   (SE +/- 0.09, N = 3; runs 92.07 - 92.37)
  EPYC 7F32    94.01   (SE +/- 0.08, N = 3; runs 93.84 - 94.13)
  EPYC 7282    94.48   (SE +/- 0.05, N = 3; runs 94.41 - 94.59)
  EPYC 7272    99.22   (SE +/- 0.06, N = 3; runs 99.11 - 99.30)
  EPYC 7232P   114.83  (SE +/- 0.11, N = 3; runs 114.63 - 115.00)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3-v3 - Model: alexnet (ms, fewer is better):
  EPYC 7F52    7.42   (SE +/- 0.24, N = 3; runs 7.14 - 7.89)
  EPYC 7F32    7.64   (SE +/- 0.01, N = 3; runs 7.62 - 7.66)
  EPYC 7532    7.72   (SE +/- 0.02, N = 3; runs 7.69 - 7.74)
  EPYC 7542    8.30   (SE +/- 0.18, N = 3; runs 8.12 - 8.65)
  EPYC 7502P   8.41   (SE +/- 0.02, N = 3; runs 8.38 - 8.45)
  EPYC 7702    9.03   (SE +/- 0.07, N = 12; runs 8.77 - 9.73)
  EPYC 7282    10.18  (SE +/- 0.03, N = 3; runs 10.14 - 10.23)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7, Input: AUSURF112 (Seconds, fewer is better):
  EPYC 7702    1208.31  (SE +/- 1.52, N = 3; runs 1205.82 - 1211.05)
  EPYC 7662    1216.50  (SE +/- 1.64, N = 3; runs 1214.11 - 1219.63)
  EPYC 7542    1317.27  (SE +/- 3.48, N = 3; runs 1313.60 - 1324.22)
  EPYC 7552    1329.90  (SE +/- 16.58, N = 4; runs 1306.89 - 1377.55)
  EPYC 7402P   1342.14  (SE +/- 1.29, N = 3; runs 1339.87 - 1344.32)
  EPYC 7F32    1356.26  (SE +/- 0.33, N = 3; runs 1355.84 - 1356.92)
  EPYC 7F52    1357.30  (SE +/- 11.50, N = 3; runs 1338.99 - 1378.50)
  EPYC 7642    1372.73  (SE +/- 22.87, N = 9; runs 1251.31 - 1438.64)
  EPYC 7502P   1386.78  (SE +/- 5.50, N = 3; runs 1377.07 - 1396.11)
  EPYC 7302P   1403.52  (SE +/- 5.19, N = 3; runs 1394.10 - 1411.99)
  EPYC 7532    1403.87  (SE +/- 6.34, N = 3; runs 1396.50 - 1416.50)
  EPYC 7282    1456.82  (SE +/- 4.40, N = 3; runs 1450.92 - 1465.43)
  EPYC 7272    1520.43  (SE +/- 1.18, N = 3; runs 1518.10 - 1521.97)
  EPYC 7232P   1656.88  (SE +/- 0.83, N = 3; runs 1655.62 - 1658.44)
  Built with: gfortran -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: alexnet (ms, fewer is better):
  EPYC 7F52    7.49   (SE +/- 0.10, N = 3; runs 7.31 - 7.67)
  EPYC 7F32    7.63   (SE +/- 0.01, N = 3; runs 7.62 - 7.64)
  EPYC 7532    7.72   (SE +/- 0.01, N = 3; runs 7.71 - 7.73)
  EPYC 7542    8.11   (SE +/- 0.01, N = 3; runs 8.10 - 8.12)
  EPYC 7502P   8.40   (SE +/- 0.01, N = 3; runs 8.39 - 8.42)
  EPYC 7702    9.02   (SE +/- 0.08, N = 12; runs 8.80 - 9.86)
  EPYC 7282    10.23  (SE +/- 0.07, N = 3; runs 10.11 - 10.34)
  Built with: g++ -O3 -rdynamic -lgomp -lpthread

[Graph omitted — NCNN 20201218, Target: CPU - Model: alexnet. ms, fewer is better. Fastest of 14 EPYC systems: 7302P (avg 7.44 ms); slowest: 7282 (avg 10.16 ms). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Caffe

This is a benchmark of the Caffe deep learning framework. It currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 200. Milli-Seconds, fewer is better. Fastest of 14 EPYC systems: 7662 (avg 127,699 ms); slowest: 7642 (avg 173,812 ms). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas]

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — Mobile Neural Network 1.1.1, Model: inception-v3. ms, fewer is better. Fastest of 14 EPYC systems: 7542 (avg 31.87 ms); slowest: 7F52 (avg 43.36 ms). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl]

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
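The ONNX Runtime charts below report Inferences Per Minute, while the NCNN and MNN tests report per-inference milliseconds. The two units relate by a simple conversion, sketched here purely for illustration (not part of the test profile):

```python
def inferences_per_minute(latency_ms: float) -> float:
    """Convert a single-inference latency in milliseconds to inferences/minute."""
    # 60,000 ms per minute divided by the per-inference latency.
    return 60_000.0 / latency_ms

# e.g. a model averaging 120 ms per inference sustains 500 inferences/minute
rate = inferences_per_minute(120.0)
```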

[Graph omitted — ONNX Runtime 1.6, Model: fcn-resnet101-11 - Device: OpenMP CPU. Inferences Per Minute, more is better. Highest of 14 EPYC systems: 7642 (avg 80.17); lowest: 7232P (avg 58.5). 1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt]

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
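The C-Blosc result below is reported as MB/s of input processed. A minimal sketch of how such a throughput figure is typically measured, using Python's stdlib zlib as a stand-in codec (C-Blosc itself is a C library; this is an illustration under that assumption, not the actual test harness):

```python
import time
import zlib

def compress_throughput_mb_s(data: bytes, repeats: int = 5) -> float:
    # Time several compression passes and report the best-case MB/s,
    # mirroring how compressor benchmarks report peak throughput.
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        compressed = zlib.compress(data, level=1)
        best = min(best, time.perf_counter() - t0)
    assert zlib.decompress(compressed) == data  # round-trip sanity check
    return (len(data) / 1e6) / best

payload = b"benchmark " * 100_000  # ~1 MB of highly compressible input
rate = compress_throughput_mb_s(payload)
```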

[Graph omitted — C-Blosc 2.0 Beta 5, Compressor: blosclz. MB/s, more is better. Highest of 14 EPYC systems: 7F32 (avg 11,206.8 MB/s); lowest: 7662 (avg 8,328.2 MB/s). 1. (CXX) g++ options: -rdynamic]

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — ONNX Runtime 1.6, Model: bertsquad-10 - Device: OpenMP CPU. Inferences Per Minute, more is better. Highest of 14 EPYC systems: 7402P (avg 500.22); lowest: 7F52 (avg 371.92). 1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt]

[Graph omitted — ONNX Runtime 1.6, Model: super-resolution-10 - Device: OpenMP CPU. Inferences Per Minute, more is better. Highest of 14 EPYC systems: 7F52 (avg 5211.92); lowest: 7232P (avg 3941). 1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt]

Nebular Empirical Analysis Tool

NEAT is the Nebular Empirical Analysis Tool for empirical analysis of ionised nebulae, with uncertainty propagation. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — Nebular Empirical Analysis Tool 2020-02-29. Seconds, fewer is better. Fastest of 14 EPYC systems: 7F52 (avg 16.20 s); slowest: 7232P (avg 21.35 s). 1. (F9X) gfortran options: -cpp -ffree-line-length-0 -Jsource/ -fopenmp -O3 -fno-backtrace]

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — Basis Universal 1.12, Settings: ETC1S. Seconds, fewer is better. Fastest of 13 EPYC systems: 7F52 (avg 49.34 s); slowest: 7232P (avg 64.83 s). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread]

[Graph omitted — Basis Universal 1.12, Settings: UASTC Level 0. Seconds, fewer is better. Fastest of 13 EPYC systems: 7F52 (avg 7.453 s); slowest: 7232P (avg 9.756 s). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread]

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — NCNN 20201218, Target: CPU-v2-v2-v2 - Model: yolov4-tiny. ms, fewer is better. Fastest of 7 EPYC systems: 7F32 (avg 27.43 ms); slowest: 7702 (avg 35.87 ms). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Caffe

This is a benchmark of the Caffe deep learning framework. It currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 200. Milli-Seconds, fewer is better. Fastest of 14 EPYC systems: 7F32 (avg 340,168 ms); slowest: 7532 (avg 441,978 ms). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas]

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.
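Each averaged result in this file carries a standard-error figure over N runs (the "SE +/-" annotations). A minimal sketch of that calculation, shown here only to explain the notation:

```python
import math

def mean_and_se(samples):
    # Standard error of the mean: sample standard deviation / sqrt(N).
    # This is the "SE +/-" value reported alongside each averaged result.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # Bessel-corrected
    return mean, math.sqrt(var) / math.sqrt(n)

# three timed compile runs in seconds, as in an N = 3 result
avg, se = mean_and_se([21.59, 21.62, 21.66])
```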

[Graph omitted — Timed Apache Compilation 2.4.41, Time To Compile. Seconds, fewer is better. Fastest of 14 EPYC systems: 7F52 (avg 21.62 s); slowest: 7232P (avg 28.08 s).]

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
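To make concrete what FFTW computes, here is a naive O(n^2) discrete Fourier transform in Python; FFTW itself uses O(n log n) FFT algorithms, so this is a reference definition, not how the library works internally:

```python
import cmath

def dft(signal):
    # Naive discrete Fourier transform: X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n).
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A real cosine at bin 3 of a 16-sample frame concentrates its energy
# in bins 3 and 13 (the conjugate-symmetric pair), each with magnitude n/2.
n = 16
tone = [cmath.exp(2j * cmath.pi * 3 * t / n).real for t in range(n)]
spectrum = dft(tone)
```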

[Graph omitted — FFTW 3.3.6, Build: Float + SSE - Size: 2D FFT Size 4096. Mflops, more is better. Highest of 14 EPYC systems: 7F52 (avg 19,037); lowest: 7232P (avg 14,713). 1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm]

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA. Seconds, fewer is better. Fastest of 14 EPYC systems: 7F32 (avg 8.598 s); slowest: 7232P (avg 11.112 s). 1. (CC) gcc options: -std=c99 -O3 -lm -lpthread]

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — NCNN 20201218, Target: CPU - Model: yolov4-tiny. ms, fewer is better. Fastest of 14 EPYC systems: 7F32 (avg 27.61 ms); slowest: 7F52 (avg 35.38 ms). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
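InfluxDB ingests points in its line protocol (measurement, comma-separated tag set, field set, timestamp). A sketch of constructing one such point; the `m0`/`tag0`/`v0` names loosely imitate inch's generated schema and are an assumption for illustration, not taken from this test profile:

```python
def line_protocol_point(measurement, tags, fields, timestamp_ns):
    # InfluxDB line protocol: measurement,tag=v,... field=v,... timestamp
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = line_protocol_point("m0",
                            {"tag0": "value1", "tag1": "value2"},
                            {"v0": "1i"},  # "1i" marks an integer field value
                            1_600_000_000_000_000_000)
```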

[Graph omitted — InfluxDB 1.8.2, Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000. val/sec, more is better. Highest of 14 EPYC systems: 7F52 (avg 1,438,050.4); lowest: 7232P (avg 1,113,861.3).]

Numpy Benchmark

This is a test to obtain general NumPy performance. Learn more via the OpenBenchmarking.org test page.
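The overall summaries offered for result files like this one (see the Statistics options above) use geometric means, which combine scores from benchmarks on very different scales without letting any one unit dominate. A minimal sketch:

```python
import math

def geometric_mean(scores):
    # Geometric mean via logs: exp(mean(log(s))), numerically stable
    # even when scores span several orders of magnitude.
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# e.g. combining a few of the NumPy scores reported below
overall = geometric_mean([348.66, 345.36, 311.97])
```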

[Graph omitted — Numpy Benchmark. Score, more is better. Highest of 14 EPYC systems: 7F52 (avg 348.66); lowest: 7232P (avg 270.83).]

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
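CouchDB bulk insertion posts a JSON body to the database's `_bulk_docs` endpoint. A sketch of constructing a 100-document batch like the "Bulk Size: 100" runs below; the document shape is illustrative, not the benchmark's actual payload:

```python
import json

def bulk_docs_payload(docs):
    # CouchDB bulk insertion posts {"docs": [...]} to /<db>/_bulk_docs;
    # this builds that request body as a JSON string.
    return json.dumps({"docs": docs})

# one batch of 100 documents, mirroring "Bulk Size: 100"
body = bulk_docs_payload([{"_id": f"doc-{i}", "value": i} for i in range(100)])
```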

[Graph omitted — Apache CouchDB 3.1.1, Bulk Size: 100 - Inserts: 1000 - Rounds: 24. Seconds, fewer is better. Fastest of 14 EPYC systems: 7F52 (avg 83.28 s); slowest: 7232P (avg 107.01 s). 1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD]

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

[Graph omitted — DaCapo Benchmark 9.12-MR1, Java Test: Jython. msec, fewer is better. Fastest of 13 EPYC systems: 7F52 (avg 4374.57 ms); slowest: 7232P (avg 5579.5 ms).]

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3-v3 - Model: yolov4-tiny (ms, fewer is better):

  EPYC 7F32    27.81   (SE 0.34, N 3,  min 27.19, max 28.35; per-inference min 26.87, max 29.43)
  EPYC 7542    28.38   (SE 0.11, N 3,  min 28.17, max 28.49; per-inference min 27.87, max 30.83)
  EPYC 7502P   28.45   (SE 0.06, N 3,  min 28.33, max 28.53; per-inference min 28.02, max 30.80)
  EPYC 7F52    28.89   (SE 0.21, N 3,  min 28.47, max 29.15; per-inference min 28.11, max 30.08)
  EPYC 7532    29.42   (SE 0.07, N 3,  min 29.33, max 29.55; per-inference min 28.77, max 32.92)
  EPYC 7282    30.22   (SE 0.23, N 3,  min 29.96, max 30.69; per-inference min 28.50, max 91.38)
  EPYC 7702    35.45   (SE 0.22, N 12, min 34.59, max 36.88; per-inference min 33.92, max 177.52)

Compiled with: (CXX) g++ -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time in Seconds, fewer is better; 3 runs per system):

  EPYC 7F52    17.67   (SE 0.14, min 17.40, max 17.81)
  EPYC 7F32    17.70   (SE 0.10, min 17.51, max 17.84)
  EPYC 7542    20.10   (SE 0.17, min 19.77, max 20.29)
  EPYC 7502P   20.29   (SE 0.13, min 20.10, max 20.55)
  EPYC 7702    20.39   (SE 0.10, min 20.19, max 20.52)
  EPYC 7402P   20.50   (SE 0.09, min 20.34, max 20.64)
  EPYC 7552    20.85   (SE 0.09, min 20.67, max 20.97)
  EPYC 7532    20.86   (SE 0.09, min 20.71, max 21.01)
  EPYC 7642    20.86   (SE 0.06, min 20.76, max 20.97)
  EPYC 7302P   20.87   (SE 0.20, min 20.48, max 21.12)
  EPYC 7662    20.90   (SE 0.09, min 20.76, max 21.07)
  EPYC 7272    21.27   (SE 0.15, min 21.11, max 21.58)
  EPYC 7282    21.38   (SE 0.20, min 21.02, max 21.69)
  EPYC 7232P   22.45   (SE 0.03, min 22.41, max 22.50)

Compiled with: (CC) gcc -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
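The python_startup benchmark below measures how quickly a fresh CPython interpreter can start and exit. The same idea can be sketched with only the standard library; the helper name and run count here are illustrative and not part of PyPerformance itself:

```python
import subprocess
import sys
import time

def measure_startup_ms(runs: int = 5) -> float:
    """Time how long a fresh CPython interpreter takes to start and exit."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        # Launch the same interpreter with a no-op program.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        samples.append((time.perf_counter() - t0) * 1000.0)
    # Report the fastest run to reduce scheduling noise.
    return min(samples)
```

Reporting the minimum rather than the mean is a common choice for startup-style microbenchmarks, since outside interference can only make a run slower, never faster.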

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, fewer is better; 3 runs per system):

  EPYC 7F32    7.47   (SE 0.00, min 7.47, max 7.48)
  EPYC 7F52    7.54   (SE 0.00, min 7.54, max 7.55)
  EPYC 7542    8.57   (SE 0.00, min 8.57, max 8.57)
  EPYC 7502P   8.62   (SE 0.00, min 8.61, max 8.62)
  EPYC 7702    8.71   (SE 0.00, min 8.71, max 8.71)
  EPYC 7402P   8.72   (SE 0.00, min 8.72, max 8.73)
  EPYC 7532    8.79   (SE 0.00, min 8.78, max 8.79)
  EPYC 7662    8.80   (SE 0.00, min 8.80, max 8.80)
  EPYC 7302P   8.82   (SE 0.00, min 8.81, max 8.82)
  EPYC 7552    8.83   (SE 0.01, min 8.81, max 8.84)
  EPYC 7642    8.84   (SE 0.01, min 8.83, max 8.85)
  EPYC 7272    9.06   (SE 0.00, min 9.05, max 9.06)
  EPYC 7282    9.13   (SE 0.00, min 9.12, max 9.13)
  EPYC 7232P   9.49   (SE 0.00, min 9.49, max 9.49)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time in Seconds, fewer is better; 3 runs per system):

  EPYC 7F52    36.44   (SE 0.07, min 36.33, max 36.56)
  EPYC 7F32    36.49   (SE 0.02, min 36.45, max 36.52)
  EPYC 7542    41.66   (SE 0.02, min 41.62, max 41.70)
  EPYC 7702    42.29   (SE 0.01, min 42.26, max 42.30)
  EPYC 7402P   42.30   (SE 0.18, min 42.03, max 42.64)
  EPYC 7502P   42.31   (SE 0.04, min 42.23, max 42.35)
  EPYC 7552    42.86   (SE 0.09, min 42.71, max 43.03)
  EPYC 7302P   42.88   (SE 0.07, min 42.75, max 42.95)
  EPYC 7532    42.90   (SE 0.04, min 42.83, max 42.94)
  EPYC 7642    42.96   (SE 0.09, min 42.78, max 43.08)
  EPYC 7662    43.14   (SE 0.12, min 43.00, max 43.38)
  EPYC 7272    44.27   (SE 0.06, min 44.18, max 44.39)
  EPYC 7282    44.37   (SE 0.08, min 44.26, max 44.53)
  EPYC 7232P   46.30   (SE 0.08, min 46.14, max 46.42)

Compiled with: (CC) gcc -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3-v3 - Model: vgg16 (ms, fewer is better):

  EPYC 7542    31.55   (SE 0.10, N 3,  min 31.41, max 31.74; per-inference min 31.20, max 34.12)
  EPYC 7502P   31.60   (SE 0.02, N 3,  min 31.57, max 31.62; per-inference min 31.34, max 33.82)
  EPYC 7F32    33.67   (SE 0.04, N 3,  min 33.60, max 33.72; per-inference min 33.38, max 34.39)
  EPYC 7532    33.81   (SE 0.37, N 3,  min 33.31, max 34.53; per-inference min 32.79, max 97.48)
  EPYC 7702    38.63   (SE 0.23, N 12, min 37.71, max 40.18; per-inference min 36.39, max 183.60)
  EPYC 7F52    39.03   (SE 0.11, N 3,  min 38.80, max 39.17; per-inference min 38.43, max 68.50)
  EPYC 7282    40.03   (SE 0.06, N 3,  min 39.91, max 40.09; per-inference min 38.49, max 116.08)

Compiled with: (CXX) g++ -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better; 3 runs per system):

  EPYC 7F52    2.51   (SE 0.00, min 2.50, max 2.51)
  EPYC 7F32    2.38   (SE 0.01, min 2.37, max 2.39)
  EPYC 7542    2.22   (SE 0.01, min 2.21, max 2.23)
  EPYC 7502P   2.20   (SE 0.00, min 2.20, max 2.20)
  EPYC 7402P   2.19   (SE 0.00, min 2.19, max 2.19)
  EPYC 7702    2.17   (SE 0.00, min 2.17, max 2.17)
  EPYC 7662    2.16   (SE 0.00, min 2.15, max 2.16)
  EPYC 7552    2.16   (SE 0.00, min 2.15, max 2.16)
  EPYC 7532    2.15   (SE 0.00, min 2.14, max 2.15)
  EPYC 7302P   2.14   (SE 0.00, min 2.14, max 2.15)
  EPYC 7642    2.12   (SE 0.00, min 2.12, max 2.13)
  EPYC 7282    2.08   (SE 0.01, min 2.07, max 2.09)
  EPYC 7272    2.05   (SE 0.01, min 2.04, max 2.06)
  EPYC 7232P   1.98   (SE 0.00, min 1.97, max 1.98)

Compiled with: (CXX) g++ -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0, Encoder Mode: Speed 6 Two-Pass (Frames Per Second, more is better; 3 runs per system):

  EPYC 7F52    3.90   (SE 0.01, min 3.89, max 3.91)
  EPYC 7F32    3.73   (SE 0.00, min 3.73, max 3.74)
  EPYC 7542    3.48   (SE 0.00, min 3.48, max 3.48)
  EPYC 7502P   3.42   (SE 0.01, min 3.41, max 3.43)
  EPYC 7402P   3.42   (SE 0.00, min 3.42, max 3.43)
  EPYC 7702    3.38   (SE 0.00, min 3.38, max 3.39)
  EPYC 7532    3.37   (SE 0.00, min 3.36, max 3.37)
  EPYC 7552    3.37   (SE 0.00, min 3.37, max 3.38)
  EPYC 7662    3.36   (SE 0.00, min 3.36, max 3.36)
  EPYC 7302P   3.36   (SE 0.01, min 3.33, max 3.37)
  EPYC 7642    3.33   (SE 0.01, min 3.32, max 3.35)
  EPYC 7282    3.23   (SE 0.02, min 3.21, max 3.27)
  EPYC 7272    3.19   (SE 0.01, min 3.18, max 3.21)
  EPYC 7232P   3.08   (SE 0.01, min 3.07, max 3.10)

Compiled with: (CXX) g++ -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation image capabilities, offering better quality and compression than legacy JPEG. This test profile currently focuses on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: PNG - Encode Speed: 5 (MP/s, more is better; 3 runs per system):

  EPYC 7542    80.05   (SE 0.06, min 79.94, max 80.11)
  EPYC 7502P   79.59   (SE 0.13, min 79.43, max 79.84)
  EPYC 7532    76.18   (SE 0.21, min 75.81, max 76.52)
  EPYC 7F52    73.83   (SE 0.12, min 73.58, max 73.98)
  EPYC 7282    67.29   (SE 0.03, min 67.24, max 67.35)
  EPYC 7F32    63.51   (SE 0.02, min 63.47, max 63.54)

Compiled with: (CXX) g++ -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better; 3 runs per system):

  EPYC 7F52    21.76   (SE 0.02, min 21.73, max 21.79)
  EPYC 7F32    20.22   (SE 0.03, min 20.16, max 20.28)
  EPYC 7542    20.19   (SE 0.02, min 20.17, max 20.22)
  EPYC 7502P   19.97   (SE 0.04, min 19.90, max 20.03)
  EPYC 7402P   19.88   (SE 0.07, min 19.75, max 19.97)
  EPYC 7702    19.84   (SE 0.01, min 19.82, max 19.86)
  EPYC 7552    19.63   (SE 0.02, min 19.61, max 19.66)
  EPYC 7662    19.59   (SE 0.01, min 19.58, max 19.60)
  EPYC 7532    19.51   (SE 0.03, min 19.46, max 19.57)
  EPYC 7642    19.48   (SE 0.02, min 19.45, max 19.52)
  EPYC 7302P   19.11   (SE 0.16, min 18.83, max 19.38)
  EPYC 7282    18.92   (SE 0.07, min 18.80, max 19.03)
  EPYC 7272    17.83   (SE 0.05, min 17.77, max 17.92)
  EPYC 7232P   17.30   (SE 0.05, min 17.21, max 17.37)

Compiled with: (CXX) g++ -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: float (Milliseconds, fewer is better):

  EPYC 7F52    114
  EPYC 7F32    115
  EPYC 7502P   132
  EPYC 7702    133
  EPYC 7402P   133
  EPYC 7552    134
  EPYC 7662    134
  EPYC 7532    134
  EPYC 7302P   135
  EPYC 7542    135
  EPYC 7642    135
  EPYC 7272    138
  EPYC 7282    139
  EPYC 7232P   143

Run spreads were reported for only four systems (SE 0.33 to 0.67, N 3): min 131 / avg 131.67 / max 132, min 132 / avg 133 / max 134, min 133 / avg 133.67 / max 135, and min 134 / avg 135 / max 136.

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds, fewer is better; 3 runs per system):

  EPYC 7F52     83.02   (SE 0.02, min 82.99, max 83.06)
  EPYC 7F32     83.50   (SE 0.02, min 83.48, max 83.55)
  EPYC 7402P    94.84   (SE 0.05, min 94.73, max 94.91)
  EPYC 7542     94.97   (SE 0.03, min 94.92, max 95.02)
  EPYC 7502P    95.81   (SE 0.01, min 95.78, max 95.83)
  EPYC 7702     96.09   (SE 0.05, min 96.00, max 96.18)
  EPYC 7302P    97.20   (SE 0.03, min 97.15, max 97.27)
  EPYC 7552     97.22   (SE 0.01, min 97.20, max 97.24)
  EPYC 7532     97.37   (SE 0.00, min 97.37, max 97.38)
  EPYC 7662     97.55   (SE 0.02, min 97.52, max 97.60)
  EPYC 7642     97.99   (SE 0.03, min 97.94, max 98.06)
  EPYC 7272     99.84   (SE 0.04, min 99.78, max 99.91)
  EPYC 7282     99.92   (SE 0.02, min 99.90, max 99.96)
  EPYC 7232P   103.98   (SE 0.03, min 103.94, max 104.04)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous driving. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, more is better; 3 runs per system):

  EPYC 7F32    967.37   (SE 2.91, min 961.58, max 970.70)
  EPYC 7542    908.46   (SE 3.23, min 902.91, max 914.10)
  EPYC 7642    903.30   (SE 1.66, min 900.01, max 905.33)
  EPYC 7402P   899.45   (SE 2.49, min 895.16, max 903.77)
  EPYC 7502P   898.84   (SE 3.41, min 892.02, max 902.40)
  EPYC 7302P   898.21   (SE 1.97, min 894.42, max 901.07)
  EPYC 7662    897.59   (SE 7.17, min 885.88, max 910.61)
  EPYC 7552    895.52   (SE 3.34, min 888.84, max 899.09)
  EPYC 7532    895.33   (SE 1.89, min 892.13, max 898.66)
  EPYC 7272    864.62   (SE 3.25, min 858.29, max 869.06)
  EPYC 7702    861.35   (SE 3.26, min 854.83, max 864.70)
  EPYC 7232P   861.07   (SE 1.85, min 857.36, max 863.04)
  EPYC 7282    858.54   (SE 3.07, min 852.55, max 862.73)
  EPYC 7F52    773.86   (SE 5.28, min 763.33, max 779.83)

Compiled with: (CXX) g++ -O3 -std=c++11 -fopenmp

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long rsvg-convert takes to complete various operations. Learn more via the OpenBenchmarking.org test page.

librsvg, Operation: SVG Files To PNG (Seconds, fewer is better; 3 runs per system):

  EPYC 7F32    20.03   (SE 0.04, min 19.96, max 20.10)
  EPYC 7F52    20.48   (SE 0.05, min 20.42, max 20.58)
  EPYC 7542    23.35   (SE 0.07, min 23.26, max 23.49)
  EPYC 7502P   23.65   (SE 0.06, min 23.57, max 23.77)
  EPYC 7402P   23.66   (SE 0.09, min 23.49, max 23.78)
  EPYC 7302P   23.78   (SE 0.07, min 23.65, max 23.89)
  EPYC 7272    24.09   (SE 0.09, min 23.94, max 24.24)
  EPYC 7532    24.09   (SE 0.06, min 24.00, max 24.20)
  EPYC 7702    24.34   (SE 0.03, min 24.29, max 24.40)
  EPYC 7282    24.46   (SE 0.04, min 24.41, max 24.53)
  EPYC 7552    24.53   (SE 0.05, min 24.43, max 24.60)
  EPYC 7662    24.73   (SE 0.06, min 24.63, max 24.84)
  EPYC 7232P   25.02   (SE 0.04, min 24.97, max 25.10)

rsvg-convert version 2.48.9

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: vgg16 (ms, fewer is better):

  EPYC 7542    31.64   (SE 0.03, N 3,  min 31.58, max 31.69; per-inference min 31.33, max 45.64)
  EPYC 7502P   31.84   (SE 0.04, N 3,  min 31.76, max 31.90; per-inference min 31.49, max 45.59)
  EPYC 7532    33.52   (SE 0.05, N 3,  min 33.43, max 33.57; per-inference min 32.72, max 36.12)
  EPYC 7F32    33.60   (SE 0.02, N 3,  min 33.55, max 33.62; per-inference min 33.35, max 34.17)
  EPYC 7702    38.40   (SE 0.19, N 12, min 37.11, max 39.65; per-inference min 35.78, max 101.98)
  EPYC 7F52    38.78   (SE 0.21, N 3,  min 38.40, max 39.14; per-inference min 37.99, max 49.15)
  EPYC 7282    39.49   (SE 0.16, N 3,  min 39.21, max 39.75; per-inference min 38.04, max 141.14)

Compiled with: (CXX) g++ -O3 -rdynamic -lgomp -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, and Shopify. Learn more via the OpenBenchmarking.org test page.
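Parser throughput of the kind reported below (GB/s) is simply bytes parsed divided by wall time. A minimal sketch of that measurement, using Python's stdlib json rather than simdjson itself, so the absolute numbers are not comparable; the function name and iteration count are illustrative:

```python
import json
import time

def parse_throughput_gbps(doc: bytes, iters: int = 200) -> float:
    """Parse the same JSON document repeatedly and report GB/s."""
    t0 = time.perf_counter()
    for _ in range(iters):
        json.loads(doc)
    elapsed = time.perf_counter() - t0
    # Total bytes pushed through the parser, per second, in GB.
    return (len(doc) * iters) / elapsed / 1e9
```

The real benchmark uses large fixed documents (e.g. a Twitter API dump for PartialTweets), so the numbers here are dominated by parsing rather than per-call overhead.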

simdjson 0.7.1, Throughput Test: PartialTweets (GB/s, more is better; 3 runs per system):

  EPYC 7F32    0.62   (SE 0.00, min 0.62, max 0.62)
  EPYC 7F52    0.61   (SE 0.00, min 0.61, max 0.62)
  EPYC 7542    0.53   (SE 0.01, min 0.52, max 0.54)
  EPYC 7402P   0.53   (SE 0.00, min 0.52, max 0.53)
  EPYC 7642    0.52   (SE 0.00, min 0.51, max 0.52)
  EPYC 7502P   0.52   (SE 0.00, min 0.52, max 0.53)
  EPYC 7662    0.52   (SE 0.00, min 0.51, max 0.52)
  EPYC 7552    0.52   (SE 0.00, min 0.51, max 0.52)
  EPYC 7302P   0.52   (SE 0.00, min 0.52, max 0.52)
  EPYC 7702    0.52   (SE 0.00, min 0.52, max 0.53)
  EPYC 7532    0.51   (SE 0.00, min 0.51, max 0.52)
  EPYC 7282    0.50   (SE 0.01, min 0.49, max 0.51)
  EPYC 7272    0.50   (SE 0.01, min 0.49, max 0.51)
  EPYC 7232P   0.50   (SE 0.01, min 0.49, max 0.51)

Compiled with: (CXX) g++ -O3 -pthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: pathlib (Milliseconds, fewer is better; 3 runs per system):

  EPYC 7F32    17.2   (SE 0.00, min 17.2, max 17.2)
  EPYC 7F52    17.5   (SE 0.00, min 17.5, max 17.5)
  EPYC 7502P   19.9   (SE 0.00, min 19.9, max 19.9)
  EPYC 7542    19.9   (SE 0.03, min 19.8, max 19.9)
  EPYC 7702    20.2   (SE 0.00, min 20.2, max 20.2)
  EPYC 7402P   20.2   (SE 0.03, min 20.2, max 20.3)
  EPYC 7302P   20.4   (SE 0.03, min 20.4, max 20.5)
  EPYC 7532    20.4   (SE 0.03, min 20.4, max 20.5)
  EPYC 7552    20.5   (SE 0.00, min 20.5, max 20.5)
  EPYC 7662    20.5   (SE 0.03, min 20.5, max 20.6)
  EPYC 7642    20.5   (SE 0.00, min 20.5, max 20.5)
  EPYC 7272    21.0   (SE 0.03, min 20.9, max 21.0)
  EPYC 7232P   21.1   (SE 0.00, min 21.1, max 21.1)
  EPYC 7282    21.3   (SE 0.03, min 21.2, max 21.3)

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2, Elapsed Time (Nodes Per Second, more is better; 3 runs per system):

  EPYC 7F52    7940047   (SE  3201.01, min 7934298, max 7945361)
  EPYC 7F32    7939883   (SE  9056.97, min 7924088, max 7955460)
  EPYC 7542    6983945   (SE 15198.80, min 6962515, max 7013330)
  EPYC 7702    6899948   (SE  4763.31, min 6892591, max 6908868)
  EPYC 7402P   6880760   (SE  5954.89, min 6872751, max 6892398)
  EPYC 7502P   6804305   (SE  2921.91, min 6798906, max 6808941)
  EPYC 7302P   6787692   (SE 16881.14, min 6763111, max 6820026)
  EPYC 7662    6753902   (SE  6430.53, min 6743220, max 6765446)
  EPYC 7552    6674881   (SE 15133.11, min 6646698, max 6698529)
  EPYC 7272    6551351   (SE  4235.69, min 6543694, max 6558318)
  EPYC 7282    6547690   (SE 18409.69, min 6524614, max 6584075)
  EPYC 7532    6545211   (SE  8887.25, min 6532471, max 6562315)
  EPYC 7232P   6414609   (SE  8448.82, min 6399116, max 6428197)

Compiled with: (CC) gcc -pthread -lstdc++ -fprofile-use -lm

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, and Shopify. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1, Throughput Test: DistinctUserID (GB/s, more is better; 3 runs per system):

  EPYC 7F32    0.63   (SE 0.00, min 0.62, max 0.63)
  EPYC 7F52    0.62   (SE 0.01, min 0.61, max 0.63)
  EPYC 7542    0.54   (SE 0.00, min 0.54, max 0.55)
  EPYC 7702    0.54   (SE 0.00, min 0.53, max 0.54)
  EPYC 7502P   0.53   (SE 0.00, min 0.53, max 0.54)
  EPYC 7662    0.53   (SE 0.00, min 0.52, max 0.53)
  EPYC 7302P   0.53   (SE 0.00, min 0.53, max 0.53)
  EPYC 7402P   0.53   (SE 0.00, min 0.53, max 0.54)
  EPYC 7642    0.52   (SE 0.00, min 0.52, max 0.53)
  EPYC 7532    0.52   (SE 0.00, min 0.52, max 0.53)
  EPYC 7552    0.52   (SE 0.00, min 0.52, max 0.53)
  EPYC 7232P   0.52   (SE 0.00, min 0.51, max 0.52)
  EPYC 7282    0.51   (SE 0.00, min 0.51, max 0.52)
  EPYC 7272    0.51   (SE 0.00, min 0.51, max 0.52)

Compiled with: (CXX) g++ -O3 -pthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
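The compression-speed figures below are bytes of input divided by wall time. A minimal sketch of that measurement, using zlib as a stand-in since CPython's standard library has no LZ4 binding (the helper name and test data are illustrative):

```python
import time
import zlib

def compression_speed_mbs(data: bytes, level: int = 9) -> tuple:
    """Compress once at the given level; return (MB/s, compressed size)."""
    t0 = time.perf_counter()
    out = zlib.compress(data, level)  # zlib stand-in for LZ4
    elapsed = time.perf_counter() - t0
    return len(data) / elapsed / 1e6, len(out)
```

Higher levels trade compression speed for ratio, which is why the level-9 speeds above sit far below LZ4's default-level throughput.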

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s, more is better):

  EPYC 7F52    52.02   (SE 0.42, N 3,  min 51.53, max 52.86)
  EPYC 7F32    51.95   (SE 0.36, N 15, min 49.68, max 54.03)
  EPYC 7542    45.13   (SE 0.28, N 3,  min 44.77, max 45.69)
  EPYC 7402P   45.09   (SE 0.34, N 3,  min 44.42, max 45.46)
  EPYC 7662    44.80   (SE 0.42, N 3,  min 44.34, max 45.63)
  EPYC 7502P   44.68   (SE 0.39, N 3,  min 44.24, max 45.45)
  EPYC 7702    44.54   (SE 0.30, N 14, min 42.74, max 46.27)
  EPYC 7552    44.49   (SE 0.48, N 5,  min 43.45, max 45.67)
  EPYC 7302P   43.98   (SE 0.42, N 3,  min 43.51, max 44.82)
  EPYC 7532    43.69   (SE 0.02, N 3,  min 43.65, max 43.71)
  EPYC 7232P   43.17   (SE 0.44, N 5,  min 42.20, max 44.21)
  EPYC 7272    42.94   (SE 0.61, N 3,  min 42.28, max 44.16)
  EPYC 7282    42.25   (SE 0.13, N 3,  min 42.11, max 42.52)

Compiled with: (CC) gcc -O3

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0, Test: SMP Parallel (Seconds, fewer is better):

  EPYC 7F52    184.49
  EPYC 7F32    184.74
  EPYC 7542    214.26
  EPYC 7702    214.77
  EPYC 7502P   215.16
  EPYC 7532    217.33
  EPYC 7402P   217.55
  EPYC 7552    217.90
  EPYC 7302P   218.14
  EPYC 7642    218.42
  EPYC 7662    219.26
  EPYC 7272    223.38
  EPYC 7282    224.52
  EPYC 7232P   226.64

Minion

Minion is an open-source constraint solver designed to be very scalable. This test profile solves Minion's integrated benchmarking problems. Learn more via the OpenBenchmarking.org test page.

Minion 1.8, Benchmark: Graceful (Seconds, fewer is better; 3 runs per system):

  EPYC 7F52    44.84   (SE 0.06, min 44.76, max 44.96)
  EPYC 7F32    44.92   (SE 0.07, min 44.81, max 45.04)
  EPYC 7542    51.62   (SE 0.06, min 51.51, max 51.69)
  EPYC 7402P   52.30   (SE 0.00, min 52.29, max 52.30)
  EPYC 7502P   52.34   (SE 0.12, min 52.19, max 52.57)
  EPYC 7702    52.35   (SE 0.04, min 52.30, max 52.43)
  EPYC 7302P   52.93   (SE 0.02, min 52.90, max 52.97)
  EPYC 7662    53.07   (SE 0.07, min 52.97, max 53.19)
  EPYC 7532    53.18   (SE 0.06, min 53.07, max 53.26)
  EPYC 7642    53.20   (SE 0.06, min 53.13, max 53.31)
  EPYC 7552    53.29   (SE 0.11, min 53.08, max 53.44)
  EPYC 7272    54.63   (SE 0.12, min 54.40, max 54.76)
  EPYC 7282    54.76   (SE 0.08, min 54.61, max 54.89)
  EPYC 7232P   55.08   (SE 0.06, min 54.97, max 55.19)

Compiled with: (CXX) g++ -std=gnu++11 -O3 -fomit-frame-pointer -rdynamic

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: crypto_pyaes (Milliseconds, fewer is better):

  EPYC 7F52    110
  EPYC 7F32    111
  EPYC 7502P   128
  EPYC 7702    129
  EPYC 7542    129
  EPYC 7302P   130
  EPYC 7402P   131
  EPYC 7532    131
  EPYC 7552    133
  EPYC 7662    133
  EPYC 7272    134
  EPYC 7642    134
  EPYC 7232P   135
  EPYC 7282    135

A run spread was reported for only one system (SE 0.33, N 3): min 110 / avg 110.33 / max 111, matching the EPYC 7F52 result.

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527, Computational Fluid Dynamics (Seconds, fewer is better; 3 runs per system):

  EPYC 7F32    17.56   (SE 0.02, min 17.53, max 17.60)
  EPYC 7F52    17.60   (SE 0.04, min 17.53, max 17.67)
  EPYC 7542    20.11   (SE 0.02, min 20.10, max 20.15)
  EPYC 7402P   20.48   (SE 0.01, min 20.46, max 20.50)
  EPYC 7702    20.56   (SE 0.02, min 20.54, max 20.59)
  EPYC 7502P   20.69   (SE 0.00, min 20.68, max 20.70)
  EPYC 7642    20.74   (SE 0.01, min 20.73, max 20.77)
  EPYC 7662    20.78   (SE 0.01, min 20.77, max 20.80)
  EPYC 7532    20.79   (SE 0.04, min 20.71, max 20.84)
  EPYC 7302P   20.79   (SE 0.07, min 20.67, max 20.90)
  EPYC 7552    20.81   (SE 0.01, min 20.79, max 20.83)
  EPYC 7272    21.39   (SE 0.01, min 21.37, max 21.42)
  EPYC 7282    21.42   (SE 0.05, min 21.35, max 21.51)
  EPYC 7232P   21.55   (SE 0.03, min 21.50, max 21.61)

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This test runs libgcrypt's integrated benchmark command with a cipher/MAC/hash repetition count of 50, measuring total run time as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
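The shape of this repetition-count measurement can be sketched in Python. SHA-256 via the standard library's hashlib stands in for libgcrypt's ciphers/MACs/hashes, and the buffer size and repetition count are illustrative, not libgcrypt's actual parameters:

```python
import hashlib
import time

def timed_hash_reps(data: bytes, reps: int = 50) -> float:
    """Time `reps` hash passes over `data`, mimicking the
    repetition-count style of libgcrypt's integrated benchmark."""
    start = time.perf_counter()
    for _ in range(reps):
        hashlib.sha256(data).digest()
    return time.perf_counter() - start

elapsed = timed_hash_reps(b"\x00" * 1_000_000)  # 50 passes over 1 MB
```

The reported result is simply the elapsed wall-clock time, so lower is better, as in the table below.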

Gcrypt Library 1.9 (Seconds, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F32    223.71   +/- 0.09
  EPYC 7F52    223.75   +/- 0.11
  EPYC 7542    257.57   +/- 0.52
  EPYC 7702    261.45   +/- 0.70
  EPYC 7502P   262.12   +/- 1.03
  EPYC 7402P   262.91   +/- 1.43
  EPYC 7532    264.68   +/- 0.48
  EPYC 7552    264.99   +/- 0.57
  EPYC 7662    265.19   +/- 0.81
  EPYC 7642    265.92   +/- 1.00
  EPYC 7302P   268.68   +/- 0.38
  EPYC 7232P   273.02   +/- 0.64
  EPYC 7272    273.20   +/- 0.37
  EPYC 7282    274.30   +/- 0.34

(CC) gcc options: -O2 -fvisibility=hidden

Scikit-Learn

Scikit-learn is a Python module for machine learning. Learn more via the OpenBenchmarking.org test page.

Scikit-Learn 0.22.1 (Seconds, Fewer Is Better)

  EPYC 7F52     9.240   +/- 0.031 (N = 5)
  EPYC 7F32     9.293   +/- 0.004 (N = 5)
  EPYC 7542    10.074   +/- 0.158 (N = 15)
  EPYC 7502P   10.083   +/- 0.003 (N = 5)
  EPYC 7662    10.130   +/- 0.006 (N = 5)
  EPYC 7702    10.131   +/- 0.051 (N = 5)
  EPYC 7402P   10.154   +/- 0.006 (N = 5)
  EPYC 7302P   10.281   +/- 0.012 (N = 5)
  EPYC 7532    10.325   +/- 0.004 (N = 5)
  EPYC 7642    10.337   +/- 0.016 (N = 5)
  EPYC 7552    10.358   +/- 0.009 (N = 5)
  EPYC 7282    10.539   +/- 0.010 (N = 5)
  EPYC 7272    10.746   +/- 0.007 (N = 5)
  EPYC 7232P   11.327   +/- 0.168 (N = 15)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
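What a JSON-parsing throughput test measures can be illustrated with the standard library's json module standing in for simdjson (the document size, iteration count, and GB/s arithmetic here are illustrative, not the test profile's actual harness):

```python
import json
import time

def parse_throughput(doc: str, iters: int = 200) -> float:
    """Return parse throughput in MB/s for repeatedly parsing `doc`,
    conceptually what a JSON-parser benchmark like this reports."""
    size = len(doc.encode())
    start = time.perf_counter()
    for _ in range(iters):
        json.loads(doc)
    elapsed = time.perf_counter() - start
    return size * iters / elapsed / 1e6

# A synthetic document of many small records
doc = json.dumps([{"id": i, "name": f"item-{i}"} for i in range(1000)])
rate = parse_throughput(doc)
```

simdjson's SIMD-accelerated parsing is orders of magnitude faster than this pure-Python loop; only the measurement shape carries over.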

simdjson 0.7.1, Throughput Test: LargeRandom (GB/s, More Is Better; SE +/- 0.00, N = 3 for all)

  EPYC 7F32    0.38
  EPYC 7F52    0.38
  EPYC 7542    0.33
  EPYC 7502P   0.33
  EPYC 7302P   0.33
  EPYC 7402P   0.33
  EPYC 7702    0.33
  EPYC 7642    0.32
  EPYC 7532    0.32
  EPYC 7282    0.32
  EPYC 7662    0.32
  EPYC 7272    0.32
  EPYC 7552    0.32
  EPYC 7232P   0.31

(CXX) g++ options: -O3 -pthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
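A miniature analogue of this kind of SQLite timing run can be written with Python's built-in sqlite3 module; the schema, row count, and query below are illustrative, not speedtest1's actual workload:

```python
import sqlite3
import time

def mini_speedtest(rows: int = 10_000) -> float:
    """Time a bulk insert plus an aggregate query on an in-memory
    SQLite database -- a toy version of a speedtest-style workload."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
    start = time.perf_counter()
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, f"row{i}") for i in range(rows)))
    (total,) = con.execute("SELECT sum(a) FROM t").fetchone()
    elapsed = time.perf_counter() - start
    con.close()
    assert total == rows * (rows - 1) // 2  # sanity-check the data
    return elapsed
```

As with speedtest1, the result is total elapsed seconds, so lower is better.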

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F52    66.52   +/- 0.10
  EPYC 7F32    67.30   +/- 0.06
  EPYC 7402P   75.91   +/- 0.54
  EPYC 7542    76.16   +/- 0.19
  EPYC 7502P   77.19   +/- 0.36
  EPYC 7702    77.61   +/- 0.04
  EPYC 7302P   77.74   +/- 0.05
  EPYC 7662    77.94   +/- 0.11
  EPYC 7532    78.07   +/- 0.27
  EPYC 7642    78.23   +/- 0.11
  EPYC 7552    78.26   +/- 0.06
  EPYC 7282    80.05   +/- 0.18
  EPYC 7272    80.19   +/- 0.06
  EPYC 7232P   81.51   +/- 0.25

(CC) gcc options: -O2 -ldl -lz -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)

  EPYC 7F52    7.739   +/- 0.003 (N = 6)
  EPYC 7F32    7.745   +/- 0.010 (N = 6)
  EPYC 7542    8.861   +/- 0.004 (N = 5)
  EPYC 7702    9.003   +/- 0.005 (N = 5)
  EPYC 7402P   9.014   +/- 0.014 (N = 5)
  EPYC 7502P   9.014   +/- 0.025 (N = 5)
  EPYC 7552    9.126   +/- 0.001 (N = 5)
  EPYC 7532    9.140   +/- 0.005 (N = 5)
  EPYC 7642    9.145   +/- 0.009 (N = 5)
  EPYC 7302P   9.151   +/- 0.009 (N = 5)
  EPYC 7662    9.154   +/- 0.025 (N = 5)
  EPYC 7232P   9.414   +/- 0.004 (N = 5)
  EPYC 7272    9.419   +/- 0.009 (N = 5)
  EPYC 7282    9.481   +/- 0.025 (N = 5)

(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F32    286.43   +/- 0.35
  EPYC 7F52    288.47   +/- 0.41
  EPYC 7542    333.28   +/- 0.10
  EPYC 7702    333.30   +/- 0.62
  EPYC 7402P   336.66   +/- 0.03
  EPYC 7502P   336.89   +/- 0.72
  EPYC 7302P   339.05   +/- 0.94
  EPYC 7642    341.93   +/- 0.40
  EPYC 7662    342.47   +/- 0.29
  EPYC 7532    344.25   +/- 0.88
  EPYC 7552    344.33   +/- 0.46
  EPYC 7232P   347.18   +/- 0.14
  EPYC 7272    347.77   +/- 0.22
  EPYC 7282    350.76   +/- 0.21

(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2, Test: Integer + Elliptic Curve Public Key Algorithms (MiB/second, More Is Better; SE over N = 3 runs)

  EPYC 7F32    4835.02   +/- 2.49
  EPYC 7F52    4834.05   +/- 3.14
  EPYC 7542    4207.91   +/- 5.67
  EPYC 7702    4152.60   +/- 5.35
  EPYC 7402P   4151.29   +/- 2.96
  EPYC 7502P   4139.98   +/- 4.11
  EPYC 7662    4091.78   +/- 2.07
  EPYC 7532    4089.12   +/- 3.97
  EPYC 7302P   4086.08   +/- 1.09
  EPYC 7552    4085.53   +/- 2.54
  EPYC 7642    4072.76   +/- 3.12
  EPYC 7272    3965.03   +/- 1.24
  EPYC 7232P   3949.44   +/- 5.48
  EPYC 7282    3949.00   +/- 2.72

(CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
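The flavor of a pyperformance microbenchmark like nbody can be sketched with the standard library's timeit module; the one-dimensional gravity kernel below is a toy stand-in, not pyperformance's actual nbody implementation or harness:

```python
import timeit

def nbody_step(bodies, dt=0.01):
    """One naive O(n^2) gravity step over (position, velocity, mass)
    triples -- a toy version of the kernel an nbody benchmark times."""
    for i, (p1, v1, m1) in enumerate(bodies):
        for j, (p2, _, m2) in enumerate(bodies):
            if i != j:
                d = p2 - p1
                v1 += m2 * d * dt / (abs(d) ** 3 + 1e-9)
        bodies[i] = (p1 + v1 * dt, v1, m1)

bodies = [(float(i), 0.0, 1.0) for i in range(32)]
# Average milliseconds per step over 10 repetitions
ms = timeit.timeit(lambda: nbody_step(bodies), number=10) * 1000 / 10
```

pyperformance itself wraps such kernels in a calibrated runner with warmup; this only shows the measurement idea.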

PyPerformance 1.0.0, Benchmark: nbody (Milliseconds, Fewer Is Better)

  EPYC 7F52    116
  EPYC 7F32    116
  EPYC 7542    133
  EPYC 7702    135
  EPYC 7402P   135
  EPYC 7502P   135
  EPYC 7302P   137
  EPYC 7552    137
  EPYC 7662    137
  EPYC 7532    137
  EPYC 7642    138   (SE +/- 0.33, N = 3)
  EPYC 7232P   141
  EPYC 7272    141
  EPYC 7282    142   (SE +/- 0.33, N = 3)

PyBench

This test profile reports the total of the average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.
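The averaging-and-totaling described above can be sketched with timeit; the two functions below are rough analogues of the named PyBench subtests, and the iteration counts are illustrative rather than PyBench's actual calibration:

```python
import timeit

def builtin_function_calls():
    # Analogue of PyBench's BuiltinFunctionCalls subtest
    for _ in range(1000):
        len("x"); abs(-1); min(1, 2)

def nested_for_loops():
    # Analogue of PyBench's NestedForLoops subtest
    n = 0
    for i in range(50):
        for j in range(50):
            n += 1
    return n

# Average each subtest over 20 rounds, as PyBench does per function
avg_ms = {
    name: timeit.timeit(fn, number=20) * 1000 / 20
    for name, fn in [("BuiltinFunctionCalls", builtin_function_calls),
                     ("NestedForLoops", nested_for_loops)]
}
total = sum(avg_ms.values())  # PyBench reports this kind of total
```

The single number in the graph below is exactly such a total of per-function averages.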

PyBench 2018-02-16, Total For Average Test Times (Milliseconds, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F32     998   +/- 8.33
  EPYC 7F52     999   +/- 5.46
  EPYC 7542    1137   +/- 1.20
  EPYC 7402P   1159   +/- 1.20
  EPYC 7702    1166   +/- 6.84
  EPYC 7302P   1172   +/- 1.86
  EPYC 7502P   1173   +/- 10.82
  EPYC 7552    1178   +/- 9.35
  EPYC 7532    1180   +/- 7.33
  EPYC 7662    1184   +/- 9.54
  EPYC 7642    1184   +/- 7.55
  EPYC 7272    1208
  EPYC 7232P   1213   +/- 9.68
  EPYC 7282    1221   +/- 8.01

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2, Test: Keyed Algorithms (MiB/second, More Is Better; SE over N = 3 runs)

  EPYC 7F52    628.21   +/- 0.14
  EPYC 7F32    627.44   +/- 0.29
  EPYC 7542    546.72   +/- 0.77
  EPYC 7702    539.45   +/- 0.06
  EPYC 7502P   539.17   +/- 0.31
  EPYC 7402P   538.09   +/- 0.56
  EPYC 7532    531.14   +/- 0.27
  EPYC 7552    530.80   +/- 0.13
  EPYC 7302P   530.36   +/- 0.13
  EPYC 7662    529.62   +/- 0.63
  EPYC 7642    529.60   +/- 0.31
  EPYC 7272    514.85   +/- 0.07
  EPYC 7232P   513.65   +/- 0.59
  EPYC 7282    513.60   +/- 0.21

(CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (equity option example), Bonds (fixed-rate bond with a flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
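The closed-form kernel behind the Black-Scholes-Merton test case is the standard European call formula, which can be written directly with the math module (this is the textbook formula, not FinanceBench's C++ implementation):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(s, k, r, sigma, t):
    """Black-Scholes-Merton price of a European call: spot s,
    strike k, risk-free rate r, volatility sigma, maturity t."""
    d1 = (log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
```

For example, bsm_call(100, 100, 0.05, 0.2, 1.0) is roughly 10.45; the benchmark evaluates this kind of pricing kernel across many options in parallel with OpenMP.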

FinanceBench 2016-07-25, Benchmark: Repo OpenMP (ms, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F52    43527.28   +/- 174.69
  EPYC 7F32    43548.63   +/- 107.65
  EPYC 7542    50076.27   +/- 84.26
  EPYC 7402P   50474.70   +/- 17.38
  EPYC 7502P   50609.04   +/- 71.59
  EPYC 7702    51015.87   +/- 45.54
  EPYC 7532    51428.89   +/- 133.22
  EPYC 7662    51437.79   +/- 71.36
  EPYC 7302P   51629.10   +/- 383.79
  EPYC 7552    51630.50   +/- 241.95
  EPYC 7272    52936.98   +/- 108.67
  EPYC 7282    52977.60   +/- 109.50
  EPYC 7232P   53237.68   +/- 275.73

(CXX) g++ options: -O3 -march=native -fopenmp

Minion

Minion is an open-source constraint solver that is designed to be very scalable. This test profile uses Minion's integrated benchmarking problems to solve. Learn more via the OpenBenchmarking.org test page.
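The shape of a finite-domain constraint problem like those Minion solves can be shown with a naive brute-force search; Minion itself uses far more scalable propagation and search, so this only illustrates the problem structure, with a toy constraint chosen for the example:

```python
from itertools import product

def solve(domains, constraints):
    """Naive finite-domain constraint search: enumerate every
    assignment and yield those satisfying all constraints."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            yield assignment

# Toy all-different-style constraint over two variables
sols = list(solve({"x": range(3), "y": range(3)},
                  [lambda a: a["x"] != a["y"]]))
```

Exhaustive enumeration is exponential in the number of variables, which is why dedicated solvers rely on constraint propagation to prune the search space.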

Minion 1.8, Benchmark: Quasigroup (Seconds, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F52    114.91   +/- 0.12
  EPYC 7F32    114.95   +/- 0.39
  EPYC 7542    132.23   +/- 0.55
  EPYC 7502P   133.23   +/- 0.46
  EPYC 7402P   133.73   +/- 0.44
  EPYC 7702    134.93   +/- 0.49
  EPYC 7642    135.09   +/- 0.18
  EPYC 7532    135.33   +/- 0.37
  EPYC 7662    135.90   +/- 0.28
  EPYC 7552    136.07   +/- 0.31
  EPYC 7302P   136.52   +/- 0.37
  EPYC 7282    140.13   +/- 0.80
  EPYC 7272    140.43   +/- 0.38
  EPYC 7232P   140.53   +/- 0.50

(CXX) g++ options: -std=gnu++11 -O3 -fomit-frame-pointer -rdynamic

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0, Test: Blowfish (MiB/s, More Is Better; SE over N = 3 runs)

  EPYC 7F32    429.41   +/- 0.15
  EPYC 7F52    429.14   +/- 0.07
  EPYC 7542    374.52   +/- 0.10
  EPYC 7702    368.93   +/- 0.16
  EPYC 7502P   368.69   +/- 0.14
  EPYC 7402P   368.38   +/- 0.40
  EPYC 7552    363.50   +/- 0.10
  EPYC 7532    363.10   +/- 0.13
  EPYC 7662    362.99   +/- 0.16
  EPYC 7302P   362.20   +/- 0.15
  EPYC 7642    361.80   +/- 0.20
  EPYC 7232P   352.29   +/- 0.13
  EPYC 7272    352.07   +/- 0.16
  EPYC 7282    351.14   +/- 0.25

(CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Minion

Minion is an open-source constraint solver that is designed to be very scalable. This test profile uses Minion's integrated benchmarking problems to solve. Learn more via the OpenBenchmarking.org test page.

Minion 1.8, Benchmark: Solitaire (Seconds, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F52    67.74   +/- 0.11
  EPYC 7F32    67.80   +/- 0.23
  EPYC 7542    77.80   +/- 0.07
  EPYC 7502P   79.00   +/- 0.02
  EPYC 7402P   79.03   +/- 0.10
  EPYC 7702    79.39   +/- 0.14
  EPYC 7642    79.72   +/- 0.32
  EPYC 7532    79.85   +/- 0.17
  EPYC 7662    80.12   +/- 0.07
  EPYC 7552    80.15   +/- 0.09
  EPYC 7302P   80.33   +/- 0.11
  EPYC 7272    81.97   +/- 0.19
  EPYC 7282    82.15   +/- 0.33
  EPYC 7232P   82.84   +/- 0.13

(CXX) g++ options: -std=gnu++11 -O3 -fomit-frame-pointer -rdynamic

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, More Is Better; SE over N = 3 runs)

  EPYC 7F32    346733990.00   +/- 60046.42
  EPYC 7F52    346505619.64   +/- 33906.80
  EPYC 7542    302406827.18   +/- 40543.01
  EPYC 7402P   297827555.09   +/- 465752.47
  EPYC 7502P   297816529.28   +/- 131222.60
  EPYC 7702    297478250.43   +/- 320354.97
  EPYC 7532    293437386.11   +/- 12833.10
  EPYC 7552    293216264.80   +/- 234513.76
  EPYC 7662    293169027.15   +/- 136985.64
  EPYC 7302P   292597759.78   +/- 206566.86
  EPYC 7272    284710986.66   +/- 380619.49
  EPYC 7232P   284332324.15   +/- 333820.85
  EPYC 7282    283588722.34   +/- 38644.36

(CC) gcc options: -O3 -march=native -lm

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0, Test: Twofish (MiB/s, More Is Better; SE over N = 3 runs)

  EPYC 7F32    351.99   +/- 0.08
  EPYC 7F52    351.63   +/- 0.04
  EPYC 7542    306.62   +/- 0.37
  EPYC 7502P   302.25   +/- 0.04
  EPYC 7702    302.14   +/- 0.04
  EPYC 7402P   302.12   +/- 0.18
  EPYC 7662    297.81   +/- 0.08
  EPYC 7532    297.71   +/- 0.03
  EPYC 7302P   297.65   +/- 0.11
  EPYC 7552    297.10   +/- 0.05
  EPYC 7642    296.91   +/- 0.05
  EPYC 7232P   288.70   +/- 0.10
  EPYC 7272    288.62   +/- 0.07
  EPYC 7282    287.93   +/- 0.10

(CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Google SynthMark

SynthMark is a cross-platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter, and computational throughput. Learn more via the OpenBenchmarking.org test page.
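The workload shape behind a voice-count metric can be sketched in Python: mix several sine-wave "voices" into one audio block and compare the render time against the block's real-time budget. This is only a toy illustration; SynthMark's synthesizer model, block sizes, and scaling methodology differ:

```python
import math
import time

def render_block(voices: int, frames: int = 512, sr: int = 48000) -> list:
    """Mix `voices` sine oscillators into one block of samples --
    a toy stand-in for a polyphonic synthesizer's audio callback."""
    out = [0.0] * frames
    for v in range(voices):
        step = 2 * math.pi * 110.0 * (v + 1) / sr
        for n in range(frames):
            out[n] += math.sin(step * n)
    return out

start = time.perf_counter()
block = render_block(voices=16)
elapsed = time.perf_counter() - start
budget = 512 / 48000            # real-time budget for one block (~10.7 ms)
realtime_ok = elapsed < budget  # the kind of check a voice count scales on
```

A VoiceMark-style result is the largest voice count for which such blocks consistently render within their real-time budget, so more voices is better.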

Google SynthMark 20201109, Test: VoiceMark_100 (Voices, More Is Better; SE over N = 3 runs)

  EPYC 7F32    740.58   +/- 0.17
  EPYC 7F52    739.72   +/- 0.59
  EPYC 7542    645.87   +/- 0.70
  EPYC 7502P   637.38   +/- 0.11
  EPYC 7402P   636.43   +/- 0.32
  EPYC 7702    635.92   +/- 0.57
  EPYC 7302P   627.74   +/- 0.13
  EPYC 7532    627.06   +/- 0.46
  EPYC 7552    626.57   +/- 0.70
  EPYC 7662    624.73   +/- 0.57
  EPYC 7232P   608.61   +/- 0.27
  EPYC 7272    608.05   +/- 0.12
  EPYC 7282    605.83   +/- 0.36

(CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
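The quantity FFTW computes can be written down directly as the naive O(n^2) discrete Fourier transform; FFTW's contribution is computing the same result in O(n log n) and the Mflops figure reports how fast it does so:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a 1-D sequence:
    X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A unit impulse transforms to a flat spectrum of ones
spectrum = dft([1, 0, 0, 0])
```

For a size-4096 1D FFT like the one benchmarked here, the O(n log n) algorithm does roughly n*log2(n) ≈ 49,000 butterfly stages' worth of work instead of n^2 ≈ 16.8 million inner-product terms.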

FFTW 3.3.6, Build: Float + SSE - Size: 1D FFT Size 4096 (Mflops, More Is Better)

  EPYC 7F52    50969   +/- 131.48 (N = 6)
  EPYC 7F32    50675   +/- 204.79 (N = 6)
  EPYC 7702    44328   +/- 126.64 (N = 6)
  EPYC 7542    43936   +/- 307.98 (N = 6)
  EPYC 7502P   43903   +/- 288.40 (N = 6)
  EPYC 7402P   43414   +/- 343.41 (N = 6)
  EPYC 7552    43404   +/- 152.42 (N = 6)
  EPYC 7532    43275   +/- 312.60 (N = 6)
  EPYC 7642    42836   +/- 392.71 (N = 6)
  EPYC 7302P   42761   +/- 385.00 (N = 7)
  EPYC 7662    42730   +/- 399.17 (N = 6)
  EPYC 7282    42375   +/- 244.70 (N = 6)
  EPYC 7272    41912   +/- 304.76 (N = 6)
  EPYC 7232P   41704   +/- 369.07 (N = 6)

(CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: django_template (Milliseconds, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F32    49.1   +/- 0.45
  EPYC 7F52    50.4   +/- 0.23
  EPYC 7542    56.7   +/- 0.22
  EPYC 7502P   56.9   +/- 0.37
  EPYC 7402P   57.2   +/- 0.03
  EPYC 7702    57.5   +/- 0.09
  EPYC 7302P   57.6   +/- 0.44
  EPYC 7552    58.1   +/- 0.52
  EPYC 7532    58.3   +/- 0.26
  EPYC 7642    58.4   +/- 0.37
  EPYC 7662    58.6   +/- 0.19
  EPYC 7282    59.1   +/- 0.27
  EPYC 7232P   59.5   +/- 0.40
  EPYC 7272    60.0   +/- 0.15

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (equity option example), Bonds (fixed-rate bond with a flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25, Benchmark: Bonds OpenMP (ms, Fewer Is Better; SE over N = 3 runs)

  EPYC 7F32    76027.98   +/- 185.79
  EPYC 7F52    76410.11   +/- 395.61
  EPYC 7542    86946.61   +/- 26.41
  EPYC 7502P   88487.57   +/- 294.03
  EPYC 7702    88553.78   +/- 232.73
  EPYC 7402P   88634.31   +/- 397.99
  EPYC 7302P   89863.26   +/- 461.99
  EPYC 7532    90157.28   +/- 287.97
  EPYC 7552    90310.90   +/- 75.52
  EPYC 7662    90672.45   +/- 966.88
  EPYC 7272    92387.25   +/- 203.38
  EPYC 7232P   92426.49   +/- 200.17
  EPYC 7282    92896.62   +/- 402.31

(CXX) g++ options: -O3 -march=native -fopenmp

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s, More Is Better):

  EPYC 7F52    52.82   (SE +/- 0.01, N = 3; Min: 52.79 / Max: 52.84)
  EPYC 7F32    52.79   (SE +/- 0.35, N = 3; Min: 52.36 / Max: 53.48)
  EPYC 7402P   45.98   (SE +/- 0.55, N = 4; Min: 45.35 / Max: 47.62)
  EPYC 7542    45.90   (SE +/- 0.04, N = 3; Min: 45.84 / Max: 45.97)
  EPYC 7502P   45.77   (SE +/- 0.29, N = 3; Min: 45.44 / Max: 46.36)
  EPYC 7702    45.69   (SE +/- 0.42, N = 7; Min: 44.72 / Max: 47.44)
  EPYC 7552    45.38   (SE +/- 0.65, N = 3; Min: 44.32 / Max: 46.55)
  EPYC 7662    45.19   (SE +/- 0.31, N = 3; Min: 44.56 / Max: 45.52)
  EPYC 7232P   44.93   (SE +/- 0.43, N = 3; Min: 44.07 / Max: 45.44)
  EPYC 7532    44.69   (SE +/- 0.37, N = 3; Min: 44.12 / Max: 45.37)
  EPYC 7302P   44.53   (SE +/- 0.53, N = 3; Min: 44.00 / Max: 45.58)
  EPYC 7282    44.38   (SE +/- 0.15, N = 3; Min: 44.15 / Max: 44.66)
  EPYC 7272    43.23   (SE +/- 0.15, N = 3; Min: 42.93 / Max: 43.40)

(CC) gcc options: -O3

Swet

Swet is a synthetic CPU/RAM benchmark that includes multi-processor test cases. Learn more via the OpenBenchmarking.org test page.

Swet 1.5.16, Average (Operations Per Second, More Is Better; N = 3):

  EPYC 7F32    686395419   (SE +/- 2325206.90; Min: 682854921 / Max: 690776878)
  EPYC 7F52    685188810   (SE +/- 2209414.80; Min: 680933266 / Max: 688347262)
  EPYC 7542    612218425   (SE +/- 8506660.76; Min: 596416785 / Max: 625580102)
  EPYC 7552    605644364   (SE +/- 807930.96; Min: 604227306 / Max: 607025364)
  EPYC 7702    602434885   (SE +/- 2469610.76; Min: 597611652 / Max: 605768048)
  EPYC 7662    602203842   (SE +/- 5706451.14; Min: 590802604 / Max: 608351202)
  EPYC 7402P   600065572   (SE +/- 2164773.96; Min: 595836568 / Max: 602983432)
  EPYC 7302P   598589228   (SE +/- 3828695.03; Min: 593041731 / Max: 605934155)
  EPYC 7502P   597361197   (SE +/- 4052005.46; Min: 589487365 / Max: 602958932)
  EPYC 7532    582684333   (SE +/- 1400113.05; Min: 579892857 / Max: 584271636)
  EPYC 7272    581194588   (SE +/- 2920022.24; Min: 575619977 / Max: 585489333)
  EPYC 7282    571825580   (SE +/- 5177314.46; Min: 566389680 / Max: 582175830)
  EPYC 7232P   561816772   (SE +/- 6957648.38; Min: 547924209 / Max: 569451631)

(CC) gcc options: -lm -lpthread -lcurses -lrt

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0, Test: CAST-256 (MiB/s, More Is Better; N = 3):

  EPYC 7F32    139.83   (SE +/- 0.02; Min: 139.80 / Max: 139.86)
  EPYC 7F52    139.71   (SE +/- 0.04; Min: 139.63 / Max: 139.74)
  EPYC 7542    121.92   (SE +/- 0.02; Min: 121.89 / Max: 121.94)
  EPYC 7502P   120.06   (SE +/- 0.05; Min: 119.97 / Max: 120.12)
  EPYC 7702    120.00   (SE +/- 0.01; Min: 119.99 / Max: 120.01)
  EPYC 7402P   119.99   (SE +/- 0.03; Min: 119.95 / Max: 120.04)
  EPYC 7552    118.32   (SE +/- 0.02; Min: 118.29 / Max: 118.36)
  EPYC 7532    118.29   (SE +/- 0.04; Min: 118.22 / Max: 118.36)
  EPYC 7662    118.23   (SE +/- 0.09; Min: 118.06 / Max: 118.33)
  EPYC 7642    117.99   (SE +/- 0.11; Min: 117.81 / Max: 118.19)
  EPYC 7302P   117.88   (SE +/- 0.28; Min: 117.32 / Max: 118.21)
  EPYC 7272    114.69   (SE +/- 0.06; Min: 114.63 / Max: 114.80)
  EPYC 7232P   114.59   (SE +/- 0.04; Min: 114.51 / Max: 114.64)
  EPYC 7282    114.47   (SE +/- 0.09; Min: 114.34 / Max: 114.63)

(CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.13.0, Test: KASUMI (MiB/s, More Is Better; N = 3):

  EPYC 7F32    90.46   (SE +/- 0.00; Min: 90.45 / Max: 90.47)
  EPYC 7F52    90.42   (SE +/- 0.02; Min: 90.37 / Max: 90.45)
  EPYC 7542    78.88   (SE +/- 0.02; Min: 78.84 / Max: 78.91)
  EPYC 7702    77.70   (SE +/- 0.03; Min: 77.66 / Max: 77.77)
  EPYC 7502P   77.69   (SE +/- 0.01; Min: 77.68 / Max: 77.70)
  EPYC 7402P   77.57   (SE +/- 0.05; Min: 77.50 / Max: 77.66)
  EPYC 7552    76.58   (SE +/- 0.00; Min: 76.57 / Max: 76.58)
  EPYC 7662    76.55   (SE +/- 0.00; Min: 76.54 / Max: 76.55)
  EPYC 7532    76.53   (SE +/- 0.06; Min: 76.42 / Max: 76.60)
  EPYC 7302P   76.51   (SE +/- 0.01; Min: 76.49 / Max: 76.52)
  EPYC 7642    76.22   (SE +/- 0.05; Min: 76.12 / Max: 76.31)
  EPYC 7232P   74.27   (SE +/- 0.02; Min: 74.23 / Max: 74.29)
  EPYC 7272    74.21   (SE +/- 0.01; Min: 74.18 / Max: 74.22)
  EPYC 7282    74.06   (SE +/- 0.07; Min: 73.98 / Max: 74.21)

(CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Radiance Benchmark

This is a benchmark of NREL Radiance, a synthetic imaging system that is open-source and developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0, Test: Serial (Seconds, Fewer Is Better):

  EPYC 7F52    612.89
  EPYC 7F32    613.55
  EPYC 7542    705.16
  EPYC 7702    713.56
  EPYC 7402P   714.24
  EPYC 7502P   721.79
  EPYC 7662    723.32
  EPYC 7532    724.56
  EPYC 7302P   724.99
  EPYC 7642    725.23
  EPYC 7552    727.34
  EPYC 7282    746.18
  EPYC 7272    746.72
  EPYC 7232P   748.59

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, Fewer Is Better; N = 4):

  EPYC 7F52    30.75   (SE +/- 0.08; Min: 30.52 / Max: 30.86)
  EPYC 7F32    30.84   (SE +/- 0.07; Min: 30.64 / Max: 30.96)
  EPYC 7542    35.27   (SE +/- 0.04; Min: 35.16 / Max: 35.37)
  EPYC 7402P   35.69   (SE +/- 0.07; Min: 35.55 / Max: 35.87)
  EPYC 7502P   35.73   (SE +/- 0.08; Min: 35.50 / Max: 35.85)
  EPYC 7702    35.93   (SE +/- 0.12; Min: 35.63 / Max: 36.21)
  EPYC 7662    36.20   (SE +/- 0.09; Min: 35.97 / Max: 36.42)
  EPYC 7532    36.24   (SE +/- 0.07; Min: 36.12 / Max: 36.40)
  EPYC 7642    36.28   (SE +/- 0.06; Min: 36.17 / Max: 36.45)
  EPYC 7302P   36.36   (SE +/- 0.12; Min: 36.06 / Max: 36.65)
  EPYC 7552    36.40   (SE +/- 0.11; Min: 36.11 / Max: 36.59)
  EPYC 7232P   37.40   (SE +/- 0.11; Min: 37.13 / Max: 37.65)
  EPYC 7272    37.40   (SE +/- 0.08; Min: 37.26 / Max: 37.61)
  EPYC 7282    37.55   (SE +/- 0.10; Min: 37.36 / Max: 37.83)

(CC) gcc options: -O2 -std=c99

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks, Test: Pod2html (Seconds, Fewer Is Better; N = 3):

  EPYC 7F32    0.12819545   (SE +/- 0.00007037)
  EPYC 7F52    0.12924285   (SE +/- 0.00013172)
  EPYC 7402P   0.14573597   (SE +/- 0.00052181)
  EPYC 7542    0.14850812   (SE +/- 0.00010921)
  EPYC 7502P   0.14859273   (SE +/- 0.00050264)
  EPYC 7642    0.14868253   (SE +/- 0.00113052)
  EPYC 7702    0.14954339   (SE +/- 0.00022828)
  EPYC 7662    0.14976795   (SE +/- 0.00009609)
  EPYC 7302P   0.14978489   (SE +/- 0.00104595)
  EPYC 7532    0.14988293   (SE +/- 0.00097851)
  EPYC 7552    0.15124976   (SE +/- 0.00076232)
  EPYC 7282    0.15341705   (SE +/- 0.00047942)
  EPYC 7272    0.15457890   (SE +/- 0.00070595)
  EPYC 7232P   0.15653208   (SE +/- 0.00101629)

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0, Test: AES-256 (MiB/s, More Is Better; N = 3):

  EPYC 7F32    5238.76   (SE +/- 1.69; Min: 5235.40 / Max: 5240.69)
  EPYC 7F52    5226.75   (SE +/- 1.30; Min: 5224.38 / Max: 5228.88)
  EPYC 7542    4561.19   (SE +/- 2.98; Min: 4555.36 / Max: 4565.19)
  EPYC 7702    4499.50   (SE +/- 1.00; Min: 4497.86 / Max: 4501.30)
  EPYC 7502P   4497.77   (SE +/- 3.31; Min: 4492.53 / Max: 4503.89)
  EPYC 7402P   4487.90   (SE +/- 0.83; Min: 4486.88 / Max: 4489.54)
  EPYC 7302P   4430.72   (SE +/- 0.26; Min: 4430.21 / Max: 4431.05)
  EPYC 7532    4429.47   (SE +/- 1.33; Min: 4426.94 / Max: 4431.46)
  EPYC 7662    4429.19   (SE +/- 0.85; Min: 4427.63 / Max: 4430.55)
  EPYC 7642    4425.92   (SE +/- 3.58; Min: 4421.28 / Max: 4432.96)
  EPYC 7552    4421.41   (SE +/- 3.88; Min: 4413.66 / Max: 4425.33)
  EPYC 7232P   4311.06   (SE +/- 13.87; Min: 4293.42 / Max: 4338.43)
  EPYC 7282    4293.87   (SE +/- 0.80; Min: 4292.82 / Max: 4295.45)
  EPYC 7272    4290.85   (SE +/- 7.02; Min: 4276.91 / Max: 4299.27)

(CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC2 (Mpx/s, More Is Better; N = 3):

  EPYC 7F32    159.79   (SE +/- 0.01; Min: 159.77 / Max: 159.80)
  EPYC 7F52    159.60   (SE +/- 0.03; Min: 159.56 / Max: 159.65)
  EPYC 7542    139.25   (SE +/- 0.11; Min: 139.03 / Max: 139.39)
  EPYC 7502P   137.34   (SE +/- 0.00; Min: 137.34 / Max: 137.35)
  EPYC 7702    137.28   (SE +/- 0.01; Min: 137.26 / Max: 137.30)
  EPYC 7402P   137.09   (SE +/- 0.10; Min: 136.98 / Max: 137.30)
  EPYC 7662    135.24   (SE +/- 0.02; Min: 135.20 / Max: 135.25)
  EPYC 7302P   135.18   (SE +/- 0.03; Min: 135.12 / Max: 135.24)
  EPYC 7552    135.14   (SE +/- 0.03; Min: 135.11 / Max: 135.20)
  EPYC 7532    135.01   (SE +/- 0.08; Min: 134.91 / Max: 135.17)
  EPYC 7232P   131.18   (SE +/- 0.02; Min: 131.16 / Max: 131.22)
  EPYC 7272    131.15   (SE +/- 0.02; Min: 131.11 / Max: 131.19)
  EPYC 7282    130.88   (SE +/- 0.09; Min: 130.77 / Max: 131.05)

(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.0.2, Test: Decompression Throughput (Megapixels/sec, More Is Better; N = 7):

  EPYC 7F52    197.79   (SE +/- 0.04; Min: 197.62 / Max: 197.87)
  EPYC 7F32    197.77   (SE +/- 0.02; Min: 197.66 / Max: 197.86)
  EPYC 7542    172.55   (SE +/- 0.07; Min: 172.16 / Max: 172.69)
  EPYC 7502P   170.11   (SE +/- 0.05; Min: 169.86 / Max: 170.26)
  EPYC 7402P   169.70   (SE +/- 0.44; Min: 167.04 / Max: 170.26)
  EPYC 7702    169.54   (SE +/- 0.04; Min: 169.39 / Max: 169.69)
  EPYC 7302P   167.63   (SE +/- 0.04; Min: 167.45 / Max: 167.79)
  EPYC 7662    167.62   (SE +/- 0.04; Min: 167.50 / Max: 167.77)
  EPYC 7552    167.61   (SE +/- 0.02; Min: 167.49 / Max: 167.66)
  EPYC 7642    167.57   (SE +/- 0.03; Min: 167.44 / Max: 167.69)
  EPYC 7532    167.52   (SE +/- 0.07; Min: 167.15 / Max: 167.68)
  EPYC 7232P   162.56   (SE +/- 0.03; Min: 162.50 / Max: 162.70)
  EPYC 7272    162.37   (SE +/- 0.26; Min: 160.82 / Max: 162.74)
  EPYC 7282    162.07   (SE +/- 0.10; Min: 161.84 / Max: 162.46)

(CC) gcc options: -O3 -rdynamic

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 (Mpx/s, More Is Better; N = 3):

  EPYC 7F32    271.32   (SE +/- 0.19; Min: 271.12 / Max: 271.69)
  EPYC 7F52    271.07   (SE +/- 0.18; Min: 270.72 / Max: 271.30)
  EPYC 7542    236.91   (SE +/- 0.18; Min: 236.57 / Max: 237.18)
  EPYC 7502P   233.51   (SE +/- 0.11; Min: 233.30 / Max: 233.66)
  EPYC 7402P   233.39   (SE +/- 0.16; Min: 233.07 / Max: 233.60)
  EPYC 7702    233.35   (SE +/- 0.12; Min: 233.22 / Max: 233.60)
  EPYC 7662    229.86   (SE +/- 0.16; Min: 229.54 / Max: 230.03)
  EPYC 7532    229.73   (SE +/- 0.10; Min: 229.59 / Max: 229.93)
  EPYC 7302P   229.56   (SE +/- 0.08; Min: 229.40 / Max: 229.64)
  EPYC 7552    229.46   (SE +/- 0.00; Min: 229.45 / Max: 229.47)
  EPYC 7232P   223.03   (SE +/- 0.14; Min: 222.74 / Max: 223.20)
  EPYC 7272    222.62   (SE +/- 0.03; Min: 222.56 / Max: 222.66)
  EPYC 7282    222.36   (SE +/- 0.21; Min: 221.94 / Max: 222.64)

(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile is using a size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.

AOBench, Size: 2048 x 2048 - Total Time (Seconds, Fewer Is Better; N = 3):

  EPYC 7F52    34.81   (SE +/- 0.02; Min: 34.79 / Max: 34.85)
  EPYC 7F32    34.84   (SE +/- 0.02; Min: 34.80 / Max: 34.87)
  EPYC 7542    39.92   (SE +/- 0.01; Min: 39.91 / Max: 39.94)
  EPYC 7502P   40.53   (SE +/- 0.02; Min: 40.51 / Max: 40.58)
  EPYC 7402P   40.59   (SE +/- 0.00; Min: 40.58 / Max: 40.60)
  EPYC 7302P   40.98   (SE +/- 0.18; Min: 40.63 / Max: 41.17)
  EPYC 7702    41.15   (SE +/- 0.51; Min: 40.64 / Max: 42.18)
  EPYC 7552    41.17   (SE +/- 0.01; Min: 41.14 / Max: 41.18)
  EPYC 7642    41.17   (SE +/- 0.01; Min: 41.14 / Max: 41.18)
  EPYC 7532    41.19   (SE +/- 0.01; Min: 41.18 / Max: 41.20)
  EPYC 7662    41.26   (SE +/- 0.02; Min: 41.23 / Max: 41.30)
  EPYC 7282    42.44   (SE +/- 0.02; Min: 42.40 / Max: 42.45)
  EPYC 7232P   42.44   (SE +/- 0.01; Min: 42.43 / Max: 42.45)
  EPYC 7272    42.48   (SE +/- 0.05; Min: 42.42 / Max: 42.58)

(CC) gcc options: -lm -O3

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 + Dithering (Mpx/s, More Is Better; N = 3):

  EPYC 7F32    257.01   (SE +/- 0.01; Min: 257.00 / Max: 257.02)
  EPYC 7F52    256.42   (SE +/- 0.04; Min: 256.36 / Max: 256.50)
  EPYC 7542    223.49   (SE +/- 0.05; Min: 223.43 / Max: 223.58)
  EPYC 7702    220.80   (SE +/- 0.08; Min: 220.64 / Max: 220.91)
  EPYC 7502P   220.76   (SE +/- 0.07; Min: 220.68 / Max: 220.90)
  EPYC 7402P   220.71   (SE +/- 0.02; Min: 220.68 / Max: 220.76)
  EPYC 7552    217.50   (SE +/- 0.01; Min: 217.48 / Max: 217.52)
  EPYC 7302P   217.50   (SE +/- 0.04; Min: 217.45 / Max: 217.57)
  EPYC 7662    217.45   (SE +/- 0.07; Min: 217.36 / Max: 217.60)
  EPYC 7532    216.79   (SE +/- 0.08; Min: 216.67 / Max: 216.93)
  EPYC 7272    211.03   (SE +/- 0.08; Min: 210.87 / Max: 211.13)
  EPYC 7232P   210.93   (SE +/- 0.09; Min: 210.74 / Max: 211.03)
  EPYC 7282    210.72   (SE +/- 0.12; Min: 210.52 / Max: 210.94)

(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81, AI Chess Performance (Nodes Per Second, More Is Better):

  EPYC 7F32    1180682   (SE +/- 1531.45, N = 12; Min: 1166902 / Max: 1187021)
  EPYC 7F52    1178140   (SE +/- 791.31, N = 12; Min: 1174366 / Max: 1181927)
  EPYC 7542    1030295   (SE +/- 372.53, N = 12; Min: 1029491 / Max: 1033354)
  EPYC 7702    1015858   (SE +/- 658.02, N = 11; Min: 1010601 / Max: 1018073)
  EPYC 7502P   1014342   (SE +/- 1366.07, N = 11; Min: 1001414 / Max: 1018073)
  EPYC 7402P   1013320   (SE +/- 1181.68, N = 11; Min: 1003238 / Max: 1016195)
  EPYC 7532    999442    (SE +/- 986.25, N = 11; Min: 990607 / Max: 1003238)
  EPYC 7662    997809    (SE +/- 1511.23, N = 11; Min: 987057 / Max: 1001414)
  EPYC 7552    997461    (SE +/- 587.46, N = 11; Min: 994184 / Max: 999597)
  EPYC 7302P   997133    (SE +/- 609.53, N = 11; Min: 994184 / Max: 999597)
  EPYC 7232P   969371    (SE +/- 504.67, N = 11; Min: 966277 / Max: 971389)
  EPYC 7272    969066    (SE +/- 863.61, N = 11; Min: 961218 / Max: 971389)
  EPYC 7282    968149    (SE +/- 1355.09, N = 11; Min: 956211 / Max: 971389)

(CC) gcc options: -O3 -march=native

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better; N = 3):

  EPYC 7F52    2300.7   (SE +/- 10.66; Min: 2279.8 / Max: 2314.7)
  EPYC 7F32    2285.2   (SE +/- 13.33; Min: 2258.5 / Max: 2298.8)
  EPYC 7542    2016.4   (SE +/- 11.83; Min: 1993.3 / Max: 2032.3)
  EPYC 7502P   1984.5   (SE +/- 9.97; Min: 1964.7 / Max: 1996.4)
  EPYC 7402P   1977.3   (SE +/- 12.22; Min: 1953.5 / Max: 1993.9)
  EPYC 7702    1970.1   (SE +/- 11.47; Min: 1947.7 / Max: 1985.5)
  EPYC 7662    1955.3   (SE +/- 10.12; Min: 1935.1 / Max: 1966.5)
  EPYC 7302P   1952.5   (SE +/- 10.13; Min: 1932.3 / Max: 1963.6)
  EPYC 7552    1945.3   (SE +/- 13.45; Min: 1918.7 / Max: 1961.9)
  EPYC 7532    1922.3   (SE +/- 10.26; Min: 1902.5 / Max: 1936.9)
  EPYC 7272    1893.2   (SE +/- 12.91; Min: 1867.5 / Max: 1908.2)
  EPYC 7232P   1892.0   (SE +/- 10.95; Min: 1870.4 / Max: 1905.9)
  EPYC 7282    1887.0   (SE +/- 8.78; Min: 1869.4 / Max: 1895.9)

(CXX) g++ options: -O3 -march=native -rdynamic

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better; N = 3):

  EPYC 7F32    269.64   (SE +/- 0.36; Min: 269.01 / Max: 270.25)
  EPYC 7F52    269.92   (SE +/- 0.12; Min: 269.68 / Max: 270.07)
  EPYC 7542    308.94   (SE +/- 0.29; Min: 308.42 / Max: 309.44)
  EPYC 7502P   312.89   (SE +/- 0.71; Min: 311.52 / Max: 313.94)
  EPYC 7702    313.01   (SE +/- 0.80; Min: 311.43 / Max: 313.95)
  EPYC 7402P   314.31   (SE +/- 0.19; Min: 313.92 / Max: 314.52)
  EPYC 7662    317.55   (SE +/- 0.83; Min: 316.01 / Max: 318.87)
  EPYC 7302P   318.37   (SE +/- 0.82; Min: 317.19 / Max: 319.96)
  EPYC 7532    318.58   (SE +/- 0.26; Min: 318.08 / Max: 318.92)
  EPYC 7642    318.67   (SE +/- 1.03; Min: 316.65 / Max: 319.98)
  EPYC 7552    318.67   (SE +/- 1.22; Min: 316.23 / Max: 320.01)
  EPYC 7232P   327.99   (SE +/- 0.77; Min: 326.49 / Max: 329.05)
  EPYC 7272    328.36   (SE +/- 0.11; Min: 328.14 / Max: 328.49)
  EPYC 7282    328.64   (SE +/- 1.44; Min: 325.78 / Max: 330.41)

(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks, Test: Interpreter (Seconds, Fewer Is Better; N = 3):

  EPYC 7F52    0.00082570   (SE +/- 0.00000483)
  EPYC 7F32    0.00083563   (SE +/- 0.00001015)
  EPYC 7302P   0.00095478   (SE +/- 0.00000413)
  EPYC 7542    0.00095485   (SE +/- 0.00000879)
  EPYC 7402P   0.00095586   (SE +/- 0.00000962)
  EPYC 7502P   0.00096082   (SE +/- 0.00000724)
  EPYC 7552    0.00096153   (SE +/- 0.00000921)
  EPYC 7642    0.00096579   (SE +/- 0.00000264)
  EPYC 7532    0.00096753   (SE +/- 0.00000252)
  EPYC 7702    0.00097084   (SE +/- 0.00000621)
  EPYC 7662    0.00097455   (SE +/- 0.00000365)
  EPYC 7272    0.00098074   (SE +/- 0.00001257)
  EPYC 7282    0.00099210   (SE +/- 0.00000497)
  EPYC 7232P   0.00100608   (SE +/- 0.00000737)

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0, Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better; N = 3):

  EPYC 7F32    80.85   (SE +/- 0.05; Min: 80.76 / Max: 80.91)
  EPYC 7F52    80.94   (SE +/- 0.14; Min: 80.67 / Max: 81.13)
  EPYC 7542    92.77   (SE +/- 0.31; Min: 92.29 / Max: 93.36)
  EPYC 7502P   93.86   (SE +/- 0.17; Min: 93.53 / Max: 94.10)
  EPYC 7402P   93.92   (SE +/- 0.09; Min: 93.80 / Max: 94.11)
  EPYC 7702    94.76   (SE +/- 0.11; Min: 94.65 / Max: 94.97)
  EPYC 7642    95.39   (SE +/- 0.19; Min: 95.12 / Max: 95.75)
  EPYC 7302P   95.45   (SE +/- 0.09; Min: 95.27 / Max: 95.58)
  EPYC 7532    95.48   (SE +/- 0.08; Min: 95.35 / Max: 95.62)
  EPYC 7552    95.53   (SE +/- 0.28; Min: 95.11 / Max: 96.06)
  EPYC 7662    95.56   (SE +/- 0.12; Min: 95.43 / Max: 95.80)
  EPYC 7282    98.24   (SE +/- 0.12; Min: 98.08 / Max: 98.48)
  EPYC 7272    98.31   (SE +/- 0.04; Min: 98.25 / Max: 98.39)
  EPYC 7232P   98.41   (SE +/- 0.16; Min: 98.12 / Max: 98.67)

(CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better):

  EPYC 7F32    2.495   (SE +/- 0.001, N = 10; Min: 2.49 / Max: 2.50)
  EPYC 7F52    2.500   (SE +/- 0.002, N = 10; Min: 2.49 / Max: 2.51)
  EPYC 7542    2.858   (SE +/- 0.001, N = 9; Min: 2.85 / Max: 2.87)
  EPYC 7502P   2.899   (SE +/- 0.003, N = 9; Min: 2.89 / Max: 2.92)
  EPYC 7402P   2.901   (SE +/- 0.002, N = 9; Min: 2.89 / Max: 2.91)
  EPYC 7702    2.905   (SE +/- 0.003, N = 9; Min: 2.89 / Max: 2.92)
  EPYC 7302P   2.945   (SE +/- 0.002, N = 9; Min: 2.94 / Max: 2.95)
  EPYC 7552    2.945   (SE +/- 0.002, N = 9; Min: 2.94 / Max: 2.95)
  EPYC 7532    2.945   (SE +/- 0.001, N = 9; Min: 2.94 / Max: 2.95)
  EPYC 7662    2.946   (SE +/- 0.003, N = 9; Min: 2.94 / Max: 2.97)
  EPYC 7642    2.952   (SE +/- 0.003, N = 9; Min: 2.94 / Max: 2.97)
  EPYC 7272    3.029   (SE +/- 0.001, N = 9; Min: 3.02 / Max: 3.03)
  EPYC 7232P   3.035   (SE +/- 0.002, N = 9; Min: 3.03 / Max: 3.05)
  EPYC 7282    3.037   (SE +/- 0.002, N = 9; Min: 3.03 / Max: 3.05)

(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using the point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
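The point-Jacobi kernel at Himeno's core sweeps a grid, replacing each interior point with a weighted average of its neighbours until the Poisson residual shrinks. A simplified 2-D sketch of that iteration (Himeno itself uses a 3-D 19-point stencil in C; this illustrative version uses the classic 5-point stencil and made-up grid sizes):

```python
def jacobi_poisson(n=32, iters=200, h=1.0, f=1.0):
    """Point-Jacobi sweeps for -laplace(u) = f on an n x n grid with
    zero Dirichlet boundaries. Returns the grid after `iters` sweeps."""
    u = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        new = [row[:] for row in u]  # Jacobi updates read only the old grid
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1]
                                    + h * h * f)
        u = new
    return u

grid = jacobi_poisson()
print(max(max(row) for row in grid) > 0)  # prints True
```

Because every sweep streams the whole grid through memory while doing only a handful of flops per point, Himeno is strongly sensitive to memory bandwidth, which is why the 8-channel EPYC parts cluster so closely here.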

Himeno Benchmark 3.0, Poisson Pressure Solver (MFLOPS, More Is Better):

  EPYC 7F52    4367.12   (SE +/- 38.21, N = 3; Min: 4297.85 / Max: 4429.70)
  EPYC 7F32    4336.41   (SE +/- 33.69, N = 3; Min: 4277.41 / Max: 4394.10)
  EPYC 7542    3988.94   (SE +/- 9.42, N = 3; Min: 3976.02 / Max: 4007.28)
  EPYC 7502P   3853.50   (SE +/- 43.10, N = 4; Min: 3731.70 / Max: 3929.87)
  EPYC 7662    3837.56   (SE +/- 47.38, N = 4; Min: 3701.86 / Max: 3910.17)
  EPYC 7532    3830.20   (SE +/- 42.46, N = 3; Min: 3768.21 / Max: 3911.46)
  EPYC 7642    3816.08   (SE +/- 44.49, N = 4; Min: 3692.00 / Max: 3902.55)
  EPYC 7552    3793.17   (SE +/- 31.29, N = 9; Min: 3632.00 / Max: 3925.79)
  EPYC 7302P   3792.81   (SE +/- 46.66, N = 4; Min: 3699.39 / Max: 3917.69)
  EPYC 7402P   3765.85   (SE +/- 37.84, N = 3; Min: 3719.42 / Max: 3840.83)
  EPYC 7702    3762.49   (SE +/- 36.48, N = 3; Min: 3718.35 / Max: 3834.86)
  EPYC 7282    3756.96   (SE +/- 17.87, N = 3; Min: 3721.38 / Max: 3777.75)
  EPYC 7272    3634.17   (SE +/- 37.21, N = 3; Min: 3590.61 / Max: 3708.22)
  EPYC 7232P   3589.93   (SE +/- 25.83, N = 3; Min: 3541.75 / Max: 3630.15)

(CC) gcc options: -O3 -mavx2

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
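The regex_compile benchmark times how quickly the interpreter can compile regular expression patterns. A rough sketch of that kind of measurement follows; the patterns and loop count are made up for illustration, while pyperformance uses patterns captured from real Python projects:

```python
# Rough sketch of what a regex-compile microbenchmark measures: the
# time to compile a batch of patterns from scratch on every iteration.
import re
import time

PATTERNS = [
    r"\d{4}-\d{2}-\d{2}",               # ISO date
    r"[A-Za-z_][A-Za-z0-9_]*",          # identifier
    r"(?:GET|POST|PUT|DELETE)\s+/\S*",  # HTTP request line
]

def compile_batch(patterns):
    # re.purge() empties the module's internal cache so every call
    # really recompiles instead of returning cached pattern objects.
    re.purge()
    return [re.compile(p) for p in patterns]

def bench(loops=1000):
    start = time.perf_counter()
    for _ in range(loops):
        compile_batch(PATTERNS)
    return (time.perf_counter() - start) / loops  # seconds per batch

if __name__ == "__main__":
    print(f"{bench() * 1e6:.1f} us per batch")
```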

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, fewer is better):
  EPYC 7F32:  171
  EPYC 7F52:  174
  EPYC 7502P: 197
  EPYC 7702:  198
  EPYC 7542:  198
  EPYC 7402P: 200
  EPYC 7302P: 202
  EPYC 7552:  202
  EPYC 7662:  202
  EPYC 7532:  202
  EPYC 7642:  203
  EPYC 7232P: 208
  EPYC 7272:  208
  EPYC 7282:  208
Only six of the runs reported any spread (each SE +/- 0.67 ms or less over N = 3), with min/avg/max of 170/171/172, 196/197.33/198, 197/197.67/198, 201/201.67/203, 202/202.67/203, and 208/208.33/209.

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to benchmark various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, and more. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better):
  EPYC 7F32:  621672 (SE +/- 7086.33, N = 3; min 613060 / max 635726)
  EPYC 7F52:  614039 (SE +/- 1476.72, N = 3; min 611253 / max 616281)
  EPYC 7402P: 542521 (SE +/- 4035.67, N = 3; min 534521 / max 547450)
  EPYC 7542:  539060 (SE +/- 741.11, N = 3; min 538010 / max 540491)
  EPYC 7532:  532957 (SE +/- 4131.15, N = 3; min 524902 / max 538578)
  EPYC 7502P: 531740 (SE +/- 948.15, N = 3; min 530354 / max 533554)
  EPYC 7702:  529609 (SE +/- 1921.47, N = 3; min 525962 / max 532482)
  EPYC 7662:  524751 (SE +/- 1174.39, N = 3; min 523197 / max 527053)
  EPYC 7302P: 524189 (SE +/- 957.34, N = 3; min 522517 / max 525833)
  EPYC 7552:  523575 (SE +/- 142.64, N = 3; min 523393 / max 523856)
  EPYC 7642:  521572 (SE +/- 215.70, N = 3; min 521324 / max 522002)
  EPYC 7272:  513356 (SE +/- 2157.92, N = 3; min 510924 / max 517660)
  EPYC 7232P: 511906 (SE +/- 569.07, N = 3; min 511300 / max 513043)
  EPYC 7282:  511228 (SE +/- 973.54, N = 3; min 510027 / max 513156)

GnuPG

This test times how long it takes to encrypt a sample file using GnuPG. Learn more via the OpenBenchmarking.org test page.

GnuPG 2.2.27 - 2.7GB Sample File Encryption (Seconds, fewer is better):
  EPYC 7F32:  73.75 (SE +/- 0.36, N = 3; min 73.34 / max 74.47)
  EPYC 7F52:  73.85 (SE +/- 0.29, N = 3; min 73.37 / max 74.37)
  EPYC 7542:  83.90 (SE +/- 0.30, N = 3; min 83.30 / max 84.23)
  EPYC 7402P: 84.43 (SE +/- 0.84, N = 3; min 83.56 / max 86.11)
  EPYC 7502P: 85.02 (SE +/- 0.75, N = 3; min 83.60 / max 86.12)
  EPYC 7702:  85.24 (SE +/- 0.76, N = 3; min 83.83 / max 86.45)
  EPYC 7662:  85.73 (SE +/- 0.88, N = 3; min 84.82 / max 87.48)
  EPYC 7532:  85.79 (SE +/- 0.48, N = 3; min 85.06 / max 86.69)
  EPYC 7302P: 85.82 (SE +/- 0.95, N = 3; min 84.82 / max 87.72)
  EPYC 7642:  86.36 (SE +/- 0.80, N = 3; min 84.87 / max 87.60)
  EPYC 7552:  86.37 (SE +/- 0.71, N = 3; min 85.07 / max 87.50)
  EPYC 7272:  88.44 (SE +/- 0.94, N = 3; min 87.50 / max 90.31)
  EPYC 7232P: 88.44 (SE +/- 0.53, N = 3; min 87.54 / max 89.39)
  EPYC 7282:  89.68 (SE +/- 0.25, N = 3; min 89.26 / max 90.12)
1. (CC) gcc options: -O2

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, a set of OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous-driving workloads. Learn more via the OpenBenchmarking.org test page.
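The Euclidean Cluster kernel groups nearby points (e.g. lidar returns) into clusters by distance. A toy Python sketch of radius-based Euclidean clustering, with a made-up threshold and point set; DAPHNE's kernel runs on real point-cloud data using OpenMP, CUDA, or OpenCL:

```python
# Toy radius-based Euclidean clustering: points within `eps` of any
# member of a cluster join that cluster. Illustrative only.
import math

def euclidean_cluster(points, eps=1.0):
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            idx = frontier.pop()
            # Grow the cluster by any unvisited point within eps.
            for other in list(unvisited):
                if math.dist(points[idx], points[other]) <= eps:
                    unvisited.discard(other)
                    cluster.append(other)
                    frontier.append(other)
        clusters.append(sorted(cluster))
    return clusters

if __name__ == "__main__":
    pts = [(0, 0), (0.5, 0), (10, 10), (10.4, 10.2)]
    print(euclidean_cluster(pts))
```

The OpenMP variant benchmarked here parallelizes the distance checks across cores, which is why the higher-clocked parts lead at this workload size.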

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, more is better):
  EPYC 7F52:  1062.32 (SE +/- 0.51, N = 3; min 1061.29 / max 1062.84)
  EPYC 7F32:  1039.60 (SE +/- 1.50, N = 3; min 1037.63 / max 1042.55)
  EPYC 7542:  983.45 (SE +/- 0.71, N = 3; min 982.72 / max 984.87)
  EPYC 7502P: 974.40 (SE +/- 0.42, N = 3; min 973.64 / max 975.08)
  EPYC 7662:  966.04 (SE +/- 0.53, N = 3; min 965.08 / max 966.90)
  EPYC 7642:  963.00 (SE +/- 1.18, N = 3; min 961.75 / max 965.37)
  EPYC 7402P: 962.88 (SE +/- 0.40, N = 3; min 962.13 / max 963.50)
  EPYC 7552:  956.82 (SE +/- 0.82, N = 3; min 955.86 / max 958.45)
  EPYC 7702:  954.20 (SE +/- 1.46, N = 3; min 951.37 / max 956.21)
  EPYC 7532:  949.15 (SE +/- 0.79, N = 3; min 947.85 / max 950.57)
  EPYC 7302P: 943.29 (SE +/- 0.19, N = 3; min 942.97 / max 943.63)
  EPYC 7282:  928.43 (SE +/- 1.22, N = 3; min 926.64 / max 930.77)
  EPYC 7272:  919.01 (SE +/- 0.49, N = 3; min 918.04 / max 919.66)
  EPYC 7232P: 876.45 (SE +/- 0.40, N = 3; min 876.05 / max 877.26)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better):
  EPYC 7502P: 1262530.4 (SE +/- 1998.87, N = 3; min 1259246.9 / max 1266147.1)
  EPYC 7402P: 1248284.1 (SE +/- 607.09, N = 3; min 1247624.2 / max 1249496.7)
  EPYC 7F52:  1247037.6 (SE +/- 771.81, N = 3; min 1245534.1 / max 1248092.1)
  EPYC 7642:  1215056.1 (SE +/- 472.72, N = 3; min 1214294.3 / max 1215921.9)
  EPYC 7702:  1212918.2 (SE +/- 1936.89, N = 3; min 1209097.9 / max 1215383.9)
  EPYC 7662:  1209698.0 (SE +/- 2041.67, N = 3; min 1206399.7 / max 1213431.9)
  EPYC 7552:  1208480.8 (SE +/- 2507.53, N = 3; min 1203466.2 / max 1211047.5)
  EPYC 7532:  1197778.9 (SE +/- 2350.83, N = 3; min 1194820.8 / max 1202422.8)
  EPYC 7302P: 1188387.2 (SE +/- 842.53, N = 3; min 1187003.0 / max 1189911.5)
  EPYC 7282:  1172021.1 (SE +/- 1258.02, N = 3; min 1169540.4 / max 1173625.2)
  EPYC 7542:  1162690.9 (SE +/- 577.19, N = 3; min 1161606.1 / max 1163575.1)
  EPYC 7F32:  1155871.8 (SE +/- 2229.96, N = 3; min 1152601.1 / max 1160133.0)
  EPYC 7272:  1143360.5 (SE +/- 1165.03, N = 3; min 1141134.6 / max 1145070.1)
  EPYC 7232P: 1041996.8 (SE +/- 1045.78, N = 3; min 1040104.9 / max 1043715.1)

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, more is better):
  EPYC 7F32:  1179.30 (SE +/- 0.24, N = 8; min 1178.67 / max 1180.64)
  EPYC 7F52:  1179.02 (SE +/- 2.05, N = 8; min 1173.37 / max 1186.77)
  EPYC 7542:  1035.23 (SE +/- 1.94, N = 7; min 1031.59 / max 1042.84)
  EPYC 7502P: 1022.48 (SE +/- 1.96, N = 7; min 1017.68 / max 1028.23)
  EPYC 7702:  1019.21 (SE +/- 1.34, N = 7; min 1017.42 / max 1027.18)
  EPYC 7402P: 1018.97 (SE +/- 1.99, N = 7; min 1012.27 / max 1026.96)
  EPYC 7552:  1008.24 (SE +/- 1.97, N = 7; min 1002.46 / max 1013.30)
  EPYC 7302P: 1007.95 (SE +/- 1.74, N = 7; min 1001.21 / max 1011.76)
  EPYC 7662:  1007.22 (SE +/- 2.20, N = 7; min 1002.30 / max 1013.81)
  EPYC 7532:  1006.92 (SE +/- 2.27, N = 7; min 999.92 / max 1013.60)
  EPYC 7272:  978.85 (SE +/- 1.94, N = 7; min 972.88 / max 983.32)
  EPYC 7232P: 976.59 (SE +/- 1.98, N = 7; min 970.13 / max 981.11)
  EPYC 7282:  975.22 (SE +/- 2.23, N = 7; min 970.09 / max 983.12)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.1 - CPU Threads: All (MP/s, more is better):
  EPYC 7F52: 207.63 (SE +/- 0.29, N = 3; min 207.06 / max 208.05)
  EPYC 7282: 197.76 (SE +/- 0.06, N = 3; min 197.65 / max 197.84)
  EPYC 7542: 188.06 (SE +/- 0.90, N = 3; min 186.86 / max 189.82)
  EPYC 7F32: 183.95 (SE +/- 0.06, N = 3; min 183.83 / max 184.04)
  EPYC 7532: 171.73 (SE +/- 0.41, N = 3; min 171.00 / max 172.40)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 10 (Frames Per Second, more is better):
  EPYC 7F32:  3.409 (SE +/- 0.010, N = 3; min 3.39 / max 3.42)
  EPYC 7F52:  3.365 (SE +/- 0.017, N = 3; min 3.35 / max 3.40)
  EPYC 7542:  3.004 (SE +/- 0.009, N = 3; min 2.99 / max 3.02)
  EPYC 7702:  2.970 (SE +/- 0.009, N = 3; min 2.95 / max 2.98)
  EPYC 7402P: 2.967 (SE +/- 0.004, N = 3; min 2.96 / max 2.97)
  EPYC 7532:  2.937 (SE +/- 0.006, N = 3; min 2.93 / max 2.94)
  EPYC 7502P: 2.929 (SE +/- 0.005, N = 3; min 2.92 / max 2.94)
  EPYC 7552:  2.929 (SE +/- 0.005, N = 3; min 2.92 / max 2.94)
  EPYC 7662:  2.926 (SE +/- 0.009, N = 3; min 2.91 / max 2.94)
  EPYC 7302P: 2.926 (SE +/- 0.009, N = 3; min 2.92 / max 2.94)
  EPYC 7642:  2.894 (SE +/- 0.008, N = 3; min 2.88 / max 2.90)
  EPYC 7272:  2.861 (SE +/- 0.008, N = 3; min 2.85 / max 2.87)
  EPYC 7282:  2.838 (SE +/- 0.008, N = 3; min 2.82 / max 2.85)
  EPYC 7232P: 2.822 (SE +/- 0.006, N = 3; min 2.81 / max 2.83)

rav1e 0.4 - Speed: 6 (Frames Per Second, more is better):
  EPYC 7F32:  1.555 (SE +/- 0.001, N = 3; min 1.55 / max 1.56)
  EPYC 7F52:  1.542 (SE +/- 0.001, N = 3; min 1.54 / max 1.54)
  EPYC 7542:  1.368 (SE +/- 0.002, N = 3; min 1.36 / max 1.37)
  EPYC 7402P: 1.351 (SE +/- 0.001, N = 3; min 1.35 / max 1.35)
  EPYC 7702:  1.342 (SE +/- 0.001, N = 3; min 1.34 / max 1.34)
  EPYC 7502P: 1.338 (SE +/- 0.002, N = 3; min 1.33 / max 1.34)
  EPYC 7532:  1.336 (SE +/- 0.000, N = 3; min 1.34 / max 1.34)
  EPYC 7662:  1.333 (SE +/- 0.001, N = 3; min 1.33 / max 1.34)
  EPYC 7552:  1.331 (SE +/- 0.001, N = 3; min 1.33 / max 1.33)
  EPYC 7302P: 1.329 (SE +/- 0.000, N = 3; min 1.33 / max 1.33)
  EPYC 7642:  1.320 (SE +/- 0.003, N = 3; min 1.32 / max 1.32)
  EPYC 7272:  1.296 (SE +/- 0.002, N = 3; min 1.29 / max 1.30)
  EPYC 7282:  1.293 (SE +/- 0.002, N = 3; min 1.29 / max 1.30)
  EPYC 7232P: 1.289 (SE +/- 0.001, N = 3; min 1.29 / max 1.29)

rav1e 0.4 - Speed: 5 (Frames Per Second, more is better):
  EPYC 7F32:  1.164 (SE +/- 0.001, N = 3; min 1.16 / max 1.17)
  EPYC 7F52:  1.156 (SE +/- 0.001, N = 3; min 1.15 / max 1.16)
  EPYC 7542:  1.024 (SE +/- 0.003, N = 3; min 1.02 / max 1.03)
  EPYC 7402P: 1.009 (SE +/- 0.001, N = 3; min 1.01 / max 1.01)
  EPYC 7502P: 1.006 (SE +/- 0.001, N = 3; min 1.00 / max 1.01)
  EPYC 7702:  1.005 (SE +/- 0.001, N = 3; min 1.00 / max 1.01)
  EPYC 7532:  0.999 (SE +/- 0.001, N = 3; min 1.00 / max 1.00)
  EPYC 7302P: 0.997 (SE +/- 0.001, N = 3; min 0.99 / max 1.00)
  EPYC 7662:  0.993 (SE +/- 0.001, N = 3; min 0.99 / max 0.99)
  EPYC 7552:  0.992 (SE +/- 0.003, N = 3; min 0.99 / max 1.00)
  EPYC 7642:  0.989 (SE +/- 0.001, N = 3; min 0.99 / max 0.99)
  EPYC 7272:  0.972 (SE +/- 0.001, N = 3; min 0.97 / max 0.97)
  EPYC 7282:  0.969 (SE +/- 0.000, N = 3; min 0.97 / max 0.97)
  EPYC 7232P: 0.965 (SE +/- 0.001, N = 3; min 0.96 / max 0.97)

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
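The "Unkeyed Algorithms" group benchmarked below covers hashes and checksums, i.e. algorithms that take no key. A loose Python analogue of such a throughput measurement using hashlib; this is not the Crypto++ benchmark itself, and the buffer size and algorithm choice are arbitrary:

```python
# Measures single-thread throughput of an unkeyed algorithm (SHA-256
# here) over an in-memory buffer, reported in MiB per second.
import hashlib
import time

def hash_throughput(algorithm="sha256", mib=16):
    buf = b"\x00" * (1024 * 1024)  # 1 MiB chunk fed repeatedly
    h = hashlib.new(algorithm)
    start = time.perf_counter()
    for _ in range(mib):
        h.update(buf)
    elapsed = time.perf_counter() - start
    return mib / elapsed  # MiB/second

if __name__ == "__main__":
    print(f"{hash_throughput():.0f} MiB/s")
```

Because this group is largely single-threaded, the chart below tracks boost clocks rather than core counts, much like the WebP result earlier.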

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better):
  EPYC 7F32:  343.61 (SE +/- 0.04, N = 3; min 343.56 / max 343.68)
  EPYC 7F52:  342.93 (SE +/- 0.17, N = 3; min 342.60 / max 343.11)
  EPYC 7542:  302.93 (SE +/- 0.11, N = 3; min 302.75 / max 303.14)
  EPYC 7702:  298.73 (SE +/- 0.01, N = 3; min 298.70 / max 298.75)
  EPYC 7502P: 298.71 (SE +/- 0.11, N = 3; min 298.57 / max 298.93)
  EPYC 7402P: 298.49 (SE +/- 0.11, N = 3; min 298.33 / max 298.70)
  EPYC 7302P: 294.44 (SE +/- 0.12, N = 3; min 294.21 / max 294.59)
  EPYC 7532:  294.27 (SE +/- 0.11, N = 3; min 294.13 / max 294.49)
  EPYC 7552:  293.88 (SE +/- 0.10, N = 3; min 293.70 / max 294.04)
  EPYC 7662:  293.81 (SE +/- 0.18, N = 3; min 293.61 / max 294.16)
  EPYC 7642:  292.60 (SE +/- 0.93, N = 3; min 290.74 / max 293.53)
  EPYC 7232P: 286.19 (SE +/- 0.14, N = 3; min 285.95 / max 286.43)
  EPYC 7272:  286.18 (SE +/- 0.17, N = 3; min 286.01 / max 286.52)
  EPYC 7282:  285.25 (SE +/- 0.25, N = 3; min 284.78 / max 285.61)
1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
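The metric here, bytes of JSON parsed per second, can be sketched with Python's stdlib json module. This is only an illustration of the measurement; the stdlib parser is orders of magnitude slower than simdjson's SIMD parser, and the sample document is made up:

```python
# Parse-throughput measurement in the spirit of the simdjson benchmark:
# bytes of JSON parsed per second, reported in GB/s.
import json
import time

def parse_throughput(doc: bytes, loops=100):
    start = time.perf_counter()
    for _ in range(loops):
        json.loads(doc)
    elapsed = time.perf_counter() - start
    return len(doc) * loops / elapsed / 1e9  # GB/s

if __name__ == "__main__":
    # Synthetic document loosely resembling a coordinate-heavy payload.
    doc = json.dumps(
        {"points": [[i, i * 2, i * 3] for i in range(1000)]}
    ).encode()
    print(f"{parse_throughput(doc):.3f} GB/s")
```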

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better; SE +/- 0.00, N = 3 for every run):
  EPYC 7F32:  0.53
  EPYC 7F52:  0.53
  EPYC 7542:  0.46
  EPYC 7502P: 0.46
  EPYC 7402P: 0.46
  EPYC 7702:  0.46
  EPYC 7642:  0.45
  EPYC 7532:  0.45
  EPYC 7662:  0.45
  EPYC 7552:  0.45
  EPYC 7302P: 0.45
  EPYC 7282:  0.44
  EPYC 7272:  0.44
  EPYC 7232P: 0.44 (min 0.43 / max 0.44)
1. (CXX) g++ options: -O3 -pthread

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 - Benchmark: P1B2 (Seconds, fewer is better):
  EPYC 7F32:  36.98
  EPYC 7F52:  40.55
  EPYC 7542:  40.89
  EPYC 7502P: 41.04
  EPYC 7302P: 41.15
  EPYC 7402P: 41.33
  EPYC 7532:  41.52
  EPYC 7642:  41.85
  EPYC 7552:  41.91
  EPYC 7662:  42.35
  EPYC 7282:  42.78
  EPYC 7702:  43.16
  EPYC 7272:  43.38
  EPYC 7232P: 44.49

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, fewer is better):
  EPYC 7F52:  92.08 (SE +/- 0.58, N = 3; min 91.23 / max 93.18)
  EPYC 7F32:  93.08 (SE +/- 0.07, N = 3; min 92.93 / max 93.17)
  EPYC 7542:  103.80 (SE +/- 0.18, N = 3; min 103.43 / max 104.00)
  EPYC 7502P: 105.24 (SE +/- 0.50, N = 3; min 104.53 / max 106.20)
  EPYC 7402P: 105.28 (SE +/- 0.65, N = 3; min 104.57 / max 106.58)
  EPYC 7702:  106.58 (SE +/- 0.46, N = 3; min 105.66 / max 107.16)
  EPYC 7302P: 106.77 (SE +/- 0.56, N = 3; min 106.14 / max 107.88)
  EPYC 7532:  107.48 (SE +/- 0.36, N = 3; min 106.77 / max 107.96)
  EPYC 7642:  107.70 (SE +/- 0.98, N = 3; min 105.86 / max 109.21)
  EPYC 7662:  107.88 (SE +/- 0.11, N = 3; min 107.69 / max 108.06)
  EPYC 7552:  108.25 (SE +/- 0.05, N = 3; min 108.18 / max 108.34)
  EPYC 7282:  108.86 (SE +/- 0.58, N = 3; min 108.23 / max 110.01)
  EPYC 7272:  109.60 (SE +/- 0.54, N = 3; min 108.57 / max 110.38)
  EPYC 7232P: 110.50 (SE +/- 1.23, N = 5; min 108.22 / max 115.08)
1. (CXX) g++ options: -O2 -lOpenCL

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1 - Input: PNG - Encode Speed: 8 (MP/s, more is better):
  EPYC 7F32:  0.79 (SE +/- 0.00, N = 3; min 0.78 / max 0.79)
  EPYC 7F52:  0.79 (SE +/- 0.00, N = 3; min 0.79 / max 0.79)
  EPYC 7542:  0.71 (SE +/- 0.00, N = 3; min 0.70 / max 0.71)
  EPYC 7532:  0.69 (SE +/- 0.00, N = 3; min 0.68 / max 0.69)
  EPYC 7502P: 0.69 (SE +/- 0.00, N = 3; min 0.69 / max 0.69)
  EPYC 7282:  0.66 (SE +/- 0.00, N = 3; min 0.66 / max 0.66)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

JPEG XL 0.3.1 - Input: JPEG - Encode Speed: 8 (MP/s, more is better):
  EPYC 7F52:  27.59 (SE +/- 0.04, N = 5; min 27.44 / max 27.67)
  EPYC 7F32:  27.44 (SE +/- 0.02, N = 5; min 27.38 / max 27.47)
  EPYC 7542:  25.23 (SE +/- 0.03, N = 5; min 25.13 / max 25.30)
  EPYC 7502P: 25.03 (SE +/- 0.03, N = 3; min 24.98 / max 25.07)
  EPYC 7532:  24.06 (SE +/- 0.02, N = 5; min 24.02 / max 24.15)
  EPYC 7282:  23.05 (SE +/- 0.02, N = 5; min 22.99 / max 23.11)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks, a suite written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, fewer is better):
  EPYC 7F32:  3724 (SE +/- 35.92, N = 5; min 3653 / max 3860)
  EPYC 7282:  3881 (SE +/- 32.99, N = 5; min 3765 / max 3963)
  EPYC 7542:  3886 (SE +/- 12.58, N = 5; min 3845 / max 3916)
  EPYC 7232P: 3894 (SE +/- 22.80, N = 5; min 3832 / max 3948)
  EPYC 7502P: 3912 (SE +/- 28.80, N = 5; min 3827 / max 3971)
  EPYC 7272:  3971 (SE +/- 33.20, N = 5; min 3886 / max 4055)
  EPYC 7402P: 3993 (SE +/- 9.28, N = 5; min 3975 / max 4028)
  EPYC 7F52:  4085 (SE +/- 35.77, N = 5; min 3969 / max 4158)
  EPYC 7302P: 4138 (SE +/- 20.03, N = 5; min 4074 / max 4180)
  EPYC 7552:  4218 (SE +/- 22.36, N = 5; min 4158 / max 4273)
  EPYC 7532:  4303 (SE +/- 15.90, N = 5; min 4246 / max 4341)
  EPYC 7702:  4407 (SE +/- 33.26, N = 5; min 4330 / max 4494)
  EPYC 7662:  4457 (SE +/- 25.14, N = 5; min 4392 / max 4536)

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.1 - CPU Threads: 1 (MP/s, more is better):
  EPYC 7F32: 38.42 (SE +/- 0.04, N = 3; min 38.34 / max 38.47)
  EPYC 7F52: 37.94 (SE +/- 0.03, N = 3; min 37.88 / max 37.97)
  EPYC 7542: 34.22 (SE +/- 0.01, N = 3; min 34.20 / max 34.24)
  EPYC 7532: 32.69 (SE +/- 0.05, N = 3; min 32.59 / max 32.77)
  EPYC 7282: 32.16 (SE +/- 0.02, N = 3; min 32.13 / max 32.20)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, more is better):
  EPYC 7F52:  1635794.57 (SE +/- 17849.12, N = 5; min 1586048 / max 1671133.62)
  EPYC 7F32:  1601070.50 (SE +/- 19118.80, N = 4; min 1561051.5 / max 1650709.75)
  EPYC 7542:  1508082.41 (SE +/- 13573.21, N = 15; min 1433897.38 / max 1604899.38)
  EPYC 7702:  1482738.92 (SE +/- 18256.32, N = 3; min 1456664.25 / max 1517911.38)
  EPYC 7532:  1481804.00 (SE +/- 21214.43, N = 3; min 1456461.38 / max 1523945.12)
  EPYC 7402P: 1466557.03 (SE +/- 13170.58, N = 15; min 1388893.38 / max 1567157.5)
  EPYC 7552:  1450496.71 (SE +/- 8071.25, N = 3; min 1439056 / max 1466079.5)
  EPYC 7302P: 1438157.73 (SE +/- 13583.32, N = 6; min 1387351.75 / max 1487252.88)
  EPYC 7642:  1437534.23 (SE +/- 11135.64, N = 15; min 1363529.88 / max 1507163.75)
  EPYC 7232P: 1421564.35 (SE +/- 18495.33, N = 15; min 1340662.25 / max 1628409.75)
  EPYC 7502P: 1419600.13 (SE +/- 18437.40, N = 3; min 1387942.88 / max 1451804.88)
  EPYC 7662:  1417246.67 (SE +/- 13883.40, N = 3; min 1397042.75 / max 1443844)
  EPYC 7282:  1414840.41 (SE +/- 14367.89, N = 15; min 1349900.62 / max 1535877.12)
  EPYC 7272:  1372468.88 (SE +/- 12289.67, N = 3; min 1350621.38 / max 1393145.75)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Tinymembench

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.
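Tinymembench's Standard Memset test measures how fast a buffer can be filled. A crude Python analogue of that bandwidth calculation; buffer size and loop count are arbitrary, and Python adds interpreter overhead the C benchmark does not have:

```python
# Crude memset-style bandwidth probe: repeatedly fill a buffer with a
# constant byte and report MB/s. Only a rough analogue of the C test.
import time

def memset_bandwidth(size_mb=8, loops=20):
    buf = bytearray(size_mb * 1024 * 1024)
    filler = b"\xff" * len(buf)
    start = time.perf_counter()
    for _ in range(loops):
        buf[:] = filler  # one bulk write over the whole buffer
    elapsed = time.perf_counter() - start
    return size_mb * loops / elapsed  # MB/s

if __name__ == "__main__":
    print(f"{memset_bandwidth():.0f} MB/s")
```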

Tinymembench 2018-05-28 - Standard Memset (MB/s, more is better):
  EPYC 7702:  17329.8 (SE +/- 217.98, N = 3; min 16900.6 / max 17610.5)
  EPYC 7552:  16494.4 (SE +/- 217.68, N = 3; min 16092.5 / max 16840.3)
  EPYC 7F32:  16357.2 (SE +/- 28.30, N = 3; min 16301.7 / max 16394.6)
  EPYC 7542:  15872.0 (SE +/- 39.21, N = 3; min 15794.6 / max 15921.5)
  EPYC 7662:  15640.1 (SE +/- 52.61, N = 3; min 15534.9 / max 15694.5)
  EPYC 7F52:  15585.7 (SE +/- 39.37, N = 3; min 15512.6 / max 15647.6)
  EPYC 7502P: 15097.1 (SE +/- 13.93, N = 3; min 15074.8 / max 15122.7)
  EPYC 7402P: 14961.4 (SE +/- 46.42, N = 3; min 14876.5 / max 15036.4)
  EPYC 7282:  14921.3 (SE +/- 18.80, N = 3; min 14884.6 / max 14946.8)
  EPYC 7302P: 14820.3 (SE +/- 20.90, N = 3; min 14781.5 / max 14853.2)
  EPYC 7532:  14786.9 (SE +/- 27.08, N = 3; min 14734.2 / max 14824.0)
  EPYC 7272:  14776.4 (SE +/- 37.39, N = 3; min 14704.8 / max 14830.9)
  EPYC 7232P: 14571.7 (SE +/- 21.61, N = 3; min 14546.4 / max 14614.7)
1. (CC) gcc options: -O2 -lm

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better):
  EPYC 7F32:  38.59 (SE +/- 0.01, N = 3; min 38.57 / max 38.62)
  EPYC 7F52:  38.01 (SE +/- 0.03, N = 3; min 37.96 / max 38.06)
  EPYC 7702:  34.50 (SE +/- 0.09, N = 3; min 34.32 / max 34.63)
  EPYC 7542:  34.38 (SE +/- 0.05, N = 3; min 34.31 / max 34.48)
  EPYC 7502P: 34.11 (SE +/- 0.09, N = 3; min 33.96 / max 34.27)
  EPYC 7402P: 34.03 (SE +/- 0.07, N = 3; min 33.89 / max 34.12)
  EPYC 7662:  34.01 (SE +/- 0.17, N = 3; min 33.71 / max 34.31)
  EPYC 7302P: 33.71 (SE +/- 0.02, N = 3; min 33.67 / max 33.74)
  EPYC 7552:  33.64 (SE +/- 0.23, N = 3; min 33.20 / max 34.00)
  EPYC 7272:  33.12 (SE +/- 0.05, N = 3; min 33.04 / max 33.21)
  EPYC 7282:  33.08 (SE +/- 0.10, N = 3; min 32.96 / max 33.27)
  EPYC 7532:  33.06 (SE +/- 0.07, N = 3; min 32.93 / max 33.13)
  EPYC 7232P: 32.48 (SE +/- 0.02, N = 3; min 32.44 / max 32.51)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  EPYC 7542:  285 (SE +/- 0.67, N = 3; min 284.5 / max 286.5)
  EPYC 7402P: 280 (SE +/- 0.17, N = 3; min 279.5 / max 280)
  EPYC 7502P: 277 (SE +/- 0.88, N = 3; min 275.5 / max 278.5)
  EPYC 7302P: 277 (SE +/- 0.29, N = 3; min 276.5 / max 277.5)
  EPYC 7532:  271 (SE +/- 0.17, N = 3; min 270.5 / max 271)
  EPYC 7282:  267 (SE +/- 0.33, N = 3; min 266.5 / max 267.5)
  EPYC 7552:  264 (SE +/- 0.50, N = 3; min 263 / max 264.5)
  EPYC 7642:  263 (SE +/- 1.01, N = 3; min 261 / max 264.5)
  EPYC 7F32:  262 (SE +/- 1.00, N = 3; min 260 / max 263)
  EPYC 7272:  259 (SE +/- 0.29, N = 3; min 258.5 / max 259.5)
  EPYC 7F52:  248 (no run spread reported)
  EPYC 7662:  243 (SE +/- 0.93, N = 3; min 242 / max 245)
  EPYC 7232P: 243 (SE +/- 0.60, N = 3; min 242.5 / max 244.5)
  EPYC 7702:  240 (SE +/- 1.30, N = 3; min 238 / max 242.5)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better):
  EPYC 7F52:  1336546.21 (SE +/- 8091.69, N = 3; min 1320510.25 / max 1346451.62)
  EPYC 7F32:  1315900.46 (SE +/- 13442.74, N = 3; min 1289503 / max 1333515.38)
  EPYC 7702:  1228831.79 (SE +/- 22602.36, N = 12; min 1142730.12 / max 1378956.5)
  EPYC 7662:  1200527.31 (SE +/- 18435.82, N = 15; min 1107542.38 / max 1349007.38)
  EPYC 7542:  1189270.87 (SE +/- 15275.38, N = 3; min 1161717.5 / max 1214476.5)
  EPYC 7532:  1186674.32 (SE +/- 15826.22, N = 15; min 1117600.38 / max 1295341)
  EPYC 7642:  1183717.45 (SE +/- 16083.41, N = 14; min 1125387.38 / max 1363145)
  EPYC 7502P: 1180459.91 (SE +/- 14615.10, N = 4; min 1147582.38 / max 1209351.5)
  EPYC 7302P: 1173570.00 (SE +/- 12613.83, N = 5; min 1141950.88 / max 1208467)
  EPYC 7552:  1170719.46 (SE +/- 14106.76, N = 3; min 1151024 / max 1198062)
  EPYC 7272:  1156241.74 (SE +/- 17229.52, N = 15; min 1088613.12 / max 1310666.25)
  EPYC 7402P: 1147451.38 (SE +/- 4313.29, N = 3; min 1143020.75 / max 1156076.88)
  EPYC 7282:  1143594.63 (SE +/- 15930.17, N = 3; min 1112121 / max 1163617.38)
  EPYC 7232P: 1126925.50 (SE +/- 11537.36, N = 4; min 1105957.12 / max 1159285.88)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9, Test: LPUSH (Requests Per Second; more is better)
  EPYC 7F32:  1174520.46 (SE +/- 6374.74, N = 3; min 1167284.62 / max 1187229.25)
  EPYC 7F52:  1174427.00 (SE +/- 8000.25, N = 3; min 1165101.25 / max 1190349.75)
  EPYC 7542:  1067438.77 (SE +/- 11205.82, N = 5; min 1044932.12 / max 1106467.75)
  EPYC 7532:  1059976.32 (SE +/- 10288.34, N = 15; min 1006359.25 / max 1149564.75)
  EPYC 7402P: 1045821.59 (SE +/- 11072.90, N = 15; min 987478.25 / max 1105957.12)
  EPYC 7502P: 1045486.81 (SE +/- 9569.43, N = 3; min 1029134.12 / max 1062275)
  EPYC 7662:  1039389.17 (SE +/- 5890.12, N = 3; min 1027854.88 / max 1047230.12)
  EPYC 7272:  1037523.56 (SE +/- 15424.51, N = 15; min 976187.38 / max 1152915)
  EPYC 7702:  1031876.33 (SE +/- 7054.62, N = 13; min 1004120.12 / max 1093254.62)
  EPYC 7302P: 1030785.79 (SE +/- 8436.86, N = 3; min 1014202.06 / max 1041775.19)
  EPYC 7642:  1018402.69 (SE +/- 4227.00, N = 3; min 1013376.56 / max 1026802.69)
  EPYC 7552:  1013049.13 (SE +/- 7701.66, N = 3; min 1004120.12 / max 1028383.38)
  EPYC 7282:  997652.46 (SE +/- 3774.33, N = 3; min 990504 / max 1003327)
  EPYC 7232P: 991962.08 (SE +/- 6585.32, N = 3; min 978965.06 / max 1000306.5)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9, Test: SADD (Requests Per Second; more is better)
  EPYC 7F52:  1514902.30 (SE +/- 8827.27, N = 3; min 1500849.5 / max 1531183.5)
  EPYC 7F32:  1510395.93 (SE +/- 11422.05, N = 10; min 1461584.38 / max 1591637.12)
  EPYC 7402P: 1393300.08 (SE +/- 20897.40, N = 15; min 1309088.62 / max 1560067.38)
  EPYC 7542:  1388206.16 (SE +/- 17960.88, N = 3; min 1357040.62 / max 1419258.62)
  EPYC 7502P: 1387616.13 (SE +/- 16125.60, N = 15; min 1293828.38 / max 1488321.5)
  EPYC 7642:  1360852.66 (SE +/- 16607.14, N = 15; min 1297029.25 / max 1498136.88)
  EPYC 7552:  1349388.24 (SE +/- 17355.22, N = 15; min 1255650.38 / max 1542049.62)
  EPYC 7702:  1347163.14 (SE +/- 13671.93, N = 15; min 1270656.12 / max 1487873.75)
  EPYC 7302P: 1345235.04 (SE +/- 8695.76, N = 3; min 1327844.88 / max 1354117.75)
  EPYC 7662:  1328786.50 (SE +/- 12423.96, N = 3; min 1314595.5 / max 1353546.25)
  EPYC 7282:  1325778.66 (SE +/- 16958.48, N = 15; min 1238091.38 / max 1481061.62)
  EPYC 7532:  1319402.58 (SE +/- 3475.48, N = 3; min 1313197.62 / max 1325218.12)
  EPYC 7232P: 1302498.71 (SE +/- 8933.33, N = 3; min 1285520.5 / max 1315806.38)
  EPYC 7272:  1285763.37 (SE +/- 13873.03, N = 3; min 1260088.75 / max 1307710.75)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression ratios than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.
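The MP/s figures below are simply megapixels of input divided by wall-clock encode time. A minimal sketch of that arithmetic (the frame size and timing here are illustration values, not taken from this result file):

```python
def mp_per_sec(width, height, frames, seconds):
    """Encoder throughput in megapixels per second (MP/s)."""
    return width * height * frames / seconds / 1e6

# A hypothetical 1920x1080 frame encoded in 0.2 s works out to 10.368 MP/s,
# the same ballpark as the EPYC results below.
print(mp_per_sec(1920, 1080, 1, 0.2))  # 10.368
```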

JPEG XL 0.3.1, Input: PNG - Encode Speed: 7 (MP/s; more is better)
  EPYC 7F52:  10.69 (SE +/- 0.01, N = 3; min 10.68 / max 10.7)
  EPYC 7542:  10.09 (SE +/- 0.00, N = 3; min 10.08 / max 10.09)
  EPYC 7502P: 10.02 (SE +/- 0.01, N = 3; min 10.01 / max 10.03)
  EPYC 7F32:  9.78 (SE +/- 0.01, N = 3; min 9.77 / max 9.79)
  EPYC 7532:  9.70 (SE +/- 0.01, N = 3; min 9.68 / max 9.71)
  EPYC 7282:  9.08 (SE +/- 0.01, N = 3; min 9.06 / max 9.09)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

Renaissance

Renaissance is a suite of benchmarks designed to stress the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: In-Memory Database Shootout (ms; fewer is better)
  EPYC 7302P: 4254.65 (SE +/- 58.56, N = 8; min 4090.92 / max 4484.41)
  EPYC 7402P: 4340.28 (SE +/- 31.07, N = 25; min 3988.26 / max 4553.94)
  EPYC 7702:  5000.14 (SE +/- 22.10, N = 5; min 4929.24 / max 5053)

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds; fewer is better)
  EPYC 7F52:  50.59 (SE +/- 0.24, N = 3; min 50.13 / max 50.91)
  EPYC 7542:  52.99 (SE +/- 0.30, N = 3; min 52.43 / max 53.44)
  EPYC 7F32:  52.99 (SE +/- 0.33, N = 3; min 52.38 / max 53.51)
  EPYC 7402P: 53.36 (SE +/- 0.41, N = 3; min 52.68 / max 54.09)
  EPYC 7502P: 53.72 (SE +/- 0.42, N = 3; min 52.98 / max 54.43)
  EPYC 7552:  55.42 (SE +/- 0.28, N = 3; min 54.86 / max 55.71)
  EPYC 7532:  55.57 (SE +/- 0.36, N = 3; min 55.11 / max 56.28)
  EPYC 7702:  55.68 (SE +/- 0.32, N = 3; min 55.04 / max 56.12)
  EPYC 7302P: 55.77 (SE +/- 0.45, N = 3; min 54.93 / max 56.46)
  EPYC 7662:  55.95 (SE +/- 0.23, N = 3; min 55.51 / max 56.27)
  EPYC 7282:  56.10 (SE +/- 0.50, N = 3; min 55.21 / max 56.93)
  EPYC 7272:  57.24 (SE +/- 0.07, N = 3; min 57.12 / max 57.34)
  EPYC 7232P: 59.31 (SE +/- 0.80, N = 3; min 57.78 / max 60.47)

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec; more is better)
  EPYC 7F52:  433091.73 (SE +/- 428.23, N = 3; min 432304.57 / max 433777.59)
  EPYC 7F32:  424600.07 (SE +/- 183.84, N = 3; min 424252.48 / max 424877.7)
  EPYC 7542:  422944.46 (SE +/- 3270.77, N = 3; min 416505.35 / max 427162.66)
  EPYC 7282:  420324.96 (SE +/- 1079.81, N = 3; min 419061.55 / max 422473.5)
  EPYC 7272:  418080.47 (SE +/- 1365.59, N = 3; min 416043.78 / max 420674.69)
  EPYC 7402P: 415583.96 (SE +/- 3460.73, N = 3; min 412054.17 / max 422504.97)
  EPYC 7502P: 413354.59 (SE +/- 430.27, N = 3; min 412498.94 / max 413861.79)
  EPYC 7302P: 405905.78 (SE +/- 639.41, N = 3; min 404993.42 / max 407138)
  EPYC 7532:  404313.15 (SE +/- 2791.26, N = 3; min 399536.5 / max 409203.69)
  EPYC 7232P: 399110.88 (SE +/- 533.37, N = 3; min 398044.81 / max 399676.81)
  EPYC 7552:  394639.23 (SE +/- 5135.26, N = 3; min 385396.59 / max 403139.08)
  EPYC 7642:  386824.54 (SE +/- 4280.31, N = 3; min 379984.9 / max 394702.8)
  EPYC 7662:  376241.11 (SE +/- 4382.81, N = 3; min 368183.94 / max 383259.45)
  EPYC 7702:  371448.21 (SE +/- 4507.61, N = 3; min 362545.96 / max 377131.41)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML, FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS; more is better)
  EPYC 7F52:  5.99 (SE +/- 0.05, N = 3; min 5.9 / max 6.05)
  EPYC 7F32:  5.96 (SE +/- 0.03, N = 3; min 5.9 / max 6.02)
  EPYC 7542:  5.84 (SE +/- 0.01, N = 3; min 5.82 / max 5.86)
  EPYC 7502P: 5.76 (SE +/- 0.01, N = 3; min 5.73 / max 5.78)
  EPYC 7662:  5.76 (SE +/- 0.02, N = 3; min 5.73 / max 5.79)
  EPYC 7642:  5.72 (SE +/- 0.01, N = 3; min 5.7 / max 5.73)
  EPYC 7532:  5.71 (SE +/- 0.02, N = 3; min 5.68 / max 5.73)
  EPYC 7402P: 5.68 (SE +/- 0.03, N = 3; min 5.62 / max 5.72)
  EPYC 7552:  5.67 (SE +/- 0.01, N = 3; min 5.64 / max 5.69)
  EPYC 7302P: 5.67 (SE +/- 0.01, N = 3; min 5.65 / max 5.68)
  EPYC 7282:  5.57 (SE +/- 0.01, N = 3; min 5.55 / max 5.59)
  EPYC 7702:  5.56 (SE +/- 0.01, N = 3; min 5.54 / max 5.58)
  EPYC 7272:  5.53 (SE +/- 0.04, N = 3; min 5.46 / max 5.61)
  EPYC 7232P: 5.17 (SE +/- 0.02, N = 3; min 5.14 / max 5.2)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 8 Realtime (Frames Per Second; more is better)
  EPYC 7F52:  35.56 (SE +/- 0.03, N = 3; min 35.51 / max 35.62)
  EPYC 7F32:  35.48 (SE +/- 0.06, N = 3; min 35.37 / max 35.54)
  EPYC 7542:  33.51 (SE +/- 0.06, N = 3; min 33.4 / max 33.6)
  EPYC 7402P: 32.99 (SE +/- 0.07, N = 3; min 32.85 / max 33.06)
  EPYC 7502P: 32.86 (SE +/- 0.34, N = 5; min 31.51 / max 33.21)
  EPYC 7702:  32.78 (SE +/- 0.03, N = 3; min 32.72 / max 32.82)
  EPYC 7302P: 32.62 (SE +/- 0.11, N = 3; min 32.42 / max 32.79)
  EPYC 7552:  32.60 (SE +/- 0.02, N = 3; min 32.57 / max 32.62)
  EPYC 7662:  32.54 (SE +/- 0.00, N = 3; min 32.54 / max 32.55)
  EPYC 7532:  32.35 (SE +/- 0.03, N = 3; min 32.3 / max 32.38)
  EPYC 7642:  32.31 (SE +/- 0.03, N = 3; min 32.25 / max 32.36)
  EPYC 7282:  32.03 (SE +/- 0.04, N = 3; min 31.98 / max 32.11)
  EPYC 7272:  31.63 (SE +/- 0.03, N = 3; min 31.58 / max 31.69)
  EPYC 7232P: 31.08 (SE +/- 0.03, N = 3; min 31.04 / max 31.13)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression ratios than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: JPEG - Encode Speed: 5 (MP/s; more is better)
  EPYC 7F32:  60.57 (SE +/- 0.12, N = 3; min 60.37 / max 60.77)
  EPYC 7F52:  60.25 (SE +/- 0.18, N = 3; min 59.93 / max 60.57)
  EPYC 7542:  58.68 (SE +/- 0.04, N = 3; min 58.62 / max 58.75)
  EPYC 7502P: 58.59 (SE +/- 0.17, N = 3; min 58.4 / max 58.93)
  EPYC 7532:  54.97 (SE +/- 0.08, N = 3; min 54.84 / max 55.11)
  EPYC 7282:  53.84 (SE +/- 0.16, N = 3; min 53.64 / max 54.15)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

JPEG XL 0.3.1, Input: JPEG - Encode Speed: 7 (MP/s; more is better)
  EPYC 7F32:  60.25 (SE +/- 0.15, N = 4; min 59.9 / max 60.56)
  EPYC 7F52:  60.12 (SE +/- 0.24, N = 4; min 59.67 / max 60.74)
  EPYC 7502P: 58.62 (SE +/- 0.18, N = 3; min 58.33 / max 58.95)
  EPYC 7542:  58.59 (SE +/- 0.03, N = 3; min 58.56 / max 58.65)
  EPYC 7532:  54.73 (SE +/- 0.12, N = 3; min 54.6 / max 54.97)
  EPYC 7282:  53.75 (SE +/- 0.13, N = 3; min 53.53 / max 53.97)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but does require root permissions. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device. The two WireGuard devices send traffic through the loopback device of ns0, so the test winds up exercising encryption and decryption at the same time -- a CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds; fewer is better)
  EPYC 7542:  291.08 (SE +/- 0.63, N = 3; min 290.32 / max 292.34)
  EPYC 7502P: 292.42 (SE +/- 1.08, N = 3; min 291.12 / max 294.56)
  EPYC 7282:  296.16 (SE +/- 0.25, N = 3; min 295.71 / max 296.59)
  EPYC 7402P: 297.53 (SE +/- 1.52, N = 3; min 295.48 / max 300.5)
  EPYC 7F52:  299.33 (SE +/- 0.12, N = 3; min 299.09 / max 299.49)
  EPYC 7552:  302.11 (SE +/- 0.75, N = 3; min 301.02 / max 303.55)
  EPYC 7662:  302.42 (SE +/- 1.82, N = 3; min 299.36 / max 305.67)
  EPYC 7F32:  302.70 (SE +/- 0.79, N = 3; min 301.87 / max 304.28)
  EPYC 7532:  305.06 (SE +/- 0.79, N = 3; min 303.92 / max 306.58)
  EPYC 7272:  305.22 (SE +/- 0.55, N = 3; min 304.14 / max 305.91)
  EPYC 7702:  307.85 (SE +/- 1.60, N = 3; min 305.74 / max 310.98)
  EPYC 7302P: 308.49 (SE +/- 0.22, N = 3; min 308.17 / max 308.9)
  EPYC 7232P: 326.23 (SE +/- 0.07, N = 3; min 326.14 / max 326.37)

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically the molecular, cellular, and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3, Benchmark: P3B2 (Seconds; fewer is better)
  EPYC 7282:  736.22
  EPYC 7542:  739.71
  EPYC 7272:  748.54
  EPYC 7402P: 749.67
  EPYC 7502P: 755.01
  EPYC 7302P: 763.80
  EPYC 7F32:  775.67
  EPYC 7232P: 783.24
  EPYC 7532:  783.25
  EPYC 7F52:  785.07
  EPYC 7552:  788.52
  EPYC 7662:  789.56
  EPYC 7642:  792.74
  EPYC 7702:  817.66

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6, Acceleration: CPU (Seconds; fewer is better)
  EPYC 7F32:  69.51 (SE +/- 0.14, N = 3; min 69.32 / max 69.77)
  EPYC 7702:  70.11 (SE +/- 0.28, N = 3; min 69.57 / max 70.51)
  EPYC 7642:  70.17 (SE +/- 0.58, N = 3; min 69.04 / max 70.95)
  EPYC 7662:  70.67 (SE +/- 0.10, N = 3; min 70.52 / max 70.87)
  EPYC 7552:  70.72 (SE +/- 0.13, N = 3; min 70.48 / max 70.91)
  EPYC 7532:  71.56 (SE +/- 0.09, N = 3; min 71.46 / max 71.73)
  EPYC 7F52:  71.67 (SE +/- 0.45, N = 3; min 70.79 / max 72.23)
  EPYC 7542:  72.13 (SE +/- 0.14, N = 3; min 71.89 / max 72.38)
  EPYC 7502P: 72.21 (SE +/- 0.52, N = 3; min 71.26 / max 73.03)
  EPYC 7402P: 72.68 (SE +/- 0.20, N = 3; min 72.4 / max 73.08)
  EPYC 7232P: 72.99 (SE +/- 0.15, N = 3; min 72.82 / max 73.3)
  EPYC 7302P: 73.17 (SE +/- 0.22, N = 3; min 72.74 / max 73.46)
  EPYC 7272:  73.79 (SE +/- 0.20, N = 3; min 73.4 / max 74.01)
  EPYC 7282:  74.99 (SE +/- 0.07, N = 3; min 74.85 / max 75.07)

Tinymembench

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.
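Tinymembench itself is written in C; as a rough illustration of what its "standard memcpy" figure measures, the sketch below times a large buffer-to-buffer copy and divides the size by the elapsed time to get MB/s. The sizes are arbitrary, and interpreter overhead means the absolute numbers will not match the C results below:

```python
import time

def memcpy_bandwidth(size_mb=256, repeats=5):
    """Time a plain buffer-to-buffer copy and report MB/s,
    roughly what tinymembench's standard-memcpy test does in C."""
    src = bytearray(size_mb * 1024 * 1024)
    dst = bytearray(len(src))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst[:] = src          # one large memcpy under the hood
        best = min(best, time.perf_counter() - t0)
    return size_mb / best     # MB/s, using the fastest repeat

print(f"{memcpy_bandwidth():.1f} MB/s")
```

Taking the best of several repeats, as here, reduces the impact of scheduler noise on a short measurement.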

Tinymembench 2018-05-28, Standard Memcpy (MB/s; more is better)
  EPYC 7702:  9314.0 (SE +/- 44.65, N = 3; min 9224.8 / max 9362.9)
  EPYC 7552:  9087.1 (SE +/- 35.12, N = 3; min 9051.6 / max 9157.3)
  EPYC 7F52:  9055.1 (SE +/- 1.52, N = 3; min 9052.3 / max 9057.5)
  EPYC 7F32:  9025.4 (SE +/- 2.03, N = 3; min 9023.3 / max 9029.5)
  EPYC 7542:  8907.7 (SE +/- 17.46, N = 3; min 8873.4 / max 8930.4)
  EPYC 7402P: 8902.8 (SE +/- 19.03, N = 3; min 8864.9 / max 8924.5)
  EPYC 7502P: 8854.2 (SE +/- 2.05, N = 3; min 8851 / max 8858)
  EPYC 7662:  8852.0 (SE +/- 14.82, N = 3; min 8829.6 / max 8880)
  EPYC 7282:  8850.6 (SE +/- 7.90, N = 3; min 8835.2 / max 8861.4)
  EPYC 7302P: 8842.7 (SE +/- 2.29, N = 3; min 8838.4 / max 8846.2)
  EPYC 7532:  8829.9 (SE +/- 2.22, N = 3; min 8825.5 / max 8832.7)
  EPYC 7272:  8825.7 (SE +/- 22.30, N = 3; min 8781.6 / max 8853.7)
  EPYC 7232P: 8805.9 (SE +/- 2.95, N = 3; min 8800 / max 8809)
1. (CC) gcc options: -O2 -lm

Renaissance

Renaissance is a suite of benchmarks designed to stress the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Random Forest (ms; fewer is better)
  EPYC 7402P: 2059.09 (SE +/- 16.12, N = 25; min 1853.69 / max 2183.92)
  EPYC 7702:  2114.38 (SE +/- 16.39, N = 5; min 2077.52 / max 2169.74)
  EPYC 7302P: 2167.67 (SE +/- 14.97, N = 25; min 2048.25 / max 2398.13)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
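The MB/s figures reported here are simply bytes processed divided by wall-clock time. LZ4 is not in the Python standard library, so the sketch below substitutes stdlib zlib purely to illustrate the same compress/decompress timing methodology; absolute speeds are not comparable to the LZ4 numbers:

```python
import time
import zlib

def codec_speeds(data, level=1):
    """Compression and decompression throughput in MB/s for one buffer.
    zlib stands in for LZ4 (which is not in the Python stdlib); the
    bytes-over-seconds arithmetic mirrors what the LZ4 test reports."""
    mb = len(data) / 1e6
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    c_speed = mb / (time.perf_counter() - t0)
    t0 = time.perf_counter()
    assert zlib.decompress(packed) == data   # verify the round trip
    d_speed = mb / (time.perf_counter() - t0)
    return c_speed, d_speed

data = bytes(range(256)) * 40_000   # ~10 MB of compressible input
print("compress %.0f MB/s, decompress %.0f MB/s" % codec_speeds(data))
```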

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s; more is better)
  EPYC 7F32:  10561.9 (SE +/- 12.37, N = 15; min 10508.8 / max 10609.7)
  EPYC 7542:  10292.5 (SE +/- 71.90, N = 3; min 10179.5 / max 10426)
  EPYC 7402P: 10256.3 (SE +/- 36.49, N = 3; min 10188.2 / max 10313.1)
  EPYC 7662:  10249.3 (SE +/- 42.91, N = 3; min 10163.9 / max 10299.5)
  EPYC 7702:  10219.5 (SE +/- 25.22, N = 14; min 10124.1 / max 10411.5)
  EPYC 7532:  10188.6 (SE +/- 51.28, N = 3; min 10089.5 / max 10261)
  EPYC 7502P: 10184.5 (SE +/- 31.02, N = 3; min 10134.8 / max 10241.5)
  EPYC 7552:  10156.5 (SE +/- 28.28, N = 5; min 10088.8 / max 10226.8)
  EPYC 7302P: 10132.1 (SE +/- 35.59, N = 3; min 10093.6 / max 10203.2)
  EPYC 7282:  10110.9 (SE +/- 51.98, N = 3; min 10010.1 / max 10183.4)
  EPYC 7272:  10070.2 (SE +/- 37.27, N = 3; min 10010.8 / max 10138.9)
  EPYC 7F52:  10068.5 (SE +/- 33.96, N = 3; min 10031 / max 10136.3)
  EPYC 7232P: 10057.8 (SE +/- 28.32, N = 5; min 10025.8 / max 10170.9)
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s; more is better)
  EPYC 7F32:  10567.2 (SE +/- 34.53, N = 3; min 10498.8 / max 10609.5)
  EPYC 7502P: 10232.8 (SE +/- 14.46, N = 3; min 10214 / max 10261.2)
  EPYC 7542:  10222.4 (SE +/- 29.33, N = 3; min 10167.9 / max 10268.4)
  EPYC 7402P: 10209.3 (SE +/- 36.01, N = 4; min 10125 / max 10270.7)
  EPYC 7302P: 10172.7 (SE +/- 93.14, N = 3; min 10073 / max 10358.8)
  EPYC 7662:  10170.7 (SE +/- 44.06, N = 3; min 10082.6 / max 10215.9)
  EPYC 7702:  10162.0 (SE +/- 18.64, N = 7; min 10103.5 / max 10237.8)
  EPYC 7552:  10159.7 (SE +/- 54.81, N = 3; min 10076.5 / max 10263.1)
  EPYC 7532:  10118.0 (SE +/- 43.54, N = 3; min 10073.2 / max 10205.1)
  EPYC 7282:  10112.2 (SE +/- 66.67, N = 3; min 9978.9 / max 10180.1)
  EPYC 7272:  10101.0 (SE +/- 29.05, N = 3; min 10050 / max 10150.6)
  EPYC 7F52:  10099.6 (SE +/- 89.78, N = 3; min 9990.9 / max 10277.7)
  EPYC 7232P: 10063.0 (SE +/- 44.49, N = 3; min 10013.3 / max 10151.8)
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s; more is better)
  EPYC 7F32:  11210.0 (SE +/- 42.46, N = 3; min 11129.3 / max 11273.3)
  EPYC 7302P: 11006.6 (SE +/- 18.80, N = 3; min 10976.1 / max 11040.9)
  EPYC 7702:  10996.1 (SE +/- 36.93, N = 3; min 10957.3 / max 11069.9)
  EPYC 7542:  10993.2 (SE +/- 47.44, N = 3; min 10916.9 / max 11080.2)
  EPYC 7502P: 10968.6 (SE +/- 5.92, N = 3; min 10961.1 / max 10980.3)
  EPYC 7402P: 10951.0 (SE +/- 43.01, N = 3; min 10899.4 / max 11036.4)
  EPYC 7232P: 10936.6 (SE +/- 37.22, N = 3; min 10862.2 / max 10975.1)
  EPYC 7532:  10931.6 (SE +/- 41.47, N = 3; min 10851.3 / max 10989.7)
  EPYC 7272:  10929.9 (SE +/- 27.35, N = 3; min 10878.3 / max 10971.4)
  EPYC 7662:  10926.3 (SE +/- 27.42, N = 3; min 10874.5 / max 10967.8)
  EPYC 7552:  10920.5 (SE +/- 23.29, N = 3; min 10874.2 / max 10948.2)
  EPYC 7282:  10898.3 (SE +/- 7.93, N = 3; min 10885.6 / max 10912.9)
  EPYC 7F52:  10701.3 (SE +/- 24.26, N = 3; min 10654.2 / max 10734.9)
1. (CC) gcc options: -O3

MBW

This is a simple memory (RAM) bandwidth benchmark for memory copy operations. Learn more via the OpenBenchmarking.org test page.
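As a rough sketch of MBW's methodology (the real tool is C; the array and block sizes here are arbitrary illustration values): copy one array into another, either in a single shot or in fixed-size blocks, and divide the array size by the elapsed time:

```python
import time

def copy_bandwidth(array_mib=64, block_kib=None):
    """MiB/s for copying one array into another, as MBW does.
    block_kib=None copies in one shot (the 'Memory Copy' test);
    a value emulates the 'Fixed Block Size' variant."""
    n = array_mib * 1024 * 1024
    src, dst = bytearray(n), bytearray(n)
    step = n if block_kib is None else block_kib * 1024
    t0 = time.perf_counter()
    for off in range(0, n, step):
        dst[off:off + step] = src[off:off + step]
    return array_mib / (time.perf_counter() - t0)

print(f"one-shot: {copy_bandwidth():.0f} MiB/s, "
      f"256 KiB blocks: {copy_bandwidth(block_kib=256):.0f} MiB/s")
```

The block-size loop adds per-iteration overhead, which is why the fixed-block figures below trail the one-shot copy results.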

MBW 2018-09-08, Test: Memory Copy - Array Size: 8192 MiB (MiB/s; more is better)
  EPYC 7F32:  15666.70 (SE +/- 4.94, N = 3; min 15657.32 / max 15674.1)
  EPYC 7542:  15641.09 (SE +/- 24.53, N = 3; min 15608.52 / max 15689.15)
  EPYC 7502P: 15621.64 (SE +/- 83.23, N = 3; min 15473.05 / max 15760.91)
  EPYC 7662:  15616.90 (SE +/- 2.47, N = 3; min 15613.31 / max 15621.62)
  EPYC 7702:  15599.90 (SE +/- 67.71, N = 3; min 15488.47 / max 15722.26)
  EPYC 7232P: 15523.14 (SE +/- 33.58, N = 3; min 15473.92 / max 15587.31)
  EPYC 7402P: 15510.92 (SE +/- 93.82, N = 3; min 15344.4 / max 15669.08)
  EPYC 7532:  15503.05 (SE +/- 36.28, N = 3; min 15458.28 / max 15574.89)
  EPYC 7282:  15482.74 (SE +/- 87.56, N = 3; min 15323.81 / max 15625.9)
  EPYC 7272:  15482.71 (SE +/- 85.10, N = 3; min 15312.58 / max 15571.59)
  EPYC 7302P: 15480.29 (SE +/- 60.36, N = 3; min 15391.63 / max 15595.56)
  EPYC 7552:  15459.16 (SE +/- 23.45, N = 3; min 15413.52 / max 15491.31)
  EPYC 7F52:  14958.19 (SE +/- 1.29, N = 3; min 14955.9 / max 14960.35)
1. (CC) gcc options: -O3 -march=native

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Compression Speed (MB/s; more is better)
  EPYC 7F32:  9802.81 (SE +/- 52.96, N = 3; min 9744.32 / max 9908.52)
  EPYC 7702:  9496.00 (SE +/- 50.78, N = 3; min 9444.69 / max 9597.56)
  EPYC 7402P: 9472.86 (SE +/- 57.96, N = 3; min 9413.59 / max 9588.77)
  EPYC 7542:  9468.59 (SE +/- 14.87, N = 3; min 9438.88 / max 9484.45)
  EPYC 7502P: 9464.00 (SE +/- 5.54, N = 3; min 9454.4 / max 9473.6)
  EPYC 7F52:  9461.09 (SE +/- 19.46, N = 3; min 9423.06 / max 9487.27)
  EPYC 7532:  9434.31 (SE +/- 55.86, N = 3; min 9377.21 / max 9546.03)
  EPYC 7302P: 9428.77 (SE +/- 29.02, N = 3; min 9379.2 / max 9479.71)
  EPYC 7232P: 9421.69 (SE +/- 52.82, N = 3; min 9316.16 / max 9478.74)
  EPYC 7662:  9376.86 (SE +/- 19.08, N = 3; min 9351.87 / max 9414.33)
  EPYC 7552:  9369.35 (SE +/- 18.08, N = 3; min 9339.76 / max 9402.14)
  EPYC 7272:  9365.44 (SE +/- 18.11, N = 3; min 9344.67 / max 9401.53)
  EPYC 7282:  9360.35 (SE +/- 8.54, N = 3; min 9345.51 / max 9375.08)
1. (CC) gcc options: -O3

Renaissance

Renaissance is a suite of benchmarks designed to stress the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Apache Spark PageRank (ms; fewer is better)
  EPYC 7402P: 3856.01 (SE +/- 44.65, N = 25; min 3307.9 / max 4180.59)
  EPYC 7302P: 3992.30 (SE +/- 38.13, N = 25; min 3512.82 / max 4205.12)
  EPYC 7702:  4023.97 (SE +/- 44.71, N = 5; min 3891.75 / max 4135.13)

MBW

This is a simple memory (RAM) bandwidth benchmark for memory copy operations. Learn more via the OpenBenchmarking.org test page.

MBW 2018-09-08, Test: Memory Copy, Fixed Block Size - Array Size: 8192 MiB (MiB/s; more is better)
  EPYC 7F32:  9215.24 (SE +/- 5.81, N = 3; min 9203.97 / max 9223.37)
  EPYC 7542:  9117.75 (SE +/- 2.76, N = 3; min 9112.63 / max 9122.1)
  EPYC 7502P: 9078.88 (SE +/- 16.71, N = 3; min 9045.61 / max 9098.27)
  EPYC 7662:  9073.57 (SE +/- 4.98, N = 3; min 9065.01 / max 9082.25)
  EPYC 7702:  9048.74 (SE +/- 18.50, N = 3; min 9023.71 / max 9084.85)
  EPYC 7552:  9022.45 (SE +/- 10.51, N = 3; min 9010.61 / max 9043.42)
  EPYC 7402P: 9012.11 (SE +/- 18.97, N = 3; min 8974.68 / max 9036.23)
  EPYC 7302P: 8995.85 (SE +/- 19.18, N = 3; min 8971.17 / max 9033.63)
  EPYC 7232P: 8983.76 (SE +/- 7.74, N = 3; min 8971.61 / max 8998.15)
  EPYC 7272:  8970.77 (SE +/- 6.09, N = 3; min 8960.57 / max 8981.63)
  EPYC 7532:  8963.64 (SE +/- 27.37, N = 3; min 8910.42 / max 9001.37)
  EPYC 7282:  8945.24 (SE +/- 16.74, N = 3; min 8912.98 / max 8969.13)
  EPYC 7F52:  8872.56 (SE +/- 3.26, N = 3; min 8866.67 / max 8877.91)
1. (CC) gcc options: -O3 -march=native

Renaissance

Renaissance is a suite of benchmarks designed to stress the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Genetic Algorithm Using Jenetics + Futures (ms; fewer is better)
  EPYC 7402P: 1468.68 (SE +/- 4.27, N = 5; min 1458.73 / max 1482.43)
  EPYC 7702:  1479.04 (SE +/- 7.23, N = 5; min 1463.48 / max 1506.41)

Renaissance 0.10.0, Test: Scala Dotty (ms; fewer is better)
  EPYC 7402P: 1777.32 (SE +/- 6.43, N = 5; min 1759.68 / max 1795.07)
  EPYC 7702:  1784.87 (SE +/- 8.07, N = 5; min 1766.58 / max 1809.96)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a suite for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks, Benchmark: tfft2 (Seconds; fewer is better): EPYC 7702: 22.25

Polyhedron Fortran Benchmarks, Benchmark: rnflow (Seconds; fewer is better): EPYC 7702: 19.48

Polyhedron Fortran Benchmarks, Benchmark: protein (Seconds; fewer is better): EPYC 7702: 15.95

Polyhedron Fortran Benchmarks, Benchmark: linpk (Seconds; fewer is better): EPYC 7702: 3.25

Polyhedron Fortran Benchmarks, Benchmark: fatigue2 (Seconds; fewer is better): EPYC 7702: 61.08

Polyhedron Fortran Benchmarks, Benchmark: aermod (Seconds; fewer is better): EPYC 7702: 7.22

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: efficientnet-b0 (ms; fewer is better)
  EPYC 7F32:  10.53 (SE +/- 0.02, N = 3; run min 10.5 / max 10.57; latency MIN 10.39 / MAX 10.89)
  EPYC 7282:  10.58 (SE +/- 0.28, N = 3; run min 10.03 / max 10.89; latency MIN 9.7 / MAX 23.19)
  EPYC 7F52:  11.46 (SE +/- 0.06, N = 3; run min 11.35 / max 11.54; latency MIN 11.14 / MAX 14.09)
  EPYC 7502P: 13.33 (SE +/- 0.49, N = 3; run min 12.76 / max 14.3; latency MIN 12.47 / MAX 17.12)
  EPYC 7542:  13.36 (SE +/- 0.70, N = 3; run min 12.63 / max 14.75; latency MIN 12.37 / MAX 16.7)
  EPYC 7532:  14.16 (SE +/- 0.24, N = 3; run min 13.75 / max 14.57; latency MIN 13.23 / MAX 19.25)
  EPYC 7702:  25.90 (SE +/- 0.70, N = 12; run min 22.68 / max 29.61; latency MIN 19.87 / MAX 167.41)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2-v2 - Model: mnasnet (ms, Fewer Is Better):
EPYC 7F32: 6.55, EPYC 7282: 7.13, EPYC 7F52: 8.05, EPYC 7502P: 9.49, EPYC 7542: 9.96, EPYC 7532: 10.00, EPYC 7702: 19.85
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2-v2-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better):
EPYC 7F32: 6.81, EPYC 7282: 6.92, EPYC 7F52: 8.01, EPYC 7502P: 9.55, EPYC 7532: 10.00, EPYC 7542: 10.05, EPYC 7702: 19.33
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2-v2-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better):
EPYC 7F32: 7.62, EPYC 7282: 7.83, EPYC 7F52: 8.99, EPYC 7542: 10.48, EPYC 7502P: 10.57, EPYC 7532: 11.16, EPYC 7702: 21.78
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: blazeface (ms, Fewer Is Better):
EPYC 7F32: 3.30, EPYC 7232P: 3.49, EPYC 7272: 3.54, EPYC 7302P: 3.72, EPYC 7282: 3.74, EPYC 7F52: 3.82, EPYC 7402P: 4.21, EPYC 7502P: 4.60, EPYC 7542: 4.61, EPYC 7532: 4.95, EPYC 7552: 6.57, EPYC 7642: 6.80, EPYC 7662: 8.68, EPYC 7702: 9.32
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better):
EPYC 7272: 9.84, EPYC 7282: 10.40, EPYC 7F32: 10.55, EPYC 7302P: 10.80, EPYC 7232P: 10.87, EPYC 7402P: 11.69, EPYC 7F52: 12.47, EPYC 7502P: 12.92, EPYC 7542: 12.94, EPYC 7532: 14.63, EPYC 7552: 19.66, EPYC 7642: 21.71, EPYC 7662: 26.52, EPYC 7702: 26.76
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better):
EPYC 7F32: 6.42, EPYC 7272: 6.49, EPYC 7232P: 6.64, EPYC 7282: 6.95, EPYC 7302P: 7.32, EPYC 7402P: 8.16, EPYC 7F52: 8.72, EPYC 7502P: 9.07, EPYC 7542: 9.28, EPYC 7532: 10.38, EPYC 7552: 15.49, EPYC 7642: 16.71, EPYC 7662: 22.05, EPYC 7702: 22.34
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better):
EPYC 7282: 8.87, EPYC 7302P: 8.91, EPYC 7402P: 9.52, EPYC 7F32: 9.60, EPYC 7272: 9.61, EPYC 7502P: 9.94, EPYC 7542: 9.97, EPYC 7F52: 9.99, EPYC 7232P: 10.20, EPYC 7532: 10.68, EPYC 7552: 12.82, EPYC 7642: 14.06, EPYC 7702: 16.71, EPYC 7662: 17.29
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better):
EPYC 7272: 6.54, EPYC 7F32: 6.58, EPYC 7232P: 6.72, EPYC 7282: 6.94, EPYC 7302P: 7.31, EPYC 7402P: 8.13, EPYC 7F52: 8.74, EPYC 7502P: 9.11, EPYC 7542: 9.20, EPYC 7532: 10.51, EPYC 7552: 14.21, EPYC 7642: 15.94, EPYC 7662: 20.29, EPYC 7702: 20.70
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better):
EPYC 7F32: 7.13, EPYC 7272: 7.32, EPYC 7232P: 7.61, EPYC 7302P: 7.85, EPYC 7282: 7.88, EPYC 7402P: 8.41, EPYC 7502P: 10.07, EPYC 7F52: 10.11, EPYC 7542: 10.15, EPYC 7532: 11.55, EPYC 7552: 16.05, EPYC 7642: 17.78, EPYC 7702: 22.98, EPYC 7662: 23.01
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stress-NG

Stress-NG 0.11.07, Test: CPU Cache (Bogo Ops/s Per Watt, More Is Better):
EPYC 7542: 0.69, EPYC 7502P: 0.66, EPYC 7552: 0.63, EPYC 7232P: 0.63, EPYC 7402P: 0.62, EPYC 7532: 0.58, EPYC 7282: 0.55, EPYC 7302P: 0.54, EPYC 7662: 0.53, EPYC 7272: 0.50, EPYC 7702: 0.50, EPYC 7F52: 0.47, EPYC 7F32: 0.38

Stress-NG 0.11.07, Test: CPU Cache (Bogo Ops/s, More Is Better):
EPYC 7532: 51.39, EPYC 7552: 49.78, EPYC 7542: 49.19, EPYC 7F52: 46.39, EPYC 7502P: 45.23, EPYC 7662: 44.36, EPYC 7702: 43.70, EPYC 7402P: 41.11, EPYC 7302P: 32.72, EPYC 7232P: 32.53, EPYC 7282: 30.12, EPYC 7272: 25.17, EPYC 7F32: 24.85
(CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
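The view options in the page header can roll per-test averages like the ones above into an overall geometric mean. A minimal sketch of that calculation, where `geomean` is a hypothetical helper name and the sample input reuses a few of the Stress-NG CPU Cache averages:

```python
import math

def geomean(values):
    # Geometric mean: exp of the mean of the logs. Compared to the
    # arithmetic mean, it keeps one outlier test from dominating a
    # composite benchmark score.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Sample input: three of the Stress-NG CPU Cache averages above (Bogo Ops/s).
print(round(geomean([51.39, 49.78, 49.19]), 2))
```

The same helper applied across all of a system's test averages gives the single composite figure the "Show Overall Geometric Mean" option displays.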

BlogBench

BlogBench 1.1, Test: Write (Final Score Per Watt, More Is Better):
EPYC 7302P: 1143.48, EPYC 7702: 984.94

BlogBench 1.1, Test: Write (Final Score, More Is Better):
EPYC 7662: 45477, EPYC 7702: 44434, EPYC 7502P: 43733, EPYC 7542: 43408, EPYC 7402P: 42491, EPYC 7552: 41586, EPYC 7532: 39412, EPYC 7642: 39319, EPYC 7302P: 37026, EPYC 7F52: 36834, EPYC 7282: 33627, EPYC 7272: 28663, EPYC 7F32: 27443, EPYC 7232P: 21916
(CC) gcc options: -O2 -pthread

SVT-VP9

SVT-VP9 0.1, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second Per Watt, More Is Better):
EPYC 7542: 5.14, EPYC 7502P: 5.00, EPYC 7402P: 4.59, EPYC 7552: 4.48, EPYC 7642: 4.11, EPYC 7662: 4.04, EPYC 7532: 4.00, EPYC 7282: 3.89, EPYC 7702: 3.67, EPYC 7302P: 3.55, EPYC 7272: 3.02, EPYC 7F52: 2.19, EPYC 7232P: 1.90, EPYC 7F32: 1.67

SVT-VP9 0.1, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
EPYC 7542: 350.63, EPYC 7642: 346.63, EPYC 7552: 336.24, EPYC 7502P: 334.18, EPYC 7662: 332.56, EPYC 7532: 325.84, EPYC 7402P: 319.05, EPYC 7702: 305.10, EPYC 7302P: 242.24, EPYC 7F52: 230.31, EPYC 7282: 229.82, EPYC 7272: 184.83, EPYC 7F32: 124.35, EPYC 7232P: 107.11
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
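Because the result file reports both a raw score and a per-watt score for the same test, dividing the two yields the implied average power draw during the run. A sketch using a few of the SVT-VP9 figures above, under the assumption that both graphs come from the same measured runs:

```python
# Implied average power: watts = score / (score per watt).
# Sample figures are from the SVT-VP9 "Visual Quality Optimized -
# Bosphorus 1080p" results above; pairing the two graphs this way
# assumes they were measured over the same runs.
fps = {"EPYC 7542": 350.63, "EPYC 7702": 305.10, "EPYC 7232P": 107.11}
fps_per_watt = {"EPYC 7542": 5.14, "EPYC 7702": 3.67, "EPYC 7232P": 1.90}

for cpu in fps:
    watts = fps[cpu] / fps_per_watt[cpu]
    print(f"{cpu}: ~{watts:.0f} W average during the encode")
```

This is why a 32-core part like the 7542 can lead the per-watt chart while the 64-core 7702 leads several raw-throughput charts: the per-watt ranking folds the measured power back into the score.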

Cpuminer-Opt

Cpuminer-Opt 3.15.5, Algorithm: Deepcoin (kH/s Per Watt, More Is Better):
EPYC 7662: 565.67, EPYC 7702: 562.57, EPYC 7552: 474.77, EPYC 7502P: 441.39, EPYC 7542: 438.65, EPYC 7642: 432.56, EPYC 7402P: 318.51, EPYC 7532: 286.60, EPYC 7282: 255.88, EPYC 7302P: 224.80, EPYC 7272: 201.02, EPYC 7F52: 175.98, EPYC 7232P: 140.99, EPYC 7F32: 128.88

Cpuminer-Opt 3.15.5, Algorithm: Deepcoin (kH/s, More Is Better):
EPYC 7702: 46113.00, EPYC 7662: 45051.00, EPYC 7642: 34363.00, EPYC 7552: 33589.00, EPYC 7502P: 26546.00, EPYC 7542: 26493.00, EPYC 7532: 23013.00, EPYC 7402P: 19363.00, EPYC 7F52: 15338.00, EPYC 7282: 12793.00, EPYC 7302P: 12697.00, EPYC 7272: 9361.06, EPYC 7F32: 7644.10, EPYC 7232P: 6207.89
(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.15.5, Algorithm: Garlicoin (kH/s Per Watt, More Is Better):
EPYC 7662: 105.24, EPYC 7702: 101.85, EPYC 7552: 97.57, EPYC 7542: 89.06, EPYC 7642: 88.03, EPYC 7502P: 86.48, EPYC 7532: 63.17, EPYC 7402P: 62.91, EPYC 7282: 52.46, EPYC 7302P: 45.58, EPYC 7272: 41.66, EPYC 7F52: 33.55, EPYC 7232P: 31.40, EPYC 7F32: 27.22

Cpuminer-Opt 3.15.5, Algorithm: Garlicoin (kH/s, More Is Better):
EPYC 7702: 9581.06, EPYC 7662: 9507.95, EPYC 7642: 7961.64, EPYC 7552: 7811.99, EPYC 7542: 6242.21, EPYC 7502P: 6104.27, EPYC 7532: 5725.79, EPYC 7402P: 4490.86, EPYC 7F52: 3522.44, EPYC 7282: 2991.73, EPYC 7302P: 2965.29, EPYC 7272: 2192.82, EPYC 7F32: 1769.46, EPYC 7232P: 1473.31
(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.15.5, Algorithm: x25x (kH/s Per Watt, More Is Better):
EPYC 7662: 9.95, EPYC 7702: 9.94, EPYC 7552: 8.88, EPYC 7642: 7.80, EPYC 7502P: 7.53, EPYC 7542: 7.25, EPYC 7402P: 7.07, EPYC 7532: 5.88, EPYC 7282: 4.95, EPYC 7302P: 4.46, EPYC 7272: 3.88, EPYC 7232P: 3.24, EPYC 7F52: 2.99, EPYC 7F32: 2.72

Cpuminer-Opt 3.15.5, Algorithm: x25x (kH/s, More Is Better):
EPYC 7662: 1398.77, EPYC 7702: 1360.06, EPYC 7642: 1147.36, EPYC 7552: 1139.38, EPYC 7542: 891.02, EPYC 7502P: 883.87, EPYC 7402P: 862.81, EPYC 7532: 807.02, EPYC 7F52: 524.31, EPYC 7302P: 447.01, EPYC 7282: 429.51, EPYC 7272: 322.83, EPYC 7F32: 262.90, EPYC 7232P: 216.58
(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Parboil

The Parboil Benchmarks from the IMPACT Research Group at the University of Illinois are a set of throughput computing applications for studying computing architectures and compilers. Parboil's test cases support OpenMP, OpenCL, and CUDA multi-processing environments; at this time, however, the test profile only makes use of the OpenMP and OpenCL workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5, Test: OpenMP Stencil (Seconds, Fewer Is Better):
EPYC 7702: 3.086435, EPYC 7662: 3.100122, EPYC 7642: 3.194662, EPYC 7532: 3.621507, EPYC 7552: 3.779568, EPYC 7302P: 4.176971, EPYC 7502P: 4.246361, EPYC 7542: 4.395850, EPYC 7402P: 4.572591, EPYC 7282: 5.758925, EPYC 7272: 7.631597, EPYC 7F52: 8.683131, EPYC 7F32: 9.844102, EPYC 7232P: 10.635023
(CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

Kripke

Kripke 1.2.4 (Throughput FoM Per Watt, More Is Better):
EPYC 7402P: 1932212.23, EPYC 7532: 1673190.41, EPYC 7552: 1673028.98, EPYC 7642: 1663875.88, EPYC 7302P: 1638766.48, EPYC 7542: 1635500.08, EPYC 7662: 1604877.74, EPYC 7502P: 1575405.35, EPYC 7272: 1508621.96, EPYC 7702: 1423158.29, EPYC 7282: 1357552.35, EPYC 7232P: 1349588.10, EPYC 7F32: 1163515.81, EPYC 7F52: 514628.71

Kripke 1.2.4 (Throughput FoM, More Is Better):
EPYC 7642: 230851783, EPYC 7532: 216525133, EPYC 7662: 215771227, EPYC 7552: 211310307, EPYC 7402P: 209811150, EPYC 7702: 199683960, EPYC 7542: 187397007, EPYC 7502P: 176798867, EPYC 7302P: 162898387, EPYC 7F32: 129536920, EPYC 7272: 121256433, EPYC 7282: 112489722, EPYC 7232P: 100898758, EPYC 7F52: 71413394
(CXX) g++ options: -O3 -fopenmp
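The header's "Show Perf Per Core/Thread Calculation Graphs" option derives exactly this kind of normalized figure. A minimal per-core sketch for a few of the Kripke averages above, assuming core counts from AMD's published EPYC 7002-series specifications (8 for the 7232P, 48 for the 7642, 64 for the 7702), which are not part of this result file:

```python
# Per-core throughput from the Kripke averages above. Core counts are
# not in this result file; they come from AMD's published EPYC
# 7002-series specifications, so treat them as an outside assumption.
fom = {"EPYC 7642": 230851783, "EPYC 7702": 199683960, "EPYC 7232P": 100898758}
cores = {"EPYC 7642": 48, "EPYC 7702": 64, "EPYC 7232P": 8}

per_core = {cpu: fom[cpu] / cores[cpu] for cpu in fom}
for cpu, value in sorted(per_core.items(), key=lambda kv: -kv[1]):
    print(f"{cpu}: {value / 1e6:.2f}M FoM per core")
```

Normalizing this way inverts the raw ranking: the 8-core 7232P extracts far more throughput per core than the 64-core 7702, which is the usual clock-versus-core-count trade-off these graphs are meant to expose.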

412 Results Shown

Cpuminer-Opt
Sysbench
Stress-NG
OpenVINO
NAS Parallel Benchmarks
OpenVINO
NAS Parallel Benchmarks
Stress-NG
John The Ripper
OSPray
Pennant
Stress-NG
oneDNN
ASKAP
Coremark
oneDNN
m-queens
Stockfish
N-Queens
BRL-CAD
C-Ray
IndigoBench
Aircrack-ng
IndigoBench
Stress-NG
OSPray:
  XFrog Forest - Path Tracer
  NASA Streamlines - Path Tracer
asmFish
OSPray
ASKAP
Tachyon
7-Zip Compression
John The Ripper
Chaos Group V-RAY
ASTC Encoder
OSPray:
  Magnetic Reconnection - SciVis
  San Miguel - SciVis
Stress-NG
Chaos Group V-RAY
Pennant
Blender:
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
Facebook RocksDB
NAMD
Facebook RocksDB
PostgreSQL pgbench:
  100 - 250 - Read Only
  100 - 250 - Read Only - Average Latency
Blender
Rodinia
OSPray
LuxCoreRender
OpenVKL
LuxCoreRender
ASTC Encoder
OpenVINO:
  Face Detection 0106 FP32 - CPU
  Person Detection 0106 FP32 - CPU
  Person Detection 0106 FP16 - CPU
  Face Detection 0106 FP16 - CPU
CloverLeaf
ASKAP
rays1bench
TensorFlow Lite:
  Mobilenet Quant
  Mobilenet Float
PostgreSQL pgbench:
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
OpenFOAM
TensorFlow Lite
Blender
POV-Ray
Blender
oneDNN
TensorFlow Lite
oneDNN
LAMMPS Molecular Dynamics Simulator
GROMACS
oneDNN
Apache Cassandra
ASKAP
Intel Open Image Denoise
ebizzy
Tungsten Renderer
TensorFlow Lite
Tungsten Renderer
Parboil
oneDNN
LAMMPS Molecular Dynamics Simulator
Stress-NG
Zstd Compression
NWChem
LeelaChessZero
Appleseed
Basis Universal
NCNN
SVT-AV1
Parboil
NCNN
oneDNN
NCNN
Timed Linux Kernel Compilation
LeelaChessZero
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
Facebook RocksDB
Timed MPlayer Compilation
FFTE
Kvazaar
GPAW
oneDNN
PlaidML
GROMACS
PlaidML
Rodinia
NAS Parallel Benchmarks
oneDNN
Timed LLVM Compilation
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - f32 - CPU
Tungsten Renderer
Rodinia
SVT-VP9
Appleseed
oneDNN
ASKAP
NCNN
oneDNN
NAS Parallel Benchmarks
toyBrot Fractal Generator
PostgreSQL pgbench:
  100 - 250 - Read Write
  100 - 250 - Read Write - Average Latency
toyBrot Fractal Generator
dav1d
toyBrot Fractal Generator
Kvazaar
Basis Universal
toyBrot Fractal Generator
NCNN
miniFE
oneDNN
YafaRay
PostgreSQL pgbench
Timed FFmpeg Compilation
PostgreSQL pgbench
NCNN
TTSIOD 3D Renderer
dav1d
x265
Timed Godot Game Engine Compilation
SVT-AV1
x264
TensorFlow Lite
NCNN
NAS Parallel Benchmarks
Kvazaar
Sysbench
NAS Parallel Benchmarks
NCNN
High Performance Conjugate Gradient
OpenFOAM
dav1d
NAS Parallel Benchmarks
Incompact3D
NCNN
NAS Parallel Benchmarks
NCNN
Rodinia
Timed ImageMagick Compilation
OpenVINO:
  Face Detection 0106 FP16 - CPU
  Face Detection 0106 FP32 - CPU
LULESH
OpenVINO
NAS Parallel Benchmarks
Build2
OpenVINO
Parboil
NCNN
WebP2 Image Encode:
  Quality 100, Lossless Compression
  Quality 75, Compression Effort 7
Mobile Neural Network
NCNN
WebP2 Image Encode
Algebraic Multi-Grid Benchmark
dav1d
ACES DGEMM
Facebook RocksDB
WebP2 Image Encode
ECP-CANDLE
Facebook RocksDB
NCNN
AI Benchmark Alpha
NCNN:
  CPU-v2-v2-v2 - shufflenet-v2
  CPU - mobilenet
Numenta Anomaly Benchmark
Timed HMMer Search
Stream-Dynamic
NCNN
Stream-Dynamic
Numenta Anomaly Benchmark
ctx_clock
Stream
Mobile Neural Network
Ngspice
Stream
Timed PHP Compilation
OCRMyPDF
Stream
Stream-Dynamic
Stream
Stream-Dynamic
NCNN:
  CPU-v2-v2-v2 - mobilenet
  CPU-v3-v3-v3 - mobilenet
Zstd Compression
AI Benchmark Alpha
FFTW
NCNN
Numenta Anomaly Benchmark
ASTC Encoder
XZ Compression
Appleseed
Timed MrBayes Analysis
OpenVINO:
  Age Gender Recognition Retail 0013 FP32 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
Mobile Neural Network
Tungsten Renderer
Numenta Anomaly Benchmark
NCNN
DaCapo Benchmark
ONNX Runtime
NCNN
BlogBench
NCNN:
  CPU-v3-v3-v3 - squeezenet_ssd
  CPU-v3-v3-v3 - resnet50
Renaissance
NCNN:
  CPU-v2-v2-v2 - resnet50
  CPU-v2-v2-v2 - resnet18
DaCapo Benchmark
Darmstadt Automotive Parallel Heterogeneous Suite
NCNN:
  CPU-v3-v3-v3 - resnet18
  CPU - resnet18
AI Benchmark Alpha
Renaissance
Monte Carlo Simulations of Ionised Nebulae
Ngspice
Mobile Neural Network
RawTherapee
ASTC Encoder
Timed GDB GNU Debugger Compilation
NCNN
Quantum ESPRESSO
NCNN:
  CPU-v2-v2-v2 - alexnet
  CPU - alexnet
Caffe
Mobile Neural Network
ONNX Runtime
C-Blosc
ONNX Runtime:
  bertsquad-10 - OpenMP CPU
  super-resolution-10 - OpenMP CPU
Nebular Empirical Analysis Tool
Basis Universal:
  ETC1S
  UASTC Level 0
NCNN
Caffe
Timed Apache Compilation
FFTW
Timed MAFFT Alignment
NCNN
InfluxDB
Numpy Benchmark
Apache CouchDB
DaCapo Benchmark
NCNN
WebP Image Encode
PyPerformance
WebP Image Encode
NCNN
AOM AV1:
  Speed 4 Two-Pass
  Speed 6 Two-Pass
JPEG XL
AOM AV1
PyPerformance
Timed Eigen Compilation
Darmstadt Automotive Parallel Heterogeneous Suite
librsvg
NCNN
simdjson
PyPerformance
Crafty
simdjson
LZ4 Compression
Radiance Benchmark
Minion
PyPerformance
Dolfyn
Gcrypt Library
Scikit-Learn
simdjson
SQLite Speedtest
WebP Image Encode
TNN
Crypto++
PyPerformance
PyBench
Crypto++
FinanceBench
Minion
Botan
Minion
Hierarchical INTegration
Botan
Google SynthMark
FFTW
PyPerformance
FinanceBench
LZ4 Compression
Swet
Botan:
  CAST-256
  KASUMI
Radiance Benchmark
eSpeak-NG Speech Engine
Perl Benchmarks
Botan
Etcpak
libjpeg-turbo tjbench
Etcpak
AOBench
Etcpak
TSCP
QuantLib
TNN
Perl Benchmarks
Montage Astronomical Image Mosaic Engine
WebP Image Encode
Himeno Benchmark
PyPerformance
PHPBench
GnuPG
Darmstadt Automotive Parallel Heterogeneous Suite
InfluxDB
Etcpak
JPEG XL Decoding
rav1e:
  10
  6
  5
Crypto++
simdjson
ECP-CANDLE
Rodinia
JPEG XL:
  PNG - 8
  JPEG - 8
DaCapo Benchmark
JPEG XL Decoding
Redis
Tinymembench
LibRaw
ONNX Runtime
Redis:
  SET
  LPUSH
  SADD
JPEG XL
Renaissance
Hugin
KeyDB
PlaidML
AOM AV1
JPEG XL:
  JPEG - 5
  JPEG - 7
WireGuard + Linux Networking Stack Stress Test
ECP-CANDLE
DeepSpeech
Tinymembench
Renaissance
LZ4 Compression:
  9 - Decompression Speed
  3 - Decompression Speed
  1 - Decompression Speed
MBW
LZ4 Compression
Renaissance
MBW
Renaissance:
  Genetic Algorithm Using Jenetics + Futures
  Scala Dotty
Polyhedron Fortran Benchmarks:
  tfft2
  rnflow
  protein
  linpk
  fatigue2
  aermod
NCNN:
  CPU-v2-v2-v2 - efficientnet-b0
  CPU-v2-v2-v2 - mnasnet
  CPU-v2-v2-v2-v3-v3 - mobilenet-v3
  CPU-v2-v2-v2-v2-v2 - mobilenet-v2
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Stress-NG
Stress-NG
BlogBench
BlogBench
SVT-VP9
SVT-VP9
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Parboil
Kripke
Kripke