Ubuntu 22.04 Server Benchmarks

AMD EPYC 7713 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209132-NE-UBUNTU22004

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
EPYC 7713 2P
September 08 2022
  1 Day, 15 Hours, 49 Minutes
EPYC 7713
September 11 2022
  1 Day, 16 Hours, 26 Minutes
Invert Hiding All Results Option
  1 Day, 16 Hours, 7 Minutes
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


Ubuntu 22.04 Server Benchmarks - System Details

EPYC 7713 2P:
  Processor: 2 x AMD EPYC 7713 64-Core @ 2.00GHz (128 Cores / 256 Threads)
  Motherboard: AMD DAYTONA_X (RYM1009B BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 512GB
  Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
  Graphics: ASPEED
  Monitor: VE228
  Network: 2 x Mellanox MT27710
  OS: Ubuntu 22.04
  Kernel: 5.15.0-47-generic (x86_64)
  Desktop: GNOME Shell 42.4
  Display Server: X Server 1.21.1.3
  Vulkan: 1.2.204
  Compiler: GCC 11.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

EPYC 7713 (differs from the above only as noted):
  Processor: AMD EPYC 7713 64-Core @ 2.00GHz (64 Cores / 128 Threads)
  Memory: 256GB

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa001173
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Overview chart: EPYC 7713 2P vs. EPYC 7713 comparison - relative percentage difference for each test in this result file, with the largest deltas approaching +397%. The individual results follow below.]

[Condensed results table: side-by-side EPYC 7713 2P and EPYC 7713 values for every test in this result file. The same data is presented test by test below.]

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds, fewer is better)
  EPYC 7713 2P: 8650.84
  EPYC 7713: 16959.85

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 9 - Input: drivaerFastback, Large Mesh Size - Execution Time (Seconds, fewer is better)
  EPYC 7713 2P: 7052.62
  EPYC 7713: 14972.97

OpenFOAM 9 - Input: drivaerFastback, Large Mesh Size - Mesh Time (Seconds, fewer is better)
  EPYC 7713 2P: 776.38
  EPYC 7713: 888.82

SPECjbb 2015

This is a benchmark of SPECjbb 2015. For this test profile to work, you must have a valid license/copy of the SPECjbb 2015 ISO (SPECjbb2015-1.02.iso) in your Phoronix Test Suite download cache. Learn more via the OpenBenchmarking.org test page.

SPECjbb 2015 - SPECjbb2015-Composite critical-jOPS (jOPS, more is better)
  EPYC 7713 2P: 68946
  EPYC 7713: 85389

SPECjbb 2015 - SPECjbb2015-Composite max-jOPS (jOPS, more is better)
  EPYC 7713 2P: 130899
  EPYC 7713: 127957

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

MariaDB 10.8.2 - Clients: 4096 (Queries Per Second, more is better)
  EPYC 7713 2P: 140 (SE +/- 0.87, N = 3; min 138.7 / max 141.58)
  EPYC 7713: 247 (SE +/- 0.15, N = 3; min 247.16 / max 247.65)

MariaDB 10.8.2 - Clients: 2048 (Queries Per Second, more is better)
  EPYC 7713 2P: 150 (SE +/- 0.88, N = 3; min 148.1 / max 151.13)
  EPYC 7713: 615 (SE +/- 4.69, N = 3; min 605.33 / max 619.58)

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball (Seconds, fewer is better)
  EPYC 7713 2P: 2183.6
  EPYC 7713: 2491.1

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better)
  EPYC 7713 2P: 24613.1 (SE +/- 323.63, N = 4; min 23875.6 / max 25293.82)
  EPYC 7713: 18814.1 (SE +/- 25.65, N = 3; min 18778.84 / max 18863.99)

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic - Device: CPU (Seconds, fewer is better)
  EPYC 7713 2P: 290.96 (SE +/- 3.51, N = 4; min 287.24 / max 301.5)
  EPYC 7713: 563.27 (SE +/- 3.66, N = 3; min 559.55 / max 570.6)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6 - VGR Performance Metric (more is better)
  EPYC 7713 2P: 3240515
  EPYC 7713: 697698

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
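
To make "finite difference" concrete, the hedged Python/NumPy sketch below applies a few explicit central-difference diffusion updates in 1D. It is purely illustrative and is not taken from the Incompact3d code; the grid size, viscosity, and time step are arbitrary choices.

# Illustrative sketch only -- not Incompact3d code. A minimal 1D explicit
# finite-difference step for a diffusion term, showing the kind of stencil
# update a finite-difference flow solver applies at every time step.
import numpy as np

nx, dx, dt, nu = 128, 1.0 / 127, 1e-4, 0.1   # grid points, spacing, time step, viscosity
u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, nx))  # initial field

for _ in range(1000):
    # second-order central difference for d2u/dx2 on interior points
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * nu * lap                 # explicit Euler update
    u[0], u[-1] = 0.0, 0.0                   # fixed (Dirichlet) boundaries

print("max |u| after 1000 steps:", float(np.abs(u).max()))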

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, fewer is better)
  EPYC 7713 2P: 300.69 (SE +/- 0.29, N = 3; min 300.24 / max 301.24)
  EPYC 7713: 609.68 (SE +/- 1.64, N = 3; min 606.43 / max 611.67)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, fewer is better)
  EPYC 7713 2P: 89.28 (SE +/- 1.37, N = 15; min 81.94 / max 95.52)
  EPYC 7713: 88.02 (SE +/- 1.40, N = 15; min 81.76 / max 93.27)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15 - Total Time (Nodes Per Second, more is better)
  EPYC 7713 2P: 279486942 (SE +/- 4884547.18, N = 15; min 247473323 / max 322235487)
  EPYC 7713: 149383689 (SE +/- 3372481.12, N = 12; min 135468169 / max 168802467)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, more is better)
  EPYC 7713 2P: 4093 (SE +/- 42.95, N = 4; min 4025 / max 4205)
  EPYC 7713: 3598 (SE +/- 38.17, N = 3; min 3527 / max 3658)

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, more is better)
  EPYC 7713 2P: 4449 (SE +/- 49.21, N = 4; min 4328 / max 4536)
  EPYC 7713: 3893 (SE +/- 30.99, N = 3; min 3853 / max 3954)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better)
  EPYC 7713 2P: 12254.2 (SE +/- 134.48, N = 12; min 11483.79 / max 13111.79)
  EPYC 7713: 8004.9 (SE +/- 89.73, N = 3; min 7833.11 / max 8135.75)

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 7.0 - Input: AUSURF112 (Seconds, fewer is better)
  EPYC 7713 2P: 399.92 (SE +/- 0.17, N = 3; min 399.59 / max 400.14)
  EPYC 7713: 397.64 (SE +/- 0.15, N = 3; min 397.42 / max 397.92)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  EPYC 7713 2P: 2840.05 (SE +/- 49.56, N = 15; min 2308.57 / max 2980.76)
  EPYC 7713: 1229.56 (SE +/- 16.05, N = 15; min 1085.23 / max 1326.64)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.
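
As a rough illustration of what the read-write "Average Latency" and TPS figures below measure, here is a hypothetical Python/psycopg2 sketch that times a trivial read-write transaction loop against a local PostgreSQL server. The database name, table, and connection parameters are placeholders, and pgbench itself uses its own TPC-B-like schema and transaction mix.

# Hypothetical sketch (not pgbench itself): measure average per-transaction
# latency of a trivial read-write transaction against PostgreSQL.
# Connection parameters and table name are placeholders/assumptions.
import time
import psycopg2

conn = psycopg2.connect(dbname="pgbench_demo", user="postgres", host="localhost")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS demo_counter (id int PRIMARY KEY, value int)")
cur.execute("INSERT INTO demo_counter VALUES (1, 0) ON CONFLICT (id) DO NOTHING")
conn.commit()

iterations = 1000
start = time.perf_counter()
for _ in range(iterations):
    cur.execute("UPDATE demo_counter SET value = value + 1 WHERE id = 1")  # write
    cur.execute("SELECT value FROM demo_counter WHERE id = 1")             # read
    cur.fetchone()
    conn.commit()
elapsed = time.perf_counter() - start

print(f"average latency: {1000 * elapsed / iterations:.3f} ms, "
      f"~{iterations / elapsed:.0f} TPS")
conn.close()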

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, fewer is better)
  EPYC 7713 2P: 15.076 (SE +/- 0.235, N = 12; min 12.71 / max 15.89)
  EPYC 7713: 3.400 (SE +/- 0.005, N = 3; min 3.39 / max 3.41)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, more is better)
  EPYC 7713 2P: 16633 (SE +/- 294.20, N = 12; min 15735.2 / max 19668.66)
  EPYC 7713: 73527 (SE +/- 106.66, N = 3; min 73412.16 / max 73739.97)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, more is better)
  EPYC 7713 2P: 248870519 (SE +/- 2119737.62, N = 12; min 241087047 / max 265609626)
  EPYC 7713: 138776102 (SE +/- 277360.08, N = 3; min 138351810 / max 139297712)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency (ms, fewer is better)
  EPYC 7713 2P: 35.094 (SE +/- 0.318, N = 12; min 32.46 / max 36.95)
  EPYC 7713: 7.980 (SE +/- 0.022, N = 3; min 7.94 / max 8.01)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write (TPS, more is better)
  EPYC 7713 2P: 14261 (SE +/- 133.03, N = 12; min 13532.97 / max 15402.76)
  EPYC 7713: 62654 (SE +/- 172.39, N = 3; min 62437.32 / max 62994.9)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, fewer is better)
  EPYC 7713 2P: 3985.7 (SE +/- 42.53, N = 12; min 3661.68 / max 4209.6)
  EPYC 7713: 3067.3 (SE +/- 36.17, N = 4; min 2972.6 / max 3134.7)

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
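
As a toy illustration of the traversal kernel behind the TEPS (traversed edges per second) figures below, here is a plain-Python breadth-first search over a small random graph. The reference Graph500 code is an MPI implementation operating on a Scale 26 (2^26 vertex) Kronecker graph, so the numbers are not comparable; the graph size and seed here are arbitrary.

# Illustrative sketch of the breadth-first search (BFS) kernel that Graph500
# stresses, reported as traversed edges per second (TEPS). Plain Python on a
# small random graph, not the reference MPI implementation.
import random
import time
from collections import deque

def random_graph(n_vertices, n_edges, seed=1):
    random.seed(seed)
    adj = [[] for _ in range(n_vertices)]
    for _ in range(n_edges):
        u, v = random.randrange(n_vertices), random.randrange(n_vertices)
        adj[u].append(v)
        adj[v].append(u)
    return adj

def bfs(adj, root):
    parent = [-1] * len(adj)
    parent[root] = root
    frontier = deque([root])
    visited_edges = 0
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            visited_edges += 1
            if parent[v] == -1:
                parent[v] = u
                frontier.append(v)
    return visited_edges

adj = random_graph(1 << 16, 1 << 20)       # tiny compared to Scale 26 (2^26 vertices)
start = time.perf_counter()
edges = bfs(adj, root=0)
elapsed = time.perf_counter() - start
print(f"{edges / elapsed:.3e} TEPS (single BFS, toy graph)")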

Graph500 3.0 - Scale: 26 (sssp max_TEPS, more is better)
  EPYC 7713 2P: 390377000
  EPYC 7713: 323593000

Graph500 3.0 - Scale: 26 (sssp median_TEPS, more is better)
  EPYC 7713 2P: 302338000
  EPYC 7713: 254477000

Graph500 3.0 - Scale: 26 (bfs max_TEPS, more is better)
  EPYC 7713 2P: 659467000
  EPYC 7713: 642278000

Graph500 3.0 - Scale: 26 (bfs median_TEPS, more is better)
  EPYC 7713 2P: 642516000
  EPYC 7713: 622681000

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
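
For reference, the minimal sketch below shows how a model is loaded and executed with the onnxruntime Python API on CPU. The "model.onnx" path is a placeholder; the test profile itself benchmarks ONNX Zoo models such as ArcFace ResNet-100, fcn-resnet101, yolov4, and super-resolution-10.

# Minimal sketch of running a model with the onnxruntime Python API, assuming
# a local "model.onnx" (placeholder) with a single float32 input.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# Replace any symbolic/dynamic dimensions with 1 for this toy run.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
data = np.random.rand(*shape).astype(np.float32)

outputs = sess.run(None, {inp.name: data})   # None = return all outputs
print([o.shape for o in outputs])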

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  EPYC 7713 2P: 877 (SE +/- 14.15, N = 12; min 806 / max 955.5)
  EPYC 7713: 1538 (SE +/- 5.25, N = 3; min 1527.5 / max 1545)

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  EPYC 7713 2P: 237 (SE +/- 2.05, N = 3; min 233.5 / max 240.5)
  EPYC 7713: 185 (SE +/- 3.69, N = 12; min 178 / max 225)

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  EPYC 7713 2P: 330 (SE +/- 5.46, N = 12; min 310 / max 371.5)
  EPYC 7713: 417 (SE +/- 1.15, N = 3; min 414.5 / max 418.5)

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  EPYC 7713 2P: 4569 (SE +/- 45.38, N = 12; min 4425.5 / max 4880.5)
  EPYC 7713: 6631 (SE +/- 6.02, N = 3; min 6620 / max 6640.5)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: PNG - Encode Speed: 8 (MP/s, more is better)
  EPYC 7713 2P: 1.00 (SE +/- 0.00, N = 3; min 0.99 / max 1)
  EPYC 7713: 1.02 (SE +/- 0.00, N = 3; min 1.02 / max 1.02)

SecureMark

SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, more is better)
  EPYC 7713 2P: 249582 (SE +/- 512.51, N = 3; min 248561.64 / max 250177.3)
  EPYC 7713: 247747 (SE +/- 2502.24, N = 3; min 243834.89 / max 252405.94)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
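
The conjugate gradient iteration at the heart of HPCG is worth seeing once. The hedged NumPy sketch below runs plain CG on a small dense symmetric positive-definite system; the benchmark itself works on a large sparse stencil problem distributed across MPI ranks, so this is only a worked example of the algorithm, not the benchmark.

# Worked sketch of the conjugate gradient (CG) iteration, on a small random
# symmetric positive-definite system with NumPy.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)    # symmetric positive definite
b = rng.standard_normal(200)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))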

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
  EPYC 7713 2P: 37.10 (SE +/- 0.11, N = 3; min 36.88 / max 37.23)
  EPYC 7713: 19.11 (SE +/- 0.01, N = 3; min 19.1 / max 19.12)

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, more is better)
  EPYC 7713 2P: 623.8 (SE +/- 0.67, N = 3; min 622.5 / max 624.6)
  EPYC 7713: 628.9 (SE +/- 0.40, N = 3; min 628.4 / max 629.7)

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, more is better)
  EPYC 7713 2P: 98.4 (SE +/- 0.06, N = 3; min 98.3 / max 98.5)
  EPYC 7713: 98.6 (SE +/- 0.12, N = 3; min 98.4 / max 98.8)

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, more is better)
  EPYC 7713 2P: 364.3 (SE +/- 0.33, N = 3; min 363.6 / max 364.6)
  EPYC 7713: 365.4 (SE +/- 0.12, N = 3; min 365.2 / max 365.6)

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, more is better)
  EPYC 7713 2P: 1211.6 (SE +/- 4.33, N = 3; min 1204.2 / max 1219.2)
  EPYC 7713: 1189.0 (SE +/- 13.28, N = 3; min 1164.4 / max 1210)

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
  EPYC 7713 2P: 171.52 (SE +/- 0.28, N = 3; min 171.18 / max 172.08)
  EPYC 7713: 299.30 (SE +/- 0.18, N = 3; min 298.95 / max 299.5)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7713 2P: 2754.45 (SE +/- 32.96, N = 15; min 2489.25 / max 2916.02)
  EPYC 7713: 1253.50 (SE +/- 2.41, N = 3; min 1248.96 / max 1257.16)

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
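
As an illustration of the kind of workload behind the scikit_qda result below, this hedged sketch fits scikit-learn's QuadraticDiscriminantAnalysis on synthetic data and times it; the dataset shape and labels here are made up and do not match the benchmark's actual input.

# Sketch of the kind of operation timed by a scikit_qda-style benchmark:
# fitting and applying scikit-learn's Quadratic Discriminant Analysis.
import time
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((50_000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # simple synthetic labels

clf = QuadraticDiscriminantAnalysis()
start = time.perf_counter()
clf.fit(X, y)
preds = clf.predict(X)
print(f"fit+predict: {time.perf_counter() - start:.2f} s, "
      f"train accuracy: {(preds == y).mean():.3f}")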

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, fewer is better)
  EPYC 7713 2P: 31.84 (SE +/- 0.28, N = 15; min 30.13 / max 33.56)
  EPYC 7713: 29.19 (SE +/- 0.09, N = 3; min 29.01 / max 29.3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: allmodconfig (Seconds, fewer is better)
  EPYC 7713 2P: 160.46 (SE +/- 0.47, N = 3; min 159.85 / max 161.38)
  EPYC 7713: 272.61 (SE +/- 0.54, N = 3; min 271.94 / max 273.68)

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
  EPYC 7713 2P: 3065.92 (SE +/- 3.94, N = 3; min 3058.87 / max 3072.5)
  EPYC 7713: 3107.10 (SE +/- 0.99, N = 3; min 3105.16 / max 3108.43)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 9 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better)
  EPYC 7713 2P: 281.70
  EPYC 7713: 633.93

OpenFOAM 9 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better)
  EPYC 7713 2P: 124.96
  EPYC 7713: 139.89

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (Seconds, fewer is better)
  EPYC 7713 2P: 178.68 (SE +/- 0.32, N = 3; min 178.11 / max 179.21)
  EPYC 7713: 218.45 (SE +/- 0.36, N = 3; min 217.73 / max 218.93)

Numpy Benchmark

This is a test to gauge general NumPy performance. Learn more via the OpenBenchmarking.org test page.
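
The sketch below is not the Numpy Benchmark script itself, only a hedged illustration of the BLAS/LAPACK-backed kernels (matrix multiply, SVD) whose timings such a score aggregates; the matrix sizes are arbitrary.

# Hedged timing sketch of representative NumPy kernels.
import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2048, 2048))
b = rng.standard_normal((2048, 2048))

start = time.perf_counter()
c = a @ b                                        # BLAS-backed matrix multiply
matmul_s = time.perf_counter() - start

start = time.perf_counter()
np.linalg.svd(a[:512, :512], compute_uv=False)   # LAPACK-backed SVD
svd_s = time.perf_counter() - start

print(f"2048x2048 matmul: {matmul_s:.3f} s, 512x512 SVD: {svd_s:.3f} s")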

Numpy Benchmark (Score, more is better)
  EPYC 7713 2P: 469.38 (SE +/- 0.78, N = 3; min 467.88 / max 470.5)
  EPYC 7713: 464.56 (SE +/- 0.32, N = 3; min 463.97 / max 465.08)

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6, Scene: Danish Mood - Acceleration: CPU (M samples/sec, More Is Better): EPYC 7713 2P: 7.50 (SE +/- 0.10, N = 15); EPYC 7713: 6.20 (SE +/- 0.07, N = 3)

LuxCoreRender 2.6, Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec, More Is Better): EPYC 7713 2P: 7.63 (SE +/- 0.12, N = 15); EPYC 7713: 6.84 (SE +/- 0.06, N = 3)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
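A small sketch of the same compress/decompress throughput idea, using the python-lz4 bindings (assumed to be installed as the "lz4" package) rather than the lz4 CLI the test profile drives; the input file name is a placeholder.

import time
import lz4.frame

data = open("sample.iso", "rb").read()  # hypothetical input file

start = time.monotonic()
compressed = lz4.frame.compress(data, compression_level=3)
c_time = time.monotonic() - start

start = time.monotonic()
lz4.frame.decompress(compressed)
d_time = time.monotonic() - start

mb = len(data) / 1e6
print(f"compression: {mb / c_time:.1f} MB/s, decompression: {mb / d_time:.1f} MB/s")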

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s, More Is Better): EPYC 7713 2P: 10743.2 (SE +/- 16.62, N = 3); EPYC 7713: 14123.3 (SE +/- 23.01, N = 15)

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s, More Is Better): EPYC 7713 2P: 54.18 (SE +/- 0.40, N = 3); EPYC 7713: 55.43 (SE +/- 0.38, N = 15)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
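The test profile itself drives "openssl speed"; as a rough, unofficial stand-in for the SHA256 case, the sketch below measures single-threaded SHA256 throughput with Python's hashlib (which typically calls into OpenSSL under the hood). The block size and iteration count are arbitrary.

import hashlib
import time

block = b"\x00" * (1024 * 1024)  # 1 MiB block
iterations = 2000

start = time.monotonic()
for _ in range(iterations):
    hashlib.sha256(block).digest()
elapsed = time.monotonic() - start

print(f"{iterations * len(block) / elapsed / 1e9:.2f} GB/s SHA256 (single thread)")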

OpenSSL 3.0, Algorithm: SHA256 (byte/s, More Is Better): EPYC 7713 2P: 134629156527 (SE +/- 153008615.66, N = 3); EPYC 7713: 68101809853 (SE +/- 67558357.93, N = 3)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2, Time To Compile (Seconds, Fewer Is Better): EPYC 7713 2P: 159.02 (SE +/- 1.20, N = 3); EPYC 7713: 176.43 (SE +/- 1.96, N = 3)

Appleseed

Appleseed is an open-source production rendering engine focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Material Tester (Seconds, Fewer Is Better): EPYC 7713 2P: 336.58; EPYC 7713: 152.39

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
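The MP/s metric is simply megapixels processed divided by encode time. For the 6000x4000 input used here, a quick sanity check of the encode rates reported below:

width, height = 6000, 4000
megapixels = width * height / 1e6  # 24 MP per image

for label, mp_per_sec in [("EPYC 7713 2P", 0.31), ("EPYC 7713", 0.24)]:
    print(f"{label}: ~{megapixels / mp_per_sec:.0f} seconds per image at {mp_per_sec} MP/s")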

WebP2 Image Encode 20220823, Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better): EPYC 7713 2P: 0.31 (SE +/- 0.00, N = 7); EPYC 7713: 0.24 (SE +/- 0.00, N = 3)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 52785 (SE +/- 265.11, N = 3); EPYC 7713: 100051 (SE +/- 269.96, N = 3)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8, Time To Compile (Seconds, Fewer Is Better): EPYC 7713 2P: 113.47 (SE +/- 0.71, N = 3); EPYC 7713: 164.00 (SE +/- 0.14, N = 3)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
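A hedged sketch of how a read-only run like the ones below can be driven by hand with the pgbench CLI; it assumes a local PostgreSQL instance and a "pgbench" database already initialized with "pgbench -i -s 100", and the thread count and duration are placeholder choices.

import subprocess

result = subprocess.run(
    ["pgbench",
     "-S",         # select-only workload ("Read Only")
     "-c", "500",  # 500 concurrent clients
     "-j", "64",   # worker threads for the pgbench client itself
     "-T", "60",   # run for 60 seconds
     "pgbench"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # includes the tps and average-latency summary lines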

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 0.248 (SE +/- 0.002, N = 3); EPYC 7713: 0.365 (SE +/- 0.004, N = 3)

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 500 - Mode: Read Only (TPS, More Is Better): EPYC 7713 2P: 2012722 (SE +/- 13496.08, N = 3); EPYC 7713: 1370935 (SE +/- 13290.57, N = 3)

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 0.126 (SE +/- 0.001, N = 3); EPYC 7713: 0.193 (SE +/- 0.000, N = 3)

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, More Is Better): EPYC 7713 2P: 1983155 (SE +/- 12988.21, N = 3); EPYC 7713: 1297684 (SE +/- 818.86, N = 3)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 26510 (SE +/- 146.03, N = 3); EPYC 7713: 47134 (SE +/- 66.05, N = 3)

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C2670 (Seconds, Fewer Is Better): EPYC 7713 2P: 137.36 (SE +/- 0.58, N = 3); EPYC 7713: 136.02 (SE +/- 0.70, N = 3)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0, Build System: Ninja (Seconds, Fewer Is Better): EPYC 7713 2P: 105.03 (SE +/- 0.51, N = 3); EPYC 7713: 159.58 (SE +/- 0.28, N = 3)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0, Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better): EPYC 7713 2P: 13.86 (SE +/- 0.37, N = 15); EPYC 7713: 15.86 (SE +/- 0.08, N = 3)

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6, Scene: Orange Juice - Acceleration: CPU (M samples/sec, More Is Better): EPYC 7713 2P: 18.60 (SE +/- 0.15, N = 9); EPYC 7713: 12.48 (SE +/- 0.01, N = 3)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0, Test: Writes (Op/s, More Is Better): EPYC 7713 2P: 209921 (SE +/- 1824.75, N = 3); EPYC 7713: 250596 (SE +/- 786.81, N = 3)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile follows ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million row web analytics dataset. The reported value is the geometric mean across all of the queries performed. Learn more via the OpenBenchmarking.org test page.
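A geometric mean keeps a single very fast or very slow query from dominating the summary figure. A minimal sketch of the aggregation with made-up per-query rates (the values are illustrative only):

from statistics import geometric_mean

queries_per_minute = [2500.0, 410.0, 95.0, 780.0]  # hypothetical per-query rates
print(f"Geo mean: {geometric_mean(queries_per_minute):.2f} queries/min")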

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better): EPYC 7713 2P: 396.82 (SE +/- 2.46, N = 3); EPYC 7713: 394.98 (SE +/- 1.58, N = 15)

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better): EPYC 7713 2P: 387.01 (SE +/- 2.92, N = 3); EPYC 7713: 392.92 (SE +/- 2.54, N = 15)

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better): EPYC 7713 2P: 377.52 (SE +/- 4.60, N = 3); EPYC 7713: 378.44 (SE +/- 3.68, N = 15)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 1646 (SE +/- 0.33, N = 3); EPYC 7713: 2922 (SE +/- 0.88, N = 3)

OSPRay Studio 0.11, Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 45460 (SE +/- 39.74, N = 3); EPYC 7713: 86081 (SE +/- 135.08, N = 3)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
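A minimal sketch of running an ONNX model on the CPU execution provider with the onnxruntime Python API; the model path is a placeholder, and the dummy input simply mirrors the model's declared shape and element type rather than real data.

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

# Replace symbolic/unknown dimensions with 1 to build a dummy tensor of the right rank.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dtype = np.int64 if "int64" in inp.type else np.float32
dummy = np.zeros(shape, dtype=dtype)

outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])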

ONNX Runtime 1.11, Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better): EPYC 7713 2P: 7878 (SE +/- 48.25, N = 3); EPYC 7713: 9368 (SE +/- 26.19, N = 3)

ONNX Runtime 1.11, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better): EPYC 7713 2P: 668 (SE +/- 2.52, N = 3); EPYC 7713: 713 (SE +/- 0.50, N = 3)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 44624 (SE +/- 224.62, N = 3); EPYC 7713: 84968 (SE +/- 162.94, N = 3)

OSPRay Studio 0.11, Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 1424 (SE +/- 0.33, N = 3); EPYC 7713: 2496 (SE +/- 1.73, N = 3)

OSPRay Studio 0.11, Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 22742 (SE +/- 44.67, N = 3); EPYC 7713: 40058 (SE +/- 10.17, N = 3)

OSPRay Studio 0.11, Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 1379 (SE +/- 1.76, N = 3); EPYC 7713: 2463 (SE +/- 0.33, N = 3)

OSPRay Studio 0.11, Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better): EPYC 7713 2P: 22158 (SE +/- 60.36, N = 3); EPYC 7713: 39568 (SE +/- 29.02, N = 3)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better): EPYC 7713 2P: 2314.9 (SE +/- 20.78, N = 3); EPYC 7713: 2243.2 (SE +/- 24.42, N = 3)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: EP.D (Total Mop/s, More Is Better): EPYC 7713 2P: 9109.24 (SE +/- 151.57, N = 15); EPYC 7713: 4668.96 (SE +/- 87.49, N = 15)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: In-Memory Database Shootout (ms, Fewer Is Better): EPYC 7713 2P: 6136.4 (SE +/- 42.85, N = 3); EPYC 7713: 5009.2 (SE +/- 58.97, N = 4)

Renaissance 0.14, Test: Finagle HTTP Requests (ms, Fewer Is Better): EPYC 7713 2P: 10535.4 (SE +/- 107.40, N = 3); EPYC 7713: 6720.8 (SE +/- 23.48, N = 3)

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (Seconds, Fewer Is Better): EPYC 7713 2P: 105.95 (SE +/- 0.54, N = 3); EPYC 7713: 103.39 (SE +/- 0.58, N = 3)

etcd

Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang and part of the Cloud Native Computing Foundation (CNCF) and used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.
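The average-latency and requests/sec results below are two views of the same runs: with C concurrent clients, throughput is roughly C divided by the average latency. A quick consistency check against the RANGE results reported just below (the two latency figures are taken from that graph):

for label, clients, latency_ms in [("EPYC 7713 2P", 100, 2.7), ("EPYC 7713", 100, 1.3)]:
    estimated_rps = clients / (latency_ms / 1000.0)
    print(f"{label}: ~{estimated_rps:,.0f} requests/sec expected from {latency_ms} ms average latency")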

etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 100 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 2.7 (SE +/- 0.03, N = 4); EPYC 7713: 1.3 (SE +/- 0.03, N = 3)

etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 100 (Requests/sec, More Is Better): EPYC 7713 2P: 37230.33 (SE +/- 394.99, N = 4); EPYC 7713: 79219.65 (SE +/- 329.90, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): EPYC 7713 2P: 7473.91 (SE +/- 85.87, N = 4); EPYC 7713: 2984.47 (SE +/- 21.84, N = 3)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1, Input: PNG - Encode Speed: 7 (MP/s, More Is Better): EPYC 7713 2P: 10.74 (SE +/- 0.04, N = 3); EPYC 7713: 11.11 (SE +/- 0.02, N = 3)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Emily (Seconds, Fewer Is Better): EPYC 7713 2P: 151.53; EPYC 7713: 133.04

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better): EPYC 7713 2P: 3597.47 (SE +/- 39.69, N = 5); EPYC 7713: 3555.78 (SE +/- 2.50, N = 3)

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (FPS, More Is Better): EPYC 7713 2P: 17.59 (SE +/- 0.18, N = 5); EPYC 7713: 8.97 (SE +/- 0.01, N = 3)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
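simdjson itself is a C++ library; purely to illustrate the GB/s metric, the sketch below times Python's built-in json parser on a sample document and reports bytes parsed per second (expect far lower throughput than the simdjson numbers below). The file name and run count are placeholders.

import json
import time

doc = open("twitter.json", "rb").read()  # hypothetical sample JSON file
runs = 50

start = time.monotonic()
for _ in range(runs):
    json.loads(doc)
elapsed = time.monotonic() - start

print(f"{runs * len(doc) / elapsed / 1e9:.3f} GB/s")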

simdjson 2.0, Throughput Test: DistinctUserID (GB/s, More Is Better): EPYC 7713 2P: 4.42 (SE +/- 0.01, N = 3); EPYC 7713: 4.40 (SE +/- 0.01, N = 3)

simdjson 2.0, Throughput Test: TopTweet (GB/s, More Is Better): EPYC 7713 2P: 4.39 (SE +/- 0.02, N = 3); EPYC 7713: 4.39 (SE +/- 0.00, N = 3)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 1000 (Requests Per Second, More Is Better): EPYC 7713 2P: 84767.73 (SE +/- 151.47, N = 3); EPYC 7713: 109150.30 (SE +/- 83.20, N = 3)

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers across varying digit ranges. Learn more via the OpenBenchmarking.org test page.
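For context, a vampire number is an n-digit number (n even) equal to the product of two n/2-digit "fangs" whose digits, taken together, are a rearrangement of the original number's digits, with at most one fang ending in zero. Helsing itself is a threaded C program; the following is only a small, unoptimized Python checker to illustrate the definition.

def is_vampire(v: int) -> bool:
    s = str(v)
    if len(s) % 2:
        return False
    half = len(s) // 2
    lo, hi = 10 ** (half - 1), 10 ** half
    for x in range(lo, int(v ** 0.5) + 1):
        if v % x:
            continue
        y = v // x
        if (lo <= y < hi
                and not (x % 10 == 0 and y % 10 == 0)
                and sorted(str(x) + str(y)) == sorted(s)):
            return True
    return False

# Prints the 4-digit vampire numbers: 1260, 1395, 1435, 1530, 1827, 2187, 6880
print([n for n in range(1000, 10000) if is_vampire(n)])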

Helsing 1.0-beta, Digit Range: 14 digit (Seconds, Fewer Is Better): EPYC 7713 2P: 62.13 (SE +/- 0.11, N = 3); EPYC 7713: 121.00 (SE +/- 0.16, N = 3)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0, Throughput Test: PartialTweets (GB/s, More Is Better): EPYC 7713 2P: 3.84 (SE +/- 0.00, N = 3); EPYC 7713: 3.86 (SE +/- 0.01, N = 3)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 500 (Requests Per Second, More Is Better): EPYC 7713 2P: 91255.79 (SE +/- 594.33, N = 3); EPYC 7713: 115006.99 (SE +/- 303.82, N = 3)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1, Concurrent Requests: 500 (Requests Per Second, More Is Better): EPYC 7713 2P: 90312.00 (SE +/- 154.59, N = 3); EPYC 7713: 173899.19 (SE +/- 283.75, N = 3)

nginx 1.21.1, Concurrent Requests: 1000 (Requests Per Second, More Is Better): EPYC 7713 2P: 94018.12 (SE +/- 98.75, N = 3); EPYC 7713: 174524.79 (SE +/- 625.94, N = 3)

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
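A hedged sketch of driving the same built-in CPU test by hand with the sysbench CLI (option names follow the usual sysbench 1.0 syntax; adjust if your version differs, and the 30-second duration is an arbitrary choice).

import os
import subprocess

out = subprocess.run(
    ["sysbench", "cpu", f"--threads={os.cpu_count()}", "--time=30", "run"],
    capture_output=True, text=True, check=True,
).stdout

# The summary includes an "events per second" line comparable to the metric below.
for line in out.splitlines():
    if "events per second" in line:
        print(line.strip())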

Sysbench 1.0.20, Test: CPU (Events Per Second, More Is Better): EPYC 7713 2P: 500125.79 (SE +/- 659.19, N = 3); EPYC 7713: 252440.57 (SE +/- 51.07, N = 3)

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
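Roughly what "Released Build, PGO + LTO Optimized" corresponds to when building CPython by hand: a configure pass with the upstream PGO and LTO switches followed by a parallel make. The source-tree path is a placeholder.

import os
import subprocess
import time

src = "./cpython-3.10.6"  # hypothetical extracted CPython source tree

start = time.monotonic()
subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], cwd=src, check=True)
subprocess.run(["make", f"-j{os.cpu_count()}"], cwd=src, check=True)
print(f"configure + build: {time.monotonic() - start:.1f} s")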

Timed CPython Compilation 3.10.6, Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better): EPYC 7713 2P: 261.34; EPYC 7713: 261.37

etcd

Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang and part of the Cloud Native Computing Foundation (CNCF) and used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.

etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 100 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 2.6 (SE +/- 0.00, N = 3); EPYC 7713: 1.3 (SE +/- 0.00, N = 3)

etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 100 (Requests/sec, More Is Better): EPYC 7713 2P: 38209.42 (SE +/- 77.98, N = 3); EPYC 7713: 78829.59 (SE +/- 356.10, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): EPYC 7713 2P: 7531.12 (SE +/- 91.23, N = 3); EPYC 7713: 2943.08 (SE +/- 8.18, N = 3)

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): EPYC 7713 2P: 7435.34 (SE +/- 55.80, N = 3); EPYC 7713: 2950.43 (SE +/- 31.08, N = 3)

etcd

Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang and part of the Cloud Native Computing Foundation (CNCF) and used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.

etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 100 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 2.6 (SE +/- 0.00, N = 3); EPYC 7713: 1.2 (SE +/- 0.00, N = 3)

etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 100 (Requests/sec, More Is Better): EPYC 7713 2P: 38506.61 (SE +/- 133.45, N = 3); EPYC 7713: 82098.42 (SE +/- 99.62, N = 3)

etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 100 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 2.6 (SE +/- 0.03, N = 3); EPYC 7713: 1.2 (SE +/- 0.00, N = 3)

etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 100 (Requests/sec, More Is Better): EPYC 7713 2P: 38992.82 (SE +/- 317.01, N = 3); EPYC 7713: 81895.96 (SE +/- 259.78, N = 3)

etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 1000 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 23.7 (SE +/- 0.06, N = 3); EPYC 7713: 12.4 (SE +/- 0.06, N = 3)

etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 1000 (Requests/sec, More Is Better): EPYC 7713 2P: 41860.48 (SE +/- 80.91, N = 3); EPYC 7713: 79978.19 (SE +/- 324.17, N = 3)

etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 1000 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 23.4 (SE +/- 0.12, N = 3); EPYC 7713: 12.4 (SE +/- 0.03, N = 3)

etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 1000 (Requests/sec, More Is Better): EPYC 7713 2P: 42305.62 (SE +/- 218.72, N = 3); EPYC 7713: 80131.80 (SE +/- 303.55, N = 3)

ebizzy

This is a test of ebizzy, a program that generates workloads resembling those of a web server. Learn more via the OpenBenchmarking.org test page.

ebizzy 0.3 (Records/s, More Is Better): EPYC 7713 2P: 453258 (SE +/- 5345.80, N = 15); EPYC 7713: 246494 (SE +/- 2042.62, N = 9)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): EPYC 7713 2P: 10.50 (SE +/- 0.12, N = 3); EPYC 7713: 10.60 (SE +/- 0.06, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): EPYC 7713 2P: 2840.33 (SE +/- 23.56, N = 3); EPYC 7713: 1266.20 (SE +/- 12.45, N = 3)

etcd

Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang and part of the Cloud Native Computing Foundation (CNCF) and used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.

etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 1000 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 22.8 (SE +/- 0.12, N = 3); EPYC 7713: 11.9 (SE +/- 0.03, N = 3)

etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 1000 (Requests/sec, More Is Better): EPYC 7713 2P: 43696.76 (SE +/- 202.15, N = 3); EPYC 7713: 83927.66 (SE +/- 327.59, N = 3)

etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 1000 - Average Latency (ms, Fewer Is Better): EPYC 7713 2P: 22.6 (SE +/- 0.07, N = 3); EPYC 7713: 11.8 (SE +/- 0.06, N = 3)

etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 1000 (Requests/sec, More Is Better): EPYC 7713 2P: 44040.71 (SE +/- 121.36, N = 3); EPYC 7713: 84602.78 (SE +/- 408.33, N = 3)

Blender

Blender 3.3, Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): EPYC 7713 2P: 53.88 (SE +/- 0.25, N = 3); EPYC 7713: 96.30 (SE +/- 0.08, N = 3)

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better): EPYC 7713 2P: 4777.15 (SE +/- 16.64, N = 3); EPYC 7713: 4601.78 (SE +/- 8.17, N = 3)

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (FPS, More Is Better): EPYC 7713 2P: 13.10 (SE +/- 0.05, N = 3); EPYC 7713: 6.82 (SE +/- 0.02, N = 3)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s, More Is Better): EPYC 7713 2P: 10851.4 (SE +/- 34.05, N = 3); EPYC 7713: 14156.7 (SE +/- 33.09, N = 4)

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s, More Is Better): EPYC 7713 2P: 52.51 (SE +/- 0.20, N = 3); EPYC 7713: 53.89 (SE +/- 0.66, N = 4)

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): EPYC 7713 2P: 4774.37 (SE +/- 9.45, N = 3); EPYC 7713: 4600.94 (SE +/- 5.38, N = 3)

OpenVINO 2022.2.dev, Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): EPYC 7713 2P: 13.09 (SE +/- 0.03, N = 3); EPYC 7713: 6.81 (SE +/- 0.01, N = 3)

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): EPYC 7713 2P: 724442.87 (SE +/- 634.91, N = 3); EPYC 7713: 3599739.57 (SE +/- 3261.92, N = 3)

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): EPYC 7713 2P: 724466.22 (SE +/- 1586.54, N = 3); EPYC 7713: 3382696.79 (SE +/- 14735.29, N = 3)

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): EPYC 7713 2P: 724635.93 (SE +/- 986.76, N = 3); EPYC 7713: 3244708.36 (SE +/- 917.19, N = 3)

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): EPYC 7713 2P: 1388.41 (SE +/- 2.23, N = 3); EPYC 7713: 1293.51 (SE +/- 1.10, N = 3)

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): EPYC 7713 2P: 45.84 (SE +/- 0.04, N = 3); EPYC 7713: 24.59 (SE +/- 0.02, N = 3)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupEPYC 7713 2PEPYC 7713246810SE +/- 0.00, N = 3SE +/- 0.00, N = 37.717.64
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupEPYC 7713 2PEPYC 77133691215Min: 7.7 / Avg: 7.71 / Max: 7.71Min: 7.64 / Avg: 7.64 / Max: 7.65

PJSIP

PJSIP is a free and open-source multimedia communication library written in the C language, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the signaling protocol (SIP) with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: INVITEEPYC 7713 2PEPYC 771310002000300040005000SE +/- 8.69, N = 3SE +/- 16.00, N = 3469246901. (CC) gcc options: -lavformat -lavcodec -lswscale -lavutil -lstdc++ -lopus -lssl -lcrypto -luuid -lm -lrt -lpthread -lasound -O2
OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: INVITEEPYC 7713 2PEPYC 77138001600240032004000Min: 4676 / Avg: 4691.67 / Max: 4706Min: 4674 / Avg: 4690 / Max: 47221. (CC) gcc options: -lavformat -lavcodec -lswscale -lavutil -lstdc++ -lopus -lssl -lcrypto -luuid -lm -lrt -lpthread -lasound -O2

OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: OPTIONS, StatefulEPYC 7713 2PEPYC 77132K4K6K8K10KSE +/- 15.60, N = 3SE +/- 72.18, N = 3881991151. (CC) gcc options: -lavformat -lavcodec -lswscale -lavutil -lstdc++ -lopus -lssl -lcrypto -luuid -lm -lrt -lpthread -lasound -O2
OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: OPTIONS, StatefulEPYC 7713 2PEPYC 771316003200480064008000Min: 8791 / Avg: 8818.67 / Max: 8845Min: 8971 / Avg: 9115.33 / Max: 91901. (CC) gcc options: -lavformat -lavcodec -lswscale -lavutil -lstdc++ -lopus -lssl -lcrypto -luuid -lm -lrt -lpthread -lasound -O2

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUEPYC 7713 2PEPYC 771360120180240300SE +/- 0.60, N = 3SE +/- 0.42, N = 3292.82274.57MIN: 139.85 / MAX: 449.45MIN: 116.27 / MAX: 347.981. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUEPYC 7713 2PEPYC 771350100150200250Min: 291.8 / Avg: 292.82 / Max: 293.87Min: 273.9 / Avg: 274.57 / Max: 275.351. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUEPYC 7713 2PEPYC 771350100150200250SE +/- 0.45, N = 3SE +/- 0.19, N = 3218.13116.331. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUEPYC 7713 2PEPYC 77134080120160200Min: 217.34 / Avg: 218.13 / Max: 218.88Min: 115.97 / Avg: 116.33 / Max: 116.641. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713510152025SE +/- 0.03, N = 3SE +/- 0.01, N = 322.1620.82MIN: 11.48 / MAX: 97.44MIN: 12.46 / MAX: 44.771. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713510152025Min: 22.12 / Avg: 22.16 / Max: 22.22Min: 20.81 / Avg: 20.82 / Max: 20.831. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUEPYC 7713 2PEPYC 77136001200180024003000SE +/- 3.95, N = 3SE +/- 0.60, N = 32884.441535.081. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUEPYC 7713 2PEPYC 77135001000150020002500Min: 2876.7 / Avg: 2884.44 / Max: 2889.69Min: 1534.12 / Avg: 1535.08 / Max: 1536.191. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.6Scene: DLSC - Acceleration: CPUEPYC 7713 2PEPYC 77133691215SE +/- 0.06, N = 3SE +/- 0.04, N = 312.238.33MIN: 11.53 / MAX: 15.5MIN: 8.11 / MAX: 9.42
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.6Scene: DLSC - Acceleration: CPUEPYC 7713 2PEPYC 771348121620Min: 12.12 / Avg: 12.23 / Max: 12.31Min: 8.26 / Avg: 8.33 / Max: 8.38

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 7713510152025SE +/- 0.01, N = 3SE +/- 0.02, N = 320.3818.93MIN: 8.69 / MAX: 87.48MIN: 11.41 / MAX: 69.861. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 7713510152025Min: 20.37 / Avg: 20.38 / Max: 20.41Min: 18.9 / Avg: 18.93 / Max: 18.951. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 77137001400210028003500SE +/- 2.08, N = 3SE +/- 1.36, N = 33136.141689.181. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 77135001000150020002500Min: 3131.99 / Avg: 3136.14 / Max: 3138.38Min: 1687.34 / Avg: 1689.18 / Max: 1691.841. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713816243240SE +/- 0.25, N = 3SE +/- 0.14, N = 335.7534.17MIN: 18.65 / MAX: 151.87MIN: 17.07 / MAX: 68.691. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713816243240Min: 35.41 / Avg: 35.75 / Max: 36.24Min: 34.02 / Avg: 34.17 / Max: 34.451. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713400800120016002000SE +/- 12.59, N = 3SE +/- 3.92, N = 31788.34935.581. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUEPYC 7713 2PEPYC 771330060090012001500Min: 1763.7 / Avg: 1788.34 / Max: 1805.12Min: 927.75 / Avg: 935.58 / Max: 939.671. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 77130.47480.94961.42441.89922.374SE +/- 0.01, N = 3SE +/- 0.00, N = 32.111.06MIN: 0.53 / MAX: 66.38MIN: 0.55 / MAX: 15.241. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 7713246810Min: 2.09 / Avg: 2.11 / Max: 2.13Min: 1.05 / Avg: 1.06 / Max: 1.061. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 771312K24K36K48K60KSE +/- 378.72, N = 3SE +/- 78.11, N = 357763.8949691.031. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 771310K20K30K40K50KMin: 57131.3 / Avg: 57763.89 / Max: 58440.96Min: 49563.47 / Avg: 49691.03 / Max: 49832.911. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 7713714212835SE +/- 0.01, N = 3SE +/- 0.02, N = 328.0525.97MIN: 10.65 / MAX: 66.97MIN: 12.2 / MAX: 39.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 7713612182430Min: 28.04 / Avg: 28.05 / Max: 28.06Min: 25.95 / Avg: 25.97 / Max: 26.011. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 771310002000300040005000SE +/- 0.96, N = 3SE +/- 1.64, N = 34559.422462.401. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUEPYC 7713 2PEPYC 77138001600240032004000Min: 4557.6 / Avg: 4559.42 / Max: 4560.84Min: 2459.17 / Avg: 2462.4 / Max: 2464.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7713 2PEPYC 77130.73581.47162.20742.94323.679SE +/- 0.06, N = 3SE +/- 0.00, N = 33.272.03MIN: 0.88 / MAX: 81.36MIN: 0.97 / MAX: 18.021. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7713 2PEPYC 7713246810Min: 3.2 / Avg: 3.27 / Max: 3.39Min: 2.03 / Avg: 2.03 / Max: 2.031. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7713 2PEPYC 77138K16K24K32K40KSE +/- 533.11, N = 3SE +/- 11.83, N = 337561.0730742.171. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7713 2PEPYC 77137K14K21K28K35KMin: 36500.38 / Avg: 37561.07 / Max: 38185.32Min: 30718.52 / Avg: 30742.17 / Max: 30754.381. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713816243240SE +/- 0.05, N = 3SE +/- 0.01, N = 333.5531.02MIN: 14.43 / MAX: 183.11MIN: 15.59 / MAX: 49.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713714212835Min: 33.48 / Avg: 33.55 / Max: 33.64Min: 31.01 / Avg: 31.02 / Max: 31.031. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUEPYC 7713 2PEPYC 7713400800120016002000SE +/- 2.71, N = 3SE +/- 0.14, N = 31905.511030.761. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUEPYC 7713 2PEPYC 771330060090012001500Min: 1900.26 / Avg: 1905.51 / Max: 1909.32Min: 1030.57 / Avg: 1030.76 / Max: 1031.041. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Update RandomEPYC 7713 2PEPYC 771380K160K240K320K400KSE +/- 2065.20, N = 3SE +/- 2028.78, N = 33046543583361. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Update RandomEPYC 7713 2PEPYC 771360K120K180K240K300KMin: 301782 / Avg: 304654.33 / Max: 308661Min: 354444 / Avg: 358335.67 / Max: 3612761. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Read Random Write RandomEPYC 7713 2PEPYC 7713800K1600K2400K3200K4000KSE +/- 5555.94, N = 3SE +/- 34091.93, N = 3292216735618411. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Read Random Write RandomEPYC 7713 2PEPYC 7713600K1200K1800K2400K3000KMin: 2912287 / Avg: 2922166.67 / Max: 2931511Min: 3523704 / Avg: 3561841 / Max: 36298581. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Read While WritingEPYC 7713 2PEPYC 77133M6M9M12M15MSE +/- 195708.96, N = 3SE +/- 66207.97, N = 31456467391821671. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Read While WritingEPYC 7713 2PEPYC 77133M6M9M12M15MMin: 14174664 / Avg: 14564673 / Max: 14788413Min: 9075856 / Avg: 9182167.33 / Max: 93036891. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
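To give a sense of the per-image work involved, here is a hedged Pillow sketch of a single sharpen pass over a large JPEG. Pillow stands in for GraphicsMagick purely for illustration (it is single-threaded, unlike the OpenMP build measured here), and the file path is a placeholder.

```python
import time
from PIL import Image, ImageFilter  # Pillow as a stand-in, not GraphicsMagick itself

img = Image.open("sample-6000x4000.jpg")      # placeholder path to a 24 MP JPEG

t0 = time.perf_counter()
sharpened = img.filter(ImageFilter.SHARPEN)   # one sharpen pass, single-threaded
elapsed = time.perf_counter() - t0

print(f"one sharpen pass: {elapsed:.2f} s "
      f"(~{60 / elapsed:.0f} iterations per minute on one thread)")
```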

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: SharpenEPYC 7713 2PEPYC 77132004006008001000SE +/- 1.45, N = 3SE +/- 2.52, N = 37795111. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: SharpenEPYC 7713 2PEPYC 7713140280420560700Min: 776 / Avg: 778.67 / Max: 781Min: 506 / Avg: 511 / Max: 5141. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
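The sign/s and verify/s figures come from OpenSSL's built-in speed benchmark for RSA-4096. As a rough analogue (not the OpenSSL benchmark itself), the sketch below times RSA-4096 sign and verify operations with the Python cryptography package; the message and iteration count are arbitrary.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
message = b"benchmark payload"

# RSA signing is far more expensive than verification,
# which is why sign/s sits orders of magnitude below verify/s.
N = 200
t0 = time.perf_counter()
sigs = [key.sign(message, padding.PKCS1v15(), hashes.SHA256()) for _ in range(N)]
sign_rate = N / (time.perf_counter() - t0)

pub = key.public_key()
t0 = time.perf_counter()
for sig in sigs:
    pub.verify(sig, message, padding.PKCS1v15(), hashes.SHA256())
verify_rate = N / (time.perf_counter() - t0)

print(f"{sign_rate:,.0f} sign/s, {verify_rate:,.0f} verify/s (single thread)")
```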

OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.0Algorithm: RSA4096EPYC 7713 2PEPYC 7713400K800K1200K1600K2000KSE +/- 338.78, N = 3SE +/- 47.96, N = 31638050.5825904.11. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.0Algorithm: RSA4096EPYC 7713 2PEPYC 7713300K600K900K1200K1500KMin: 1637374.8 / Avg: 1638050.47 / Max: 1638432.1Min: 825835.3 / Avg: 825904.13 / Max: 825996.41. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.0Algorithm: RSA4096EPYC 7713 2PEPYC 77135K10K15K20K25KSE +/- 3.07, N = 3SE +/- 1.58, N = 325009.612613.11. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.0Algorithm: RSA4096EPYC 7713 2PEPYC 77134K8K12K16K20KMin: 25006.4 / Avg: 25009.57 / Max: 25015.7Min: 12611.2 / Avg: 12613.07 / Max: 12616.21. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: EnhancedEPYC 7713 2PEPYC 771330060090012001500SE +/- 9.60, N = 3SE +/- 1.15, N = 313449411. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: EnhancedEPYC 7713 2PEPYC 77132004006008001000Min: 1326 / Avg: 1343.67 / Max: 1359Min: 939 / Avg: 941 / Max: 9431. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Facebook RocksDB

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Random ReadEPYC 7713 2PEPYC 7713100M200M300M400M500MSE +/- 753688.43, N = 3SE +/- 482617.87, N = 34801753322420747981. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Random ReadEPYC 7713 2PEPYC 771380M160M240M320M400MMin: 478828617 / Avg: 480175332 / Max: 481435126Min: 241133676 / Avg: 242074798 / Max: 2427310391. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: RotateEPYC 7713 2PEPYC 7713160320480640800SE +/- 7.54, N = 3SE +/- 1.00, N = 37307461. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: RotateEPYC 7713 2PEPYC 7713130260390520650Min: 716 / Avg: 729.67 / Max: 742Min: 745 / Avg: 746 / Max: 7481. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: HWB Color SpaceEPYC 7713 2PEPYC 771330060090012001500SE +/- 4.67, N = 3SE +/- 2.96, N = 3109313031. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: HWB Color SpaceEPYC 7713 2PEPYC 77132004006008001000Min: 1086 / Avg: 1093.33 / Max: 1102Min: 1299 / Avg: 1303.33 / Max: 13091. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 1.0Benchmark: Single-Threaded - Configuration: ETC2EPYC 7713 2PEPYC 771350100150200250SE +/- 0.03, N = 3SE +/- 0.14, N = 3229.64230.431. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 1.0Benchmark: Single-Threaded - Configuration: ETC2EPYC 7713 2PEPYC 77134080120160200Min: 229.57 / Avg: 229.64 / Max: 229.68Min: 230.16 / Avg: 230.43 / Max: 230.641. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondEPYC 7713 2PEPYC 7713900K1800K2700K3600K4500KSE +/- 2413.85, N = 3SE +/- 10186.88, N = 34105747.142045229.651. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondEPYC 7713 2PEPYC 7713700K1400K2100K2800K3500KMin: 4100921.11 / Avg: 4105747.14 / Max: 4108270.03Min: 2025182.94 / Avg: 2045229.65 / Max: 2058401.791. (CC) gcc options: -O2 -lrt" -lrt

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.3Blend File: Classroom - Compute: CPU-OnlyEPYC 7713 2PEPYC 771320406080100SE +/- 0.23, N = 3SE +/- 0.09, N = 340.8977.14
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.3Blend File: Classroom - Compute: CPU-OnlyEPYC 7713 2PEPYC 77131530456075Min: 40.5 / Avg: 40.89 / Max: 41.31Min: 76.97 / Avg: 77.14 / Max: 77.29

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 22.1Input: Carbon NanotubeEPYC 7713 2PEPYC 77131632486480SE +/- 0.25, N = 3SE +/- 0.19, N = 343.8671.661. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 22.1Input: Carbon NanotubeEPYC 7713 2PEPYC 77131428425670Min: 43.38 / Avg: 43.86 / Max: 44.25Min: 71.31 / Avg: 71.66 / Max: 71.961. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeafLagrangian-Eulerian HydrodynamicsEPYC 7713 2PEPYC 7713510152025SE +/- 0.36, N = 15SE +/- 0.07, N = 419.4412.001. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeafLagrangian-Eulerian HydrodynamicsEPYC 7713 2PEPYC 7713510152025Min: 17.66 / Avg: 19.44 / Max: 21.24Min: 11.8 / Avg: 12 / Max: 12.11. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
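The GB/s figure is simply bytes of JSON parsed per second. A minimal sketch of that calculation is shown below using Python's standard-library json module as a stand-in (simdjson itself is C++ and far faster); the input file name is a placeholder for the Kostya test document.

```python
import json
import time

raw = open("kostya.json", "rb").read()   # placeholder input document

REPS = 20
t0 = time.perf_counter()
for _ in range(REPS):
    json.loads(raw)                      # parse the whole document each pass
elapsed = time.perf_counter() - t0

gb_parsed = len(raw) * REPS / 1e9
print(f"{gb_parsed / elapsed:.3f} GB/s (stdlib json, single thread)")
```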

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: KostyaEPYC 7713 2PEPYC 77130.65251.3051.95752.613.2625SE +/- 0.00, N = 3SE +/- 0.00, N = 32.92.91. (CXX) g++ options: -O3
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: KostyaEPYC 7713 2PEPYC 7713246810Min: 2.9 / Avg: 2.9 / Max: 2.9Min: 2.9 / Avg: 2.9 / Max: 2.91. (CXX) g++ options: -O3

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileEPYC 7713 2PEPYC 77131326395265SE +/- 0.25, N = 3SE +/- 0.18, N = 353.2358.96
OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileEPYC 7713 2PEPYC 77131224364860Min: 52.93 / Avg: 53.23 / Max: 53.73Min: 58.67 / Avg: 58.96 / Max: 59.3

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.4Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4KEPYC 7713 2PEPYC 77131428425670SE +/- 0.53, N = 15SE +/- 0.51, N = 1557.0463.901. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.4Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4KEPYC 7713 2PEPYC 77131224364860Min: 52.39 / Avg: 57.04 / Max: 60.35Min: 59.72 / Avg: 63.9 / Max: 66.121. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterNatron 2.4.3Input: SpaceshipEPYC 7713 2PEPYC 7713246810SE +/- 0.02, N = 5SE +/- 0.03, N = 31.96.0
OpenBenchmarking.orgFPS, More Is BetterNatron 2.4.3Input: SpaceshipEPYC 7713 2PEPYC 7713246810Min: 1.9 / Avg: 1.92 / Max: 2Min: 5.9 / Avg: 5.97 / Max: 6

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteEPYC 7713 2PEPYC 77131122334455SE +/- 0.41, N = 3SE +/- 0.47, N = 447.2843.391. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteEPYC 7713 2PEPYC 77131020304050Min: 46.58 / Avg: 47.28 / Max: 48.01Min: 42.35 / Avg: 43.39 / Max: 44.61. (CXX) g++ options: -O2 -lOpenCL

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 4KEPYC 7713 2PEPYC 771348121620SE +/- 0.14, N = 3SE +/- 0.01, N = 315.089.911. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 4KEPYC 7713 2PEPYC 771348121620Min: 14.89 / Avg: 15.08 / Max: 15.35Min: 9.9 / Avg: 9.91 / Max: 9.931. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To CompileEPYC 7713 2PEPYC 77131224364860SE +/- 0.51, N = 3SE +/- 0.11, N = 351.8751.971. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To CompileEPYC 7713 2PEPYC 77131020304050Min: 51.04 / Avg: 51.87 / Max: 52.79Min: 51.85 / Avg: 51.97 / Max: 52.191. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: LargeRandomEPYC 7713 2PEPYC 77130.2250.450.6750.91.125SE +/- 0.00, N = 3SE +/- 0.00, N = 3111. (CXX) g++ options: -O3
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: LargeRandomEPYC 7713 2PEPYC 7713246810Min: 1 / Avg: 1 / Max: 1Min: 1 / Avg: 1 / Max: 11. (CXX) g++ options: -O3

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgUE Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB SISO 64-QAMEPYC 7713 2PEPYC 7713306090120150SE +/- 0.20, N = 15SE +/- 0.23, N = 4143.6143.61. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.orgUE Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB SISO 64-QAMEPYC 7713 2PEPYC 7713306090120150Min: 142 / Avg: 143.61 / Max: 144.8Min: 143.1 / Avg: 143.63 / Max: 144.11. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

OpenBenchmarking.orgeNb Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB SISO 64-QAMEPYC 7713 2PEPYC 771390180270360450SE +/- 2.15, N = 15SE +/- 0.59, N = 4392.3396.01. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.orgeNb Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB SISO 64-QAMEPYC 7713 2PEPYC 771370140210280350Min: 362.5 / Avg: 392.28 / Max: 396.2Min: 394.5 / Avg: 396 / Max: 397.21. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

miniFE

miniFE is a finite element proxy application for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallEPYC 7713 2PEPYC 77135K10K15K20K25KSE +/- 292.61, N = 15SE +/- 3.33, N = 424664.821875.11. (CXX) g++ options: -O3 -fopenmp -lmpi_cxx -lmpi
OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallEPYC 7713 2PEPYC 77134K8K12K16K20KMin: 23485.4 / Avg: 24664.83 / Max: 27288.2Min: 21867.6 / Avg: 21875.08 / Max: 21883.51. (CXX) g++ options: -O3 -fopenmp -lmpi_cxx -lmpi

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.6Scene: Rainbow Colors and Prism - Acceleration: CPUEPYC 7713 2PEPYC 7713510152025SE +/- 0.15, N = 15SE +/- 0.54, N = 1516.9921.13MIN: 15.23 / MAX: 18.28MIN: 17.03 / MAX: 23.53
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.6Scene: Rainbow Colors and Prism - Acceleration: CPUEPYC 7713 2PEPYC 7713510152025Min: 16.53 / Avg: 16.99 / Max: 18.2Min: 18.51 / Avg: 21.13 / Max: 23.38

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
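The scikit_ica benchmark presumably times an independent component analysis fit in scikit-learn. Under that assumption, a minimal FastICA sketch is shown below; the data is random and the sample count, feature count, and component count are arbitrary stand-ins for the benchmark's real dataset.

```python
import time
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 32))   # arbitrary samples x features

t0 = time.perf_counter()
FastICA(n_components=16, max_iter=200).fit_transform(X)
print(f"FastICA fit_transform: {time.perf_counter() - t0:.2f} s")
```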

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_icaEPYC 7713 2PEPYC 77131122334455SE +/- 0.13, N = 3SE +/- 0.05, N = 346.4045.27
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_icaEPYC 7713 2PEPYC 7713918273645Min: 46.24 / Avg: 46.4 / Max: 46.67Min: 45.19 / Avg: 45.27 / Max: 45.37

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingEPYC 7713 2PEPYC 77139K18K27K36K45KSE +/- 263.02, N = 3SE +/- 76.73, N = 343735.821943.01. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingEPYC 7713 2PEPYC 77138K16K24K32K40KMin: 43281.8 / Avg: 43735.77 / Max: 44192.9Min: 21866.3 / Avg: 21943.03 / Max: 22096.51. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingEPYC 7713 2PEPYC 77139K18K27K36K45KSE +/- 448.28, N = 3SE +/- 173.09, N = 339742.020252.11. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingEPYC 7713 2PEPYC 77137K14K21K28K35KMin: 38873.4 / Avg: 39742.03 / Max: 40368.6Min: 19992 / Avg: 20252.1 / Max: 205801. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as, ultimately, the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP2 Image Encode 20220823Encode Settings: Quality 75, Compression Effort 7EPYC 7713 2PEPYC 77130.12380.24760.37140.49520.619SE +/- 0.01, N = 3SE +/- 0.00, N = 30.550.481. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
OpenBenchmarking.orgMP/s, More Is BetterWebP2 Image Encode 20220823Encode Settings: Quality 75, Compression Effort 7EPYC 7713 2PEPYC 7713246810Min: 0.54 / Avg: 0.55 / Max: 0.56Min: 0.47 / Avg: 0.48 / Max: 0.481. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
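A minimal sketch of the level-19 compress/decompress measurement using the python-zstandard bindings is shown below; the actual test uses the zstd library build noted in the compiler options, so this is only an analogous illustration, and reading the whole disk image into memory is a simplification.

```python
import time
import zstandard as zstd

# Sample file named in the test description above.
data = open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb").read()

t0 = time.perf_counter()
compressed = zstd.ZstdCompressor(level=19).compress(data)
c_time = time.perf_counter() - t0

t0 = time.perf_counter()
zstd.ZstdDecompressor().decompress(compressed)
d_time = time.perf_counter() - t0

mb = len(data) / 1e6
print(f"compress:   {mb / c_time:7.1f} MB/s at level 19")
print(f"decompress: {mb / d_time:7.1f} MB/s")
```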

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression SpeedEPYC 7713 2PEPYC 77137001400210028003500SE +/- 23.51, N = 3SE +/- 14.77, N = 33473.03401.61. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression SpeedEPYC 7713 2PEPYC 77136001200180024003000Min: 3431.5 / Avg: 3473 / Max: 3512.9Min: 3382.3 / Avg: 3401.57 / Max: 3430.61. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression SpeedEPYC 7713 2PEPYC 771320406080100SE +/- 1.20, N = 3SE +/- 0.84, N = 384.784.91. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression SpeedEPYC 7713 2PEPYC 77131632486480Min: 82.5 / Avg: 84.73 / Max: 86.6Min: 83.2 / Avg: 84.87 / Max: 85.81. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Decompression SpeedEPYC 7713 2PEPYC 77137001400210028003500SE +/- 21.87, N = 3SE +/- 14.37, N = 33485.83460.31. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Decompression SpeedEPYC 7713 2PEPYC 77136001200180024003000Min: 3448.7 / Avg: 3485.8 / Max: 3524.4Min: 3434.2 / Avg: 3460.27 / Max: 3483.81. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Compression SpeedEPYC 7713 2PEPYC 77131122334455SE +/- 0.25, N = 3SE +/- 0.21, N = 339.947.41. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Compression SpeedEPYC 7713 2PEPYC 77131020304050Min: 39.4 / Avg: 39.9 / Max: 40.2Min: 47.1 / Avg: 47.4 / Max: 47.81. (CC) gcc options: -O3 -pthread -lz -llzma

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.18Build: defconfigEPYC 7713 2PEPYC 7713714212835SE +/- 0.20, N = 7SE +/- 0.33, N = 421.5829.41
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.18Build: defconfigEPYC 7713 2PEPYC 7713714212835Min: 21.28 / Avg: 21.58 / Max: 22.79Min: 29.01 / Avg: 29.41 / Max: 30.41

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingEPYC 7713 2PEPYC 7713140K280K420K560K700KSE +/- 2913.98, N = 3SE +/- 180.78, N = 36340863546081. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingEPYC 7713 2PEPYC 7713110K220K330K440K550KMin: 630892 / Avg: 634086.33 / Max: 639905Min: 354416 / Avg: 354607.67 / Max: 3549691. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingEPYC 7713 2PEPYC 7713110K220K330K440K550KSE +/- 4935.82, N = 3SE +/- 361.70, N = 35166173510771. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingEPYC 7713 2PEPYC 771390K180K270K360K450KMin: 509406 / Avg: 516617 / Max: 526061Min: 350677 / Avg: 351077 / Max: 3517991. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.4EPYC 7713 2PEPYC 771360M120M180M240M300MSE +/- 886909.06, N = 3SE +/- 2567603.42, N = 151439502672707225271. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.4EPYC 7713 2PEPYC 771350M100M150M200M250MMin: 142181400 / Avg: 143950266.67 / Max: 144949400Min: 247918800 / Avg: 270722526.67 / Max: 2860178001. (CXX) g++ options: -O3 -fopenmp

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.6.1CPU Threads: 1EPYC 7713 2PEPYC 77131530456075SE +/- 0.11, N = 3SE +/- 0.16, N = 366.9367.62
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.6.1CPU Threads: 1EPYC 7713 2PEPYC 77131326395265Min: 66.72 / Avg: 66.93 / Max: 67.06Min: 67.32 / Avg: 67.62 / Max: 67.85

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest CompressionEPYC 7713 2PEPYC 77130.1260.2520.3780.5040.63SE +/- 0.00, N = 3SE +/- 0.00, N = 30.560.561. (CC) gcc options: -fvisibility=hidden -O2 -lm
OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest CompressionEPYC 7713 2PEPYC 7713246810Min: 0.55 / Avg: 0.56 / Max: 0.56Min: 0.56 / Avg: 0.56 / Max: 0.561. (CC) gcc options: -fvisibility=hidden -O2 -lm

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileEPYC 7713 2PEPYC 77131020304050SE +/- 0.08, N = 3SE +/- 0.08, N = 341.1743.35
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileEPYC 7713 2PEPYC 7713918273645Min: 41.05 / Avg: 41.17 / Max: 41.31Min: 43.19 / Avg: 43.35 / Max: 43.47

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
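For reference, the classic unsegmented sieve of Eratosthenes that Primesieve's heavily optimized, cache-friendly segmented implementation builds upon looks roughly like the sketch below; it is far too slow and memory-hungry for the 1e13 range used in this test, but it shows the underlying algorithm.

```python
def sieve(limit: int) -> int:
    """Count primes <= limit with a plain sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"          # 0 and 1 are not prime
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            # Cross off multiples; start at p*p since smaller ones are already done.
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
        p += 1
    return sum(is_prime)

print(sieve(10_000_000))  # 664579 primes below 1e7; 1e13 needs a segmented sieve
```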

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e13EPYC 7713 2PEPYC 77131224364860SE +/- 0.07, N = 3SE +/- 0.05, N = 328.7255.261. (CXX) g++ options: -O3
OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e13EPYC 7713 2PEPYC 77131122334455Min: 28.63 / Avg: 28.72 / Max: 28.86Min: 55.19 / Avg: 55.25 / Max: 55.351. (CXX) g++ options: -O3

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgUE Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAMEPYC 7713 2PEPYC 7713306090120150SE +/- 0.26, N = 3SE +/- 0.24, N = 3141.0141.41. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.orgUE Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAMEPYC 7713 2PEPYC 7713306090120150Min: 140.6 / Avg: 141.03 / Max: 141.5Min: 141.1 / Avg: 141.43 / Max: 141.91. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

OpenBenchmarking.orgeNb Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAMEPYC 7713 2PEPYC 771390180270360450SE +/- 0.64, N = 3SE +/- 0.18, N = 3426.1431.81. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.orgeNb Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAMEPYC 7713 2PEPYC 771380160240320400Min: 425.3 / Avg: 426.13 / Max: 427.4Min: 431.5 / Avg: 431.83 / Max: 432.11. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.10Encoder Speed: 2EPYC 7713 2PEPYC 7713918273645SE +/- 0.21, N = 3SE +/- 0.21, N = 341.2140.771. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.10Encoder Speed: 2EPYC 7713 2PEPYC 7713918273645Min: 40.97 / Avg: 41.21 / Max: 41.63Min: 40.36 / Avg: 40.77 / Max: 41.051. (CXX) g++ options: -O3 -fPIC -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosEPYC 7713 2PEPYC 771320406080100SE +/- 0.15, N = 3SE +/- 0.13, N = 395.896.5
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosEPYC 7713 2PEPYC 771320406080100Min: 95.5 / Avg: 95.77 / Max: 96Min: 96.4 / Avg: 96.53 / Max: 96.8

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data set. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2022.1Implementation: MPI CPU - Input: water_GMX50_bareEPYC 7713 2PEPYC 7713246810SE +/- 0.024, N = 3SE +/- 0.010, N = 38.2155.1301. (CXX) g++ options: -O3
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2022.1Implementation: MPI CPU - Input: water_GMX50_bareEPYC 7713 2PEPYC 77133691215Min: 8.17 / Avg: 8.22 / Max: 8.26Min: 5.11 / Avg: 5.13 / Max: 5.151. (CXX) g++ options: -O3

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 8.1.9Time To CompileEPYC 7713 2PEPYC 7713918273645SE +/- 0.46, N = 3SE +/- 0.41, N = 338.3439.76
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 8.1.9Time To CompileEPYC 7713 2PEPYC 7713816243240Min: 37.83 / Avg: 38.34 / Max: 39.25Min: 39.27 / Avg: 39.76 / Max: 40.57

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
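Below is a hedged sketch of a GET-heavy measurement against Redis with redis-py, using pipelining to batch requests. The real test drives 1,000 parallel connections (per the result title below), which a single pipelined connection only approximates; host, port, and batch sizes are assumptions.

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)   # assumed local Redis instance
r.set("key", "value")

OPS, BATCH = 100_000, 100
t0 = time.perf_counter()
for _ in range(OPS // BATCH):
    pipe = r.pipeline(transaction=False)       # batch GETs without MULTI/EXEC
    for _ in range(BATCH):
        pipe.get("key")
    pipe.execute()
elapsed = time.perf_counter() - t0
print(f"{OPS / elapsed:,.0f} GET requests/sec (one pipelined connection)")
```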

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: GET - Parallel Connections: 1000EPYC 7713 2PEPYC 7713400K800K1200K1600K2000KSE +/- 7896.51, N = 3SE +/- 8028.55, N = 31364479.131990067.331. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: GET - Parallel Connections: 1000EPYC 7713 2PEPYC 7713300K600K900K1200K1500KMin: 1350591.75 / Avg: 1364479.13 / Max: 1377935.88Min: 1980774 / Avg: 1990067.33 / Max: 2006054.121. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2EPYC 7713 2PEPYC 7713400M800M1200M1600M2000MSE +/- 699794.57, N = 3SE +/- 863687.12, N = 3192330666710120516671. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2EPYC 7713 2PEPYC 7713300M600M900M1200M1500MMin: 1922022000 / Avg: 1923306666.67 / Max: 1924430000Min: 1011066000 / Avg: 1012051666.67 / Max: 10137730001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialEPYC 7713 2PEPYC 7713142842567050.4661.27

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7713 2PEPYC 77131326395265SE +/- 0.59, N = 6SE +/- 0.82, N = 1558.5256.531. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7713 2PEPYC 77131224364860Min: 55.87 / Avg: 58.52 / Max: 59.88Min: 51.81 / Avg: 56.53 / Max: 60.241. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDEPYC 7713 2PEPYC 77131122334455SE +/- 0.14, N = 3SE +/- 0.08, N = 326.7446.551. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDEPYC 7713 2PEPYC 7713918273645Min: 26.47 / Avg: 26.74 / Max: 26.92Min: 46.43 / Avg: 46.55 / Max: 46.71. (CXX) g++ options: -O2 -lOpenCL

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSamples / Second, More Is BettersrsRAN 22.04.1Test: OFDM_TestEPYC 7713 2PEPYC 771330M60M90M120M150MSE +/- 1304266.50, N = 3SE +/- 1516941.37, N = 31303333331366333331. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.orgSamples / Second, More Is BettersrsRAN 22.04.1Test: OFDM_TestEPYC 7713 2PEPYC 771320M40M60M80M100MMin: 128200000 / Avg: 130333333.33 / Max: 132700000Min: 134600000 / Avg: 136633333.33 / Max: 1396000001. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

OpenBenchmarking.orgUE Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAMEPYC 7713 2PEPYC 7713306090120150SE +/- 0.46, N = 3SE +/- 0.30, N = 3133.6133.71. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.orgUE Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAMEPYC 7713 2PEPYC 7713306090120150Min: 133 / Avg: 133.6 / Max: 134.5Min: 133.4 / Avg: 133.7 / Max: 134.31. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

OpenBenchmarking.orgeNb Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAMEPYC 7713 2PEPYC 771390180270360450SE +/- 0.24, N = 3SE +/- 2.00, N = 3394.1394.41. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.orgeNb Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAMEPYC 7713 2PEPYC 771370140210280350Min: 393.6 / Avg: 394.07 / Max: 394.4Min: 390.4 / Avg: 394.4 / Max: 396.51. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.13Binary: Pathtracer - Model: Asian DragonEPYC 7713 2PEPYC 77131530456075SE +/- 0.62, N = 15SE +/- 0.07, N = 568.8862.98MIN: 62.6 / MAX: 74.57MIN: 61.69 / MAX: 65.06
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.13Binary: Pathtracer - Model: Asian DragonEPYC 7713 2PEPYC 77131326395265Min: 65.51 / Avg: 68.88 / Max: 72.47Min: 62.82 / Avg: 62.98 / Max: 63.2

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.13Binary: Pathtracer ISPC - Model: Asian DragonEPYC 7713 2PEPYC 77131530456075SE +/- 0.75, N = 15SE +/- 0.17, N = 467.1756.81MIN: 59.54 / MAX: 73.59MIN: 55.32 / MAX: 58.61
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.13Binary: Pathtracer ISPC - Model: Asian DragonEPYC 7713 2PEPYC 77131326395265Min: 62.38 / Avg: 67.17 / Max: 71.64Min: 56.34 / Avg: 56.81 / Max: 57.1

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
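As a loose analogue of the matrix-multiply-batch-shapes harness (not oneDNN or benchdnn itself), the sketch below times a batch of GEMMs with NumPy; the batch size and matrix dimensions are made-up transformer-ish shapes, not the shapes benchdnn actually uses.

```python
import time
import numpy as np

# Hypothetical batch of GEMMs; the real benchdnn shapes differ.
BATCH, M, K, N = 64, 128, 768, 768
a = np.random.rand(BATCH, M, K).astype(np.float32)
b = np.random.rand(BATCH, K, N).astype(np.float32)

REPS = 50
t0 = time.perf_counter()
for _ in range(REPS):
    np.matmul(a, b)                      # batched matrix multiply
elapsed = time.perf_counter() - t0

print(f"avg batched matmul time: {1000 * elapsed / REPS:.2f} ms")
```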

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7713 2PEPYC 7713714212835SE +/- 0.34, N = 4SE +/- 0.24, N = 1230.0112.56MIN: 23.66MIN: 9.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7713 2PEPYC 7713714212835Min: 29.13 / Avg: 30.01 / Max: 30.75Min: 10.52 / Avg: 12.56 / Max: 13.421. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileEPYC 7713 2PEPYC 7713306090120150SE +/- 0.33, N = 3SE +/- 0.33, N = 3155156
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileEPYC 7713 2PEPYC 7713306090120150Min: 155 / Avg: 155.33 / Max: 156Min: 155 / Avg: 155.67 / Max: 156

PJSIP

PJSIP is a free and open-source multimedia communication library written in the C language, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the signaling protocol (SIP) with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

PJSIP 2.11 - Method: OPTIONS, Stateless (Responses Per Second, More Is Better)
EPYC 7713 2P: 67617 (SE +/- 232.96, N = 3; Min: 67214 / Avg: 67616.67 / Max: 68021)
EPYC 7713: 66602 (SE +/- 187.70, N = 3; Min: 66235 / Avg: 66602.33 / Max: 66853)
1. (CC) gcc options: -lavformat -lavcodec -lswscale -lavutil -lstdc++ -lopus -lssl -lcrypto -luuid -lm -lrt -lpthread -lasound -O2

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
EPYC 7713 2P: 37.96 (SE +/- 0.12, N = 3; Min: 37.75 / Avg: 37.96 / Max: 38.18)
EPYC 7713: 39.40 (SE +/- 0.18, N = 3; Min: 39.16 / Avg: 39.4 / Max: 39.76)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
EPYC 7713 2P: 22.25 (SE +/- 0.08, N = 3; Min: 22.12 / Avg: 22.25 / Max: 22.4)
EPYC 7713: 38.85 (SE +/- 0.03, N = 3; Min: 38.79 / Avg: 38.85 / Max: 38.89)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, More Is Better)
EPYC 7713 2P: 4690.06 (SE +/- 43.17, N = 15; Min: 4402.73 / Avg: 4690.06 / Max: 4929.94)
EPYC 7713: 2648.96 (SE +/- 13.59, N = 4; Min: 2622.48 / Avg: 2648.96 / Max: 2684.2)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
EPYC 7713 2P: 638.1 (SE +/- 4.99, N = 3; Min: 628.08 / Avg: 638.06 / Max: 643.19; MIN: 368.33 / MAX: 1098.6)
EPYC 7713: 585.6 (SE +/- 2.85, N = 3; Min: 580.42 / Avg: 585.6 / Max: 590.27; MIN: 368.12 / MAX: 849.82)

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, More Is Better)
EPYC 7713 2P: 149778.80 (SE +/- 293.66, N = 3; Min: 149275.22 / Avg: 149778.8 / Max: 150292.34)
EPYC 7713: 149666.94 (SE +/- 854.57, N = 3; Min: 148492.91 / Avg: 149666.94 / Max: 151329.66)
1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better)
EPYC 7713 2P: 18.13 (SE +/- 0.01, N = 5; Min: 18.11 / Avg: 18.13 / Max: 18.15)
EPYC 7713: 18.13 (SE +/- 0.01, N = 5; Min: 18.1 / Avg: 18.13 / Max: 18.15)
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
EPYC 7713 2P: 21.48 (SE +/- 0.26, N = 4; Min: 20.78 / Avg: 21.48 / Max: 22.06)
EPYC 7713: 26.69 (SE +/- 0.03, N = 3; Min: 26.63 / Avg: 26.69 / Max: 26.75)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 (Voices, More Is Better)
EPYC 7713 2P: 748.06 (SE +/- 0.12, N = 3; Min: 747.91 / Avg: 748.06 / Max: 748.29)
EPYC 7713: 747.31 (SE +/- 1.04, N = 3; Min: 745.41 / Avg: 747.31 / Max: 748.99)
1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
EPYC 7713 2P: 2808.8 (SE +/- 1.70, N = 3; Min: 2805.9 / Avg: 2808.8 / Max: 2811.8)
EPYC 7713: 2795.3 (SE +/- 28.31, N = 3; Min: 2738.7 / Avg: 2795.3 / Max: 2824.7)
1. (CXX) g++ options: -O3 -march=native -rdynamic

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better)
EPYC 7713 2P: 50749.6 (SE +/- 161.81, N = 3; Min: 50441.4 / Avg: 50749.6 / Max: 50989.2)
EPYC 7713: 28612.4 (SE +/- 67.12, N = 3; Min: 28538.8 / Avg: 28612.37 / Max: 28746.4)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
EPYC 7713 2P: 0.26712 (SE +/- 0.00056, N = 3; Min: 0.27 / Avg: 0.27 / Max: 0.27)
EPYC 7713: 0.45457 (SE +/- 0.00038, N = 3; Min: 0.45 / Avg: 0.45 / Max: 0.46)
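NAMD's days/ns metric is the wall-clock days required to simulate one nanosecond, so lower is better; inverting it gives the more familiar ns/day figure. A small conversion using the values reported above:

    # Convert NAMD's days/ns metric to ns/day for the two results above.
    for label, days_per_ns in [("EPYC 7713 2P", 0.26712), ("EPYC 7713", 0.45457)]:
        print(f"{label}: {1.0 / days_per_ns:.2f} ns/day")
    # EPYC 7713 2P: 3.74 ns/day, EPYC 7713: 2.20 ns/day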

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better)
EPYC 7713 2P: 3.556570 (SE +/- 0.079319, N = 15; Min: 3.16 / Avg: 3.56 / Max: 4.21)
EPYC 7713: 6.133060 (SE +/- 0.094028, N = 15; Min: 5.64 / Avg: 6.13 / Max: 7.14)
1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
EPYC 7713 2P: 726003 (SE +/- 2382.18, N = 3; Min: 722685 / Avg: 726003 / Max: 730623)
EPYC 7713: 733332 (SE +/- 5040.99, N = 3; Min: 723624 / Avg: 733332 / Max: 740542)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 7 (MP/s, More Is Better)
EPYC 7713 2P: 100.77 (SE +/- 0.79, N = 10; Min: 94.67 / Avg: 100.77 / Max: 103.68)
EPYC 7713: 104.58 (SE +/- 0.16, N = 5; Min: 104.24 / Avg: 104.58 / Max: 105.05)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
EPYC 7713 2P: 41982.9 (SE +/- 68.12, N = 3; Min: 41855 / Avg: 41982.9 / Max: 42087.5)
EPYC 7713: 36467.2 (SE +/- 49.92, N = 3; Min: 36367.6 / Avg: 36467.2 / Max: 36523)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
EPYC 7713 2P: 21.45 (SE +/- 0.01, N = 3; Min: 21.44 / Avg: 21.45 / Max: 21.48)
EPYC 7713: 21.33 (SE +/- 0.02, N = 3; Min: 21.31 / Avg: 21.33 / Max: 21.37)
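The scikit_svm benchmark times how long scikit-learn takes to fit an SVM classifier. The snippet below is only an illustrative stand-in under stated assumptions (random synthetic data and default SVC parameters), not the mlpack benchmark script itself:

    import time

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical synthetic dataset standing in for the benchmark's input data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    start = time.perf_counter()
    SVC(kernel="rbf").fit(X, y)  # time a single SVM training run
    print(f"SVC fit time: {time.perf_counter() - start:.2f} s")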

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better)
EPYC 7713 2P: 4568 (SE +/- 35.66, N = 9; Min: 4465 / Avg: 4568.22 / Max: 4826)
EPYC 7713: 3998 (SE +/- 31.86, N = 5; Min: 3906 / Avg: 3998.4 / Max: 4077)

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
EPYC 7713 2P: 17.19 (SE +/- 0.23, N = 3; Min: 16.73 / Avg: 17.19 / Max: 17.44)
EPYC 7713: 30.43 (SE +/- 0.08, N = 3; Min: 30.28 / Avg: 30.43 / Max: 30.56)

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, More Is Better)
EPYC 7713 2P: 36456.23 (SE +/- 104.62, N = 3; Min: 36247.75 / Avg: 36456.23 / Max: 36575.91)
EPYC 7713: 19305.66 (SE +/- 64.60, N = 3; Min: 19177.08 / Avg: 19305.66 / Max: 19380.88)
1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
EPYC 7713 2P: 385 (SE +/- 1.00, N = 3; Min: 384 / Avg: 385 / Max: 387)
EPYC 7713: 387 (SE +/- 1.45, N = 3; Min: 384 / Avg: 386.67 / Max: 389)

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
EPYC 7713 2P: 341.28 (SE +/- 0.48, N = 3; Min: 340.44 / Avg: 341.28 / Max: 342.12; MIN: 339.15 / MAX: 401.2)
EPYC 7713: 332.61 (SE +/- 0.13, N = 3; Min: 332.42 / Avg: 332.61 / Max: 332.85; MIN: 331.37 / MAX: 346.44)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Cython Benchmark

Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark of the system's Cython performance. Learn more via the OpenBenchmarking.org test page.

Cython Benchmark 0.29.21 - Test: N-Queens (Seconds, Fewer Is Better)
EPYC 7713 2P: 23.31 (SE +/- 0.04, N = 3; Min: 23.25 / Avg: 23.31 / Max: 23.38)
EPYC 7713: 23.38 (SE +/- 0.08, N = 3; Min: 23.24 / Avg: 23.38 / Max: 23.5)
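The N-Queens workload counts valid queen placements by backtracking, the kind of loop-heavy integer code that Cython compiles down to C. A plain-Python sketch of that algorithm (illustrative only, not Cython's bundled benchmark) is:

    def count_n_queens(n: int) -> int:
        """Count solutions to the N-Queens problem with bitmask backtracking."""
        def place(row: int, cols: int, diag1: int, diag2: int) -> int:
            if row == n:
                return 1
            count = 0
            free = ((1 << n) - 1) & ~(cols | diag1 | diag2)
            while free:
                bit = free & -free  # lowest available column
                free -= bit
                count += place(row + 1, cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1)
            return count
        return place(0, 0, 0, 0)

    if __name__ == "__main__":
        print(count_n_queens(8))  # 92 solutions for the classic 8x8 board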

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, More Is Better)
EPYC 7713 2P: 116527.63 (SE +/- 212.54, N = 4; Min: 115917.53 / Avg: 116527.63 / Max: 116899.32)
EPYC 7713: 51921.71 (SE +/- 33.24, N = 3; Min: 51857.67 / Avg: 51921.71 / Max: 51969.18)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
EPYC 7713 2P: 953 (SE +/- 2.60, N = 3; Min: 948 / Avg: 952.67 / Max: 957)
EPYC 7713: 959 (SE +/- 4.10, N = 3; Min: 953 / Avg: 959.33 / Max: 967)
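PyBench's approach of averaging many short timed rounds per micro-test can be sketched very loosely as follows (the micro-test body and round count here are placeholders, not PyBench's actual tests):

    import time

    def nested_for_loops() -> None:
        # Stand-in for one PyBench micro-test such as NestedForLoops.
        total = 0
        for i in range(100):
            for j in range(100):
                total += i * j

    def average_ms(test, rounds: int = 20) -> float:
        """Run a micro-test for several rounds and report the average time in ms."""
        times = []
        for _ in range(rounds):
            start = time.perf_counter()
            test()
            times.append(time.perf_counter() - start)
        return sum(times) / len(times) * 1000.0

    if __name__ == "__main__":
        print(f"NestedForLoops-style average: {average_ms(nested_for_loops):.3f} ms")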

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second, More Is Better)
EPYC 7713 2P: 6198 (SE +/- 59.99, N = 4; Min: 6041 / Avg: 6197.75 / Max: 6298)
EPYC 7713: 6367 (SE +/- 32.02, N = 4; Min: 6284 / Avg: 6367 / Max: 6434)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better)
EPYC 7713 2P: 62.2 (SE +/- 0.06, N = 3; Min: 62.1 / Avg: 62.2 / Max: 62.3)
EPYC 7713: 62.4 (SE +/- 0.12, N = 3; Min: 62.2 / Avg: 62.4 / Max: 62.6)

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
EPYC 7713 2P: 127.9 (SE +/- 0.52, N = 3; Min: 126.9 / Avg: 127.93 / Max: 128.6)
EPYC 7713: 128.9 (SE +/- 0.18, N = 3; Min: 128.6 / Avg: 128.93 / Max: 129.2)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
EPYC 7713 2P: 2.26 (SE +/- 0.00, N = 4; Min: 2.25 / Avg: 2.26 / Max: 2.26)
EPYC 7713: 1.40 (SE +/- 0.00, N = 3; Min: 1.4 / Avg: 1.4 / Max: 1.41)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
EPYC 7713 2P: 235808.54 (SE +/- 391.85, N = 4; Min: 234636.28 / Avg: 235808.54 / Max: 236280.08)
EPYC 7713: 127998.61 (SE +/- 152.57, N = 3; Min: 127746.52 / Avg: 127998.61 / Max: 128273.56)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
EPYC 7713 2P: 195.62 (SE +/- 2.01, N = 15; Min: 183.26 / Avg: 195.62 / Max: 207.11)
EPYC 7713: 223.66 (SE +/- 0.71, N = 9; Min: 219.3 / Avg: 223.66 / Max: 226.84)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.ldr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
EPYC 7713 2P: 2.26 (SE +/- 0.00, N = 4; Min: 2.26 / Avg: 2.26 / Max: 2.27)
EPYC 7713: 1.41 (SE +/- 0.00, N = 3; Min: 1.41 / Avg: 1.41 / Max: 1.41)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7713 2P: 20.52 (SE +/- 0.01, N = 3; Min: 20.5 / Avg: 20.52 / Max: 20.55)
EPYC 7713: 20.10 (SE +/- 0.01, N = 3; Min: 20.08 / Avg: 20.1 / Max: 20.12)

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
EPYC 7713 2P: 319.76 (SE +/- 1.13, N = 4; Min: 316.46 / Avg: 319.76 / Max: 321.54)
EPYC 7713: 520.84 (SE +/- 1.11, N = 4; Min: 518.14 / Avg: 520.84 / Max: 523.56)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
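Hogbom CLEAN iteratively finds the brightest pixel in a dirty image and subtracts a scaled, shifted copy of the point-spread function (PSF) from the residual. A heavily simplified NumPy sketch of that loop, assuming a 2D image and a centred PSF array (not the ASKAP implementation), is below:

    import numpy as np

    def hogbom_clean(dirty: np.ndarray, psf: np.ndarray,
                     gain: float = 0.1, niter: int = 100, threshold: float = 1e-3):
        """Very simplified Hogbom CLEAN: repeatedly subtract a scaled PSF at the peak."""
        residual = dirty.copy()
        model = np.zeros_like(dirty)
        pc = np.array(psf.shape) // 2  # PSF centre
        for _ in range(niter):
            peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            flux = residual[peak]
            if abs(flux) < threshold:
                break
            model[peak] += gain * flux
            # Overlap of the shifted PSF with the residual image.
            y0, x0 = peak[0] - pc[0], peak[1] - pc[1]
            ry0, rx0 = max(y0, 0), max(x0, 0)
            ry1 = min(y0 + psf.shape[0], residual.shape[0])
            rx1 = min(x0 + psf.shape[1], residual.shape[1])
            residual[ry0:ry1, rx0:rx1] -= gain * flux * psf[ry0 - y0:ry1 - y0, rx0 - x0:rx1 - x0]
        return model, residual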

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 256 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
EPYC 7713 2P: 5350900000 (SE +/- 1628905.56, N = 3; Min: 5348300000 / Avg: 5350900000 / Max: 5353900000)
EPYC 7713: 2767166667 (SE +/- 11518439.32, N = 3; Min: 2755300000 / Avg: 2767166666.67 / Max: 2790200000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
EPYC 7713 2P: 54.24 (SE +/- 0.17, N = 5; Min: 53.78 / Avg: 54.24 / Max: 54.76)
EPYC 7713: 36.71 (SE +/- 0.03, N = 4; Min: 36.63 / Avg: 36.71 / Max: 36.79)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
EPYC 7713 2P: 5085066667 (SE +/- 3142362.88, N = 3; Min: 5078900000 / Avg: 5085066666.67 / Max: 5089200000)
EPYC 7713: 2706900000 (SE +/- 360555.13, N = 3; Min: 2706200000 / Avg: 2706900000 / Max: 2707400000)

Liquid-DSP 2021.01.31 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
EPYC 7713 2P: 3191533333 (SE +/- 2355372.11, N = 3; Min: 3187200000 / Avg: 3191533333.33 / Max: 3195300000)
EPYC 7713: 2577666667 (SE +/- 5184056.76, N = 3; Min: 2567300000 / Avg: 2577666666.67 / Max: 2583000000)

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
EPYC 7713 2P: 1614266667 (SE +/- 6470531.49, N = 3; Min: 1605900000 / Avg: 1614266666.67 / Max: 1627000000)
EPYC 7713: 1605566667 (SE +/- 1217009.08, N = 3; Min: 1603800000 / Avg: 1605566666.67 / Max: 1607900000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, More Is Better)
EPYC 7713 2P: 142675.55 (SE +/- 1089.48, N = 15; Min: 134030.73 / Avg: 142675.55 / Max: 148784.61)
EPYC 7713: 89950.76 (SE +/- 824.60, N = 15; Min: 83428.97 / Avg: 89950.76 / Max: 94131.16)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better)
EPYC 7713 2P: 149.6 (SE +/- 0.29, N = 3; Min: 149.1 / Avg: 149.57 / Max: 150.1)
EPYC 7713: 149.6 (SE +/- 0.86, N = 3; Min: 148 / Avg: 149.63 / Max: 150.9)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better)
EPYC 7713 2P: 427.4 (SE +/- 0.95, N = 3; Min: 425.6 / Avg: 427.4 / Max: 428.8)
EPYC 7713: 430.5 (SE +/- 1.02, N = 3; Min: 428.5 / Avg: 430.53 / Max: 431.6)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7713 2P: 18.10 (SE +/- 0.08, N = 3; Min: 17.95 / Avg: 18.1 / Max: 18.23)
EPYC 7713: 20.61 (SE +/- 0.01, N = 3; Min: 20.59 / Avg: 20.61 / Max: 20.62)

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
EPYC 7713 2P: 273.85 (SE +/- 0.08, N = 3; Min: 273.76 / Avg: 273.85 / Max: 274.01; MIN: 273.49 / MAX: 274.55)
EPYC 7713: 273.48 (SE +/- 0.03, N = 3; Min: 273.45 / Avg: 273.48 / Max: 273.54; MIN: 273.31 / MAX: 275.21)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7713 2P: 14.27 (SE +/- 0.08, N = 4; Min: 14.16 / Avg: 14.27 / Max: 14.49)
EPYC 7713: 18.15 (SE +/- 0.06, N = 3; Min: 18.08 / Avg: 18.15 / Max: 18.26)

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file, while the pts/jpexl test covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.6.1 - CPU Threads: All (MP/s, More Is Better)
EPYC 7713 2P: 569.97 (SE +/- 3.90, N = 3; Min: 565.86 / Avg: 569.97 / Max: 577.77)
EPYC 7713: 731.88 (SE +/- 0.45, N = 4; Min: 730.63 / Avg: 731.88 / Max: 732.8)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
EPYC 7713 2P: 11.40 (SE +/- 0.07, N = 4; Min: 11.2 / Avg: 11.4 / Max: 11.51)
EPYC 7713: 15.39 (SE +/- 0.01, N = 4; Min: 15.37 / Avg: 15.38 / Max: 15.41)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, More Is Better)
EPYC 7713 2P: 7260.66 (SE +/- 89.77, N = 4; Min: 7047.24 / Avg: 7260.66 / Max: 7462.58)
EPYC 7713: 9990.12 (SE +/- 21.53, N = 5; Min: 9934.79 / Avg: 9990.12 / Max: 10063.2)
1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm
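The RAM/memory sub-test is essentially a memory-throughput microbenchmark. A crude single-threaded NumPy analogue of that idea (not sysbench itself; the block size and round count below are arbitrary assumptions) would be:

    import time

    import numpy as np

    block = np.ones(256 * 1024 * 1024 // 8)  # ~256 MiB of float64
    dst = np.empty_like(block)

    start = time.perf_counter()
    rounds = 20
    for _ in range(rounds):
        np.copyto(dst, block)  # stream-copy the block through memory
    elapsed = time.perf_counter() - start

    # Each round reads and writes the block once.
    mib_moved = rounds * 2 * block.nbytes / (1024 ** 2)
    print(f"{mib_moved / elapsed:.0f} MiB/sec")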

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
EPYC 7713 2P: 259899.49 (SE +/- 2852.04, N = 5; Min: 250489.26 / Avg: 259899.49 / Max: 266188.54)
EPYC 7713: 135917.43 (SE +/- 264.51, N = 4; Min: 135299.69 / Avg: 135917.43 / Max: 136381.45)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
EPYC 7713 2P: 179.55 (SE +/- 1.07, N = 7; Min: 174.72 / Avg: 179.55 / Max: 182.72)
EPYC 7713: 184.71 (SE +/- 7.71, N = 15; Min: 76.91 / Avg: 184.71 / Max: 194.4)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.4 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
EPYC 7713 2P: 55.62 (SE +/- 0.33, N = 5; Min: 54.76 / Avg: 55.62 / Max: 56.36)
EPYC 7713: 65.49 (SE +/- 0.10, N = 5; Min: 65.1 / Avg: 65.49 / Max: 65.63)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
EPYC 7713 2P: 68.80 (SE +/- 0.45, N = 5; Min: 68.1 / Avg: 68.8 / Max: 70.49)
EPYC 7713: 70.78 (SE +/- 0.18, N = 5; Min: 70.29 / Avg: 70.78 / Max: 71.33)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better)
EPYC 7713 2P: 6.4499 (SE +/- 0.0100, N = 4; Min: 6.42 / Avg: 6.45 / Max: 6.47)
EPYC 7713: 3.2659 (SE +/- 0.0030, N = 3; Min: 3.26 / Avg: 3.27 / Max: 3.27)
1. (CXX) g++ options: -O3 -flto -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, More Is Better)
EPYC 7713 2P: 1.42 (SE +/- 0.01, N = 3; Min: 1.4 / Avg: 1.42 / Max: 1.43)
EPYC 7713: 1.40 (SE +/- 0.01, N = 3; Min: 1.39 / Avg: 1.4 / Max: 1.43)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
EPYC 7713 2P: 82.93 (SE +/- 0.07, N = 6; Min: 82.71 / Avg: 82.93 / Max: 83.18; MIN: 77.94 / MAX: 92.77)
EPYC 7713: 47.56 (SE +/- 0.05, N = 4; Min: 47.43 / Avg: 47.56 / Max: 47.66; MIN: 46.72 / MAX: 49.62)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 8 (MP/s, More Is Better)
EPYC 7713 2P: 28.92 (SE +/- 0.04, N = 5; Min: 28.78 / Avg: 28.92 / Max: 29.03)
EPYC 7713: 29.24 (SE +/- 0.17, N = 5; Min: 28.84 / Avg: 29.24 / Max: 29.87)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, Fewer Is Better)
EPYC 7713 2P: 7.590 (SE +/- 0.050, N = 5; Min: 7.42 / Avg: 7.59 / Max: 7.7)
EPYC 7713: 10.680 (SE +/- 0.067, N = 4; Min: 10.6 / Avg: 10.68 / Max: 10.88)
1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -R/usr/lib -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
EPYC 7713 2P: 32.07 (SE +/- 0.29, N = 6; Min: 30.95 / Avg: 32.07 / Max: 32.67)
EPYC 7713: 20.88 (SE +/- 0.08, N = 5; Min: 20.7 / Avg: 20.88 / Max: 21.13)
1. (CC) gcc options: -O3 -march=native -fopenmp
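DGEMM throughput is conventionally computed as 2*N^3 floating-point operations (multiplies plus adds for square matrices) divided by elapsed time. A rough NumPy illustration of that calculation, which rides on the system BLAS rather than the ACES mt-dgemm kernel and uses an arbitrary matrix size, is:

    import time

    import numpy as np

    n = 4096  # hypothetical square matrix size
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b  # DGEMM via the underlying BLAS
    elapsed = time.perf_counter() - start

    flops = 2.0 * n * n * n  # multiply-add count for an n x n x n GEMM
    print(f"{flops / elapsed / 1e9:.1f} GFLOP/s")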

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
EPYC 7713 2P: 8.773 (SE +/- 0.029, N = 5; Min: 8.68 / Avg: 8.77 / Max: 8.83)
EPYC 7713: 10.615 (SE +/- 0.005, N = 5; Min: 10.61 / Avg: 10.62 / Max: 10.63)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
EPYC 7713 2P: 89.60 (SE +/- 0.10, N = 6; Min: 89.43 / Avg: 89.6 / Max: 90.07; MIN: 83.63 / MAX: 104.61)
EPYC 7713: 51.83 (SE +/- 0.16, N = 4; Min: 51.64 / Avg: 51.83 / Max: 52.31; MIN: 50.82 / MAX: 57.89)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better)
EPYC 7713 2P: 58.47 (SE +/- 0.06, N = 5; Min: 58.31 / Avg: 58.47 / Max: 58.62)
EPYC 7713: 30.46 (SE +/- 0.02, N = 4; Min: 30.43 / Avg: 30.46 / Max: 30.51)
1. (CXX) g++ options: -O3 -flto -pthread

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
EPYC 7713 2P: 7.490 (SE +/- 0.006, N = 6; Min: 7.47 / Avg: 7.49 / Max: 7.52)
EPYC 7713: 7.472 (SE +/- 0.005, N = 6; Min: 7.46 / Avg: 7.47 / Max: 7.49)
1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Church Facade (ms, Fewer Is Better)
EPYC 7713 2P: 7525 (SE +/- 3.69, N = 5; Min: 7516 / Avg: 7525 / Max: 7536)
EPYC 7713: 7411 (SE +/- 13.40, N = 5; Min: 7370 / Avg: 7410.6 / Max: 7453)
1. (CXX) g++ options: -O3

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
EPYC 7713 2P: 45587.10 (SE +/- 340.17, N = 11; Min: 44109.44 / Avg: 45587.1 / Max: 47919.25)
EPYC 7713: 24085.73 (SE +/- 119.51, N = 6; Min: 23812.97 / Avg: 24085.73 / Max: 24548.47)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7713 2P: 0.640833 (SE +/- 0.003112, N = 7; Min: 0.63 / Avg: 0.64 / Max: 0.65; MIN: 0.59)
EPYC 7713: 0.934706 (SE +/- 0.000519, N = 7; Min: 0.93 / Avg: 0.93 / Max: 0.94; MIN: 0.87)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds, Fewer Is Better)
EPYC 7713 2P: 5.675895 (SE +/- 0.039464, N = 6; Min: 5.54 / Avg: 5.68 / Max: 5.83)
EPYC 7713: 11.427230 (SE +/- 0.119128, N = 4; Min: 11.17 / Avg: 11.43 / Max: 11.72)
1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

m-queens 1.2 - Time To Solve (Seconds, Fewer Is Better)
EPYC 7713 2P: 6.376 (SE +/- 0.027, N = 6; Min: 6.26 / Avg: 6.38 / Max: 6.43)
EPYC 7713: 12.238 (SE +/- 0.015, N = 4; Min: 12.2 / Avg: 12.24 / Max: 12.27)
1. (CXX) g++ options: -fopenmp -O2 -march=native

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
EPYC 7713 2P: 7.334 (SE +/- 0.018, N = 6; Min: 7.25 / Avg: 7.33 / Max: 7.36)
EPYC 7713: 7.184 (SE +/- 0.022, N = 6; Min: 7.12 / Avg: 7.18 / Max: 7.28)
1. (CXX) g++ options: -O3 -fPIC -lm

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Lion (ms, Fewer Is Better)
EPYC 7713 2P: 5735 (SE +/- 20.00, N = 6; Min: 5700 / Avg: 5734.67 / Max: 5829)
EPYC 7713: 5613 (SE +/- 13.48, N = 6; Min: 5580 / Avg: 5612.67 / Max: 5675)
1. (CXX) g++ options: -O3

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better)
EPYC 7713 2P: 3.47 (SE +/- 0.00, N = 6; Min: 3.45 / Avg: 3.47 / Max: 3.48)
EPYC 7713: 3.48 (SE +/- 0.00, N = 6; Min: 3.46 / Avg: 3.48 / Max: 3.48)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
EPYC 7713 2P: 6.283 (SE +/- 0.026, N = 7; Min: 6.16 / Avg: 6.28 / Max: 6.36)
EPYC 7713: 6.845 (SE +/- 0.014, N = 6; Min: 6.81 / Avg: 6.84 / Max: 6.91)
1. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
EPYC 7713 2P: 123.81 (SE +/- 0.78, N = 6; Min: 121.01 / Avg: 123.81 / Max: 126.38)
EPYC 7713: 131.27 (SE +/- 0.47, N = 7; Min: 128.79 / Avg: 131.27 / Max: 132.77)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
EPYC 7713 2P: 116679.26 (SE +/- 650.27, N = 8; Min: 113550.16 / Avg: 116679.26 / Max: 119103.82)
EPYC 7713: 60659.25 (SE +/- 292.35, N = 6; Min: 59369.75 / Avg: 60659.25 / Max: 61117.58)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 0 (Seconds, Fewer Is Better)
EPYC 7713 2P: 6.379 (SE +/- 0.015, N = 6; Min: 6.33 / Avg: 6.38 / Max: 6.43)
EPYC 7713: 6.221 (SE +/- 0.006, N = 7; Min: 6.2 / Avg: 6.22 / Max: 6.25)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s, More Is Better)
EPYC 7713 2P: 9.70 (SE +/- 0.21, N = 15; Min: 8.72 / Avg: 9.7 / Max: 10.79)
EPYC 7713: 9.00 (SE +/- 0.08, N = 15; Min: 8.55 / Avg: 9 / Max: 9.8)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.4.0 (Seconds, Fewer Is Better)
EPYC 7713 2P: 6.545 (SE +/- 0.021, N = 6; Min: 6.51 / Avg: 6.54 / Max: 6.63)
EPYC 7713: 6.582 (SE +/- 0.020, N = 6; Min: 6.54 / Avg: 6.58 / Max: 6.66)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
EPYC 7713 2P: 4106 (SE +/- 23.41, N = 7; Min: 3988 / Avg: 4106.43 / Max: 4157)
EPYC 7713: 4033 (SE +/- 15.40, N = 7; Min: 3971 / Avg: 4033.43 / Max: 4078)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
EPYC 7713 2P: 144.70 (SE +/- 0.72, N = 6; Min: 142.59 / Avg: 144.7 / Max: 147.38)
EPYC 7713: 137.74 (SE +/- 0.23, N = 7; Min: 136.96 / Avg: 137.74 / Max: 138.57)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
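
A comparable standalone invocation of avifenc for the "Speed 10, Lossless" case is sketched below; the input and output file names are placeholders:

  # AV1 Image Format encode at speed 10 with lossless mode enabled
  avifenc -s 10 -l input.jpg output.avif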

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
  EPYC 7713 2P: 5.558 (SE +/- 0.021, N = 7; Min: 5.48 / Avg: 5.56 / Max: 5.66)
  EPYC 7713: 5.398 (SE +/- 0.011, N = 7; Min: 5.36 / Avg: 5.4 / Max: 5.45)
  1. (CXX) g++ options: -O3 -fPIC -lm

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.
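
As a rough standalone equivalent of the Medium preset run, astcenc can be invoked directly; the binary name (astcenc-avx2 here), the file names, and the 6x6 block footprint are assumptions for illustration:

  # Compress an LDR PNG with 6x6 blocks at the -medium quality preset
  astcenc-avx2 -cl input.png output.astc 6x6 -medium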

ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better)
  EPYC 7713 2P: 380.65 (SE +/- 1.30, N = 7; Min: 376.97 / Avg: 380.65 / Max: 387.16)
  EPYC 7713: 236.60 (SE +/- 0.11, N = 7; Min: 236.01 / Avg: 236.6 / Max: 236.94)
  1. (CXX) g++ options: -O3 -flto -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  EPYC 7713 2P: 65.89 (SE +/- 0.18, N = 8; MIN: 65.21 / MAX: 67.08; Min: 65.32 / Avg: 65.89 / Max: 66.75)
  EPYC 7713: 65.62 (SE +/- 0.23, N = 8; MIN: 64.64 / MAX: 71.42; Min: 64.71 / Avg: 65.62 / Max: 66.92)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.
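
Once the MPI build of NPB is compiled, an individual kernel/class combination is launched through mpirun; the binary path below assumes NPB's usual bin/<test>.<class>.x naming and an illustrative 128-rank run:

  # Run the EP kernel at class C across 128 MPI ranks
  mpirun -np 128 ./bin/ep.C.x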

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, More Is Better)
  EPYC 7713 2P: 8338.27 (SE +/- 61.53, N = 15; Min: 7611.1 / Avg: 8338.27 / Max: 8776.83)
  EPYC 7713: 4406.02 (SE +/- 45.43, N = 15; Min: 4158.52 / Avg: 4406.02 / Max: 4682.36)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, More Is Better)
  EPYC 7713 2P: 6678.57 (SE +/- 36.45, N = 8; Min: 6504.69 / Avg: 6678.57 / Max: 6796.52)
  EPYC 7713: 6702.52 (SE +/- 15.08, N = 9; Min: 6642.47 / Avg: 6702.52 / Max: 6804.1)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better)
  EPYC 7713 2P: 3.957 (SE +/- 0.022, N = 8; Min: 3.86 / Avg: 3.96 / Max: 4.04)
  EPYC 7713: 3.846 (SE +/- 0.024, N = 8; Min: 3.78 / Avg: 3.85 / Max: 3.95)
  1. (CXX) g++ options: -O3 -fPIC -lm

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better)
  EPYC 7713 2P: 100740.89 (SE +/- 291.87, N = 10; Min: 99938.58 / Avg: 100740.89 / Max: 102920.06)
  EPYC 7713: 57329.70 (SE +/- 290.32, N = 9; Min: 56059.56 / Avg: 57329.7 / Max: 58352.19)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, More Is Better)
  EPYC 7713 2P: 6.41 (SE +/- 0.03, N = 8; Min: 6.33 / Avg: 6.41 / Max: 6.51)
  EPYC 7713: 10.85 (SE +/- 0.04, N = 10; Min: 10.65 / Avg: 10.85 / Max: 11.08)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
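
Both configurations below map onto straightforward cwebp invocations; the file names are placeholders:

  # "Quality 100" run
  cwebp -q 100 sample.jpg -o sample-q100.webp
  # "Default" run (cwebp's built-in defaults)
  cwebp sample.jpg -o sample-default.webp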

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, More Is Better)
  EPYC 7713 2P: 10.65 (SE +/- 0.00, N = 10; Min: 10.63 / Avg: 10.65 / Max: 10.67)
  EPYC 7713: 10.69 (SE +/- 0.01, N = 10; Min: 10.61 / Avg: 10.69 / Max: 10.72)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, More Is Better)
  EPYC 7713 2P: 16.75 (SE +/- 0.01, N = 11; Min: 16.71 / Avg: 16.75 / Max: 16.78)
  EPYC 7713: 16.78 (SE +/- 0.01, N = 11; Min: 16.75 / Avg: 16.78 / Max: 16.82)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
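
The timed portion is essentially a configure-and-make cycle of the CPython tree; a minimal sketch of the default configuration, with the optimized/LTO variant the description refers to shown as a comment, is:

  # Default build configuration (what the result below times)
  ./configure && make -j"$(nproc)"
  # Release build with optimizations and LTO:
  #   ./configure --enable-optimizations --with-lto && make -j"$(nproc)"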

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better)
  EPYC 7713 2P: 15.39
  EPYC 7713: 15.71

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
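
As a rough cross-check of context-switch overhead on the same machine, perf's scheduler pipe benchmark is a commonly available alternative (a different tool from ctx_clock, and it reports time per operation rather than clock cycles):

  # Bounce a token between two tasks over a pipe and report per-operation cost
  perf bench sched pipe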

ctx_clock - Context Switch Time (Clocks, Fewer Is Better)
  EPYC 7713 2P: 120 (SE +/- 0.00, N = 13; Min: 120 / Avg: 120 / Max: 120)
  EPYC 7713: 120 (SE +/- 0.00, N = 13; Min: 120 / Avg: 120 / Max: 120)

BLAKE2

This is a benchmark of BLAKE2 using the blake2s binary. BLAKE2 is a high-performance crypto alternative to MD5 and SHA-2/3. Learn more via the OpenBenchmarking.org test page.
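
The benchmark drives the blake2s binary mentioned above; purely as an illustration of BLAKE2s usage on a stock Ubuntu install (not the benchmark itself), OpenSSL can compute the same family of digests, with the file name below as a placeholder:

  # Compute a BLAKE2s-256 digest of a file with OpenSSL
  openssl dgst -blake2s256 somefile.bin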

BLAKE2 20170307 (Cycles Per Byte, Fewer Is Better)
  EPYC 7713 2P: 3.39 (SE +/- 0.00, N = 14; Min: 3.38 / Avg: 3.39 / Max: 3.39)
  EPYC 7713: 3.39 (SE +/- 0.00, N = 14; Min: 3.38 / Avg: 3.39 / Max: 3.39)
  1. (CC) gcc options: -O3 -march=native -lcrypto -lz

CPU Power Consumption Monitor

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
  EPYC 7713: Min: 18.73 / Avg: 175.52 / Max: 233.05

Meta Performance Per Watts

Meta Performance Per Watts (Performance Per Watts, More Is Better)
  EPYC 7713: 2802.51

308 Results Shown

WRF
OpenFOAM:
  drivaerFastback, Large Mesh Size - Execution Time
  drivaerFastback, Large Mesh Size - Mesh Time
SPECjbb 2015:
  SPECjbb2015-Composite critical-jOPS
  SPECjbb2015-Composite max-jOPS
MariaDB:
  4096
  2048
NWChem
Renaissance
RELION
BRL-CAD
Xcompact3d Incompact3d
Rodinia
Stockfish
LeelaChessZero:
  Eigen
  BLAS
Renaissance
Quantum ESPRESSO
oneDNN
PostgreSQL pgbench:
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
asmFish
PostgreSQL pgbench:
  100 - 500 - Read Write - Average Latency
  100 - 500 - Read Write
Renaissance
Graph500:
  26:
    sssp max_TEPS
    sssp median_TEPS
    bfs max_TEPS
    bfs median_TEPS
ONNX Runtime:
  ArcFace ResNet-100 - CPU - Standard
  fcn-resnet101-11 - CPU - Standard
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Standard
JPEG XL libjxl
SecureMark
High Performance Conjugate Gradient
LuaRadio:
  Complex Phase
  Hilbert Transform
  FM Deemphasis Filter
  Five Back to Back FIR Filters
Blender
oneDNN
Mlpack Benchmark
Timed Linux Kernel Compilation
TNN
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
Timed LLVM Compilation
Numpy Benchmark
LuxCoreRender:
  Danish Mood - CPU
  LuxCore Benchmark - CPU
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
OpenSSL
Timed Gem5 Compilation
Appleseed
WebP2 Image Encode
OSPRay Studio
Timed Node.js Compilation
PostgreSQL pgbench:
  100 - 500 - Read Only - Average Latency
  100 - 500 - Read Only
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
OSPRay Studio
Ngspice
Timed LLVM Compilation
VP9 libvpx Encoding
LuxCoreRender
Apache Cassandra
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
OSPRay Studio:
  3 - 4K - 1 - Path Tracer
  2 - 4K - 32 - Path Tracer
ONNX Runtime:
  GPT-2 - CPU - Standard
  bertsquad-12 - CPU - Standard
OSPRay Studio:
  1 - 4K - 32 - Path Tracer
  2 - 4K - 1 - Path Tracer
  2 - 4K - 16 - Path Tracer
  1 - 4K - 1 - Path Tracer
  1 - 4K - 16 - Path Tracer
Renaissance
NAS Parallel Benchmarks
Renaissance:
  In-Memory Database Shootout
  Finagle HTTP Requests
Ngspice
etcd:
  RANGE - 100 - 100 - Average Latency
  RANGE - 100 - 100
oneDNN
JPEG XL libjxl
Appleseed
OpenVINO:
  Face Detection FP16 - CPU:
    ms
    FPS
simdjson:
  DistinctUserID
  TopTweet
Apache HTTP Server
Helsing
simdjson
Apache HTTP Server
nginx:
  500
  1000
Sysbench
Timed CPython Compilation
etcd:
  PUT - 100 - 100 - Average Latency
  PUT - 100 - 100
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
etcd:
  RANGE - 500 - 100 - Average Latency
  RANGE - 500 - 100
  PUT - 500 - 100 - Average Latency
  PUT - 500 - 100
  PUT - 100 - 1000 - Average Latency
  PUT - 100 - 1000
  RANGE - 100 - 1000 - Average Latency
  RANGE - 100 - 1000
ebizzy
Node.js V8 Web Tooling Benchmark
oneDNN
etcd:
  PUT - 500 - 1000 - Average Latency
  PUT - 500 - 1000
  RANGE - 500 - 1000 - Average Latency
  RANGE - 500 - 1000
Blender
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
OpenVINO:
  Person Detection FP32 - CPU:
    ms
    FPS
Dragonflydb:
  50 - 1:5
  50 - 1:1
  50 - 5:1
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
PyPerformance
PJSIP:
  INVITE
  OPTIONS, Stateful
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
LuxCoreRender
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
Facebook RocksDB:
  Update Rand
  Read Rand Write Rand
  Read While Writing
GraphicsMagick
OpenSSL:
  RSA4096:
    verify/s
    sign/s
GraphicsMagick
Facebook RocksDB
GraphicsMagick:
  Rotate
  HWB Color Space
Etcpak
Coremark
Blender
GPAW
CloverLeaf
simdjson
Build2
AOM AV1
Natron
Rodinia
SVT-HEVC
Timed Wasmer Compilation
simdjson
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
miniFE
LuxCoreRender
Mlpack Benchmark
ASKAP:
  tConvolve MPI - Gridding
  tConvolve MPI - Degridding
WebP2 Image Encode
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
Timed Linux Kernel Compilation
7-Zip Compression:
  Decompression Rating
  Compression Rating
Kripke
JPEG XL Decoding libjxl
WebP Image Encode
Timed Godot Game Engine Compilation
Primesieve
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
libavif avifenc
PyPerformance
GROMACS
Timed PHP Compilation
Redis
Algebraic Multi-Grid Benchmark
Appleseed
Kvazaar
Rodinia
srsRAN:
  OFDM_Test
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
Embree:
  Pathtracer - Asian Dragon
  Pathtracer ISPC - Asian Dragon
oneDNN
PyPerformance
PJSIP
LibRaw
Blender
NAS Parallel Benchmarks
Renaissance
Aircrack-ng
FLAC Audio Encoding
x265
Google SynthMark
QuantLib
Xmrig
NAMD
Pennant
PHPBench
JPEG XL libjxl
Xmrig
Mlpack Benchmark
DaCapo Benchmark
Blender
LULESH
PyPerformance
TNN
Cython Benchmark
NAS Parallel Benchmarks
PyBench
Node.js Express HTTP Load Test
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
Intel Open Image Denoise
NAS Parallel Benchmarks
SVT-HEVC
Intel Open Image Denoise
Timed Apache Compilation
ASKAP
Liquid-DSP
Kvazaar
Liquid-DSP:
  128 - 256 - 57
  64 - 256 - 57
  32 - 256 - 57
NAS Parallel Benchmarks
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
Timed Mesa Compilation
TNN
Timed FFmpeg Compilation
JPEG XL Decoding libjxl
Basis Universal
Sysbench
NAS Parallel Benchmarks
SVT-AV1
AOM AV1
SVT-AV1
ASTC Encoder
WebP Image Encode
Embree
JPEG XL libjxl
POV-Ray
ACES DGEMM
Basis Universal
Embree
ASTC Encoder
LAME MP3 Encoding
Google Draco
NAS Parallel Benchmarks
oneDNN
Pennant
m-queens
libavif avifenc
Google Draco
WebP Image Encode
Rodinia
SVT-AV1
NAS Parallel Benchmarks
Basis Universal
WebP2 Image Encode
GNU Octave Benchmark
DaCapo Benchmark
SVT-HEVC
libavif avifenc
ASTC Encoder
TNN
NAS Parallel Benchmarks
Etcpak
libavif avifenc
NAS Parallel Benchmarks
WebP2 Image Encode
WebP Image Encode:
  Quality 100
  Default
Timed CPython Compilation
ctx_clock
BLAKE2
CPU Power Consumption Monitor:
  Phoronix Test Suite System Monitoring
  Performance Per Watts