AMD EPYC Zen

Benchmarks for a future article. AMD EPYC 8534PN 64-Core testing with an AMD Cinnabar (RCB1009C BIOS) motherboard and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401092-NE-AMDEPYCZE04
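A minimal sketch of scripting that comparison, assuming the Phoronix Test Suite is installed and on the PATH (the result ID is the one quoted above):

```shell
# Minimal sketch of reproducing this comparison locally.
# Assumes phoronix-test-suite is installed and on PATH.
RESULT_ID="2401092-NE-AMDEPYCZE04"  # public result file referenced above

# This fetches the result file, runs the same test selection, and merges
# your own system into the comparison (interactive prompts will follow):
#   phoronix-test-suite benchmark "$RESULT_ID"
echo "phoronix-test-suite benchmark $RESULT_ID"
```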
Test Runs:
  Zen 1 - EPYC 7601: January 06; Test Duration: 1 Day, 1 Hour, 51 Minutes
  Zen 4C - EPYC 8534PN: January 08; Test Duration: 16 Hours, 31 Minutes


System Details

Zen 1 - EPYC 7601:
  Processor: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads)
  Motherboard: TYAN B8026T70AE24HR (V1.02.B10 BIOS)
  Chipset: AMD 17h
  Memory: 128GB
  Disk: 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8
  Graphics: llvmpipe
  Monitor: VE228
  Network: 2 x Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 23.10
  Kernel: 6.6.9-060609-generic (x86_64)
  Desktop: GNOME Shell 45.0
  Display Server: X Server 1.21.1.7
  OpenGL: 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

Zen 4C - EPYC 8534PN (components that differ):
  Processor: AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads)
  Motherboard: AMD Cinnabar (RCB1009C BIOS)
  Chipset: AMD Device 14a4
  Memory: 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG
  Disk: 1000GB INTEL SSDPE2KX010T8
  Graphics: ASPEED
  Screen Resolution: 1920x1200

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
  Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0x800126e
  Zen 4C - EPYC 8534PN: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xaa00212

Java Details: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)

Python Details: Python 3.11.6

Security Details:
  Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
  Zen 4C - EPYC 8534PN: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

[Chart: Zen 1 - EPYC 7601 vs. Zen 4C - EPYC 8534PN comparison - a ranked bar chart of per-test percentage gains for the Zen 4C system, spanning roughly +7.7% up to well over +1000%, with OpenVINO, miniBUDE, OpenSSL, Neural Magic DeepSparse, and OSPRay workloads among the largest wins. Individual results follow below.]
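The percentage deltas in the comparison chart above can be reproduced from the raw results. A minimal sketch of the convention as inferred from the numbers on this page (not taken from PTS source code): each gain is expressed relative to the Zen 1 baseline, with the ratio inverted for lower-is-better metrics.

```python
# Sketch of how the chart's "+X%" deltas appear to be derived
# (convention inferred from this page, not from PTS code).
def percent_gain(baseline: float, result: float, lower_is_better: bool) -> float:
    """Percentage improvement of `result` over `baseline`."""
    ratio = baseline / result if lower_is_better else result / baseline
    return (ratio - 1.0) * 100.0

# CloverLeaf clover_bm16 (seconds, fewer is better): 971.50 -> 497.09
cloverleaf_gain = percent_gain(971.50, 497.09, lower_is_better=True)   # ~95.4%
# Xmrig GhostRider (H/s, more is better): 1070.1 -> 4376.0
xmrig_gain = percent_gain(1070.1, 4376.0, lower_is_better=False)       # ~308.9%
```

Both values match entries in the chart residue (95.4% and 308.9%), which is why this convention seems to fit.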

[Table: combined side-by-side result values for all tests (CloverLeaf, Xcompact3d Incompact3d, Apache IoTDB, Xmrig, Timed Linux Kernel Compilation, Blender, and the remaining workloads) for Zen 1 - EPYC 7601 and Zen 4C - EPYC 8534PN. Individual results are presented per-test below.]

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version. Learn more via the OpenBenchmarking.org test page.

CloverLeaf 1.3, Input: clover_bm16 (Seconds, Fewer Is Better)
Zen 1 - EPYC 7601: 971.50 (SE +/- 1.74, N = 3; Min: 968.67 / Max: 974.67)
Zen 4C - EPYC 8534PN: 497.09 (SE +/- 0.79, N = 3; Min: 495.9 / Max: 498.59)
(F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
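The SE figures PTS reports can be sanity-checked against the Min/Avg/Max spread. A sketch for the Zen 1 CloverLeaf numbers above: with N = 3, the reported average fixes the middle sample, and the sample standard deviation divided by sqrt(N) should reproduce the reported SE of +/- 1.74.

```python
import math
import statistics

# Cross-check of the reported SE for the Zen 1 CloverLeaf result:
# N = 3 runs with Min 968.67, Avg 971.5, Max 974.67, SE +/- 1.74.
n, avg, lo, hi = 3, 971.5, 968.67, 974.67
middle = n * avg - lo - hi                       # third sample implied by the average
se = statistics.stdev([lo, middle, hi]) / math.sqrt(n)
# middle ~971.16, se ~1.74, matching the reported value
```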

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: X3D-benchmarking input.i3d (Seconds, Fewer Is Better)
Zen 1 - EPYC 7601: 818.05 (SE +/- 0.24, N = 3; Min: 817.65 / Max: 818.48)
Zen 4C - EPYC 8534PN: 562.36 (SE +/- 1.66, N = 3; Min: 559.1 / Max: 564.53)
(F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Apache IoTDB

Apache IoTDB is a time-series database; this benchmark is driven by the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 (Average Latency, Fewer Is Better)
Zen 1 - EPYC 7601: 129.77 (SE +/- 0.61, N = 3; Min: 128.56 / Max: 130.46; MAX latency: 23980.17)
Zen 4C - EPYC 8534PN: 101.60 (SE +/- 1.78, N = 12; Min: 90.99 / Max: 114.7; MAX latency: 30258.71)

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 (point/sec, More Is Better)
Zen 1 - EPYC 7601: 58403756 (SE +/- 295389.20, N = 3; Min: 58072839.37 / Max: 58993048.02)
Zen 4C - EPYC 8534PN: 73031288 (SE +/- 1143156.28, N = 12; Min: 65285668 / Max: 80046991.52)

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21, Variant: GhostRider - Hash Count: 1M (H/s, More Is Better)
Zen 1 - EPYC 7601: 1070.1 (SE +/- 6.67, N = 3; Min: 1059.2 / Max: 1082.2)
Zen 4C - EPYC 8534PN: 4376.0 (SE +/- 5.88, N = 3; Min: 4367.9 / Max: 4387.4)
(CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
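PTS optionally charts performance per core; a minimal sketch of that normalization for the GhostRider hashrates above, using the physical core counts from the system table (32 for the EPYC 7601, 64 for the EPYC 8534PN):

```python
# Per-core normalization of the GhostRider hashrates above.
# Core counts come from the system table: EPYC 7601 = 32 cores,
# EPYC 8534PN = 64 cores (physical cores, not SMT threads).
def per_core(hashrate: float, cores: int) -> float:
    return hashrate / cores

zen1_per_core = per_core(1070.1, 32)    # ~33.4 H/s per core
zen4c_per_core = per_core(4376.0, 64)   # ~68.4 H/s per core
```

Even per physical core, the Zen 4C part roughly doubles the Zen 1 part on this workload.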

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in its default configuration (defconfig) for the architecture under test, or alternatively an allmodconfig build that compiles all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: allmodconfig (Seconds, Fewer Is Better)
Zen 1 - EPYC 7601: 755.02 (SE +/- 0.57, N = 3; Min: 754.31 / Max: 756.14)
Zen 4C - EPYC 8534PN: 307.65 (SE +/- 0.63, N = 3; Min: 306.76 / Max: 308.86)

Blender

Blender is an open-source 3D creation and modeling software project. This test measures Blender's Cycles rendering performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0, Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
Zen 1 - EPYC 7601: 767.15 (SE +/- 0.52, N = 3; Min: 766.13 / Max: 767.82)
Zen 4C - EPYC 8534PN: 240.35 (SE +/- 0.15, N = 3; Min: 240.05 / Max: 240.51)

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, Fewer Is Better)
Zen 1 - EPYC 7601: 201.18 (SE +/- 4.88, N = 12; Min: 192.79 / Max: 248.39)
Zen 4C - EPYC 8534PN: 126.97 (SE +/- 0.47, N = 3; Min: 126.12 / Max: 127.75)
(CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline
Zen 4C - EPYC 8534PN additionally: -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: 20k Atoms (ns/day, More Is Better)
Zen 1 - EPYC 7601: 13.87 (SE +/- 0.05, N = 3; Min: 13.79 / Max: 13.97)
Zen 4C - EPYC 8534PN: 31.01 (SE +/- 0.03, N = 3; Min: 30.96 / Max: 31.04)
(CXX) g++ options: -O3 -lm -ldl

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15, Model: Chrysler Neon 1M (Seconds, Fewer Is Better)
Zen 1 - EPYC 7601: 499.30 (SE +/- 0.62, N = 3; Min: 498.26 / Max: 500.41)
Zen 4C - EPYC 8534PN: 297.72 (SE +/- 0.36, N = 3; Min: 297.09 / Max: 298.32)

Apache IoTDB


Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (Average Latency, Fewer Is Better)
Zen 1 - EPYC 7601: 318.24 (SE +/- 4.76, N = 5; Min: 302.63 / Max: 326.98; MAX latency: 28518.21)
Zen 4C - EPYC 8534PN: 195.49 (SE +/- 4.45, N = 12; Min: 177.11 / Max: 221.49; MAX latency: 28108.09)

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (point/sec, More Is Better)
Zen 1 - EPYC 7601: 57037441 (SE +/- 596691.26, N = 5; Min: 55717682.39 / Max: 59158869.06)
Zen 4C - EPYC 8534PN: 91869780 (SE +/- 1905882.84, N = 12; Min: 81428267.66 / Max: 101190938.54)

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    352.69  (SE +/- 6.02, N = 6; Min: 333.03 / Max: 371.19)
  Zen 4C - EPYC 8534PN: 109.85  (SE +/- 0.43, N = 3; Min: 109.01 / Max: 110.47)
(CXX) g++ options: -O3 -fopenmp
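The hot loop easyWave parallelizes with OpenMP is an explicit stencil update over the simulation grid. As a rough, hypothetical sketch of that kind of update, here is a toy 1-D version; the real solver uses a 2-D shallow-water scheme over actual bathymetry data, and the grid size and coupling coefficient here are invented:

```python
# Toy 1-D explicit finite-difference update, loosely in the spirit of the
# per-time-step stencil work a tsunami propagation code performs.
def step(h, c=0.25):
    """One explicit update of wave heights h with coupling coefficient c."""
    new = h[:]
    for i in range(1, len(h) - 1):       # boundaries held fixed
        new[i] = h[i] + c * (h[i - 1] - 2 * h[i] + h[i + 1])
    return new

grid = [0.0] * 11
grid[5] = 1.0          # point disturbance standing in for the tsunami source
for _ in range(10):    # ten explicit time steps
    grid = step(grid)
print(grid)
```

In the real code each time step's inner loop over grid points is what OpenMP distributes across cores, which is why the 64-core EPYC 8534PN finishes the run roughly three times faster here.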

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    529.72  (SE +/- 1.95, N = 3; Min: 526.3 / Max: 533.05)
  Zen 4C - EPYC 8534PN: 261.16  (SE +/- 0.55, N = 3; Min: 260.21 / Max: 262.12)

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.

Quicksilver 20230818 - Input: CTS2
Figure Of Merit, More Is Better
  Zen 1 - EPYC 7601:    11426667  (SE +/- 16666.67, N = 3; Min: 11410000 / Max: 11460000)
  Zen 4C - EPYC 8534PN: 16316667  (SE +/- 6666.67, N = 3; Min: 16310000 / Max: 16330000)
(CXX) g++ options: -fopenmp -O3 -march=native
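To give a flavor of the simplified Monte Carlo particle transport Quicksilver models, here is a toy 1-D slab transport loop. The cross-section, slab width, and absorption probability are invented for the example and are not Quicksilver inputs; the real code tracks particles through a 3-D mesh with facet crossings and reaction sampling:

```python
import random

# Toy Monte Carlo transport: sample exponential free-flight distances and
# tally how many particles escape a 1-D slab versus get absorbed.
random.seed(42)

sigma_t = 1.0      # total macroscopic cross-section (1/cm), made up
slab_width = 2.0   # cm, made up
absorb_prob = 0.5  # probability a collision absorbs the particle, made up

def transport_one():
    x = 0.0
    while True:
        x += random.expovariate(sigma_t)   # sample free-flight distance
        if x >= slab_width:
            return "escape"
        if random.random() < absorb_prob:
            return "absorb"
        # otherwise scatter (forward-only in this toy model) and keep flying

n = 10_000
escaped = sum(transport_one() == "escape" for _ in range(n))
print(f"{escaped}/{n} particles escaped")
```

Each particle history is independent, which is what makes this workload scale well across the OpenMP threads the test profile uses.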

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    1179.25
  Zen 4C - EPYC 8534PN: 679.79

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    231.74
  Zen 4C - EPYC 8534PN: 159.57

(CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: KawPow - Hash Count: 1M
H/s, More Is Better
  Zen 1 - EPYC 7601:    6937.9   (SE +/- 60.77, N = 12; Min: 6560.3 / Max: 7182.9)
  Zen 4C - EPYC 8534PN: 20720.8  (SE +/- 3.75, N = 3; Min: 20713.4 / Max: 20725.4)

Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M
H/s, More Is Better
  Zen 1 - EPYC 7601:    6957.2   (SE +/- 75.94, N = 12; Min: 6242.2 / Max: 7183)
  Zen 4C - EPYC 8534PN: 20699.5  (SE +/- 7.38, N = 3; Min: 20685.1 / Max: 20709.5)

(CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400
Average Latency, Fewer Is Better
  Zen 1 - EPYC 7601:    488.64  (SE +/- 3.72, N = 3; Min: 481.47 / Max: 493.92; MAX: 28615.16)
  Zen 4C - EPYC 8534PN: 394.51  (SE +/- 8.85, N = 3; Min: 378.36 / Max: 408.86; MAX: 32858.26)

point/sec, More Is Better
  Zen 1 - EPYC 7601:    60237625  (SE +/- 138623.83, N = 3; Min: 59969856.05 / Max: 60433755.91)
  Zen 4C - EPYC 8534PN: 70984166  (SE +/- 878468.51, N = 3; Min: 69557910.93 / Max: 72585813.67)

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Ninja
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    435.05  (SE +/- 1.49, N = 3; Min: 433.45 / Max: 438.02)
  Zen 4C - EPYC 8534PN: 173.87  (SE +/- 0.09, N = 3; Min: 173.76 / Max: 174.04)
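Since lower is better for compile times, the generational gap reduces to a simple ratio per build system; the figures below are copied straight from the reported LLVM averages:

```python
# Average LLVM 16.0 compile times in seconds, as reported above.
zen1 = {"Unix Makefiles": 529.72, "Ninja": 435.05}
zen4c = {"Unix Makefiles": 261.16, "Ninja": 173.87}

for build in zen1:
    speedup = zen1[build] / zen4c[build]  # time ratio = speedup factor
    print(f"{build}: {speedup:.2f}x faster on the EPYC 8534PN")
```

Ninja shows the larger gap (about 2.5x versus about 2.0x), consistent with Ninja keeping more of the 64 cores busy than recursive Makefiles can.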

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research and is widely used within industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 23.0.1 - Time To Compile
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    388.02  (SE +/- 4.50, N = 3; Min: 381.31 / Max: 396.57)
  Zen 4C - EPYC 8534PN: 217.13  (SE +/- 2.38, N = 3; Min: 213.43 / Max: 221.57)

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.

Quicksilver 20230818 - Input: CORAL2 P2
Figure Of Merit, More Is Better
  Zen 1 - EPYC 7601:    15013333  (SE +/- 37118.43, N = 3; Min: 14940000 / Max: 15060000)
  Zen 4C - EPYC 8534PN: 16176667  (SE +/- 14529.66, N = 3; Min: 16150000 / Max: 16200000)
(CXX) g++ options: -fopenmp -O3 -march=native

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.1 - Encoder: libx265 - Scenario: Platform
FPS, More Is Better
  Zen 1 - EPYC 7601:    21.03  (SE +/- 0.02, N = 3; Min: 21 / Max: 21.07)
  Zen 4C - EPYC 8534PN: 46.96  (SE +/- 0.04, N = 3; Min: 46.92 / Max: 47.04)

FFmpeg 6.1 - Encoder: libx265 - Scenario: Video On Demand
FPS, More Is Better
  Zen 1 - EPYC 7601:    21.04  (SE +/- 0.00, N = 3; Min: 21.04 / Max: 21.05)
  Zen 4C - EPYC 8534PN: 46.98  (SE +/- 0.01, N = 3; Min: 46.95 / Max: 46.99)

(CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

DuckDB

DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.

DuckDB 0.9.1 - Benchmark: TPC-H Parquet
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    245.74  (SE +/- 0.21, N = 3; Min: 245.43 / Max: 246.13)
  Zen 4C - EPYC 8534PN: 178.44  (SE +/- 0.17, N = 3; Min: 178.1 / Max: 178.66)
(CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 19.8.1 - Time To Compile
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    387.65  (SE +/- 0.64, N = 3; Min: 386.55 / Max: 388.78)
  Zen 4C - EPYC 8534PN: 160.73  (SE +/- 0.12, N = 3; Min: 160.58 / Max: 160.96)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.1 - Encoder: libx265 - Scenario: Upload
FPS, More Is Better
  Zen 1 - EPYC 7601:    10.39  (SE +/- 0.00, N = 3; Min: 10.38 / Max: 10.39)
  Zen 4C - EPYC 8534PN: 23.18  (SE +/- 0.02, N = 3; Min: 23.16 / Max: 23.21)
(CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

DuckDB

DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.

DuckDB 0.9.1 - Benchmark: IMDB
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    199.01  (SE +/- 0.56, N = 3; Min: 197.89 / Max: 199.68)
  Zen 4C - EPYC 8534PN: 125.10  (SE +/- 0.24, N = 3; Min: 124.86 / Max: 125.58)
(CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.13 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
ms, Fewer Is Better
  Zen 1 - EPYC 7601:    331643  (SE +/- 188.22, N = 3; Min: 331269 / Max: 331867)
  Zen 4C - EPYC 8534PN: 76785   (SE +/- 336.36, N = 3; Min: 76122 / Max: 77214)

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2
Billion Interactions/s, More Is Better
  Zen 1 - EPYC 7601:    14.22   (SE +/- 0.01, N = 3; Min: 14.2 / Max: 14.23)
  Zen 4C - EPYC 8534PN: 117.57  (SE +/- 0.10, N = 3; Min: 117.44 / Max: 117.77)

GFInst/s, More Is Better
  Zen 1 - EPYC 7601:    355.50   (SE +/- 0.27, N = 3; Min: 354.98 / Max: 355.86)
  Zen 4C - EPYC 8534PN: 2939.36  (SE +/- 2.46, N = 3; Min: 2935.87 / Max: 2944.12)

(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100
Average Latency, Fewer Is Better
  Zen 1 - EPYC 7601:    82.07  (SE +/- 0.21, N = 3; Min: 81.82 / Max: 82.49; MAX: 23993.13)
  Zen 4C - EPYC 8534PN: 47.19  (SE +/- 0.08, N = 3; Min: 47.1 / Max: 47.34; MAX: 23867.11)

point/sec, More Is Better
  Zen 1 - EPYC 7601:    56993811  (SE +/- 197187.95, N = 3; Min: 56603441.78 / Max: 57237555.15)
  Zen 4C - EPYC 8534PN: 99531709  (SE +/- 349666.50, N = 3; Min: 98835726.11 / Max: 99938911.32)

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400
Average Latency, Fewer Is Better
  Zen 1 - EPYC 7601:    421.30  (SE +/- 5.76, N = 3; Min: 409.94 / Max: 428.6; MAX: 27834.33)
  Zen 4C - EPYC 8534PN: 254.31  (SE +/- 2.13, N = 3; Min: 250.63 / Max: 258; MAX: 27111.61)

point/sec, More Is Better
  Zen 1 - EPYC 7601:    59444334   (SE +/- 470631.65, N = 3; Min: 58928279.39 / Max: 60384085.64)
  Zen 4C - EPYC 8534PN: 103397638  (SE +/- 417895.46, N = 3; Min: 102610751.02 / Max: 104035041.11)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.13 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
ms, Fewer Is Better
  Zen 1 - EPYC 7601:    280606  (SE +/- 353.76, N = 3; Min: 280238 / Max: 281313)
  Zen 4C - EPYC 8534PN: 65725   (SE +/- 237.26, N = 3; Min: 65256 / Max: 66022)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100
Average Latency, Fewer Is Better
  Zen 1 - EPYC 7601:    127.26  (SE +/- 0.22, N = 3; Min: 126.93 / Max: 127.68; MAX: 10220.3)
  Zen 4C - EPYC 8534PN: 71.67   (SE +/- 0.06, N = 3; Min: 71.56 / Max: 71.73; MAX: 10076.95)

point/sec, More Is Better
  Zen 1 - EPYC 7601:    58774090   (SE +/- 85534.45, N = 3; Min: 58686953.65 / Max: 58945149.04)
  Zen 4C - EPYC 8534PN: 104173220  (SE +/- 102156.43, N = 3; Min: 104043034.34 / Max: 104374682.28)

Speedb

Speedb is a next-generation key value storage engine that is RocksDB compatible and aiming for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Read While Writing
Op/s, More Is Better
  Zen 1 - EPYC 7601:    6145679   (SE +/- 67628.42, N = 3; Min: 6010427 / Max: 6214262)
  Zen 4C - EPYC 8534PN: 12947507  (SE +/- 104743.64, N = 15; Min: 12291122 / Max: 13822319)
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Read Random Write Random
Op/s, More Is Better
  Zen 1 - EPYC 7601:    1509184  (SE +/- 4649.81, N = 3; Min: 1499893 / Max: 1514176)
  Zen 4C - EPYC 8534PN: 2654286  (SE +/- 22479.06, N = 15; Min: 2487636 / Max: 2746140)
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: AES-256-GCM
byte/s, More Is Better
  Zen 1 - EPYC 7601:    89948428697   (SE +/- 95870018.66, N = 3; Min: 89772640940 / Max: 90102633810)
  Zen 4C - EPYC 8534PN: 460814853190  (SE +/- 158535708.73, N = 3; Min: 460625600240 / Max: 461129793540)

OpenSSL 3.1 - Algorithm: AES-128-GCM
byte/s, More Is Better
  Zen 1 - EPYC 7601:    97645009487   (SE +/- 101833027.65, N = 3; Min: 97442024930 / Max: 97760918730)
  Zen 4C - EPYC 8534PN: 528178897353  (SE +/- 261674063.26, N = 3; Min: 527775885180 / Max: 528669550180)

OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305
byte/s, More Is Better
  Zen 1 - EPYC 7601:    30315490690   (SE +/- 12075597.72, N = 3; Min: 30295781100 / Max: 30337433050)
  Zen 4C - EPYC 8534PN: 208877254247  (SE +/- 493572176.62, N = 3; Min: 207890245220 / Max: 209384913850)

OpenSSL 3.1 - Algorithm: ChaCha20
byte/s, More Is Better
  Zen 1 - EPYC 7601:    47920025967   (SE +/- 23640894.78, N = 3; Min: 47877229230 / Max: 47958831380)
  Zen 4C - EPYC 8534PN: 297915350493  (SE +/- 69144081.68, N = 3; Min: 297777076910 / Max: 297986226310)

OpenSSL 3.1 - Algorithm: SHA512
byte/s, More Is Better
  Zen 1 - EPYC 7601:    8316184170   (SE +/- 6971539.66, N = 3; Min: 8303805050 / Max: 8327930270)
  Zen 4C - EPYC 8534PN: 26271960547  (SE +/- 5378103.90, N = 3; Min: 26261392610 / Max: 26278979750)

OpenSSL 3.1 - Algorithm: SHA256
byte/s, More Is Better
  Zen 1 - EPYC 7601:    27056830400  (SE +/- 48383276.04, N = 3; Min: 26963781120 / Max: 27126359340)
  Zen 4C - EPYC 8534PN: 79857401783  (SE +/- 59376757.34, N = 3; Min: 79771218960 / Max: 79971247620)

(CC) gcc options: -pthread -m64 -O3 -ldl
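The "openssl speed" harness behind these numbers simply hashes or encrypts a buffer in a loop and divides bytes processed by elapsed time. A single-threaded Python sketch of the same idea for SHA-256; the buffer size and iteration count are arbitrary choices for illustration:

```python
import hashlib
import time

# Hash a fixed buffer repeatedly and report bytes processed per second,
# roughly what "openssl speed sha256" measures (single thread here, whereas
# the results above use the multi-threaded speed mode).
buf = b"\x00" * (1 << 20)   # 1 MiB buffer
iterations = 64

start = time.perf_counter()
for _ in range(iterations):
    hashlib.sha256(buf).digest()
elapsed = time.perf_counter() - start

throughput = iterations * len(buf) / elapsed
print(f"SHA256: {throughput / 1e6:.1f} MB/s (single thread)")
```

Python's hashlib calls into OpenSSL on most builds, so this exercises the same underlying primitives, just without the parallelism or cipher coverage of the real benchmark.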

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400
Average Latency, Fewer Is Better
  Zen 1 - EPYC 7601:    166.00  (SE +/- 1.86, N = 8; Min: 159.47 / Max: 177.42; MAX: 27445.2)
  Zen 4C - EPYC 8534PN: 133.01  (SE +/- 1.34, N = 3; Min: 130.83 / Max: 135.44; MAX: 27113.93)

point/sec, More Is Better
  Zen 1 - EPYC 7601:    38176456  (SE +/- 325879.11, N = 8; Min: 36382512.95 / Max: 39343597.25)
  Zen 4C - EPYC 8534PN: 50396884  (SE +/- 524236.55, N = 3; Min: 49445484.42 / Max: 51254160.72)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0 - Blend File: Pabellon Barcelona - Compute: CPU-Only
Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:    247.24  (SE +/- 0.13, N = 3; Min: 246.99 / Max: 247.42)
  Zen 4C - EPYC 8534PN: 86.57   (SE +/- 0.13, N = 3; Min: 86.35 / Max: 86.81)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/pathtracer/real_time
Items Per Second, More Is Better
  Zen 1 - EPYC 7601:    98.44   (SE +/- 0.08, N = 3; Min: 98.27 / Max: 98.53)
  Zen 4C - EPYC 8534PN: 166.15  (SE +/- 0.13, N = 3; Min: 165.95 / Max: 166.39)

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate
GFLOP/s, More Is Better
  Zen 1 - EPYC 7601:    3.790799   (SE +/- 0.057778, N = 15; Min: 3.37 / Max: 4.11)
  Zen 4C - EPYC 8534PN: 22.953315  (SE +/- 0.145648, N = 5; Min: 22.46 / Max: 23.36)
(CC) gcc options: -O3 -march=native -fopenmp
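A GFLOP/s figure for DGEMM comes from the standard operation count for a dense matrix multiply: an m x n x k DGEMM performs 2*m*n*k floating-point operations (one multiply and one add per inner-product term). A sketch of the conversion; the matrix size and timing below are hypothetical, not figures from this result file:

```python
# Convert a dense matrix-multiply wall time into GFLOP/s using the
# standard 2*m*n*k operation count.
def dgemm_gflops(m, n, k, seconds):
    flops = 2 * m * n * k
    return flops / seconds / 1e9

# e.g. a hypothetical 4096^3 multiply finishing in 6.0 s:
print(f"{dgemm_gflops(4096, 4096, 4096, 6.0):.2f} GFLOP/s")
```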

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/scivis/real_time
Items Per Second, More Is Better
  Zen 1 - EPYC 7601:    5.22116   (SE +/- 0.00822, N = 3; Min: 5.21 / Max: 5.24)
  Zen 4C - EPYC 8534PN: 17.04840  (SE +/- 0.05559, N = 3; Min: 16.94 / Max: 17.11)

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5
Ops/sec, More Is Better
  Zen 1 - EPYC 7601:    1142464.16  (SE +/- 8571.80, N = 11; Min: 1088495.84 / Max: 1190818.44)
  Zen 4C - EPYC 8534PN: 2901182.23  (SE +/- 25230.96, N = 3; Min: 2873200.15 / Max: 2951540.23)
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
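The "Set To Get Ratio: 1:5" option means the workload issues five GET commands for every SET. A toy illustration of that command mix, with a plain dict standing in for the Redis server (the real benchmark drives 100 network clients against a running Redis instance):

```python
import random

# Simulate a 1:5 set:get command mix against a dict stand-in for Redis.
random.seed(7)
store = {}
ops = {"set": 0, "get": 0}

for i in range(6000):
    if i % 6 == 0:                        # 1 SET per 5 GETs
        store[f"key:{i}"] = "value"
        ops["set"] += 1
    else:                                 # GET a previously seen key index
        store.get(f"key:{random.randrange(i + 1)}")
        ops["get"] += 1

print(ops)
```

A read-heavy mix like this mostly stresses request parsing and lookup throughput rather than write/persistence paths, which suits the raw ops/sec comparison above.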

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Read While Writing
Op/s, More Is Better
  Zen 1 - EPYC 7601:    3371876  (SE +/- 34108.45, N = 3; Min: 3311458 / Max: 3429515)
  Zen 4C - EPYC 8534PN: 6918267  (SE +/- 125140.33, N = 12; Min: 6483923 / Max: 8225989)
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3 - Test: Writes
Op/s, More Is Better
  Zen 1 - EPYC 7601:    152887  (SE +/- 702.01, N = 3; Min: 151723 / Max: 154149)
  Zen 4C - EPYC 8534PN: 251434  (SE +/- 417.51, N = 3; Min: 251011 / Max: 252269)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (Average Latency, fewer is better)
Zen 1 - EPYC 7601: 294.75 (SE +/- 3.57, N = 3; Min: 288.89 / Avg: 294.75 / Max: 301.22; worst case: 29204.63)
Zen 4C - EPYC 8534PN: 191.77 (SE +/- 0.93, N = 3; Min: 190.16 / Avg: 191.77 / Max: 193.37; worst case: 26841.55)

Apache IoTDB 1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (point/sec, more is better)
Zen 1 - EPYC 7601: 52849125 (SE +/- 507374.51, N = 3; Min: 51842885.53 / Avg: 52849125.44 / Max: 53465812.98)
Zen 4C - EPYC 8534PN: 86278348 (SE +/- 286751.08, N = 3; Min: 85902017.24 / Avg: 86278348.23 / Max: 86841293.38)
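With only three runs, the reported min/avg/max actually pins down the middle sample, so the standard error the Phoronix Test Suite reports can be reconstructed. A sketch using the Zen 1 figures above (approximate, since min/max are rounded in the report):

```python
import statistics

# Reported Min/Avg/Max for the Zen 1 - EPYC 7601 result above (N = 3).
lo, avg, hi = 51842885.53, 52849125.44, 53465812.98

# With three samples, the remaining middle sample follows from the mean.
mid = 3 * avg - lo - hi
samples = [lo, mid, hi]

# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(samples) / len(samples) ** 0.5
print(f"middle sample ~ {mid:.2f}, SE ~ {se:.2f}")  # SE lands near the reported 507374.51
```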

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9, Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better)
Zen 1 - EPYC 7601: 3.291 (SE +/- 0.034, N = 3; Min: 3.23 / Avg: 3.29 / Max: 3.34)
Zen 4C - EPYC 8534PN: 6.280 (SE +/- 0.008, N = 3; Min: 6.27 / Avg: 6.28 / Max: 6.29)
1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 (Average Latency, fewer is better)
Zen 1 - EPYC 7601: 85.07 (SE +/- 0.50, N = 3; Min: 84.48 / Avg: 85.07 / Max: 86.06; worst case: 11426.59)
Zen 4C - EPYC 8534PN: 52.95 (SE +/- 0.43, N = 3; Min: 52.42 / Avg: 52.95 / Max: 53.79; worst case: 10080.74)

Apache IoTDB 1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 (point/sec, more is better)
Zen 1 - EPYC 7601: 53759940 (SE +/- 175336.13, N = 3; Min: 53410111.33 / Avg: 53759939.98 / Max: 53955907.01)
Zen 4C - EPYC 8534PN: 86530272 (SE +/- 262330.46, N = 3; Min: 86114901.31 / Avg: 86530271.74 / Max: 87015540.02)

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0, Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
Zen 1 - EPYC 7601: 1183656.97 (SE +/- 1850.76, N = 3; Min: 1179977.74 / Avg: 1183656.97 / Max: 1185847.84)
Zen 4C - EPYC 8534PN: 3084431.37 (SE +/- 25227.98, N = 9; Min: 2930157.26 / Avg: 3084431.37 / Max: 3164136.47)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0, Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)
Zen 1 - EPYC 7601: 192.80 (SE +/- 1.40, N = 3; Min: 191.27 / Avg: 192.8 / Max: 195.59)
Zen 4C - EPYC 8534PN: 67.91 (SE +/- 0.09, N = 3; Min: 67.74 / Avg: 67.91 / Max: 68.06)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 (Average Latency, fewer is better)
Zen 1 - EPYC 7601: 147.89 (SE +/- 1.86, N = 3; Min: 144.73 / Avg: 147.89 / Max: 151.16; worst case: 27830.18)
Zen 4C - EPYC 8534PN: 106.91 (SE +/- 0.31, N = 3; Min: 106.35 / Avg: 106.91 / Max: 107.41; worst case: 26717.27)

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 (point/sec, more is better)
Zen 1 - EPYC 7601: 45921074 (SE +/- 330018.02, N = 3; Min: 45262386.21 / Avg: 45921074.45 / Max: 46286929.27)
Zen 4C - EPYC 8534PN: 67712944 (SE +/- 292969.25, N = 3; Min: 67136991.03 / Avg: 67712944.15 / Max: 68094202.22)

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, fewer is better)
Zen 1 - EPYC 7601: 148.08 (SE +/- 1.77, N = 4; Min: 145.45 / Avg: 148.08 / Max: 153.23)
Zen 4C - EPYC 8534PN: 38.88 (SE +/- 0.29, N = 3; Min: 38.32 / Avg: 38.88 / Max: 39.29)
1. (CXX) g++ options: -O3 -fopenmp

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.13, Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better)
Zen 1 - EPYC 7601: 171188 (SE +/- 133.22, N = 3; Min: 170972 / Avg: 171187.67 / Max: 171431)
Zen 4C - EPYC 8534PN: 41278 (SE +/- 228.15, N = 3; Min: 40825 / Avg: 41278 / Max: 41552)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better)
Zen 1 - EPYC 7601: 9.82 (SE +/- 0.04, N = 3; Min: 9.78 / Avg: 9.82 / Max: 9.89)
Zen 4C - EPYC 8534PN: 50.27 (SE +/- 0.04, N = 3; Min: 50.22 / Avg: 50.27 / Max: 50.34)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 (Average Latency, fewer is better)
Zen 1 - EPYC 7601: 39.40 (SE +/- 0.33, N = 3; Min: 39.02 / Avg: 39.4 / Max: 40.05; worst case: 23930.49)
Zen 4C - EPYC 8534PN: 27.16 (SE +/- 0.16, N = 3; Min: 26.84 / Avg: 27.16 / Max: 27.38; worst case: 23871.38)

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 (point/sec, more is better)
Zen 1 - EPYC 7601: 46049589 (SE +/- 405601.66, N = 3; Min: 45239125.92 / Avg: 46049588.87 / Max: 46484828.34)
Zen 4C - EPYC 8534PN: 67937226 (SE +/- 89839.93, N = 3; Min: 67791889.6 / Avg: 67937225.78 / Max: 68101390.77)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.7, Speed: 1 (Frames Per Second, more is better)
Zen 1 - EPYC 7601: 0.563 (SE +/- 0.001, N = 3; Min: 0.56 / Avg: 0.56 / Max: 0.57)
Zen 4C - EPYC 8534PN: 0.853 (SE +/- 0.001, N = 3; Min: 0.85 / Avg: 0.85 / Max: 0.86)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/ao/real_time (Items Per Second, more is better)
Zen 1 - EPYC 7601: 5.28931 (SE +/- 0.01171, N = 3; Min: 5.27 / Avg: 5.29 / Max: 5.31)
Zen 4C - EPYC 8534PN: 17.09930 (SE +/- 0.01999, N = 3; Min: 17.06 / Avg: 17.1 / Max: 17.12)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.13, Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better)
Zen 1 - EPYC 7601: 145528 (SE +/- 81.84, N = 3; Min: 145445 / Avg: 145528.33 / Max: 145692)
Zen 4C - EPYC 8534PN: 36085 (SE +/- 45.84, N = 3; Min: 35993 / Avg: 36084.67 / Max: 36132)

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 23.6, Input: Carbon Nanotube (Seconds, fewer is better)
Zen 1 - EPYC 7601: 142.17 (SE +/- 0.91, N = 3; Min: 140.66 / Avg: 142.17 / Max: 143.81)
Zen 4C - EPYC 8534PN: 63.05 (SE +/- 0.20, N = 3; Min: 62.65 / Avg: 63.05 / Max: 63.33)
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 (Average Latency, fewer is better)
Zen 1 - EPYC 7601: 46.29 (SE +/- 0.62, N = 3; Min: 45.13 / Avg: 46.29 / Max: 47.26; worst case: 12751.47)
Zen 4C - EPYC 8534PN: 35.58 (SE +/- 0.57, N = 3; Min: 34.61 / Avg: 35.58 / Max: 36.58; worst case: 12564.03)

Apache IoTDB 1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 (point/sec, more is better)
Zen 1 - EPYC 7601: 38712263 (SE +/- 524500.20, N = 3; Min: 37895592.94 / Avg: 38712262.71 / Max: 39690763.57)
Zen 4C - EPYC 8534PN: 51396698 (SE +/- 567305.29, N = 3; Min: 50299720.52 / Avg: 51396697.65 / Max: 52196157.81)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21, Variant: Monero - Hash Count: 1M (H/s, more is better)
Zen 1 - EPYC 7601: 7044.2 (SE +/- 58.63, N = 3; Min: 6940.1 / Avg: 7044.2 / Max: 7143)
Zen 4C - EPYC 8534PN: 20716.5 (SE +/- 8.43, N = 3; Min: 20701.8 / Avg: 20716.53 / Max: 20731)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig 6.21, Variant: CryptoNight-Heavy - Hash Count: 1M (H/s, more is better)
Zen 1 - EPYC 7601: 7081.0 (SE +/- 21.41, N = 3; Min: 7038.4 / Avg: 7081.03 / Max: 7105.9)
Zen 4C - EPYC 8534PN: 20541.8 (SE +/- 174.63, N = 3; Min: 20192.6 / Avg: 20541.77 / Max: 20723.2)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
Zen 1 - EPYC 7601: 1.992 (SE +/- 0.017, N = 3)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
Zen 1 - EPYC 7601: 1200.10 (SE +/- 3.61, N = 3; Min: 1193.93 / Avg: 1200.1 / Max: 1206.44)
Zen 4C - EPYC 8534PN: 674.26 (SE +/- 0.35, N = 3; Min: 673.57 / Avg: 674.26 / Max: 674.66)

Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
Zen 1 - EPYC 7601: 13.26 (SE +/- 0.06, N = 3; Min: 13.18 / Avg: 13.26 / Max: 13.39)
Zen 4C - EPYC 8534PN: 46.87 (SE +/- 0.03, N = 3; Min: 46.84 / Avg: 46.87 / Max: 46.93)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.1, Encoder: libx265 - Scenario: Live (FPS, more is better)
Zen 1 - EPYC 7601: 58.23 (SE +/- 0.07, N = 3; Min: 58.11 / Avg: 58.23 / Max: 58.33)
Zen 4C - EPYC 8534PN: 114.95 (SE +/- 0.14, N = 3; Min: 114.69 / Avg: 114.95 / Max: 115.14)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2, Connections: 1000 (Requests Per Second, more is better)
Zen 1 - EPYC 7601: 100664.35 (SE +/- 212.46, N = 3; Min: 100353.21 / Avg: 100664.35 / Max: 101070.55)
Zen 4C - EPYC 8534PN: 266322.55 (SE +/- 312.48, N = 3; Min: 265897.05 / Avg: 266322.55 / Max: 266931.71)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
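The raw throughput gap partly reflects the core-count gap. Normalizing per core gives a rough efficiency view; a sketch assuming the stock 32-core EPYC 7601 versus the 64-core EPYC 8534PN (core counts are an assumption, not part of this result file):

```python
# Requests/sec from the nginx 1000-connection results above, paired with
# assumed core counts: 32 for the EPYC 7601, 64 for the EPYC 8534PN.
results = {
    "Zen 1 - EPYC 7601":    (100664.35, 32),
    "Zen 4C - EPYC 8534PN": (266322.55, 64),
}

for name, (rps, cores) in results.items():
    print(f"{name}: {rps / cores:.0f} req/s per core")
```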

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.56, Concurrent Requests: 1000 (Requests Per Second, more is better)
Zen 1 - EPYC 7601: 88038.28 (SE +/- 319.15, N = 3; Min: 87407.17 / Avg: 88038.28 / Max: 88436.53)
Zen 4C - EPYC 8534PN: 130116.68 (SE +/- 1806.36, N = 3; Min: 127367.42 / Avg: 130116.68 / Max: 133521.08)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2, Connections: 500 (Requests Per Second, more is better)
Zen 1 - EPYC 7601: 103269.83 (SE +/- 450.48, N = 3; Min: 102408.84 / Avg: 103269.83 / Max: 103930.14)
Zen 4C - EPYC 8534PN: 263177.49 (SE +/- 1464.33, N = 3; Min: 260481.48 / Avg: 263177.49 / Max: 265516.19)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.32, Configuration: Multi-Threaded (MFLOPS, more is better)
Zen 1 - EPYC 7601: 64481.7 (SE +/- 8.17, N = 3; Min: 64465.8 / Avg: 64481.7 / Max: 64492.9)
Zen 4C - EPYC 8534PN: 171336.6 (SE +/- 779.11, N = 3; Min: 170233.1 / Avg: 171336.6 / Max: 172841.1)
1. (CXX) g++ options: -O3 -march=native -fPIE -pie
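The result viewer's "Show Overall Geometric Mean" option condenses results like these into one figure. As an illustration of how that works, a sketch over a hand-picked subset of the ratios on this page (an illustrative sample, not the full suite):

```python
import math

# Zen 4C / Zen 1 speedup ratios taken from a few results on this page.
# For "fewer is better" tests (Blender), the ratio is inverted so that
# >1 always means the 8534PN is faster.
speedups = [
    266322.55 / 100664.35,   # nginx, 1000 connections
    192.80 / 67.91,          # Blender Classroom (seconds, inverted)
    6918267 / 3371876,       # RocksDB read-while-writing
    171336.6 / 64481.7,      # QuantLib multi-threaded
]

geo_mean = math.exp(sum(map(math.log, speedups)) / len(speedups))
print(f"geometric mean speedup: {geo_mean:.2f}x")  # ~2.53x for this subset
```

The geometric mean is used rather than the arithmetic mean so that no single test's ratio dominates the aggregate.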

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.13, Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better)
Zen 1 - EPYC 7601: 10050 (SE +/- 34.23, N = 3; Min: 9982 / Avg: 10050.33 / Max: 10088)
Zen 4C - EPYC 8534PN: 2196 (SE +/- 3.51, N = 3; Min: 2192 / Avg: 2196 / Max: 2203)

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version. Learn more via the OpenBenchmarking.org test page.

CloverLeaf 1.3, Input: clover_bm64_short (Seconds, fewer is better)
Zen 1 - EPYC 7601: 108.60 (SE +/- 1.39, N = 3; Min: 105.84 / Avg: 108.6 / Max: 110.23)
Zen 4C - EPYC 8534PN: 57.38 (SE +/- 0.02, N = 3; Min: 57.33 / Avg: 57.38 / Max: 57.41)
1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.13, Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better)
Zen 1 - EPYC 7601: 8478 (SE +/- 16.50, N = 3; Min: 8446 / Avg: 8478 / Max: 8501)
Zen 4C - EPYC 8534PN: 1856 (SE +/- 2.08, N = 3; Min: 1852 / Avg: 1856 / Max: 1859)

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0, Model: Layered Halfspace (Seconds, fewer is better)
Zen 1 - EPYC 7601: 78.68 (SE +/- 0.98, N = 3)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02, Mode: CPU (vsamples, more is better)
Zen 1 - EPYC 7601: 20175 (SE +/- 15.72, N = 3; Min: 20144 / Avg: 20175 / Max: 20195)
Zen 4C - EPYC 8534PN: 60911 (SE +/- 764.13, N = 3; Min: 59762 / Avg: 60910.67 / Max: 62358)

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0, Model: Water-layered Halfspace (Seconds, fewer is better)
Zen 1 - EPYC 7601: 74.95 (SE +/- 0.29, N = 3)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

PyTorch

PyTorch 2.1, Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, more is better)
Zen 1 - EPYC 7601: 21.21 (SE +/- 0.22, N = 3; Min: 20.77 / Avg: 21.21 / Max: 21.49; per-run range MIN: 13.9 / MAX: 22.04)
Zen 4C - EPYC 8534PN: 36.56 (SE +/- 0.11, N = 3; Min: 36.37 / Avg: 36.56 / Max: 36.75; per-run range MIN: 35.25 / MAX: 37.29)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9, Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better)
Zen 1 - EPYC 7601: 6.493 (SE +/- 0.018, N = 3; Min: 6.46 / Avg: 6.49 / Max: 6.52)
Zen 4C - EPYC 8534PN: 11.564 (SE +/- 0.020, N = 3; Min: 11.54 / Avg: 11.56 / Max: 11.6)
1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.7, Speed: 5 (Frames Per Second, more is better)
Zen 1 - EPYC 7601: 2.249 (SE +/- 0.014, N = 3; Min: 2.23 / Avg: 2.25 / Max: 2.28)
Zen 4C - EPYC 8534PN: 3.581 (SE +/- 0.002, N = 3; Min: 3.58 / Avg: 3.58 / Max: 3.58)

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.

Quicksilver 20230818, Input: CORAL2 P1 (Figure Of Merit, more is better)
Zen 1 - EPYC 7601: 12996667 (SE +/- 66916.20, N = 3; Min: 12890000 / Avg: 12996666.67 / Max: 13120000)
Zen 4C - EPYC 8534PN: 21326667 (SE +/- 46666.67, N = 3; Min: 21280000 / Avg: 21326666.67 / Max: 21420000)
1. (CXX) g++ options: -fopenmp -O3 -march=native

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0, Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)
Zen 1 - EPYC 7601: 101.20 (SE +/- 0.13, N = 3; Min: 101.02 / Avg: 101.2 / Max: 101.46)
Zen 4C - EPYC 8534PN: 35.37 (SE +/- 0.08, N = 3; Min: 35.22 / Avg: 35.37 / Max: 35.47)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Supercar (M samples/s, more is better)
Zen 1 - EPYC 7601: 8.580 (SE +/- 0.050, N = 3; Min: 8.49 / Avg: 8.58 / Max: 8.66)
Zen 4C - EPYC 8534PN: 24.522 (SE +/- 0.040, N = 3; Min: 24.46 / Avg: 24.52 / Max: 24.59)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better)
Zen 1 - EPYC 7601: 4.14841 (SE +/- 0.01550, N = 3; Min: 4.12 / Avg: 4.15 / Max: 4.18)
Zen 4C - EPYC 8534PN: 19.64340 (SE +/- 0.02744, N = 3; Min: 19.61 / Avg: 19.64 / Max: 19.7)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.7 - Speed: 10 (Frames Per Second, More Is Better)
  Zen 1 - EPYC 7601: 6.971 (SE +/- 0.080, N = 3; Min: 6.83 / Avg: 6.97 / Max: 7.11)
  Zen 4C - EPYC 8534PN: 12.355 (SE +/- 0.020, N = 3; Min: 12.32 / Avg: 12.36 / Max: 12.38)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 2094.21 (SE +/- 0.06, N = 3; Min: 2094.12 / Avg: 2094.21 / Max: 2094.33; MIN: 2092.79 / MAX: 2112.41)
  Zen 4C - EPYC 8534PN: 595.92 (SE +/- 2.44, N = 3; Min: 591.25 / Avg: 595.92 / Max: 599.47; MIN: 511.76 / MAX: 618.53)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 3.82 (SE +/- 0.00, N = 3; Min: 3.82 / Avg: 3.82 / Max: 3.82)
  Zen 4C - EPYC 8534PN: 53.51 (SE +/- 0.20, N = 3; Min: 53.19 / Avg: 53.51 / Max: 53.89)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
  Zen 1 - EPYC 7601: 76.30 (SE +/- 0.88, N = 3; Min: 75.05 / Avg: 76.3 / Max: 78.01)
  Zen 4C - EPYC 8534PN: 41.71 (SE +/- 0.50, N = 4; Min: 41.14 / Avg: 41.71 / Max: 43.22)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Zen 1 - EPYC 7601: 176.43 (SE +/- 1.64, N = 6; Min: 171.7 / Avg: 176.43 / Max: 183.34)
  Zen 4C - EPYC 8534PN: 99.11 (SE +/- 0.07, N = 3; Min: 98.99 / Avg: 99.11 / Max: 99.21)

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Zen 1 - EPYC 7601: 90.66 (SE +/- 0.85, N = 6; Min: 87.12 / Avg: 90.66 / Max: 93.16)
  Zen 4C - EPYC 8534PN: 322.28 (SE +/- 0.25, N = 3; Min: 321.89 / Avg: 322.28 / Max: 322.76)
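In the asynchronous multi-stream scenario, the throughput and batch-latency figures are linked by the number of concurrent streams: throughput is roughly streams × 1000 / latency. A back-of-the-envelope check on the DistilBERT numbers above, assuming a batch size of 1 so that one item equals one batch (an assumption, not stated in these results):

```python
# Infer the implied concurrent stream count from throughput and latency:
# throughput (items/sec) ~= streams * 1000 / latency (ms/batch),
# assuming batch size 1 (an assumption here, not stated in the results).
results = {
    "Zen 1 - EPYC 7601": (176.43, 90.66),     # (ms/batch, items/sec) from above
    "Zen 4C - EPYC 8534PN": (99.11, 322.28),
}
streams = {}
for system, (latency_ms, throughput) in results.items():
    streams[system] = throughput * latency_ms / 1000.0
    print(f"{system}: ~{streams[system]:.1f} concurrent streams")
```

The implied counts come out near 16 and 32 streams respectively, which is consistent with the two systems' differing core counts.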

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Zen 1 - EPYC 7601: 3.991 (SE +/- 0.022, N = 3; Min: 3.95 / Avg: 3.99 / Max: 4.02)
  Zen 4C - EPYC 8534PN: 11.172 (SE +/- 0.005, N = 3; Min: 11.16 / Avg: 11.17 / Max: 11.18)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
  Zen 1 - EPYC 7601: 10414.7 (SE +/- 27.32, N = 3; Min: 10369.8 / Avg: 10414.67 / Max: 10464.1)
  Zen 4C - EPYC 8534PN: 39663.3 (SE +/- 114.17, N = 3; Min: 39454 / Avg: 39663.27 / Max: 39847)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 181.92 (SE +/- 0.36, N = 3; Min: 181.28 / Avg: 181.92 / Max: 182.53; MIN: 171.42 / MAX: 202.33)
  Zen 4C - EPYC 8534PN: 146.53 (SE +/- 0.14, N = 3; Min: 146.29 / Avg: 146.53 / Max: 146.76; MIN: 71.86 / MAX: 230.85)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 43.91 (SE +/- 0.09, N = 3; Min: 43.76 / Avg: 43.91 / Max: 44.07)
  Zen 4C - EPYC 8534PN: 218.07 (SE +/- 0.21, N = 3; Min: 217.71 / Avg: 218.07 / Max: 218.44)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 193.33 (SE +/- 0.49, N = 3; Min: 192.36 / Avg: 193.33 / Max: 193.99; MIN: 178.13 / MAX: 252.21)
  Zen 4C - EPYC 8534PN: 103.12 (SE +/- 0.21, N = 3; Min: 102.75 / Avg: 103.12 / Max: 103.49; MIN: 51.8 / MAX: 150.84)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 41.32 (SE +/- 0.10, N = 3; Min: 41.17 / Avg: 41.32 / Max: 41.52)
  Zen 4C - EPYC 8534PN: 309.82 (SE +/- 0.68, N = 3; Min: 308.66 / Avg: 309.82 / Max: 311.01)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 53.47 (SE +/- 0.05, N = 3; Min: 53.37 / Avg: 53.47 / Max: 53.52; MIN: 52.68 / MAX: 66.73)
  Zen 4C - EPYC 8534PN: 32.67 (SE +/- 0.10, N = 3; Min: 32.53 / Avg: 32.67 / Max: 32.85; MIN: 17.88 / MAX: 49.59)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 149.49 (SE +/- 0.14, N = 3; Min: 149.35 / Avg: 149.49 / Max: 149.77)
  Zen 4C - EPYC 8534PN: 977.07 (SE +/- 2.89, N = 3; Min: 971.54 / Avg: 977.07 / Max: 981.29)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 16.95 (SE +/- 0.06, N = 3; Min: 16.87 / Avg: 16.95 / Max: 17.06; MIN: 16.19 / MAX: 27.98)
  Zen 4C - EPYC 8534PN: 12.21 (SE +/- 0.01, N = 3; Min: 12.19 / Avg: 12.21 / Max: 12.22; MIN: 7.44 / MAX: 23.58)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 471.38 (SE +/- 1.63, N = 3; Min: 468.22 / Avg: 471.38 / Max: 473.64)
  Zen 4C - EPYC 8534PN: 2602.27 (SE +/- 1.69, N = 3; Min: 2600.36 / Avg: 2602.27 / Max: 2605.65)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 169.04 (SE +/- 0.65, N = 3; Min: 168.06 / Avg: 169.04 / Max: 170.27; MIN: 141.74 / MAX: 192.16)
  Zen 4C - EPYC 8534PN: 51.96 (SE +/- 0.03, N = 3; Min: 51.93 / Avg: 51.96 / Max: 52.02; MIN: 33.44 / MAX: 81.12)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 189.05 (SE +/- 0.70, N = 3; Min: 187.74 / Avg: 189.05 / Max: 190.12)
  Zen 4C - EPYC 8534PN: 1229.16 (SE +/- 0.72, N = 3; Min: 1227.72 / Avg: 1229.16 / Max: 1229.97)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 9.10 (SE +/- 0.00, N = 3; Min: 9.1 / Avg: 9.1 / Max: 9.1; MIN: 9.04 / MAX: 17.25)
  Zen 4C - EPYC 8534PN: 7.10 (SE +/- 0.01, N = 3; Min: 7.08 / Avg: 7.1 / Max: 7.11; MIN: 4.1 / MAX: 18.36)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 877.49 (SE +/- 0.26, N = 3; Min: 877.02 / Avg: 877.49 / Max: 877.92)
  Zen 4C - EPYC 8534PN: 8840.41 (SE +/- 14.71, N = 3; Min: 8823.98 / Avg: 8840.41 / Max: 8869.76)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 27.01 (SE +/- 0.03, N = 3; Min: 26.97 / Avg: 27.01 / Max: 27.07; MIN: 26.86 / MAX: 38.57)
  Zen 4C - EPYC 8534PN: 10.90 (SE +/- 0.02, N = 3; Min: 10.86 / Avg: 10.9 / Max: 10.92; MIN: 5.86 / MAX: 22.42)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 295.80 (SE +/- 0.35, N = 3; Min: 295.16 / Avg: 295.8 / Max: 296.38)
  Zen 4C - EPYC 8534PN: 2902.57 (SE +/- 5.14, N = 3; Min: 2896.75 / Avg: 2902.57 / Max: 2912.82)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Speedb is a next-generation key-value storage engine that is RocksDB-compatible and aims for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Update Random (Op/s, More Is Better)
  Zen 1 - EPYC 7601: 201555 (SE +/- 239.59, N = 3; Min: 201108 / Avg: 201555 / Max: 201928)
  Zen 4C - EPYC 8534PN: 356602 (SE +/- 832.97, N = 3; Min: 354938 / Avg: 356601.67 / Max: 357509)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 84.68 (SE +/- 0.00, N = 3; Min: 84.68 / Avg: 84.68 / Max: 84.69; MIN: 84.05 / MAX: 93.44)
  Zen 4C - EPYC 8534PN: 11.97 (SE +/- 0.01, N = 3; Min: 11.94 / Avg: 11.97 / Max: 11.99; MIN: 6.19 / MAX: 26.19)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 377.40 (SE +/- 0.02, N = 3; Min: 377.37 / Avg: 377.4 / Max: 377.44)
  Zen 4C - EPYC 8534PN: 5276.11 (SE +/- 6.62, N = 3; Min: 5265.11 / Avg: 5276.11 / Max: 5288)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Zen 1 - EPYC 7601: 2.43 (SE +/- 0.00, N = 3; Min: 2.43 / Avg: 2.43 / Max: 2.43; MIN: 2.39 / MAX: 10.02)
  Zen 4C - EPYC 8534PN: 0.60 (SE +/- 0.00, N = 3; Min: 0.59 / Avg: 0.6 / Max: 0.6; MIN: 0.27 / MAX: 24.87)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  Zen 1 - EPYC 7601: 13011.10 (SE +/- 3.60, N = 3; Min: 13004.67 / Avg: 13011.1 / Max: 13017.13)
  Zen 4C - EPYC 8534PN: 74625.61 (SE +/- 275.21, N = 3; Min: 74285.92 / Avg: 74625.61 / Max: 75170.54)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Speedb is a next-generation key-value storage engine that is RocksDB-compatible and aims for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Read Random Write Random (Op/s, More Is Better)
  Zen 1 - EPYC 7601: 1436074 (SE +/- 1989.50, N = 3; Min: 1432343 / Avg: 1436074 / Max: 1439137)
  Zen 4C - EPYC 8534PN: 2577564 (SE +/- 17447.09, N = 3; Min: 2545918 / Avg: 2577563.67 / Max: 2606119)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Update Random (Op/s, More Is Better)
  Zen 1 - EPYC 7601: 290618 (SE +/- 737.50, N = 3; Min: 289144 / Avg: 290618 / Max: 291402)
  Zen 4C - EPYC 8534PN: 483387 (SE +/- 566.63, N = 3; Min: 482286 / Avg: 483387 / Max: 484170)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Speedb is a next-generation key-value storage engine that is RocksDB-compatible and aims for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Random Read (Op/s, More Is Better)
  Zen 1 - EPYC 7601: 86867041 (SE +/- 795244.11, N = 3; Min: 85668526 / Avg: 86867041.33 / Max: 88371788)
  Zen 4C - EPYC 8534PN: 304542944 (SE +/- 908871.00, N = 3; Min: 303265949 / Avg: 304542944 / Max: 306301755)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: RSA4096 (verify/s, More Is Better)
  Zen 1 - EPYC 7601: 293924.1 (SE +/- 361.69, N = 3; Min: 293269.4 / Avg: 293924.07 / Max: 294517.9)
  Zen 4C - EPYC 8534PN: 889214.2 (SE +/- 395.01, N = 3; Min: 888702.3 / Avg: 889214.23 / Max: 889991.3)
  1. (CC) gcc options: -pthread -m64 -O3 -ldl

OpenSSL 3.1 - Algorithm: RSA4096 (sign/s, More Is Better)
  Zen 1 - EPYC 7601: 4510.7 (SE +/- 8.66, N = 3; Min: 4495.7 / Avg: 4510.7 / Max: 4525.7)
  Zen 4C - EPYC 8534PN: 25217.0 (SE +/- 24.99, N = 3; Min: 25189.5 / Avg: 25217 / Max: 25266.9)
  1. (CC) gcc options: -pthread -m64 -O3 -ldl

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Random Read (Op/s, More Is Better)
  Zen 1 - EPYC 7601: 84798204 (SE +/- 416491.60, N = 3; Min: 83968592 / Avg: 84798204 / Max: 85277846)
  Zen 4C - EPYC 8534PN: 302674440 (SE +/- 1581555.58, N = 3; Min: 300256220 / Avg: 302674440 / Max: 305649368)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  Zen 1 - EPYC 7601: 2.57829 (SE +/- 0.00282, N = 3; Min: 2.57 / Avg: 2.58 / Max: 2.58)
  Zen 4C - EPYC 8534PN: 17.29860 (SE +/- 0.03849, N = 3; Min: 17.22 / Avg: 17.3 / Max: 17.34)

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  Zen 1 - EPYC 7601: 2.46692 (SE +/- 0.01722, N = 3; Min: 2.44 / Avg: 2.47 / Max: 2.5)
  Zen 4C - EPYC 8534PN: 16.80890 (SE +/- 0.01858, N = 3; Min: 16.78 / Avg: 16.81 / Max: 16.84)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.7 - Speed: 6 (Frames Per Second, More Is Better)
  Zen 1 - EPYC 7601: 2.937 (SE +/- 0.003, N = 3; Min: 2.93 / Avg: 2.94 / Max: 2.94)
  Zen 4C - EPYC 8534PN: 4.861 (SE +/- 0.020, N = 3; Min: 4.84 / Avg: 4.86 / Max: 4.9)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Zen 1 - EPYC 7601: 61.39 (SE +/- 0.36, N = 3; Min: 60.73 / Avg: 61.39 / Max: 61.99)
  Zen 4C - EPYC 8534PN: 22.12 (SE +/- 0.01, N = 3; Min: 22.11 / Avg: 22.12 / Max: 22.13)

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Zen 1 - EPYC 7601: 260.25 (SE +/- 1.56, N = 3; Min: 257.69 / Avg: 260.25 / Max: 263.07)
  Zen 4C - EPYC 8534PN: 1444.91 (SE +/- 0.50, N = 3; Min: 1443.93 / Avg: 1444.91 / Max: 1445.59)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Zen 1 - EPYC 7601: 70.53 (SE +/- 0.80, N = 15; Min: 65.01 / Avg: 70.52 / Max: 74.73)
  Zen 4C - EPYC 8534PN: 194.55 (SE +/- 1.97, N = 6; Min: 187.73 / Avg: 194.55 / Max: 200.69)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Zen 1 - EPYC 7601: 121.21 (SE +/- 1.16, N = 3; Min: 119.27 / Avg: 121.21 / Max: 123.27)
  Zen 4C - EPYC 8534PN: 45.88 (SE +/- 0.07, N = 3; Min: 45.76 / Avg: 45.88 / Max: 45.99)

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Zen 1 - EPYC 7601: 131.76 (SE +/- 1.27, N = 3; Min: 129.5 / Avg: 131.76 / Max: 133.87)
  Zen 4C - EPYC 8534PN: 696.51 (SE +/- 1.12, N = 3; Min: 694.59 / Avg: 696.51 / Max: 698.48)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU compute via NVIDIA OptiX and NVIDIA CUDA is currently supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Zen 1 - EPYC 7601: 73.00 (SE +/- 0.35, N = 3; Min: 72.48 / Avg: 73 / Max: 73.66)
  Zen 4C - EPYC 8534PN: 27.07 (SE +/- 0.12, N = 3; Min: 26.91 / Avg: 27.07 / Max: 27.31)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Zen 1 - EPYC 7601: 1460.70 (SE +/- 2.60, N = 3; Min: 1455.55 / Avg: 1460.7 / Max: 1463.83)
  Zen 4C - EPYC 8534PN: 855.93 (SE +/- 0.81, N = 3; Min: 854.35 / Avg: 855.93 / Max: 857.04)

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Zen 1 - EPYC 7601: 10.81 (SE +/- 0.07, N = 3; Min: 10.68 / Avg: 10.81 / Max: 10.92)
  Zen 4C - EPYC 8534PN: 36.92 (SE +/- 0.10, N = 3; Min: 36.73 / Avg: 36.92 / Max: 37.06)

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Zen 1 - EPYC 7601: 1457.92 (SE +/- 5.55, N = 3; Min: 1449.56 / Avg: 1457.92 / Max: 1468.43)
  Zen 4C - EPYC 8534PN: 855.73 (SE +/- 0.50, N = 3; Min: 855.24 / Avg: 855.73 / Max: 856.73)

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Zen 1 - EPYC 7601: 10.75 (SE +/- 0.04, N = 3; Min: 10.69 / Avg: 10.75 / Max: 10.82)
  Zen 4C - EPYC 8534PN: 36.76 (SE +/- 0.06, N = 3; Min: 36.64 / Avg: 36.76 / Max: 36.86)

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar, developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 8.73 (SE +/- 0.04, N = 3; Min: 8.65 / Max: 8.78)
  Zen 4C - EPYC 8534PN: 19.87 (SE +/- 0.03, N = 3; Min: 19.83 / Max: 19.92)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.8, Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 67.93 (SE +/- 2.72, N = 12; Min: 39.99 / Max: 76.05)
  Zen 4C - EPYC 8534PN: 196.42 (SE +/- 1.62, N = 9; Min: 185.64 / Max: 202.05)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01, Test: Decompression Rating (MIPS, more is better):
  Zen 1 - EPYC 7601: 134417 (SE +/- 1450.84, N = 3; Min: 131515 / Max: 135873)
  Zen 4C - EPYC 8534PN: 406021 (SE +/- 942.48, N = 3; Min: 405050 / Max: 407906)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01, Test: Compression Rating (MIPS, more is better):
  Zen 1 - EPYC 7601: 123934 (SE +/- 1034.27, N = 3; Min: 122447 / Max: 125923)
  Zen 4C - EPYC 8534PN: 358093 (SE +/- 548.98, N = 3; Min: 357499 / Max: 359190)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
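For a quick read on the generational uplift, the MIPS ratings above reduce to simple ratios. A small sketch (average values copied from this result file, rounded):

```python
# 7-Zip MIPS ratings from the graphs above (rounded averages).
zen1 = {"decompress": 134417, "compress": 123934}
zen4c = {"decompress": 406021, "compress": 358093}

# Relative performance of the EPYC 8534PN over the EPYC 7601.
decomp_speedup = zen4c["decompress"] / zen1["decompress"]
comp_speedup = zen4c["compress"] / zen1["compress"]
print(f"decompress: {decomp_speedup:.2f}x, compress: {comp_speedup:.2f}x")
# decompress: 3.02x, compress: 2.89x
```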

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.8, Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 3.060 (SE +/- 0.011, N = 3; Min: 3.04 / Max: 3.08)
  Zen 4C - EPYC 8534PN: 6.672 (SE +/- 0.041, N = 3; Min: 6.62 / Max: 6.75)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Zen 1 - EPYC 7601: 1701.37 (SE +/- 3.29, N = 3; Min: 1695.84 / Max: 1707.21)
  Zen 4C - EPYC 8534PN: 495.14 (SE +/- 1.38, N = 3; Min: 492.78 / Max: 497.55)

Neural Magic DeepSparse 1.6, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Zen 1 - EPYC 7601: 9.2992 (SE +/- 0.0191, N = 3; Min: 9.26 / Max: 9.32)
  Zen 4C - EPYC 8534PN: 64.0361 (SE +/- 0.1281, N = 3; Min: 63.79 / Max: 64.22)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 126.19
  Zen 4C - EPYC 8534PN: 54.56
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 44.45
  Zen 4C - EPYC 8534PN: 30.95
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better):
  Zen 1 - EPYC 7601: 0.97959 (SE +/- 0.00385, N = 3; Min: 0.98 / Max: 0.99)
  Zen 4C - EPYC 8534PN: 0.39175 (SE +/- 0.00016, N = 3; Min: 0.39 / Max: 0.39)
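NAMD reports days/ns, where lower is better; many molecular dynamics comparisons instead quote ns/day, which is simply the reciprocal. A small conversion sketch using the averages above:

```python
def ns_per_day(days_per_ns: float) -> float:
    """Convert NAMD's days/ns metric (lower is better) to ns/day (higher is better)."""
    return 1.0 / days_per_ns

zen1 = ns_per_day(0.97959)   # EPYC 7601, ~1.02 ns/day
zen4c = ns_per_day(0.39175)  # EPYC 8534PN, ~2.55 ns/day
print(f"speedup: {zen4c / zen1:.2f}x")  # ~2.50x
```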

PyTorch

PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, more is better):
  Zen 1 - EPYC 7601: 26.72 (SE +/- 0.12, N = 3; Min: 26.5 / Max: 26.93; runtime MIN: 15.15 / MAX: 28.15)
  Zen 4C - EPYC 8534PN: 45.13 (SE +/- 0.15, N = 3; Min: 44.87 / Max: 45.37; runtime MIN: 43.52 / MAX: 46.26)

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0, Model: Homogeneous Halfspace (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 38.06 (SE +/- 0.31, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Intel Open Image Denoise

Intel Open Image Denoise 2.1, Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better):
  Zen 1 - EPYC 7601: 0.48 (SE +/- 0.00, N = 3; Min: 0.48 / Max: 0.48)
  Zen 4C - EPYC 8534PN: 1.84 (SE +/- 0.00, N = 3; Min: 1.84 / Max: 1.85)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Zen 1 - EPYC 7601: 226.95 (SE +/- 0.66, N = 3; Min: 225.81 / Max: 228.11)
  Zen 4C - EPYC 8534PN: 145.11 (SE +/- 0.24, N = 3; Min: 144.69 / Max: 145.5)

Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Zen 1 - EPYC 7601: 70.34 (SE +/- 0.21, N = 3; Min: 69.96 / Max: 70.66)
  Zen 4C - EPYC 8534PN: 219.91 (SE +/- 0.36, N = 3; Min: 219.46 / Max: 220.62)

Neural Magic DeepSparse 1.6, Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Zen 1 - EPYC 7601: 102.6582 (SE +/- 0.3037, N = 3; Min: 102.21 / Max: 103.24)
  Zen 4C - EPYC 8534PN: 8.3560 (SE +/- 0.0152, N = 3; Min: 8.34 / Max: 8.39)

Neural Magic DeepSparse 1.6, Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Zen 1 - EPYC 7601: 155.70 (SE +/- 0.49, N = 3; Min: 154.77 / Max: 156.43)
  Zen 4C - EPYC 8534PN: 3821.92 (SE +/- 7.35, N = 3; Min: 3807.27 / Max: 3830.19)

Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Zen 1 - EPYC 7601: 226.37 (SE +/- 0.35, N = 3; Min: 225.71 / Max: 226.92)
  Zen 4C - EPYC 8534PN: 144.14 (SE +/- 0.20, N = 3; Min: 143.78 / Max: 144.45)

Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Zen 1 - EPYC 7601: 70.56 (SE +/- 0.11, N = 3; Min: 70.36 / Max: 70.75)
  Zen 4C - EPYC 8534PN: 221.30 (SE +/- 0.29, N = 3; Min: 220.86 / Max: 221.85)

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options to gauge H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 14.52 (SE +/- 0.17, N = 4; Min: 14.3 / Max: 15.02)
  Zen 4C - EPYC 8534PN: 27.54 (SE +/- 0.05, N = 3; Min: 27.45 / Max: 27.63)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Zen 1 - EPYC 7601: 110.45 (SE +/- 0.40, N = 3; Min: 109.64 / Max: 110.86)
  Zen 4C - EPYC 8534PN: 65.93 (SE +/- 0.04, N = 3; Min: 65.87 / Max: 65.99)

Neural Magic DeepSparse 1.6, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Zen 1 - EPYC 7601: 144.68 (SE +/- 0.49, N = 3; Min: 144.17 / Max: 145.67)
  Zen 4C - EPYC 8534PN: 484.73 (SE +/- 0.29, N = 3; Min: 484.16 / Max: 485.03)

Neural Magic DeepSparse 1.6, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Zen 1 - EPYC 7601: 111.90 (SE +/- 1.29, N = 3; Min: 110.27 / Max: 114.45)
  Zen 4C - EPYC 8534PN: 65.91 (SE +/- 0.01, N = 3; Min: 65.9 / Max: 65.93)

Neural Magic DeepSparse 1.6, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Zen 1 - EPYC 7601: 142.80 (SE +/- 1.66, N = 3; Min: 139.52 / Max: 144.86)
  Zen 4C - EPYC 8534PN: 484.87 (SE +/- 0.12, N = 3; Min: 484.62 / Max: 485.02)

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.6 (Throughput FoM, more is better):
  Zen 1 - EPYC 7601: 187375933 (SE +/- 1601742.67, N = 3; Min: 184240900 / Max: 189513900)
  Zen 4C - EPYC 8534PN: 303107200 (SE +/- 668520.88, N = 3; Min: 302393500 / Max: 304443200)
  1. (CXX) g++ options: -O3 -fopenmp -ldl

miniBUDE

MiniBUDE is a mini-application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better):
  Zen 1 - EPYC 7601: 14.25 (SE +/- 0.04, N = 3; Min: 14.17 / Max: 14.3)
  Zen 4C - EPYC 8534PN: 116.87 (SE +/- 0.23, N = 6; Min: 115.84 / Max: 117.46)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better):
  Zen 1 - EPYC 7601: 356.12 (SE +/- 0.94, N = 3; Min: 354.29 / Max: 357.42)
  Zen 4C - EPYC 8534PN: 2921.65 (SE +/- 5.71, N = 6; Min: 2895.91 / Max: 2936.49)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0, Model: Tomographic Model (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 30.28 (SE +/- 0.13, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D 4.0, Model: Mount St. Helens (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 29.95 (SE +/- 0.41, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 37.41 (SE +/- 0.22, N = 3; Min: 36.96 / Max: 37.68)
  Zen 4C - EPYC 8534PN: 20.87 (SE +/- 0.03, N = 3; Min: 20.83 / Max: 20.94)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.3, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 18.36 (SE +/- 0.09, N = 3; Min: 18.18 / Max: 18.5; runtime MIN: 18.01 / MAX: 18.84)
  Zen 4C - EPYC 8534PN: 67.90 (SE +/- 0.09, N = 5; Min: 67.7 / Max: 68.22; runtime MIN: 66.05 / MAX: 70.27)

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.8.3, Pi Digits To Calculate: 1B (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 33.92 (SE +/- 0.09, N = 3; Min: 33.8 / Max: 34.1)
  Zen 4C - EPYC 8534PN: 10.23 (SE +/- 0.01, N = 4; Min: 10.22 / Max: 10.25)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.1, Time To Compile (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 39.79 (SE +/- 0.10, N = 3; Min: 39.66 / Max: 39.99)
  Zen 4C - EPYC 8534PN: 18.06 (SE +/- 0.03, N = 3; Min: 18.01 / Max: 18.11)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.3, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 21.83 (SE +/- 0.05, N = 3; Min: 21.73 / Max: 21.9; runtime MIN: 21.6 / MAX: 22.16)
  Zen 4C - EPYC 8534PN: 83.20 (SE +/- 0.04, N = 5; Min: 83.09 / Max: 83.3; runtime MIN: 82.46 / MAX: 84.5)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.8, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 26.25 (SE +/- 0.17, N = 3; Min: 25.94 / Max: 26.54)
  Zen 4C - EPYC 8534PN: 67.21 (SE +/- 0.10, N = 4; Min: 67 / Max: 67.46)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar, developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 22.27 (SE +/- 0.08, N = 3; Min: 22.12 / Max: 22.35)
  Zen 4C - EPYC 8534PN: 52.59 (SE +/- 0.04, N = 5; Min: 52.46 / Max: 52.71)

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 23.44 (SE +/- 0.01, N = 3; Min: 23.42 / Max: 23.47)
  Zen 4C - EPYC 8534PN: 55.01 (SE +/- 0.07, N = 5; Min: 54.81 / Max: 55.17)

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better):
  Zen 1 - EPYC 7601: 27.21 (SE +/- 0.04, N = 3; Min: 27.14 / Max: 27.26)
  Zen 4C - EPYC 8534PN: 55.98 (SE +/- 0.14, N = 5; Min: 55.7 / Max: 56.46)

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.8.3, Pi Digits To Calculate: 500M (Seconds, fewer is better):
  Zen 1 - EPYC 7601: 15.693 (SE +/- 0.118, N = 3; Min: 15.47 / Max: 15.86)
  Zen 4C - EPYC 8534PN: 5.099 (SE +/- 0.006, N = 5; Min: 5.08 / Max: 5.12)

CPU Power Consumption Monitor

CPU Power Consumption Monitor, Phoronix Test Suite System Monitoring (Watts):
  Zen 1 - EPYC 7601: Min: 134.19 / Avg: 579.81 / Max: 789.58
  Zen 4C - EPYC 8534PN: Min: 7.2 / Avg: 112.32 / Max: 180

Meta Performance Per Watts

Meta Performance Per Watts (Performance Per Watts, more is better):
  Zen 1 - EPYC 7601: 2562.10
  Zen 4C - EPYC 8534PN: 180.52
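The performance-per-watt metric pairs results with the power draw sampled by the CPU Power Consumption Monitor while the tests ran; conceptually, for a higher-is-better score it is score divided by average watts. A simplified sketch of that idea (the real suite aggregates across all tests; the 520 W figure here is a hypothetical illustration, not from this result file):

```python
def perf_per_watt(score: float, avg_watts: float) -> float:
    # Higher-is-better score divided by average power draw during the run.
    return score / avg_watts

# Hypothetical example: a 196.42 FPS encode averaging 520 W would yield:
print(round(perf_per_watt(196.42, 520.0), 3))  # 0.378 FPS per Watt
```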

187 Results Shown

CloverLeaf
Xcompact3d Incompact3d
Apache IoTDB:
  800 - 100 - 800 - 100:
    Average Latency
    point/sec
Xmrig
Timed Linux Kernel Compilation
Blender
Timed MrBayes Analysis
LAMMPS Molecular Dynamics Simulator
OpenRadioss
Apache IoTDB:
  800 - 100 - 500 - 400:
    Average Latency
    point/sec
easyWave
Timed LLVM Compilation
Quicksilver
OpenFOAM:
  drivaerFastback, Medium Mesh Size - Execution Time
  drivaerFastback, Medium Mesh Size - Mesh Time
Xmrig:
  KawPow - 1M
  CryptoNight-Femto UPX2 - 1M
Apache IoTDB:
  800 - 100 - 800 - 400:
    Average Latency
    point/sec
Timed LLVM Compilation
Timed Gem5 Compilation
Quicksilver
FFmpeg:
  libx265 - Platform
  libx265 - Video On Demand
DuckDB
Timed Node.js Compilation
FFmpeg
DuckDB
OSPRay Studio
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
Apache IoTDB:
  800 - 100 - 500 - 100:
    Average Latency
    point/sec
  500 - 100 - 800 - 400:
    Average Latency
    point/sec
OSPRay Studio
Apache IoTDB:
  500 - 100 - 800 - 100:
    Average Latency
    point/sec
Speedb
RocksDB
OpenSSL:
  AES-256-GCM
  AES-128-GCM
  ChaCha20-Poly1305
  ChaCha20
  SHA512
  SHA256
Apache IoTDB:
  500 - 100 - 200 - 400:
    Average Latency
    point/sec
Blender
OSPRay
ACES DGEMM
OSPRay
Redis 7.0.12 + memtier_benchmark
RocksDB
Apache Cassandra
Apache IoTDB:
  500 - 100 - 500 - 400:
    Average Latency
    point/sec
VVenC
Apache IoTDB:
  500 - 100 - 500 - 100:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark
Blender
Apache IoTDB:
  800 - 100 - 200 - 400:
    Average Latency
    point/sec
easyWave
OSPRay Studio
TensorFlow
Apache IoTDB:
  800 - 100 - 200 - 100:
    Average Latency
    point/sec
rav1e
OSPRay
OSPRay Studio
GPAW
Apache IoTDB:
  500 - 100 - 200 - 100:
    Average Latency
    point/sec
Xmrig:
  Monero - 1M
  CryptoNight-Heavy - 1M
GROMACS
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    ms/batch
    items/sec
FFmpeg
nginx
Apache HTTP Server
nginx
QuantLib
OSPRay Studio
CloverLeaf
OSPRay Studio
SPECFEM3D
Chaos Group V-RAY
SPECFEM3D
PyTorch
VVenC
rav1e
Quicksilver
Blender
IndigoBench
OSPRay
rav1e
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
Timed Linux Kernel Compilation
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
IndigoBench
Xmrig
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
Speedb
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
Speedb
RocksDB
Speedb
OpenSSL:
  RSA4096:
    verify/s
    sign/s
RocksDB
OSPRay:
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
rav1e
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
SVT-AV1
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Blender
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
uvg266
SVT-AV1
7-Zip Compression:
  Decompression Rating
  Compression Rating
SVT-AV1
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
NAMD
PyTorch
SPECFEM3D
Intel Open Image Denoise
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
x265
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Kripke
miniBUDE:
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
SPECFEM3D:
  Tomographic Model
  Mount St. Helens
Xcompact3d Incompact3d
Embree
Y-Cruncher
Timed FFmpeg Compilation
Embree
SVT-AV1
uvg266:
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
Y-Cruncher
CPU Power Consumption Monitor:
  Phoronix Test Suite System Monitoring
  Performance Per Watts