AMD EPYC Zen 1

AMD EPYC 7601 32-Core testing with a TYAN B8026T70AE24HR (V1.02.B10 BIOS) and llvmpipe on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401063-NE-AMDEPYCZE11&grs.

AMD EPYC Zen 1 - EPYC 7601 system details:
Processor: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads)
Motherboard: TYAN B8026T70AE24HR (V1.02.B10 BIOS)
Chipset: AMD 17h
Memory: 128GB
Disk: 1000GB INTEL SSDPE2KX010T8 + 280GB INTEL SSDPE21D280GA
Graphics: llvmpipe
Monitor: VE228
Network: 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10
Kernel: 6.6.9-060609-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7
OpenGL: 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

OpenBenchmarking.org notes:
- Transparent Huge Pages: madvise
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x800126e
- OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)
- Python 3.11.6
- gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
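The security and processor notes above are the strings the Linux kernel exposes through sysfs. As an illustrative aside (not part of the original result export), a minimal Python sketch along the following lines could collect the same mitigation and scaling-governor details on a comparable system; the sysfs paths are the standard Linux locations, and the script itself is an assumption, not taken from the test suite.

# Sketch: gather the CPU vulnerability mitigations and scaling governor that
# the notes above summarize. Standard Linux sysfs paths are assumed.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")
GOVERNOR = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")

def read(path: Path) -> str:
    try:
        return path.read_text().strip()
    except OSError:
        return "unavailable"

for entry in sorted(VULN_DIR.glob("*")):
    print(f"{entry.name}: {read(entry)}")      # e.g. "meltdown: Not affected"
print(f"scaling_governor: {read(GOVERNOR)}")   # e.g. "performance"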

Result overview - EPYC 7601: the consolidated per-test summary table from the interactive OpenBenchmarking.org page; the individual test results are detailed in the sections below.

Meta Performance Per Watts

Performance Per Watts

Meta Performance Per Watts - Performance Per Watts, More Is Better. EPYC 7601: 2564.49
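The figure above is a composite performance-per-Watt metric. The export does not show how it is derived, but a reasonable reading of the per-test arithmetic is a score divided by the mean power draw sampled during the run; the sketch below only illustrates that division, with invented wattage samples rather than measurements from this system.

# Hedged sketch: performance-per-Watt as score / mean sampled power.
# The wattage samples are hypothetical placeholders.
def performance_per_watt(score: float, watt_samples: list[float]) -> float:
    mean_watts = sum(watt_samples) / len(watt_samples)
    return score / mean_watts

# Invented example: a 6984.4 H/s result at an average draw of roughly 180 W
print(round(performance_per_watt(6984.4, [178.0, 181.5, 180.2]), 2))  # ~38.82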

Xmrig

Variant: GhostRider - Hash Count: 1M

Xmrig 6.21 - H/s, More Is Better. EPYC 7601: 1104.3 (SE +/- 0.39, N = 3). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
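Every result in this report is quoted as a mean together with "SE +/- x, N = y". Assuming SE is the standard error of the mean over the N recorded runs (the usual convention; the raw per-run values are not included in the export), it can be recomputed as in the sketch below, which uses invented run values.

# Hedged sketch: mean and standard error of the mean (SE) over N benchmark runs.
# The three hash rates are invented for illustration; only the aggregate
# "1104.3, SE +/- 0.39, N = 3" appears in the export.
from statistics import mean, stdev

def mean_and_se(runs: list[float]) -> tuple[float, float]:
    m = mean(runs)
    se = stdev(runs) / len(runs) ** 0.5   # sample standard deviation / sqrt(N)
    return m, se

runs = [1103.6, 1104.4, 1104.9]           # hypothetical per-run H/s values
m, se = mean_and_se(runs)
print(f"{m:.1f} H/s, SE +/- {se:.2f}, N = {len(runs)}")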

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 - H/s, More Is Better. EPYC 7601: 6984.4 (SE +/- 72.58, N = 12). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 - H/s, More Is Better. EPYC 7601: 6932.6 (SE +/- 57.05, N = 12). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 - H/s, More Is Better. EPYC 7601: 7092.5 (SE +/- 66.27, N = 3). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 - H/s, More Is Better. EPYC 7601: 10393.0 (SE +/- 18.58, N = 3). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 - H/s, More Is Better. EPYC 7601: 6890.3 (SE +/- 77.42, N = 12). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better. EPYC 7601: 21.40 (SE +/- 0.26, N = 3). MIN: 12 / MAX: 22.24

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better. EPYC 7601: 26.36 (SE +/- 0.24, N = 3). MIN: 12.3 / MAX: 27.98

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better. EPYC 7601: 9.90 (SE +/- 0.08, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 118.71 (SE +/- 0.34, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 134.52 (SE +/- 0.35, N = 3)
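Each DeepSparse model is reported twice: once as ms/batch latency and once as items/sec throughput. As a rough consistency check only (the export does not state the stream count, so the value used below is an assumption), throughput in an asynchronous multi-stream run is approximately streams x 1000 / latency_ms, and an assumed 16 concurrent streams makes the published pair line up.

# Hedged sketch: relating async multi-stream latency to throughput.
# ASSUMPTION: 16 concurrent streams and one item per batch; neither is stated
# in the export, they are chosen only because they make the published pair
# (118.71 ms/batch, 134.52 items/sec) roughly consistent.
def approx_throughput(latency_ms: float, streams: int = 16, items_per_batch: int = 1) -> float:
    return streams * items_per_batch * 1000.0 / latency_ms

print(round(approx_throughput(118.71), 2))   # ~134.78, close to the reported 134.52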

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 1200.04 (SE +/- 3.77, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 13.29 (SE +/- 0.06, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 102.96 (SE +/- 0.23, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 155.25 (SE +/- 0.33, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 109.74 (SE +/- 0.26, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 145.52 (SE +/- 0.39, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 1695.40 (SE +/- 7.60, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 9.2958 (SE +/- 0.0505, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 1444.59 (SE +/- 5.15, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 10.97 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 222.24 (SE +/- 0.44, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 71.78 (SE +/- 0.17, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 224.52 (SE +/- 1.65, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 71.06 (SE +/- 0.58, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 1448.31 (SE +/- 12.42, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 10.91 (SE +/- 0.10, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 111.05 (SE +/- 0.90, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 143.83 (SE +/- 1.14, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 171.85 (SE +/- 0.69, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 93.03 (SE +/- 0.41, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better. EPYC 7601: 61.41 (SE +/- 0.40, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better. EPYC 7601: 260.16 (SE +/- 1.67, N = 3)

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 53.35 (SE +/- 0.04, N = 3). MIN: 52.66 / MAX: 66.86. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 149.82 (SE +/- 0.12, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 169.37 (SE +/- 0.18, N = 3). MIN: 138.88 / MAX: 192.1. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 188.68 (SE +/- 0.20, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 9.09 (SE +/- 0.00, N = 3). MIN: 9.03 / MAX: 17.89. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 877.95 (SE +/- 0.47, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 192.59 (SE +/- 0.95, N = 3). MIN: 169.23 / MAX: 247.32. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 41.47 (SE +/- 0.20, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 16.98 (SE +/- 0.06, N = 3). MIN: 16.32 / MAX: 31.76. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 470.42 (SE +/- 1.69, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 26.93 (SE +/- 0.01, N = 3). MIN: 26.82 / MAX: 37.14. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 296.71 (SE +/- 0.08, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 84.70 (SE +/- 0.01, N = 3). MIN: 84.34 / MAX: 93.36. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 377.39 (SE +/- 0.04, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 182.44 (SE +/- 0.54, N = 3). MIN: 171.2 / MAX: 204.2. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 43.79 (SE +/- 0.14, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 2.44 (SE +/- 0.01, N = 3). MIN: 2.4 / MAX: 10.4. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 12968.70 (SE +/- 41.63, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better. EPYC 7601: 2094.38 (SE +/- 0.19, N = 3). MIN: 2092.56 / MAX: 2107.43. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better. EPYC 7601: 3.82 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.9 - Frames Per Second, More Is Better. EPYC 7601: 6.478 (SE +/- 0.027, N = 3). 1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.9 - Frames Per Second, More Is Better. EPYC 7601: 3.31 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

uvg266

Video Input: Bosphorus 4K - Video Preset: Ultra Fast

uvg266 0.4.1 - Frames Per Second, More Is Better. EPYC 7601: 27.27 (SE +/- 0.02, N = 3)

uvg266

Video Input: Bosphorus 4K - Video Preset: Super Fast

uvg266 0.4.1 - Frames Per Second, More Is Better. EPYC 7601: 23.36 (SE +/- 0.08, N = 3)

uvg266

Video Input: Bosphorus 4K - Video Preset: Very Fast

uvg266 0.4.1 - Frames Per Second, More Is Better. EPYC 7601: 22.20 (SE +/- 0.08, N = 3)

uvg266

Video Input: Bosphorus 4K - Video Preset: Medium

uvg266 0.4.1 - Frames Per Second, More Is Better. EPYC 7601: 8.66 (SE +/- 0.03, N = 3)

x265

Video Input: Bosphorus 4K

x265 3.4 - Frames Per Second, More Is Better. EPYC 7601: 14.57 (SE +/- 0.10, N = 15). 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

rav1e

Speed: 1

rav1e 0.7 - Frames Per Second, More Is Better. EPYC 7601: 0.566 (SE +/- 0.001, N = 3)

rav1e

Speed: 5

rav1e 0.7 - Frames Per Second, More Is Better. EPYC 7601: 2.245 (SE +/- 0.010, N = 3)

rav1e

Speed: 6

rav1e 0.7 - Frames Per Second, More Is Better. EPYC 7601: 2.948 (SE +/- 0.023, N = 3)

rav1e

Speed: 10

rav1e 0.7 - Frames Per Second, More Is Better. EPYC 7601: 6.968 (SE +/- 0.051, N = 3)

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better. EPYC 7601: 3.039 (SE +/- 0.012, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better. EPYC 7601: 26.51 (SE +/- 0.36, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better. EPYC 7601: 71.06 (SE +/- 0.94, N = 15). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.1 - FPS, More Is Better. EPYC 7601: 20.99 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.1 - FPS, More Is Better. EPYC 7601: 21.00 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.1 - FPS, More Is Better. EPYC 7601: 10.38 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.1 - FPS, More Is Better. EPYC 7601: 58.34 (SE +/- 0.07, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

IndigoBench

Acceleration: CPU - Scene: Bedroom

IndigoBench 4.4 - M samples/s, More Is Better. EPYC 7601: 4.006 (SE +/- 0.007, N = 3)

IndigoBench

Acceleration: CPU - Scene: Supercar

IndigoBench 4.4 - M samples/s, More Is Better. EPYC 7601: 8.607 (SE +/- 0.015, N = 3)

Chaos Group V-RAY

Mode: CPU

Chaos Group V-RAY 5.02 - vsamples, More Is Better. EPYC 7601: 20223 (SE +/- 36.50, N = 3)

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OSPRay 2.12 - Items Per Second, More Is Better. EPYC 7601: 98.24 (SE +/- 0.37, N = 3)

OSPRay

Benchmark: particle_volume/scivis/real_time

OSPRay 2.12 - Items Per Second, More Is Better. EPYC 7601: 5.23497 (SE +/- 0.00820, N = 3)

OSPRay

Benchmark: particle_volume/ao/real_time

OSPRay 2.12 - Items Per Second, More Is Better. EPYC 7601: 5.27837 (SE +/- 0.00806, N = 3)

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OSPRay 2.12 - Items Per Second, More Is Better. EPYC 7601: 4.15603 (SE +/- 0.00653, N = 3)

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OSPRay 2.12 - Items Per Second, More Is Better. EPYC 7601: 2.46713 (SE +/- 0.01201, N = 3)

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OSPRay 2.12 - Items Per Second, More Is Better. EPYC 7601: 2.59742 (SE +/- 0.00623, N = 3)

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better. EPYC 7601: 331849 (SE +/- 39.68, N = 3)

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better. EPYC 7601: 170914 (SE +/- 353.87, N = 3)

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better. EPYC 7601: 10070 (SE +/- 41.15, N = 3)

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better. EPYC 7601: 281119 (SE +/- 171.33, N = 3)

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better. EPYC 7601: 146133 (SE +/- 162.97, N = 3)

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better. EPYC 7601: 8477 (SE +/- 7.00, N = 3)

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better. EPYC 7601: 0.48 (SE +/- 0.00, N = 3)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better. EPYC 7601: 18.40 (SE +/- 0.02, N = 3). MIN: 18.17 / MAX: 18.71

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better. EPYC 7601: 21.82 (SE +/- 0.06, N = 3). MIN: 21.6 / MAX: 22.27

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 4.0 - Seconds, Fewer Is Better. EPYC 7601: 770.00 (SE +/- 0.16, N = 3)

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 4.0 - Seconds, Fewer Is Better. EPYC 7601: 247.45 (SE +/- 0.16, N = 3)

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 4.0 - Seconds, Fewer Is Better. EPYC 7601: 101.52 (SE +/- 0.40, N = 3)

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 4.0 - Seconds, Fewer Is Better. EPYC 7601: 193.36 (SE +/- 0.51, N = 3)

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 4.0 - Seconds, Fewer Is Better. EPYC 7601: 73.03 (SE +/- 0.24, N = 3)

Timed MrBayes Analysis

Primate Phylogeny Analysis

Timed MrBayes Analysis 3.2.7 - Seconds, Fewer Is Better. EPYC 7601: 193.37 (SE +/- 0.03, N = 3). 1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

GROMACS 2023 - Ns Per Day, More Is Better. EPYC 7601: 1.996 (SE +/- 0.021, N = 3). 1. (CXX) g++ options: -O3

NAMD

ATPase Simulation - 327,506 Atoms

NAMD 2.14 - days/ns, Fewer Is Better. EPYC 7601: 0.97403 (SE +/- 0.00114, N = 3)

Xcompact3d Incompact3d

Input: X3D-benchmarking input.i3d

Xcompact3d Incompact3d 2021-03-11 - Seconds, Fewer Is Better. EPYC 7601: 1279.59 (SE +/- 15.06, N = 9). 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Xcompact3d Incompact3d

Input: input.i3d 193 Cells Per Direction

Xcompact3d Incompact3d 2021-03-11 - Seconds, Fewer Is Better. EPYC 7601: 37.55 (SE +/- 0.14, N = 3). 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Kripke

Kripke 1.2.6 - Throughput FoM, More Is Better. EPYC 7601: 184352267 (SE +/- 2458810.58, N = 3). 1. (CXX) g++ options: -O3 -fopenmp -ldl

CloverLeaf

Input: clover_bm16

CloverLeaf 1.3 - Seconds, Fewer Is Better. EPYC 7601: 978.18 (SE +/- 2.36, N = 3). 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

CloverLeaf

Input: clover_bm64_short

CloverLeaf 1.3 - Seconds, Fewer Is Better. EPYC 7601: 105.80 (SE +/- 0.56, N = 3). 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

LAMMPS Molecular Dynamics Simulator

Model: 20k Atoms

LAMMPS Molecular Dynamics Simulator 23Jun2022 - ns/day, More Is Better. EPYC 7601: 13.75 (SE +/- 0.05, N = 3). 1. (CXX) g++ options: -O3 -lm -ldl

GPAW

Input: Carbon Nanotube

GPAW 23.6 - Seconds, Fewer Is Better. EPYC 7601: 143.42 (SE +/- 1.55, N = 5). 1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400

easyWave r34 - Seconds, Fewer Is Better. EPYC 7601: 355.13 (SE +/- 4.29, N = 9). 1. (CXX) g++ options: -O3 -fopenmp

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

easyWave r34 - Seconds, Fewer Is Better. EPYC 7601: 152.99 (SE +/- 2.54, N = 9). 1. (CXX) g++ options: -O3 -fopenmp

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - Billion Interactions/s, More Is Better. EPYC 7601: 14.24 (SE +/- 0.02, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - GFInst/s, More Is Better. EPYC 7601: 355.95 (SE +/- 0.42, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM1

miniBUDE 20210901 - Billion Interactions/s, More Is Better. EPYC 7601: 14.30 (SE +/- 0.02, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM1

miniBUDE 20210901 - GFInst/s, More Is Better. EPYC 7601: 357.46 (SE +/- 0.39, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

OpenRadioss

Model: Chrysler Neon 1M

OpenRadioss 2023.09.15 - Seconds, Fewer Is Better. EPYC 7601: 498.82 (SE +/- 1.47, N = 3)

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

OpenFOAM 10 - Seconds, Fewer Is Better. EPYC 7601: 1180.35. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

OpenFOAM 10 - Seconds, Fewer Is Better. EPYC 7601: 226.94. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenFOAM 10 - Seconds, Fewer Is Better. EPYC 7601: 123.22. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenFOAM 10 - Seconds, Fewer Is Better. EPYC 7601: 41.85. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

SPECFEM3D

Model: Tomographic Model

SPECFEM3D 4.0 - Seconds, Fewer Is Better. EPYC 7601: 30.28 (SE +/- 0.10, N = 3). 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D

Model: Mount St. Helens

SPECFEM3D 4.0 - Seconds, Fewer Is Better. EPYC 7601: 30.02 (SE +/- 0.22, N = 3). 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D

Model: Homogeneous Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better. EPYC 7601: 38.69 (SE +/- 0.55, N = 3). 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D

Model: Water-layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better. EPYC 7601: 74.74 (SE +/- 0.26, N = 3). 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D

Model: Layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better. EPYC 7601: 78.30 (SE +/- 0.22, N = 3). 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

QuantLib

Configuration: Multi-Threaded

QuantLib 1.32 - MFLOPS, More Is Better. EPYC 7601: 64508.5 (SE +/- 155.83, N = 3). 1. (CXX) g++ options: -O3 -march=native -fPIE -pie

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.1 - Seconds, Fewer Is Better. EPYC 7601: 39.87 (SE +/- 0.07, N = 3)

Timed Gem5 Compilation

Time To Compile

Timed Gem5 Compilation 23.0.1 - Seconds, Fewer Is Better. EPYC 7601: 394.22 (SE +/- 4.34, N = 9)

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 19.8.1 - Seconds, Fewer Is Better. EPYC 7601: 388.77 (SE +/- 0.70, N = 3)

Timed LLVM Compilation

Build System: Unix Makefiles

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better. EPYC 7601: 532.48 (SE +/- 1.64, N = 3)

Timed LLVM Compilation

Build System: Ninja

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better. EPYC 7601: 437.56 (SE +/- 2.68, N = 3)

Timed Linux Kernel Compilation

Build: allmodconfig

Timed Linux Kernel Compilation 6.1 - Seconds, Fewer Is Better. EPYC 7601: 755.32 (SE +/- 0.82, N = 3)

Timed Linux Kernel Compilation

Build: defconfig

Timed Linux Kernel Compilation 6.1 - Seconds, Fewer Is Better. EPYC 7601: 76.33 (SE +/- 0.90, N = 3)

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 - MIPS, More Is Better. EPYC 7601: 136023 (SE +/- 1266.83, N = 3). 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

7-Zip Compression 22.01 - MIPS, More Is Better. EPYC 7601: 126082 (SE +/- 597.93, N = 3). 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5

Redis 7.0.12 + memtier_benchmark 2.0 - Ops/sec, More Is Better. EPYC 7601: 1159703.10 (SE +/- 8378.54, N = 3). 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

Redis 7.0.12 + memtier_benchmark 2.0 - Ops/sec, More Is Better. EPYC 7601: 1247481.47 (SE +/- 10426.11, N = 3). 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

RocksDB

Test: Update Random

RocksDB 8.0 - Op/s, More Is Better. EPYC 7601: 290567 (SE +/- 255.89, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

RocksDB 8.0 - Op/s, More Is Better. EPYC 7601: 1511932 (SE +/- 3730.48, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

RocksDB 8.0 - Op/s, More Is Better. EPYC 7601: 3389791 (SE +/- 24594.27, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

RocksDB 8.0 - Op/s, More Is Better. EPYC 7601: 83065366 (SE +/- 381556.63, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Update Random

Speedb 2.7 - Op/s, More Is Better. EPYC 7601: 201410 (SE +/- 300.37, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

Speedb 2.7 - Op/s, More Is Better. EPYC 7601: 1434755 (SE +/- 2860.74, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read While Writing

Speedb 2.7 - Op/s, More Is Better. EPYC 7601: 6114827 (SE +/- 64578.15, N = 5). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

Speedb 2.7 - Op/s, More Is Better. EPYC 7601: 86210286 (SE +/- 884018.85, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache Cassandra

Test: Writes

Apache Cassandra 4.1.3 - Op/s, More Is Better. EPYC 7601: 153313 (SE +/- 878.30, N = 3)

DuckDB

Benchmark: TPC-H Parquet

DuckDB 0.9.1 - Seconds, Fewer Is Better. EPYC 7601: 250.44 (SE +/- 0.51, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

DuckDB

Benchmark: IMDB

DuckDB 0.9.1 - Seconds, Fewer Is Better. EPYC 7601: 199.32 (SE +/- 0.58, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 493.23 (SE +/- 3.92, N = 3). MAX: 28692.29

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 60202628 (SE +/- 476066.06, N = 3)

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 123.21 (SE +/- 0.87, N = 3). MAX: 24077.59

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 61643882 (SE +/- 444016.97, N = 3)

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 302.91 (SE +/- 7.59, N = 3). MAX: 28097.51

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 59498447 (SE +/- 789480.85, N = 3)

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 79.09 (SE +/- 0.22, N = 3). MAX: 23926.25

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 59180937 (SE +/- 203912.87, N = 3)

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 145.62 (SE +/- 1.72, N = 3). MAX: 28352.35

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 47189863 (SE +/- 137590.10, N = 3)

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 38.23 (SE +/- 0.06, N = 3). MAX: 23968.07

Apache IoTDB

Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 47189787 (SE +/- 185384.87, N = 3)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 401.41 (SE +/- 7.91, N = 3). MAX: 27863.73

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 61779270 (SE +/- 497201.21, N = 3)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 122.22 (SE +/- 0.52, N = 3). MAX: 11435.4

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 60408895 (SE +/- 301284.23, N = 3)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400

Apache IoTDB 1.2 - Average Latency, Fewer Is Better. EPYC 7601: 279.34 (SE +/- 5.14, N = 3). MAX: 29130.29

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400

Apache IoTDB 1.2 - point/sec, More Is Better. EPYC 7601: 55039355 (SE +/- 725985.88, N = 3)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100EPYC 760120406080100SE +/- 0.47, N = 381.81MAX: 12685.84

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100EPYC 760112M24M36M48M60MSE +/- 91826.95, N = 355722448

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400EPYC 76014080120160200SE +/- 2.77, N = 3162.64MAX: 27310.19

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400EPYC 76018M16M24M32M40MSE +/- 379243.48, N = 338853944

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100EPYC 76011122334455SE +/- 0.60, N = 346.48MAX: 13899.69

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100EPYC 76018M16M24M32M40MSE +/- 250473.81, N = 338556749
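
Every result on this page is reported as the mean of N benchmark runs together with its standard error (the "SE +/- x, N = y" annotation). As a quick illustration of how such a figure is formed, the sketch below computes the mean and standard error for three hypothetical per-run latencies; the run values are made up for illustration and are not the raw runs behind any number in this file.

    # Illustration only: the per-run values below are hypothetical.
    from math import sqrt
    from statistics import mean, stdev

    runs = [78.8, 79.1, 79.4]            # hypothetical per-run average latencies
    n = len(runs)
    avg = mean(runs)                     # the value reported on this page is the mean
    se = stdev(runs) / sqrt(n)           # standard error of the mean
    print(f"{avg:.2f} (SE +/- {se:.2f}, N = {n})")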

Apache HTTP Server 2.4.56 - Concurrent Requests: 1000
Requests Per Second, More Is Better - EPYC 7601: 88228.72 (SE +/- 364.41, N = 3)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

OpenSSL 3.1
Algorithm: ChaCha20-Poly1305 - byte/s, More Is Better - EPYC 7601: 30304478457 (SE +/- 33396892.07, N = 3)
Algorithm: ChaCha20 - byte/s, More Is Better - EPYC 7601: 47873581783 (SE +/- 19889425.87, N = 3)
Algorithm: AES-256-GCM - byte/s, More Is Better - EPYC 7601: 89867483183 (SE +/- 261314502.58, N = 3)
Algorithm: AES-128-GCM - byte/s, More Is Better - EPYC 7601: 97808988750 (SE +/- 326361512.63, N = 3)
Algorithm: SHA512 - byte/s, More Is Better - EPYC 7601: 8343963373 (SE +/- 14339737.74, N = 3)
Algorithm: SHA256 - byte/s, More Is Better - EPYC 7601: 27295403453 (SE +/- 33884690.29, N = 3)
Algorithm: RSA4096 - verify/s, More Is Better - EPYC 7601: 293636.7 (SE +/- 943.05, N = 3)
Algorithm: RSA4096 - sign/s, More Is Better - EPYC 7601: 4520.9 (SE +/- 20.79, N = 3)
1. (CC) gcc options: -pthread -m64 -O3 -ldl

nginx 1.23.2
Connections: 1000 - Requests Per Second, More Is Better - EPYC 7601: 98966.93 (SE +/- 581.17, N = 3)
Connections: 500 - Requests Per Second, More Is Better - EPYC 7601: 102247.14 (SE +/- 281.84, N = 3)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring
Watts - EPYC 7601: Min: 121.04 / Avg: 559.03 / Max: 719.64
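
The CPU power figures that accompany the remaining results come from the Phoronix Test Suite's sensor monitoring, which periodically samples the CPU power draw while each test runs and then reports the minimum, average, and maximum of those samples. A minimal sketch of that reduction is shown below; the sample values are illustrative assumptions, not readings from this run.

    # Illustration only: reduce periodically sampled CPU power readings (Watts)
    # to the Min / Avg / Max summary used on the rest of this page.
    samples = [121.0, 498.5, 640.2, 719.6, 715.3, 660.1]   # hypothetical samples
    minimum, maximum = min(samples), max(samples)
    average = sum(samples) / len(samples)
    print(f"Min: {minimum:.2f} / Avg: {average:.2f} / Max: {maximum:.2f}")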

Xmrig 6.21 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265.12 / Avg: 557.27 / Max: 583.07
Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M: H/s Per Watt, More Is Better - EPYC 7601: 1.982

Xmrig 6.21 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 266.42 / Avg: 643.61 / Max: 674.47
Xmrig 6.21 - Variant: KawPow - Hash Count: 1M: H/s Per Watt, More Is Better - EPYC 7601: 10.85

Xmrig 6.21 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 644.79 / Max: 676.59
Xmrig 6.21 - Variant: CryptoNight-Heavy - Hash Count: 1M: H/s Per Watt, More Is Better - EPYC 7601: 10.75

Xmrig 6.21 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 644 / Max: 678
Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M: H/s Per Watt, More Is Better - EPYC 7601: 11.01

Xmrig 6.21 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 266 / Avg: 660 / Max: 708
Xmrig 6.21 - Variant: Wownero - Hash Count: 1M: H/s Per Watt, More Is Better - EPYC 7601: 15.76

Xmrig 6.21 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 230.88 / Avg: 643.11 / Max: 685.6
Xmrig 6.21 - Variant: Monero - Hash Count: 1M: H/s Per Watt, More Is Better - EPYC 7601: 10.71
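
From this point on, most workloads are reported as a performance-per-Watt efficiency figure paired with the CPU power monitor for that run. The per-Watt values are consistent with the raw benchmark score divided by the average CPU power recorded during the run; the sketch below cross-checks two results that appear further down this page (the averages used here are the rounded wattages reported there, so the quotients only match the published per-Watt figures approximately).

    # Cross-check: per-Watt efficiency = raw result / average CPU power in Watts.
    def per_watt(result, avg_watts):
        return result / avg_watts

    # Values taken from results further down this page (averages rounded to whole Watts).
    print(round(per_watt(88228.72, 661), 2))   # Apache HTTP Server, 1000 requests: ~133.5 (page reports 133.43)
    print(round(per_watt(98966.93, 675), 2))   # nginx, 1000 connections: ~146.6 (page reports 146.51)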

PyTorch 2.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 652 / Max: 698
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50: batches/sec Per Watt, More Is Better - EPYC 7601: 0.033

PyTorch 2.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 259 / Avg: 617 / Max: 697
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50: batches/sec Per Watt, More Is Better - EPYC 7601: 0.043

TensorFlow 2.12 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 259.2 / Avg: 526.4 / Max: 584.0
TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50: images/sec Per Watt, More Is Better - EPYC 7601: 0.019

Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 564 / Max: 712
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 440 / Max: 713
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 635 / Max: 709
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 258 / Avg: 582 / Max: 700
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 563 / Max: 712
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 631 / Max: 709
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 258 / Avg: 630 / Max: 711
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 564 / Max: 712
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 268 / Avg: 634 / Max: 713
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 595 / Max: 710
Neural Magic DeepSparse 1.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 556 / Max: 711

OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 616 / Max: 670
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 672 / Max: 710
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 604 / Max: 643
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 665 / Max: 707
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 662 / Max: 707
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 616 / Max: 658
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 607 / Max: 648
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 669 / Max: 709
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 630 / Max: 668
OpenVINO 2023.2.dev - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 598 / Max: 661

VVenC 1.9 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265 / Avg: 523 / Max: 658
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.012

VVenC 1.9 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 259 / Avg: 512 / Max: 711
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.006

uvg266 0.4.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 586 / Max: 668
uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.047

uvg266 0.4.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 617 / Max: 690
uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Super Fast: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.038

uvg266 0.4.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 610 / Max: 681
uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.036

uvg266 0.4.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 659 / Max: 703
uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Medium: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.013

x265 3.4 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 259.0 / Avg: 515.5 / Max: 599.5
x265 3.4 - Video Input: Bosphorus 4K: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.028

rav1e 0.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260.9 / Avg: 423.5 / Max: 543.9
rav1e 0.7 - Speed: 1: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.001

rav1e 0.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265.8 / Avg: 372.0 / Max: 500.9
rav1e 0.7 - Speed: 5: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.006

rav1e 0.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 187.9 / Avg: 370.1 / Max: 417.3
rav1e 0.7 - Speed: 6: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.008

rav1e 0.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 253.6 / Avg: 349.3 / Max: 376.3
rav1e 0.7 - Speed: 10: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.02

SVT-AV1 1.8 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 255 / Avg: 496 / Max: 685
SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 4K: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.006

SVT-AV1 1.8 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 266 / Avg: 493 / Max: 682
SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 4K: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.054

SVT-AV1 1.8 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 397 / Max: 653
SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.179

SVT-AV1 1.8 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 249 / Avg: 384 / Max: 645
SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.178
SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K: Frames Per Second, More Is Better - EPYC 7601: 68.38 (SE +/- 3.17, N = 12)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

FFmpeg 6.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 201.1 / Avg: 349.5 / Max: 507.7
FFmpeg 6.1 - Encoder: libx265 - Scenario: Video On Demand: FPS Per Watt, More Is Better - EPYC 7601: 0.06

FFmpeg 6.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 179.7 / Avg: 349.1 / Max: 507.6
FFmpeg 6.1 - Encoder: libx265 - Scenario: Platform: FPS Per Watt, More Is Better - EPYC 7601: 0.06

FFmpeg 6.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 252.8 / Avg: 340.3 / Max: 554.4
FFmpeg 6.1 - Encoder: libx265 - Scenario: Upload: FPS Per Watt, More Is Better - EPYC 7601: 0.031

FFmpeg 6.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 256.6 / Avg: 331.4 / Max: 522.3
FFmpeg 6.1 - Encoder: libx265 - Scenario: Live: FPS Per Watt, More Is Better - EPYC 7601: 0.176

IndigoBench 4.4 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 663 / Max: 717
IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom: M samples/s Per Watt, More Is Better - EPYC 7601: 0.006

IndigoBench 4.4 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 667 / Max: 718
IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar: M samples/s Per Watt, More Is Better - EPYC 7601: 0.013

Chaos Group V-RAY 5.02 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 585 / Max: 717
Chaos Group V-RAY 5.02 - Mode: CPU: vsamples Per Watt, More Is Better - EPYC 7601: 34.60

OSPRay 2.12 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 608 / Max: 699
OSPRay 2.12 - Benchmark: particle_volume/pathtracer/real_time: Items Per Second Per Watt, More Is Better - EPYC 7601: 0.162

OSPRay 2.12 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 623 / Max: 703
OSPRay 2.12 - Benchmark: particle_volume/scivis/real_time: Items Per Second Per Watt, More Is Better - EPYC 7601: 0.008

OSPRay 2.12 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 680 / Max: 701
OSPRay 2.12 - Benchmark: particle_volume/ao/real_time: Items Per Second Per Watt, More Is Better - EPYC 7601: 0.008

OSPRay 2.12 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 632 / Max: 708
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time: Items Per Second Per Watt, More Is Better - EPYC 7601: 0.007

OSPRay 2.12 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 617 / Max: 710
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time: Items Per Second Per Watt, More Is Better - EPYC 7601: 0.004

OSPRay 2.12 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 623 / Max: 711
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time: Items Per Second Per Watt, More Is Better - EPYC 7601: 0.004

OSPRay Studio 0.13 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 669 / Max: 715
OSPRay Studio 0.13 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 639 / Max: 714
OSPRay Studio 0.13 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 581 / Max: 715
OSPRay Studio 0.13 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 664 / Max: 715
OSPRay Studio 0.13 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 629 / Max: 715
OSPRay Studio 0.13 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 578 / Max: 714

Intel Open Image Denoise 2.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265 / Avg: 659 / Max: 700
Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only: Images / Sec Per Watt, More Is Better - EPYC 7601: 0.001

Embree 4.3 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265 / Avg: 660 / Max: 711
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.028

Embree 4.3 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265 / Avg: 649 / Max: 712
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon: Frames Per Second Per Watt, More Is Better - EPYC 7601: 0.034

Blender 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263.43 / Avg: 701.62 / Max: 714.73
Blender 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 699 / Max: 714
Blender 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 682 / Max: 716
Blender 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 696 / Max: 713
Blender 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 675 / Max: 715

Timed MrBayes Analysis 3.2.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 267 / Avg: 687 / Max: 711

ACES DGEMM 1.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 615 / Max: 677
ACES DGEMM 1.0 - Sustained Floating-Point Rate: GFLOP/s Per Watt, More Is Better - EPYC 7601: 0.006
ACES DGEMM 1.0 - Sustained Floating-Point Rate: GFLOP/s, More Is Better - EPYC 7601: 3.762393 (SE +/- 0.067646, N = 15)
1. (CC) gcc options: -O3 -march=native -fopenmp

GROMACS 2023 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 668 / Max: 709
GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare: Ns Per Day Per Watt, More Is Better - EPYC 7601: 0.003

NAMD 2.14 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 663 / Max: 713

Xcompact3d Incompact3d 2021-03-11 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 241.38 / Avg: 584.29 / Max: 692.59
Xcompact3d Incompact3d 2021-03-11 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 577 / Max: 642

Kripke 1.2.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 266 / Avg: 642 / Max: 710
Kripke 1.2.6 - Throughput FoM Per Watt, More Is Better - EPYC 7601: 287085.20

CloverLeaf 1.3 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263.01 / Avg: 591.37 / Max: 637.79
CloverLeaf 1.3 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 267 / Avg: 587 / Max: 646

LAMMPS Molecular Dynamics Simulator 23Jun2022 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264.67 / Avg: 689.74 / Max: 702.54
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms: ns/day Per Watt, More Is Better - EPYC 7601: 0.02

GPAW 23.6 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 666 / Max: 715

easyWave r34 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 258.39 / Avg: 525.08 / Max: 599.48
easyWave r34 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 249.6 / Avg: 522.07 / Max: 595.64

miniBUDE 20210901 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 676 / Max: 704
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2: Billion Interactions/s Per Watt, More Is Better - EPYC 7601: 0.021

miniBUDE 20210901 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 266 / Avg: 659 / Max: 704
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1: Billion Interactions/s Per Watt, More Is Better - EPYC 7601: 0.022

OpenRadioss 2023.09.15 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 266.18 / Avg: 627.09 / Max: 692.76

OpenFOAM 10 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264.52 / Avg: 640.81 / Max: 706.65
OpenFOAM 10 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 657 / Max: 697

SPECFEM3D 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 640 / Max: 715
SPECFEM3D 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 636 / Max: 711
SPECFEM3D 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 650 / Max: 715
SPECFEM3D 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 266 / Avg: 651 / Max: 706
SPECFEM3D 4.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 668 / Max: 710

QuantLib 1.32 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 258 / Avg: 666 / Max: 715
QuantLib 1.32 - Configuration: Multi-Threaded: MFLOPS Per Watt, More Is Better - EPYC 7601: 96.81

Timed FFmpeg Compilation 6.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 255 / Avg: 523 / Max: 718

Timed Gem5 Compilation 23.0.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 141.91 / Avg: 529.47 / Max: 719.64

Timed Node.js Compilation 19.8.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 174 / Avg: 624 / Max: 717

Timed LLVM Compilation 16.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 157.96 / Avg: 592.01 / Max: 716.84
Timed LLVM Compilation 16.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 255.68 / Avg: 644.25 / Max: 716.84

Timed Linux Kernel Compilation 6.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 252.98 / Avg: 676.51 / Max: 718.81
Timed Linux Kernel Compilation 6.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 128 / Avg: 527 / Max: 718

7-Zip Compression 22.01 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 253 / Avg: 633 / Max: 714
7-Zip Compression 22.01 - Test: Decompression Rating: MIPS Per Watt, More Is Better - EPYC 7601: 215.04

Redis 7.0.12 + memtier_benchmark 2.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 272 / Avg: 643 / Max: 715
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5: Ops/sec Per Watt, More Is Better - EPYC 7601: 1802.37

Redis 7.0.12 + memtier_benchmark 2.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 645 / Max: 714
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10: Ops/sec Per Watt, More Is Better - EPYC 7601: 1933.83

RocksDB 8.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 579 / Max: 638
RocksDB 8.0 - Test: Update Random: Op/s Per Watt, More Is Better - EPYC 7601: 501.43

RocksDB 8.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 671 / Max: 712
RocksDB 8.0 - Test: Read Random Write Random: Op/s Per Watt, More Is Better - EPYC 7601: 2251.90

RocksDB 8.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 680 / Max: 712
RocksDB 8.0 - Test: Read While Writing: Op/s Per Watt, More Is Better - EPYC 7601: 4987.78

RocksDB 8.0 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 679 / Max: 712
RocksDB 8.0 - Test: Random Read: Op/s Per Watt, More Is Better - EPYC 7601: 122252.45

Speedb 2.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261.3 / Avg: 509.4 / Max: 579.9
Speedb 2.7 - Test: Update Random: Op/s Per Watt, More Is Better - EPYC 7601: 395.38

Speedb 2.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 269 / Avg: 635 / Max: 710
Speedb 2.7 - Test: Read Random Write Random: Op/s Per Watt, More Is Better - EPYC 7601: 2257.83

Speedb 2.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 677 / Max: 714
Speedb 2.7 - Test: Read While Writing: Op/s Per Watt, More Is Better - EPYC 7601: 9032.65

Speedb 2.7 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 680 / Max: 714
Speedb 2.7 - Test: Random Read: Op/s Per Watt, More Is Better - EPYC 7601: 126723.28

Apache Cassandra 4.1.3 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 255 / Avg: 596 / Max: 719
Apache Cassandra 4.1.3 - Test: Writes: Op/s Per Watt, More Is Better - EPYC 7601: 257.31

DuckDB 0.9.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 258.0 / Avg: 315.2 / Max: 588.8
DuckDB 0.9.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 204.9 / Avg: 399.4 / Max: 590.7

Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 254 / Avg: 527 / Max: 710
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 252 / Avg: 536 / Max: 711
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 253 / Avg: 494 / Max: 693
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 252 / Avg: 499 / Max: 690
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 253 / Avg: 424 / Max: 700
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 253 / Avg: 419 / Max: 706
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 252 / Avg: 497 / Max: 696
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 252 / Avg: 498 / Max: 698
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 253 / Avg: 454 / Max: 699
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 253 / Avg: 453 / Max: 696
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265 / Avg: 388 / Max: 688
Apache IoTDB 1.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 261 / Avg: 386 / Max: 705

Apache HTTP Server 2.4.56 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 262 / Avg: 661 / Max: 707
Apache HTTP Server 2.4.56 - Concurrent Requests: 1000: Requests Per Second Per Watt, More Is Better - EPYC 7601: 133.43

OpenSSL 3.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265 / Avg: 685 / Max: 713
OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305: byte/s Per Watt, More Is Better - EPYC 7601: 44232442.1

OpenSSL 3.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 684 / Max: 710
OpenSSL 3.1 - Algorithm: ChaCha20: byte/s Per Watt, More Is Better - EPYC 7601: 69981779.21

OpenSSL 3.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 263 / Avg: 698 / Max: 714
OpenSSL 3.1 - Algorithm: AES-256-GCM: byte/s Per Watt, More Is Better - EPYC 7601: 128779763.99

OpenSSL 3.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 699 / Max: 713
OpenSSL 3.1 - Algorithm: AES-128-GCM: byte/s Per Watt, More Is Better - EPYC 7601: 139960512.85

OpenSSL 3.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 264 / Avg: 698 / Max: 712
OpenSSL 3.1 - Algorithm: SHA512: byte/s Per Watt, More Is Better - EPYC 7601: 11949440.21

OpenSSL 3.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 265 / Avg: 697 / Max: 715
OpenSSL 3.1 - Algorithm: SHA256: byte/s Per Watt, More Is Better - EPYC 7601: 39173939.34

OpenSSL 3.1 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 259 / Avg: 679 / Max: 713
OpenSSL 3.1 - Algorithm: RSA4096: verify/s Per Watt, More Is Better - EPYC 7601: 432.33

nginx 1.23.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 260 / Avg: 675 / Max: 714
nginx 1.23.2 - Connections: 1000: Requests Per Second Per Watt, More Is Better - EPYC 7601: 146.51

nginx 1.23.2 - CPU Power Consumption Monitor: Watts, Fewer Is Better - EPYC 7601: Min: 241 / Avg: 675 / Max: 712
nginx 1.23.2 - Connections: 500: Requests Per Second Per Watt, More Is Better - EPYC 7601: 151.50


Phoronix Test Suite v10.8.5