eoy2024 - benchmarks for a future article. AMD EPYC 4124P, 4364P, 4464P, 4484PX, 4564P, and 4584PX testing with a Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) motherboard and ASPEED graphics on Ubuntu 24.04, via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2412201-NE-EOY20246700&sro&gru.
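For reference, the Phoronix Test Suite can take that OpenBenchmarking.org result ID and benchmark a local system against it for a side-by-side comparison. The snippet below is our own convenience wrapper, not part of the exported result; it assumes phoronix-test-suite is installed and on the PATH.

```python
# Minimal sketch: re-run this comparison locally against the public result.
# Assumes the Phoronix Test Suite is installed and available on PATH.
import subprocess

RESULT_ID = "2412201-NE-EOY20246700"  # OpenBenchmarking.org ID from the link above

# PTS fetches the result's test selection and offers to benchmark against it.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
```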
eoy2024 - System Details

Common to all runs: Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) motherboard, AMD Device 14d8 chipset, 2 x 32GB DRAM-4800MT/s Micron MTC20C2085S1EC48BA1 BC memory, 3201GB Micron_7450_MTFDKCC3T2TFS + 960GB SAMSUNG MZ1L2960HCJR-00A07 storage, ASPEED graphics, AMD Rembrandt Radeon HD Audio, VA2431 monitor, 2 x Intel I210 networking, Ubuntu 24.04, GNOME Shell 45.3 desktop, X Server 1.21.1.11, GCC 13.2.0 compiler, ext4 file-system, 1024x768 screen resolution.

Processors tested:
AMD EPYC 4564P 16-Core @ 5.88GHz (16 Cores / 32 Threads)
AMD EPYC 4584PX 16-Core @ 5.76GHz (16 Cores / 32 Threads)
AMD EPYC 4484PX 12-Core @ 5.66GHz (12 Cores / 24 Threads)
AMD EPYC 4464P 12-Core @ 5.48GHz (12 Cores / 24 Threads)
AMD EPYC 4364P 8-Core @ 5.57GHz (8 Cores / 16 Threads)
AMD EPYC 4124P 4-Core @ 5.17GHz (4 Cores / 8 Threads)

The eleven result identifiers (a, 4484PX, px, 4464p, 4464p epyc, 4364P, 4584PX, EPYC 4584PX amd, 45, 41, 41 b) denote individual runs, with several processors benchmarked more than once. Run "a" (the EPYC 4564P) used the 6.8.0-11-generic (x86_64) kernel; the remaining runs used 6.12.2-061202-generic (x86_64). One of the later runs reports the same two drives in the opposite order (960GB SAMSUNG MZ1L2960HCJR-00A07 + 3201GB Micron_7450_MTFDKCC3T2TFS).

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - All runs: Scaling Governor: amd-pstate-epp performance (Boost: Enabled, EPP: performance), CPU Microcode: 0xa601209. Run "a" reports the governor without the Boost flag (EPP: performance only).
Java Details - OpenJDK Runtime Environment (build 21.0.2+13-Ubuntu-2)
Python Details - Python 3.12.3
Security Details - All runs: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected, BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected. Run "a" (6.8 kernel) does not report the reg_file_data_sampling and BHI entries.
eoy2024 - Result Overview

The comparison covers the following test profiles across all eleven runs: OpenSSL, SVT-AV1, x265, simdjson, ACES DGEMM, Rustls, ONNX Runtime, OSPRay, BYTE Unix Benchmark, 7-Zip Compression, Etcpak, ASTC Encoder, Stockfish, GROMACS, NAMD, Apache Cassandra, NumPy, QuantLib, Llama.cpp, Llamafile, OpenVINO GenAI, LiteRT, PyPerformance, Renaissance, oneDNN, FinanceBench, CP2K, RELION, Build2, Primesieve, Y-Cruncher, POV-Ray, Timed Eigen Compilation, Gcrypt, Apache CouchDB, Blender, Whisper.cpp, Whisperfile, and XNNPACK. Detailed per-test results follow below; the complete result matrix for every profile is available from the OpenBenchmarking.org page linked above.
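Because every table below lists its values in the same fixed run order, normalizing any metric against a chosen baseline is straightforward. The sketch below is our own illustration (the RUNS list and helper are not part of the export); the sample values are copied from the OpenSSL ChaCha20 and 7-Zip Compression tables that follow.

```python
# Minimal sketch: normalize per-run results against the "41" (EPYC 4124P) run.
# Run identifiers follow the fixed order used by every table on this page.
RUNS = ["41", "41 b", "4364P", "4464p", "4464p epyc", "4484PX",
        "45", "4584PX", "EPYC 4584PX amd", "a", "px"]

# Values copied from the tables below (higher is better for both metrics).
RESULTS = {
    "OpenSSL ChaCha20 (byte/s)": [33280835860, 33289542860, 68048329660,
                                  93956743710, 93907572870, 97105235690,
                                  128278766110, 127793294810, 128265416980,
                                  130588495050, 97019897450],
    "7-Zip Compression (MIPS)":  [54079, 53425, 101872, 135033, 133804,
                                  141263, 169373, 170361, 169750, 163859, 142213],
}

def normalized(values, baseline_run="41"):
    """Return {run: value / baseline_value} for a higher-is-better metric."""
    base = values[RUNS.index(baseline_run)]
    return {run: round(v / base, 2) for run, v in zip(RUNS, values)}

if __name__ == "__main__":
    for metric, values in RESULTS.items():
        print(metric, normalized(values))
```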
OpenSSL - throughput in byte/s, more is better. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
ChaCha20: 33280835860 33289542860 68048329660 93956743710 93907572870 97105235690 128278766110 127793294810 128265416980 130588495050 97019897450
AES-128-GCM: 25376123290 25582374640 53755663700 72709954220 72667527300 76496336760 100742301290 100450008810 100666591640 104784522170 76184405610
AES-256-GCM: 23845653160 23760535010 49930945600 67431535960 67396254380 71160291870 93614594180 93168087590 93555991350 97172751700 70902656480
ChaCha20-Poly1305: 23701092760 23701138360 48278109930 66548747470 66497739430 68816544020 91025092880 90711447140 90927418030 92393529340 68678955550
1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8
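As a rough sanity check on scaling, the ChaCha20 figures above can be divided by each processor's physical core count from the system table. This is our own back-of-the-envelope calculation, pairing run identifiers with CPUs by name; it is not something reported by the benchmark itself.

```python
# Back-of-the-envelope: ChaCha20 throughput per physical core.
# Throughput values come from the OpenSSL table above (one run per CPU);
# core counts come from the system table.
chacha20_by_cpu = {
    "EPYC 4124P":  (33280835860, 4),    # run "41"
    "EPYC 4364P":  (68048329660, 8),    # run "4364P"
    "EPYC 4464P":  (93956743710, 12),   # run "4464p"
    "EPYC 4484PX": (97105235690, 12),   # run "4484PX"
    "EPYC 4584PX": (127793294810, 16),  # run "4584PX"
    "EPYC 4564P":  (130588495050, 16),  # run "a"
}

for cpu, (bytes_per_sec, cores) in chacha20_by_cpu.items():
    print(f"{cpu}: {bytes_per_sec / cores / 1e9:.2f} GB/s per core")
```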
SVT-AV1 2.3 - Frames Per Second, more is better. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
Preset 3 - Bosphorus 4K: 2.808 2.793 5.373 7.252 7.232 7.684 9.345 9.329 9.400 9.590 7.646
Preset 5 - Bosphorus 4K: 10.87 10.81 19.85 27.36 27.56 29.09 34.66 34.16 34.41 34.54 28.82
Preset 8 - Bosphorus 4K: 35.85 35.64 65.19 82.53 82.16 85.20 101.52 101.33 101.54 102.01 85.00
Preset 13 - Bosphorus 4K: 85.95 85.60 156.60 192.17 190.34 198.11 217.96 217.91 218.53 212.52 194.02
Preset 3 - Bosphorus 1080p: 10.58 10.55 18.74 24.85 24.81 25.45 29.18 29.07 29.21 29.57 25.45
Preset 5 - Bosphorus 1080p: 37.17 37.07 65.32 86.96 86.39 88.42 98.91 98.79 99.45 101.97 88.27
Preset 8 - Bosphorus 1080p: 123.79 122.92 215.21 282.01 281.89 287.05 325.95 320.18 327.26 339.02 286.96
Preset 13 - Bosphorus 1080p: 380.63 377.79 652.97 752.16 748.15 776.12 835.45 842.34 845.58 842.56 769.82
Preset 3 - Beauty 4K 10-bit: 0.443 0.441 0.872 1.129 1.132 1.188 1.427 1.426 1.433 1.422 1.184
Preset 5 - Beauty 4K 10-bit: 2.178 2.172 3.802 5.445 5.422 5.602 6.322 6.327 6.391 6.504 5.551
Preset 8 - Beauty 4K 10-bit: 4.334 4.332 6.458 10.914 10.885 10.967 12.112 12.086 11.897 12.468 10.855
Preset 13 - Beauty 4K 10-bit: 8.007 7.984 9.188 18.002 17.944 17.406 17.836 17.845 17.863 18.588 17.355
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
x265 - Frames Per Second, more is better. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
Bosphorus 4K: 10.91 10.89 18.85 26.09 25.85 27.16 32.23 32.59 32.22 32.57 26.94
Bosphorus 1080p: 56.61 56.03 88.71 101.23 101.33 101.37 113.31 113.37 113.16 114.45 101.25
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6
simdjson 3.10 - GB/s, more is better. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
Kostya: 5.70 5.68 6.06 5.78 6.02 6.11 5.54 6.07 6.04 5.97 5.45
TopTweet: 10.02 9.97 10.86 10.66 10.69 10.82 9.40 11.25 9.19 10.46 10.51
LargeRandom: 1.70 1.71 1.81 1.81 1.73 1.84 1.83 1.82 1.83 1.83 1.84
PartialTweets: 9.32 9.30 10.04 9.80 9.63 10.10 10.32 9.85 9.98 9.76 8.35
DistinctUserID: 9.92 10.16 10.85 10.54 10.28 10.76 9.15 11.05 11.14 10.46 8.97
1. (CXX) g++ options: -O3 -lrt
ACES DGEMM 1.0 - Sustained Floating-Point Rate in GFLOP/s, more is better. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
Sustained Floating-Point Rate: 259.15 263.20 629.17 818.68 812.15 842.73 1092.09 1091.01 1093.10 1141.19 842.01
1. (CC) gcc options: -ffast-math -mavx2 -O3 -fopenmp -lopenblas
Rustls 0.23.17 - handshakes/s, more is better. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
handshake - TLS13_CHACHA20_POLY1305_SHA256: 21499.73 21444.09 43571.86 56047.38 56176.90 57716.64 72140.75 72793.26 72508.13 76454.45 57688.08
handshake - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256: 22255.23 22201.03 45095.35 57954.19 58056.01 59308.75 74672.57 74468.49 74683.74 80462.60 59206.34
handshake-resume - TLS13_CHACHA20_POLY1305_SHA256: 135362.96 135016.23 272035.68 328877.92 326342.42 333882.92 382067.30 379874.32 382691.63 388077.69 333574.30
handshake-ticket - TLS13_CHACHA20_POLY1305_SHA256: 132265.65 132025.36 265949.20 336267.09 335411.96 344296.24 395112.39 393408.85 394570.00 404263.45 342775.29
handshake - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384: 106833.78 105324.92 225379.66 301017.23 301217.61 306153.20 435926.40 435194.20 436101.34 423535.68 304060.28
handshake-resume - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256: 2105346.65 2103587.50 2874011.56 3061522.26 3082657.50 3035330.21 3379343.72 3397871.03 3431395.73 3563852.57 3038723.48
handshake-ticket - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256: 1656876.39 1571309.62 2353677.96 2283083.54 2304145.79 2282729.64 2559245.20 2528614.52 2558621.38 2620332.00 2292879.44
handshake-resume - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384: 732992.32 690282.02 1298373.66 1543373.12 1532788.04 1586292.42 1862938.00 1863395.07 1874050.70 1820810.21 1572010.68
handshake-ticket - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384: 596141.19 602002.02 1187839.34 1297646.72 1300937.30 1329363.10 1569479.70 1559859.89 1569308.65 1553632.14 1340712.85
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs
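The handshake-resume and handshake-ticket suites measure abbreviated handshakes, so dividing them by the corresponding full-handshake rate gives each run's session-resumption speedup. A quick sketch using the run "a" figures from the table above; this is our own arithmetic, not part of the export.

```python
# Session-resumption speedup for run "a", from the Rustls table above.
full_handshake = 76454.45   # handshake, TLS13_CHACHA20_POLY1305_SHA256
resumed        = 388077.69  # handshake-resume, same suite
ticket         = 404263.45  # handshake-ticket, same suite

print(f"resume speedup: {resumed / full_handshake:.1f}x")  # ~5.1x
print(f"ticket speedup: {ticket / full_handshake:.1f}x")   # ~5.3x
```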
ONNX Runtime 1.19 - Inferences Per Second, more is better. Device: CPU, Executor: Standard. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
GPT-2: 124.17 123.27 125.52 136.30 135.72 159.71 166.17 166.05 165.77 134.60 157.89
yolov4: 7.16015 7.15712 13.23720 10.49820 10.48650 10.73380 11.31050 11.35510 11.31660 11.05520 10.71270
ZFNet-512: 73.88 73.00 97.23 101.41 101.62 110.94 113.17 112.67 113.29 102.33 110.89
T5 Encoder: 141.96 140.03 141.57 168.04 166.51 208.17 203.93 203.95 203.49 156.45 206.09
bertsquad-12: 10.52 10.62 19.81 14.69 14.69 14.51 14.88 15.01 14.94 15.59 14.57
CaffeNet 12-int8: 495.84 495.08 762.25 724.53 748.66 941.40 1013.19 1005.50 999.97 636.32 937.78
fcn-resnet101-11: 1.07903 1.07829 2.05211 2.58688 2.59025 2.81093 3.40178 3.38658 3.39064 3.21670 2.79638
ArcFace ResNet-100: 23.84 23.91 39.08 36.12 36.40 37.38 43.49 43.84 44.05 42.45 37.10
ResNet50 v1-12-int8: 207.61 202.12 376.53 360.35 361.30 356.41 380.80 383.13 382.77 390.60 356.19
super-resolution-10: 73.27 73.84 157.08 134.94 134.52 125.17 122.22 122.17 122.21 141.12 125.08
ResNet101_DUC_HDC-12: 0.464723 0.464485 0.916317 1.145810 1.156320 1.176270 1.549880 1.536190 1.543650 1.541960 1.170500
Faster R-CNN R-50-FPN-int8: 36.11 38.53 44.80 46.35 46.09 40.09 47.67 48.26 47.89 47.07 43.36
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
OSPRay 3.2 - Items Per Second, more is better. Values listed in run order.
Run order: 41, 41 b, 4364P, 4464p, 4464p epyc, 4484PX, 45, 4584PX, EPYC 4584PX amd, a, px
particle_volume/ao/real_time: 2.30223 2.26612 4.63965 6.28807 6.29211 6.52776 8.54878 8.51573 8.57146 9.00917 6.52206
particle_volume/scivis/real_time: 2.29544 2.27718 4.62423 6.29615 6.27188 6.44913 8.54395 8.49097 8.51233 8.98486 6.52304
particle_volume/pathtracer/real_time: 112.72 111.48 177.12 199.51 199.55 199.02 220.73 218.18 220.80 236.25 197.20
gravity_spheres_volume/dim_512/ao/real_time: 1.84407 1.90481 3.91749 5.47043 5.47057 5.63122 7.46399 7.38760 7.44575 7.63944 5.71084
gravity_spheres_volume/dim_512/scivis/real_time: 1.90804 1.89469 3.83152 5.35756 5.36189 5.54888 7.31594 7.32353 7.32284 7.58789 5.61470
gravity_spheres_volume/dim_512/pathtracer/real_time: 2.23851 2.22318 4.52318 6.17399 6.17624 6.41198 8.49646 8.43752 8.47727 8.82093 6.40740
BYTE Unix Benchmark Computational Test: Pipe OpenBenchmarking.org LPS, More Is Better BYTE Unix Benchmark 5.1.3-git Computational Test: Pipe 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 10M 20M 30M 40M 50M 11669425.7 11518844.2 23448988.5 32471450.7 32439611.6 33443359.2 44309550.9 44169926.7 44323920.0 48806257.1 33381363.1 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm
BYTE Unix Benchmark Computational Test: Dhrystone 2 OpenBenchmarking.org LPS, More Is Better BYTE Unix Benchmark 5.1.3-git Computational Test: Dhrystone 2 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 400M 800M 1200M 1600M 2000M 478992410.7 475486444.1 966576789.4 1272938788.0 1263644317.8 1346521770.3 1766450267.1 1765175324.6 1771958344.7 1866536062.7 1340340196.6 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm
BYTE Unix Benchmark Computational Test: System Call OpenBenchmarking.org LPS, More Is Better BYTE Unix Benchmark 5.1.3-git Computational Test: System Call 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 11M 22M 33M 44M 55M 10707715.0 10629944.8 21559197.0 29969810.9 29923094.4 30761218.9 40711986.9 40531084.0 40718581.6 49140426.6 30701622.8 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm
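The BYTE "LPS" unit in the rows above is simply loops per second: each test repeats a small fixed body (integer and string work for Dhrystone, a pipe read/write round-trip, a handful of cheap system calls) and counts how many iterations complete per second. As a toy illustration of the unit only, assuming nothing beyond the Python standard library and nowhere near as fast as the compiled C harness, since interpreter overhead dominates:

import os
import time

# Count how many iterations of a cheap system call complete per second,
# loosely mirroring the idea behind BYTE's System Call loop.
n = 1_000_000
start = time.perf_counter()
for _ in range(n):
    os.getpid()
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} loops per second")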
7-Zip Compression Test: Compression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression Test: Compression Rating 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 40K 80K 120K 160K 200K 54079 53425 101872 135033 133804 141263 169373 170361 169750 163859 142213 1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20
7-Zip Compression Test: Decompression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression Test: Decompression Rating 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 40K 80K 120K 160K 200K 43781 43210 88229 118569 118587 125698 166180 164581 165934 165916 125605 1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20
Etcpak Benchmark: Multi-Threaded - Configuration: ETC2 OpenBenchmarking.org Mpx/s, More Is Better Etcpak 2.0 Benchmark: Multi-Threaded - Configuration: ETC2 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 120 240 360 480 600 155.44 154.25 304.24 388.40 388.13 410.73 556.87 556.04 556.41 577.82 409.88 1. (CXX) g++ options: -flto -pthread
ASTC Encoder Preset: Fast OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 5.0 Preset: Fast 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 90 180 270 360 450 100.37 99.40 205.01 263.55 263.37 278.24 369.03 368.55 368.57 396.65 277.30 1. (CXX) g++ options: -O3 -flto -pthread
ASTC Encoder Preset: Medium OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 5.0 Preset: Medium 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 30 60 90 120 150 39.32 39.04 80.37 103.81 103.67 109.03 144.61 144.48 144.63 156.22 108.86 1. (CXX) g++ options: -O3 -flto -pthread
ASTC Encoder Preset: Thorough OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 5.0 Preset: Thorough 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 5 10 15 20 25 5.0655 5.0448 10.3392 13.5230 13.5139 14.1700 18.7661 18.7075 18.7499 20.3025 14.1464 1. (CXX) g++ options: -O3 -flto -pthread
ASTC Encoder Preset: Exhaustive OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 5.0 Preset: Exhaustive 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.379 0.758 1.137 1.516 1.895 0.4244 0.4223 0.8612 1.1365 1.1368 1.1887 1.5767 1.5697 1.5743 1.6844 1.1862 1. (CXX) g++ options: -O3 -flto -pthread
ASTC Encoder Preset: Very Thorough OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 5.0 Preset: Very Thorough 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.6167 1.2334 1.8501 2.4668 3.0835 0.6923 0.6906 1.4052 1.8546 1.8559 1.9412 2.5696 2.5634 2.5694 2.7410 1.9391 1. (CXX) g++ options: -O3 -flto -pthread
BYTE Unix Benchmark Computational Test: Whetstone Double OpenBenchmarking.org MWIPS, More Is Better BYTE Unix Benchmark 5.1.3-git Computational Test: Whetstone Double 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 70K 140K 210K 280K 350K 84017.4 83540.7 174760.6 242679.0 242340.5 244075.3 331193.7 329601.4 331202.0 343491.9 244131.0 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm
Stockfish Chess Benchmark OpenBenchmarking.org Nodes Per Second, More Is Better Stockfish 17 Chess Benchmark 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 13M 26M 39M 52M 65M 12630264 14943737 31514937 43158853 42042951 45267546 55762702 57247763 58676636 54752796 42973396 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver
Stockfish Chess Benchmark OpenBenchmarking.org Nodes Per Second, More Is Better Stockfish 16 Chess Benchmark 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 10M 20M 30M 40M 50M 12024682 11607502 24017024 30315972 29940074 33702298 48797079 44446458 45724794 46507038 33871595 1. Stockfish 16 by the Stockfish developers (see AUTHORS file)
GROMACS Input: water_GMX50_bare OpenBenchmarking.org Ns Per Day, More Is Better GROMACS Input: water_GMX50_bare 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.4088 0.8176 1.2264 1.6352 2.044 0.664 0.661 1.104 1.398 1.393 1.577 1.811 1.817 1.808 1.692 1.575 1. GROMACS version: 2023.3-Ubuntu_2023.3_1ubuntu3
NAMD Input: ATPase with 327,506 Atoms OpenBenchmarking.org ns/day, More Is Better NAMD 3.0 Input: ATPase with 327,506 Atoms 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.6423 1.2846 1.9269 2.5692 3.2115 0.88895 0.89241 1.74038 2.21645 2.20728 2.38124 2.83798 2.83913 2.85485 2.79632 2.35379
NAMD Input: STMV with 1,066,628 Atoms OpenBenchmarking.org ns/day, More Is Better NAMD 3.0 Input: STMV with 1,066,628 Atoms 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.1715 0.343 0.5145 0.686 0.8575 0.27352 0.27320 0.49771 0.61452 0.61093 0.65119 0.76227 0.75987 0.75645 0.75656 0.65448
Apache Cassandra Test: Writes OpenBenchmarking.org Op/s, More Is Better Apache Cassandra 5.0 Test: Writes 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 60K 120K 180K 240K 300K 66243 66519 139110 173633 174025 174960 236826 235893 238227 271333 173946
Numpy Benchmark OpenBenchmarking.org Score, More Is Better Numpy Benchmark 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 200 400 600 800 1000 705.99 700.76 813.31 781.66 783.43 745.59 833.64 806.83 754.41 775.75 831.42
QuantLib Size: S OpenBenchmarking.org tasks/s, More Is Better QuantLib 1.35-dev Size: S 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4 8 12 16 20 4.43021 4.43091 8.02696 10.28730 10.25310 11.86470 14.04310 14.01710 14.04720 12.74760 11.83900 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
QuantLib Size: XXS OpenBenchmarking.org tasks/s, More Is Better QuantLib 1.35-dev Size: XXS 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4 8 12 16 20 4.47643 4.46411 8.19153 10.57880 10.56160 12.11690 14.62260 14.62400 14.69730 13.43200 12.10570 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
Llama.cpp Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2 4 6 8 10 6.55 6.64 7.09 7.10 7.11 7.00 7.00 7.00 6.88 7.12 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 32.97 60.87 64.96 64.43 69.11 91.71 92.89 93.91 70.76 67.95 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 31.91 59.27 63.71 63.50 66.57 90.96 90.01 91.31 70.85 66.35 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 30.93 55.95 60.68 60.80 63.80 86.68 86.17 86.47 63.09 63.79 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
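Put in wall-clock terms using only the figures above: at roughly 86-87 tokens per second, the fastest configurations in this row ingest the full 2048-token prompt in about 2048 / 86.68 ≈ 24 seconds, while the slowest result of 30.93 tokens per second works out to about 66 seconds for the same prompt.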
Llama.cpp Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2 4 6 8 10 6.90 7.02 7.42 7.44 7.41 7.37 7.32 7.37 7.24 7.44 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 32.94 61.09 65.24 64.68 68.20 94.08 93.24 93.49 68.40 68.81 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 32.59 59.54 64.15 62.84 66.85 91.17 91.12 90.48 69.26 66.52 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 31.04 56.05 60.47 60.30 63.61 86.91 86.56 86.44 62.97 63.41 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 12 24 36 48 60 51.50 51.86 50.50 50.43 52.30 50.33 49.90 49.76 47.72 52.37 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 70 140 210 280 350 122.80 204.06 235.94 242.10 243.14 337.06 316.80 326.16 327.30 232.86 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 80 160 240 320 400 115.67 185.97 235.01 227.71 232.26 291.02 290.10 324.04 355.09 244.77 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llama.cpp Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048 OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b4154 Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 60 120 180 240 300 105.10 171.68 213.02 212.30 222.75 273.67 289.41 289.04 279.04 208.99 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas
Llamafile Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 5 10 15 20 25 19.17 18.83 19.47 19.53 19.49 19.16 19.27 19.24 19.03 19.50
Llamafile Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 5 10 15 20 25 20.08 19.60 20.48 20.50 20.39 20.29 20.23 20.26 20.13 20.51
Llamafile Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 900 1800 2700 3600 4500 4096 4096 4096 4096 4096 4096 4096 4096 4096 4096
Llamafile Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2K 4K 6K 8K 10K 8192 8192 8192 8192 8192 8192 8192 8192 8192 8192
Llamafile Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 6 12 18 24 30 23.80 24.27 25.77 25.78 25.86 25.38 25.32 25.30 24.59 25.94
Llamafile Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4K 8K 12K 16K 20K 16384 16384 16384 16384 16384 16384 16384 16384 16384 16384
Llamafile Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 7K 14K 21K 28K 35K 32768 32768 32768 32768 32768 32768 32768 32768 32768 32768
Llamafile Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 7 14 21 28 35 25.09 25.52 27.36 27.35 27.59 27.21 27.20 27.19 26.28 27.80
Llamafile Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 3 6 9 12 15 9.99 9.92 10.48 10.45 10.45 10.34 10.30 10.37 10.22 10.45
Llamafile Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 900 1800 2700 3600 4500 4096 4096 4096 4096 4096 4096 4096 4096 4096 4096
Llamafile Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2K 4K 6K 8K 10K 8192 8192 8192 8192 8192 8192 8192 8192 8192 8192
Llamafile Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 3 6 9 12 15 10.46 10.32 10.93 10.91 10.91 10.82 10.81 10.85 10.47 10.93
Llamafile Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.4185 0.837 1.2555 1.674 2.0925 1.73 1.76 1.83 1.83 1.83 1.86 1.82 1.83 1.78 1.84
Llamafile Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4K 8K 12K 16K 20K 16384 16384 16384 16384 16384 16384 16384 16384 16384 16384
Llamafile Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 7K 14K 21K 28K 35K 32768 32768 32768 32768 32768 32768 32768 32768 32768 32768
Llamafile Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.4635 0.927 1.3905 1.854 2.3175 1.93 1.93 2.05 2.06 2.05 2.03 2.03 2.03 1.99 2.05
Llamafile Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 900 1800 2700 3600 4500 4096 4096 4096 4096 4096 4096 4096 4096 4096 4096
Llamafile Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2K 4K 6K 8K 10K 8192 8192 8192 8192 8192 8192 8192 8192 8192 8192
Llamafile Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4K 8K 12K 16K 20K 16384 16384 16384 16384 16384 16384 16384 16384 16384 16384
Llamafile Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 7K 14K 21K 28K 35K 32768 32768 32768 32768 32768 32768 32768 32768 32768 32768
Llamafile Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 300 600 900 1200 1500 1536 1536 1536 1536 1536 1536 1536 1536 1536 1536
Llamafile Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 700 1400 2100 2800 3500 3072 3072 3072 3072 3072 3072 3072 3072 3072 3072
Llamafile Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 1300 2600 3900 5200 6500 6144 6144 6144 6144 6144 6144 6144 6144 6144 6144
Llamafile Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048 OpenBenchmarking.org Tokens Per Second, More Is Better Llamafile 0.8.16 Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 3K 6K 9K 12K 15K 12288 12288 12288 12288 12288 12288 12288 12288 12288 12288
OpenVINO GenAI Model: Gemma-7b-int4-ov - Device: CPU OpenBenchmarking.org tokens/s, More Is Better OpenVINO GenAI 2024.5 Model: Gemma-7b-int4-ov - Device: CPU 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 3 6 9 12 15 8.05 9.52 10.17 10.20 10.23 10.17 10.11 10.14 9.83 10.24
OpenVINO GenAI Model: Falcon-7b-instruct-int4-ov - Device: CPU OpenBenchmarking.org tokens/s, More Is Better OpenVINO GenAI 2024.5 Model: Falcon-7b-instruct-int4-ov - Device: CPU 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 3 6 9 12 15 11.38 12.59 13.34 13.32 13.40 13.36 13.32 13.24 12.93 13.41
OpenVINO GenAI Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU OpenBenchmarking.org tokens/s, More Is Better OpenVINO GenAI 2024.5 Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 5 10 15 20 25 16.65 19.00 19.82 19.82 20.28 19.71 19.67 19.73 19.28 20.29
ONNX Runtime Model: GPT-2 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: GPT-2 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2 4 6 8 10 8.04835 8.10767 7.96069 7.33331 7.36544 6.25815 6.01631 6.02069 6.03024 7.42776 6.33034 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: yolov4 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: yolov4 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 30 60 90 120 150 139.66 139.72 75.54 95.25 95.36 93.16 88.41 88.06 88.36 90.45 93.34 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: ZFNet-512 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: ZFNet-512 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4 8 12 16 20 13.53230 13.69550 10.28350 9.85896 9.83893 9.01322 8.83524 8.87444 8.82581 9.76985 9.01687 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: T5 Encoder - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: T5 Encoder - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2 4 6 8 10 7.04122 7.13857 7.06075 5.95035 6.00470 4.80287 4.90313 4.90285 4.91381 6.39112 4.85142 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: bertsquad-12 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: bertsquad-12 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 95.07 94.12 50.47 68.08 68.09 68.91 67.18 66.62 66.92 64.14 68.61 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.4543 0.9086 1.3629 1.8172 2.2715 2.016090 2.019130 1.311210 1.379700 1.335160 1.061880 0.986643 0.994193 0.999701 1.570840 1.066000 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: fcn-resnet101-11 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: fcn-resnet101-11 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 200 400 600 800 1000 926.76 927.39 487.30 386.56 386.06 355.75 293.96 295.28 294.93 310.88 357.60 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 10 20 30 40 50 41.94 41.82 25.58 27.68 27.47 26.75 22.99 22.81 22.70 23.55 26.95 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 1.1129 2.2258 3.3387 4.4516 5.5645 4.81530 4.94636 2.65482 2.77422 2.76714 2.80544 2.62573 2.60974 2.61221 2.55898 2.80695 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: super-resolution-10 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: super-resolution-10 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4 8 12 16 20 13.64570 13.54170 6.36564 7.41037 7.43372 7.98873 8.18175 8.18519 8.18265 7.08601 7.99486 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 500 1000 1500 2000 2500 2151.82 2152.92 1091.32 872.74 864.81 850.14 645.21 650.96 647.81 648.52 854.33 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard OpenBenchmarking.org Inference Time Cost (ms), Fewer Is Better ONNX Runtime 1.19 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 7 14 21 28 35 27.69 25.95 22.32 21.57 21.69 24.94 20.98 20.72 20.88 21.24 23.06 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
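As a quick cross-check between the two ONNX Runtime views, the inference-time figures in this block are, to within run-to-run variation, the reciprocals of the throughput results reported earlier. A one-line sketch for the "a" column on ResNet50 v1-12-int8:

throughput = 390.60            # inferences per second from the earlier ResNet50 v1-12-int8 row for "a"
print(1000.0 / throughput)     # ~2.56 ms per inference, consistent with the 2.55898 ms reported in this block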
LiteRT Model: DeepLab V3 OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: DeepLab V3 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 800 1600 2400 3200 4000 2950.80 2994.59 1569.75 2272.28 2289.58 2343.38 2152.48 2185.78 2200.93 3579.67 2359.99
LiteRT Model: SqueezeNet OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: SqueezeNet 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 700 1400 2100 2800 3500 3173.55 3179.55 1654.65 1820.34 1822.70 1809.18 1763.22 1769.56 1765.39 1794.11 1821.35
LiteRT Model: Inception V4 OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: Inception V4 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 11K 22K 33K 44K 55K 49048.8 49101.9 24502.1 23141.4 22928.3 22083.3 19686.0 20102.8 19859.4 21477.8 22752.4
LiteRT Model: NASNet Mobile OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: NASNet Mobile 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4K 8K 12K 16K 20K 5153.49 5034.38 3808.80 7759.69 7640.81 8057.56 9245.76 9311.77 9231.97 16936.00 7931.64
LiteRT Model: Mobilenet Float OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: Mobilenet Float 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 500 1000 1500 2000 2500 2193.88 2190.79 1044.53 1248.12 1248.60 1244.70 1178.94 1165.50 1175.50 1211.48 1244.51
LiteRT Model: Mobilenet Quant OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: Mobilenet Quant 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 400 800 1200 1600 2000 1781.60 1785.40 935.99 859.76 857.49 848.94 730.17 734.35 742.03 823.17 849.21
LiteRT Model: Inception ResNet V2 OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: Inception ResNet V2 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 9K 18K 27K 36K 45K 41844.0 41757.9 21153.5 20123.5 20503.5 19477.8 17824.6 18224.0 18412.3 19530.2 19490.7
LiteRT Model: Quantized COCO SSD MobileNet v1 OpenBenchmarking.org Microseconds, Fewer Is Better LiteRT 2024-10-15 Model: Quantized COCO SSD MobileNet v1 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 600 1200 1800 2400 3000 2736.98 2741.68 1494.51 1431.19 1415.11 1420.15 1343.22 1337.78 1339.69 2129.52 1417.35
PyPerformance Benchmark: go OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: go 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 78.4 78.7 75.3 78.3 76.0 78.6 78.1 78.4 79.6 77.8 79.4
PyPerformance Benchmark: chaos OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: chaos 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 9 18 27 36 45 39.9 40.2 38.3 39.0 38.6 39.7 38.8 39.2 39.1 38.2 39.4
PyPerformance Benchmark: float OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: float 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 12 24 36 48 60 52.8 52.9 48.2 49.7 49.7 51.3 50.7 51.1 51.0 50.7 50.8
PyPerformance Benchmark: nbody OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: nbody 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 14 28 42 56 70 60.4 60.0 56.8 57.2 58.6 59.5 59.6 59.0 58.8 59.0 59.2
PyPerformance Benchmark: pathlib OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: pathlib 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 4 8 12 16 20 14.6 14.6 13.5 14.1 14.1 14.4 14.2 14.1 14.1 14.2 14.4
PyPerformance Benchmark: raytrace OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: raytrace 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 40 80 120 160 200 182 180 174 177 176 182 179 176 176 175 182
PyPerformance Benchmark: xml_etree OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: xml_etree 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 8 16 24 32 40 36.7 36.6 34.7 35.3 35.6 36.8 36.1 36.1 36.2 35.8 36.5
PyPerformance Benchmark: gc_collect OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: gc_collect 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 150 300 450 600 750 702 704 649 671 671 699 689 682 686 677 706
PyPerformance Benchmark: json_loads OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: json_loads 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 3 6 9 12 15 12.4 12.4 11.8 12.0 12.0 12.4 12.3 12.5 12.2 12.1 12.5
PyPerformance Benchmark: crypto_pyaes OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: crypto_pyaes 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 10 20 30 40 50 43.0 43.1 40.7 41.8 42.0 43.1 42.4 42.7 42.7 41.7 43.3
PyPerformance Benchmark: async_tree_io OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: async_tree_io 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 200 400 600 800 1000 855 858 726 738 742 666 633 653 651 755 656
PyPerformance Benchmark: regex_compile OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: regex_compile 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 16 32 48 64 80 72.6 72.7 68.1 69.5 70.0 71.7 70.4 71.4 69.8 69.8 72.5
PyPerformance Benchmark: python_startup OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: python_startup 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2 4 6 8 10 6.93 6.97 6.34 5.80 5.81 6.08 6.03 6.01 6.01 5.77 6.09
PyPerformance Benchmark: asyncio_tcp_ssl OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: asyncio_tcp_ssl 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 160 320 480 640 800 723 737 610 626 628 590 581 582 586 645 590
PyPerformance Benchmark: django_template OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: django_template 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 5 10 15 20 25 21.2 21.3 20.1 20.6 20.6 21.0 20.9 20.8 21.0 20.7 21.2
PyPerformance Benchmark: asyncio_websockets OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: asyncio_websockets 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 70 140 210 280 350 330 331 308 315 316 321 314 315 317 315 322
PyPerformance Benchmark: pickle_pure_python OpenBenchmarking.org Milliseconds, Fewer Is Better PyPerformance 1.11 Benchmark: pickle_pure_python 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 40 80 120 160 200 171 171 162 164 164 169 163 166 165 165 168
Renaissance Test: Scala Dotty OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Scala Dotty 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 110 220 330 440 550 506.5 515.8 464.5 424.6 442.9 428.6 455.6 441.9 411.9 477.0 436.2 MIN: 420.76 / MAX: 1038.72 MIN: 426.08 / MAX: 1026.55 MIN: 358.07 / MAX: 790.35 MIN: 376.26 / MAX: 659.55 MIN: 376.04 / MAX: 787.17 MIN: 378.22 / MAX: 628.77 MIN: 394.05 / MAX: 695.67 MIN: 387.7 / MAX: 646.01 MIN: 362.36 / MAX: 713.71 MIN: 371.54 / MAX: 736.5 MIN: 380.62 / MAX: 721.56
Renaissance Test: Random Forest OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Random Forest 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 110 220 330 440 550 439.9 485.4 381.6 415.3 412.9 422.0 407.7 420.7 402.2 414.4 453.2 MIN: 388.86 / MAX: 516.9 MIN: 403.24 / MAX: 525.59 MIN: 332.96 / MAX: 445.77 MIN: 342.21 / MAX: 486.26 MIN: 342.95 / MAX: 473.15 MIN: 357.91 / MAX: 497.55 MIN: 337.66 / MAX: 471.92 MIN: 339.2 / MAX: 478.26 MIN: 343.06 / MAX: 486.63 MIN: 322.79 / MAX: 466.1 MIN: 352.31 / MAX: 513.31
Renaissance Test: ALS Movie Lens OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: ALS Movie Lens 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2K 4K 6K 8K 10K 7465.1 7147.4 5993.4 9243.0 8980.8 9378.8 9972.0 9891.4 9969.8 9805.7 9275.7 MIN: 6716.72 / MAX: 7815.35 MIN: 6653.82 / MAX: 7863.42 MIN: 5836.57 / MAX: 6535.35 MIN: 8920.42 / MAX: 9406.46 MIN: 8480.95 / MAX: 9113.57 MIN: 8718.36 / MAX: 9413.7 MIN: 9479.38 / MAX: 10040.33 MIN: 9364.27 / MAX: 10037.94 MIN: 9680.91 / MAX: 9983.16 MIN: 9253.4 / MAX: 10057.61 MIN: 8821.09 / MAX: 9495.91
Renaissance Test: Apache Spark Bayes OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Apache Spark Bayes 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 130 260 390 520 650 566.2 600.1 483.4 515.3 513.3 513.2 500.0 477.0 515.6 490.0 474.9 MIN: 493.35 / MAX: 1029.04 MIN: 506.71 / MAX: 741.31 MIN: 456.88 / MAX: 535.94 MIN: 456.38 / MAX: 531.31 MIN: 455.04 / MAX: 535.53 MIN: 453.66 / MAX: 554.7 MIN: 457.03 / MAX: 581.14 MIN: 460.78 / MAX: 515.59 MIN: 459.35 / MAX: 536.06 MIN: 459.29 / MAX: 580.9 MIN: 454.77 / MAX: 514.32
Renaissance Test: Savina Reactors.IO OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Savina Reactors.IO 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 800 1600 2400 3200 4000 3745.8 3818.5 3349.5 3523.1 3688.6 3655.8 3589.4 3582.4 3641.9 3506.4 3676.0 MIN: 3745.79 / MAX: 4547.1 MAX: 4702.95 MIN: 3349.49 / MAX: 4130.17 MIN: 3523.09 / MAX: 4370.41 MAX: 4840.82 MIN: 3655.76 / MAX: 4484.97 MAX: 4472.27 MIN: 3582.35 / MAX: 4689.63 MAX: 4585.77 MIN: 3506.38 / MAX: 4329.37 MAX: 4536.84
Renaissance Test: Apache Spark PageRank OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Apache Spark PageRank 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 500 1000 1500 2000 2500 2267.1 2292.2 2506.8 2366.1 2373.0 2138.1 2179.0 2182.5 2227.9 2412.2 2229.7 MIN: 2117.39 / MAX: 2335.98 MIN: 2106.39 / MAX: 2374.31 MIN: 1771.47 MIN: 1667.92 / MAX: 2366.13 MIN: 1684.52 MIN: 1499.64 MIN: 1591.55 / MAX: 2179.02 MIN: 1564.17 MIN: 1592.13 / MAX: 2227.91 MIN: 1691.04 MIN: 1612.96 / MAX: 2229.74
Renaissance Test: Finagle HTTP Requests OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Finagle HTTP Requests 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 600 1200 1800 2400 3000 1664.4 1669.9 1705.5 2650.9 2708.1 2492.2 2570.5 2627.3 2571.2 2319.4 2483.1 MIN: 1625.17 / MAX: 1776.68 MIN: 1612.75 / MAX: 1713.57 MIN: 1676.49 / MAX: 1730.57 MIN: 2066.79 MIN: 2074.84 / MAX: 2708.11 MIN: 1947.63 MIN: 1960.33 / MAX: 2570.51 MIN: 2034.59 / MAX: 2627.31 MIN: 1999.64 MIN: 1832.84 MIN: 1933.43
Renaissance Test: Gaussian Mixture Model OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Gaussian Mixture Model 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 800 1600 2400 3200 4000 3087.0 3130.4 3397.0 3647.2 3703.2 3860.6 3829.9 3806.5 3773.1 3399.5 3815.2 MIN: 2935.02 / MAX: 3439.02 MIN: 2997.45 / MAX: 3481.57 MIN: 2497.54 / MAX: 3397.03 MIN: 2576.86 / MAX: 3647.22 MIN: 2648.51 / MAX: 3703.23 MIN: 2758.89 / MAX: 3860.61 MIN: 2792.29 MIN: 2770.53 / MAX: 3806.52 MIN: 2755.26 / MAX: 3773.12 MIN: 2471.52 MIN: 2749.56 / MAX: 3815.24
Renaissance Test: In-Memory Database Shootout OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: In-Memory Database Shootout 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 800 1600 2400 3200 4000 2951.1 3681.9 3244.8 3288.5 3275.5 3241.5 3408.4 3232.8 3442.9 3256.1 3175.6 MIN: 2787.28 / MAX: 3070.94 MIN: 2559.5 MIN: 2350.89 MIN: 2991.78 / MAX: 3586.63 MIN: 3012.8 / MAX: 3533.02 MIN: 3037.03 / MAX: 3491.91 MIN: 3187.55 / MAX: 3638.98 MIN: 3057.86 / MAX: 3585.74 MIN: 3258.62 / MAX: 3709.4 MIN: 3019.89 / MAX: 3599.5 MIN: 2896.06 / MAX: 3367.44
Renaissance Test: Akka Unbalanced Cobwebbed Tree OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Akka Unbalanced Cobwebbed Tree 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 1000 2000 3000 4000 5000 4474.8 4457.1 3720.1 4084.6 4098.5 4038.4 4359.3 4384.0 4344.8 4403.8 4002.3 MIN: 4474.77 / MAX: 5751.45 MIN: 4457.08 / MAX: 5796.57 MIN: 3720.09 / MAX: 4686.78 MAX: 5256.95 MIN: 4098.48 / MAX: 5163.21 MIN: 4038.36 / MAX: 5089.28 MIN: 4359.25 / MAX: 5618.71 MIN: 4383.98 / MAX: 5691.67 MAX: 5622.35 MAX: 5719.11 MIN: 4002.27 / MAX: 4983.72
Renaissance Test: Genetic Algorithm Using Jenetics + Futures OpenBenchmarking.org ms, Fewer Is Better Renaissance 0.16 Test: Genetic Algorithm Using Jenetics + Futures 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 200 400 600 800 1000 806.4 801.4 749.9 924.5 884.0 904.0 891.2 1050.3 959.3 732.8 920.7 MIN: 786.25 / MAX: 832.34 MIN: 786.46 / MAX: 836.72 MIN: 737.7 / MAX: 777.92 MIN: 821.03 MIN: 863.46 / MAX: 897.46 MIN: 886.83 / MAX: 919.31 MIN: 861.29 / MAX: 903.79 MIN: 1016.46 / MAX: 1068.02 MIN: 844.95 MIN: 713.67 / MAX: 813.49 MIN: 888.75 / MAX: 934.44
oneDNN Harness: IP Shapes 1D - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.6 Harness: IP Shapes 1D - Engine: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 0.862 1.724 2.586 3.448 4.31 3.83117 3.82885 1.90686 1.88731 1.88919 1.93806 1.13012 1.12465 1.12801 1.12573 1.93913 MIN: 3.77 MIN: 3.77 MIN: 1.86 MIN: 1.83 MIN: 1.83 MIN: 1.92 MIN: 1.1 MIN: 1.09 MIN: 1.1 MIN: 1.03 MIN: 1.91 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN Harness: IP Shapes 3D - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.6 Harness: IP Shapes 3D - Engine: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 1.1157 2.2314 3.3471 4.4628 5.5785 4.94689 4.95867 4.45096 3.75003 3.74578 2.73072 2.77835 2.74699 2.77727 4.05800 2.72942 MIN: 4.85 MIN: 4.87 MIN: 4.38 MIN: 3.71 MIN: 3.71 MIN: 2.7 MIN: 2.74 MIN: 2.72 MIN: 2.75 MIN: 3.75 MIN: 2.7 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN Harness: Convolution Batch Shapes Auto - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.6 Harness: Convolution Batch Shapes Auto - Engine: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 3 6 9 12 15 9.18658 9.16956 7.87367 6.36475 6.37150 4.11551 4.03550 4.03808 4.04877 6.67287 4.13321 MIN: 9.06 MIN: 9.03 MIN: 7.77 MIN: 6.28 MIN: 6.28 MIN: 4.05 MIN: 3.98 MIN: 3.98 MIN: 3.99 MIN: 6.2 MIN: 4.07 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN Harness: Deconvolution Batch shapes_1d - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.6 Harness: Deconvolution Batch shapes_1d - Engine: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2 4 6 8 10 7.26738 7.28139 3.78629 3.43287 3.42547 3.40293 3.05918 3.07219 3.06341 2.97612 3.40628 MIN: 6.92 MIN: 6.94 MIN: 3.55 MIN: 2.96 MIN: 2.9 MIN: 3.03 MIN: 2.56 MIN: 2.59 MIN: 2.58 MIN: 2.42 MIN: 3.03 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN Harness: Deconvolution Batch shapes_3d - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.6 Harness: Deconvolution Batch shapes_3d - Engine: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 2 4 6 8 10 7.53655 7.55315 3.78008 3.38958 3.38823 3.50840 2.66296 2.67219 2.66429 2.41294 3.51243 MIN: 7.53 MIN: 7.52 MIN: 3.64 MIN: 3.28 MIN: 3.28 MIN: 3.46 MIN: 2.62 MIN: 2.62 MIN: 2.62 MIN: 2.34 MIN: 3.47 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN Harness: Recurrent Neural Network Training - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.6 Harness: Recurrent Neural Network Training - Engine: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 1000 2000 3000 4000 5000 4726.08 4752.85 2422.83 1911.44 1913.86 1898.36 1320.23 1320.99 1320.37 1372.03 1895.68 MIN: 4716.28 MIN: 4746.78 MIN: 2416.25 MIN: 1895 MIN: 1899.63 MIN: 1894.26 MIN: 1302.81 MIN: 1308.56 MIN: 1301.96 MIN: 1342.06 MIN: 1892.59 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN Harness: Recurrent Neural Network Inference - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.6 Harness: Recurrent Neural Network Inference - Engine: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 500 1000 1500 2000 2500 2415.25 2441.27 1270.76 1009.58 1012.19 965.02 670.70 672.51 670.23 700.86 966.01 MIN: 2406.21 MIN: 2435.83 MIN: 1266.25 MIN: 994.85 MIN: 999.17 MIN: 963.27 MIN: 662.66 MIN: 665.03 MIN: 663.39 MIN: 679.89 MIN: 963.43 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
FinanceBench Benchmark: Repo OpenMP OpenBenchmarking.org ms, Fewer Is Better FinanceBench 2016-07-25 Benchmark: Repo OpenMP 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 7K 14K 21K 28K 35K 32271.67 32477.35 21121.61 21588.91 21612.04 22320.33 22060.15 21948.94 21848.07 21418.45 22318.74 1. (CXX) g++ options: -O3 -march=native -fopenmp
FinanceBench Benchmark: Bonds OpenMP OpenBenchmarking.org ms, Fewer Is Better FinanceBench 2016-07-25 Benchmark: Bonds OpenMP 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 10K 20K 30K 40K 50K 47198.94 46851.42 32715.81 33551.52 33538.27 34600.77 33906.69 34053.06 33913.46 33061.22 34896.84 1. (CXX) g++ options: -O3 -march=native -fopenmp
OpenVINO GenAI Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token OpenBenchmarking.org ms, Fewer Is Better OpenVINO GenAI 2024.5 Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 60 120 180 240 300 288.64 154.83 124.38 124.52 121.48 104.87 105.36 104.38 106.62 122.30
OpenVINO GenAI Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token OpenBenchmarking.org ms, Fewer Is Better OpenVINO GenAI 2024.5 Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 30 60 90 120 150 124.24 105.09 98.33 98.03 97.79 98.31 98.91 98.64 101.72 97.61
OpenVINO GenAI Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token OpenBenchmarking.org ms, Fewer Is Better OpenVINO GenAI 2024.5 Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 50 100 150 200 250 217.85 115.73 93.00 94.77 93.01 87.34 86.04 86.20 86.06 93.00
OpenVINO GenAI Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token OpenBenchmarking.org ms, Fewer Is Better OpenVINO GenAI 2024.5 Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 20 40 60 80 100 87.90 79.41 74.94 75.06 74.65 74.86 75.06 75.51 77.34 74.54
OpenVINO GenAI Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token OpenBenchmarking.org ms, Fewer Is Better OpenVINO GenAI 2024.5 Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 30 60 90 120 150 130.47 71.58 59.87 59.86 58.91 55.67 56.06 55.47 55.93 58.86
OpenVINO GenAI Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token OpenBenchmarking.org ms, Fewer Is Better OpenVINO GenAI 2024.5 Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 13 26 39 52 65 60.07 52.64 50.45 50.46 49.31 50.74 50.84 50.68 51.86 49.28
CP2K Molecular Dynamics Input: H20-64 OpenBenchmarking.org Seconds, Fewer Is Better CP2K Molecular Dynamics 2024.3 Input: H20-64 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 30 60 90 120 150 123.66 130.81 68.17 58.40 58.86 53.01 42.47 43.17 42.51 58.19 52.72 1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm
CP2K Molecular Dynamics Input: H20-256 OpenBenchmarking.org Seconds, Fewer Is Better CP2K Molecular Dynamics 2024.3 Input: H20-256 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 300 600 900 1200 1500 1281.08 1282.66 740.09 676.74 678.68 628.10 517.50 517.66 517.92 592.86 631.31 1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm
CP2K Molecular Dynamics Input: Fayalite-FIST OpenBenchmarking.org Seconds, Fewer Is Better CP2K Molecular Dynamics 2024.3 Input: Fayalite-FIST 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 40 80 120 160 200 177.14 179.12 108.44 100.49 97.45 92.21 84.07 84.80 84.21 94.03 94.90 1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm
RELION Test: Basic - Device: CPU OpenBenchmarking.org Seconds, Fewer Is Better RELION 5.0 Test: Basic - Device: CPU 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 700 1400 2100 2800 3500 2771.78 3046.81 1369.23 937.33 946.43 729.40 549.80 551.72 553.78 944.27 733.02 1. (CXX) g++ options: -fPIC -std=c++14 -fopenmp -O3 -rdynamic -lfftw3f -lfftw3 -ldl -ltiff -lpng -ljpeg -lmpi_cxx -lmpi
Build2 Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Build2 0.17 Time To Compile 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 70 140 210 280 350 300.72 294.95 154.43 121.74 117.07 111.65 91.02 91.72 92.02 92.05 113.78
Primesieve Length: 1e12 OpenBenchmarking.org Seconds, Fewer Is Better Primesieve 12.6 Length: 1e12 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 6 12 18 24 30 24.723 24.821 12.273 9.701 9.705 9.116 6.937 6.926 6.903 6.347 9.147 1. (CXX) g++ options: -O3
Primesieve Length: 1e13 OpenBenchmarking.org Seconds, Fewer Is Better Primesieve 12.6 Length: 1e13 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 70 140 210 280 350 301.60 302.74 150.67 118.66 118.85 110.61 84.32 84.49 84.26 78.50 110.71 1. (CXX) g++ options: -O3
Y-Cruncher Pi Digits To Calculate: 1B OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.8.5 Pi Digits To Calculate: 1B 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 9 18 27 36 45 40.56 39.25 23.89 19.36 19.28 18.38 17.36 17.34 17.57 18.49 18.37
Y-Cruncher Pi Digits To Calculate: 500M OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.8.5 Pi Digits To Calculate: 500M 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 5 10 15 20 25 18.165 18.290 10.954 9.041 9.067 8.688 8.119 8.173 8.279 8.772 8.623
POV-Ray Trace Time OpenBenchmarking.org Seconds, Fewer Is Better POV-Ray Trace Time 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 16 32 48 64 80 70.06 70.77 34.92 26.61 26.60 25.26 19.52 19.55 20.33 18.54 25.33 1. POV-Ray 3.7.0.10.unofficial
Timed Eigen Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Eigen Compilation 3.4.0 Time To Compile 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 30 60 90 120 150 145.21 146.66 84.48 69.38 69.58 67.36 57.18 58.82 58.69 58.66 67.08
Gcrypt Library OpenBenchmarking.org Seconds, Fewer Is Better Gcrypt Library 1.10.3 41 41 b 4364P 4464p 4464p epyc 4484PX 45 4584PX EPYC 4584PX amd a px 40 80 120 160 200 165.55 169.14 154.32 156.34 156.71 171.02 167.63 150.64 159.25 162.13 163.84 1. (CC) gcc options: -O2 -fvisibility=hidden
Apache CouchDB 3.4.1 - Bulk Size / Inserts, Rounds: 30 (Seconds, Fewer Is Better)
                     100/1000   100/3000   300/1000   300/3000   500/1000   500/3000
  41                    96.74     314.49     153.03     515.50     212.83     695.35
  41 b                  96.55     316.46     154.04     513.58     212.53     698.07
  4364P                 73.24     239.08     118.32     398.39     163.92     545.97
  4464p                 76.70     259.20     118.16     401.99     164.36     551.35
  4464p epyc            76.58     259.06     116.42     405.45     163.36     551.60
  4484PX                75.90     253.99     117.57     406.12     164.47     559.35
  45                    76.66     253.28     116.22     399.26     161.47     552.21
  4584PX                75.89     252.83     116.24     397.43     163.81     553.81
  EPYC 4584PX amd       76.67     253.49     116.62     400.40     161.09     554.40
  a                     69.93     232.19     106.13     367.83     148.05     511.78
  px                    76.39     254.73     119.35     408.48     164.81     560.70
1. (CXX) g++ options: -flto -lstdc++ -shared -lei
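For a rough throughput view of the heaviest CouchDB case, the wall-clock times can be converted into documents per second. The sketch below assumes the run inserts inserts-per-round times rounds documents in total, which is an assumption about the test's accounting rather than something stated in this file, so treat the output as approximate.

    # Minimal sketch: approximate insert throughput for the Bulk Size 500 / Inserts 3000 /
    # Rounds 30 run. ASSUMPTION: total documents = inserts_per_round * rounds; the result
    # file only reports wall-clock seconds, so these are rough derived figures.
    inserts_per_round = 3000
    rounds = 30
    total_docs = inserts_per_round * rounds  # 90,000 under the stated assumption

    seconds_500_3000 = {
        "41": 695.35, "41 b": 698.07, "4364P": 545.97, "4464p": 551.35,
        "4464p epyc": 551.60, "4484PX": 559.35, "45": 552.21, "4584PX": 553.81,
        "EPYC 4584PX amd": 554.40, "a": 511.78, "px": 560.70,
    }

    for label, secs in seconds_500_3000.items():
        print(f"{label:<16} ~{total_docs / secs:,.0f} docs/s")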
Blender 4.3 - Compute: CPU-Only (Seconds, Fewer Is Better)
                      BMW27   Junkshop   Classroom   Fishy Cat   Barbershop   Pabellon Barcelona
  41                 215.53     292.14      560.89      280.26      2077.69               667.71
  41 b               214.89     294.04      563.14      282.19      2095.98               671.42
  4364P              103.97     141.99      276.44      136.51       978.03               320.59
  4464p               79.19     104.70      212.30      105.45       739.91               243.35
  4464p epyc          79.24     104.01      211.40      105.24       731.39               245.12
  4484PX              74.08      97.01      197.20       96.67       679.34               226.34
  45                  55.18      73.69      149.37       73.48       513.54               170.09
  4584PX              55.26      73.87      149.37       73.96       514.70               171.82
  EPYC 4584PX amd     55.74      73.55      148.97       73.86       513.66               171.25
  a                   53.55      73.56      143.36       71.35       506.20               166.12
  px                  73.16      97.10      197.53       97.09       678.40               224.64
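Because the six Blender scenes differ widely in length (Barbershop takes roughly an order of magnitude longer than BMW27), a geometric mean is a reasonable single-number summary per configuration. The sketch below computes it from the table above; the single summary figure is an editorial convenience, not something produced by the test suite.

    # Minimal sketch: geometric mean of the six Blender 4.3 CPU-only render times per
    # configuration (seconds, fewer is better). Values transcribed from the table above.
    from math import prod

    render_seconds = {  # BMW27, Junkshop, Classroom, Fishy Cat, Barbershop, Pabellon Barcelona
        "41":              [215.53, 292.14, 560.89, 280.26, 2077.69, 667.71],
        "41 b":            [214.89, 294.04, 563.14, 282.19, 2095.98, 671.42],
        "4364P":           [103.97, 141.99, 276.44, 136.51, 978.03, 320.59],
        "4464p":           [79.19, 104.70, 212.30, 105.45, 739.91, 243.35],
        "4464p epyc":      [79.24, 104.01, 211.40, 105.24, 731.39, 245.12],
        "4484PX":          [74.08, 97.01, 197.20, 96.67, 679.34, 226.34],
        "45":              [55.18, 73.69, 149.37, 73.48, 513.54, 170.09],
        "4584PX":          [55.26, 73.87, 149.37, 73.96, 514.70, 171.82],
        "EPYC 4584PX amd": [55.74, 73.55, 148.97, 73.86, 513.66, 171.25],
        "a":               [53.55, 73.56, 143.36, 71.35, 506.20, 166.12],
        "px":              [73.16, 97.10, 197.53, 97.09, 678.40, 224.64],
    }

    for label, times in render_seconds.items():
        geomean = prod(times) ** (1.0 / len(times))
        print(f"{label:<16} {geomean:7.2f} s geomean over {len(times)} scenes")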
Whisper.cpp 1.6.2 - Input: 2016 State of the Union (Seconds, Fewer Is Better)
                     ggml-base.en   ggml-small.en   ggml-medium.en
  41                       187.29          581.27                -
  41 b                     186.23          583.20          1797.86
  4364P                    103.72          314.48           966.01
  4464p                     97.94          280.03           838.18
  4464p epyc                97.55          280.47           839.99
  4484PX                    92.71          268.24           809.79
  45                        84.11          232.58           684.13
  4584PX                    84.68          233.68           686.88
  EPYC 4584PX amd           84.03          233.05           684.39
  a                         87.49          245.08           700.91
  px                        93.45          266.81           809.49
("-" = no result reported for that configuration)
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni
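The same table also shows how transcription time grows with model size. The sketch below prints the small/base and medium/small time ratios per configuration, using the values above; the 41 configuration has no ggml-medium.en result, so that ratio is skipped for it.

    # Minimal sketch: Whisper.cpp 1.6.2 model-size scaling per configuration, using the
    # wall-clock seconds from the table above.
    base_en = {
        "41": 187.29, "41 b": 186.23, "4364P": 103.72, "4464p": 97.94,
        "4464p epyc": 97.55, "4484PX": 92.71, "45": 84.11, "4584PX": 84.68,
        "EPYC 4584PX amd": 84.03, "a": 87.49, "px": 93.45,
    }
    small_en = {
        "41": 581.27, "41 b": 583.20, "4364P": 314.48, "4464p": 280.03,
        "4464p epyc": 280.47, "4484PX": 268.24, "45": 232.58, "4584PX": 233.68,
        "EPYC 4584PX amd": 233.05, "a": 245.08, "px": 266.81,
    }
    medium_en = {  # no result reported for the 41 configuration
        "41 b": 1797.86, "4364P": 966.01, "4464p": 838.18, "4464p epyc": 839.99,
        "4484PX": 809.79, "45": 684.13, "4584PX": 686.88, "EPYC 4584PX amd": 684.39,
        "a": 700.91, "px": 809.49,
    }

    for label in base_en:
        small_over_base = small_en[label] / base_en[label]
        medium_over_small = f"{medium_en[label] / small_en[label]:.2f}x" if label in medium_en else "-"
        print(f"{label:<16} small/base {small_over_base:.2f}x   medium/small {medium_over_small}")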
Whisperfile 20Aug24 - Model Size (Seconds, Fewer Is Better)
                       Tiny    Small   Medium
  41 b                66.11   310.38   845.17
  4364P               41.30   193.94   528.33
  4464p               40.85   178.33   491.98
  4464p epyc          38.58   177.87   492.70
  4484PX              37.13   173.38   473.55
  45                  35.11   154.44   424.57
  4584PX              35.91   155.14   425.51
  EPYC 4584PX amd     35.43   155.97   424.95
  a                   41.71   195.42   534.92
  px                  38.72   167.89   475.51
(No Whisperfile results were reported for the 41 configuration.)
XNNPACK b7b048 (us, Fewer Is Better)

FP32 models:
                     MobileNetV1   MobileNetV2   MobileNetV3Large   MobileNetV3Small
  41                        2367          1425               1342                412
  41 b                      2407          1443               1359                414
  4364P                     1056           708                753                335
  4464p                     1267          1357               1473                799
  4464p epyc                1246          1327               1542                791
  4484PX                    1257          1365               1515                809
  45                        1189          1478               1754                929
  4584PX                    1191          1481               1749                933
  EPYC 4584PX amd           1192          1477               1756                931
  a                         1252          1495               1810                979
  px                        1272          1368               1574                837

FP16 models:
                     MobileNetV1   MobileNetV2   MobileNetV3Large   MobileNetV3Small
  41                        3512          2131               2018                714
  41 b                      3508          2097               1998                720
  4364P                     1578          1088               1067                455
  4464p                     1418          1208               1429                759
  4464p epyc                1414          1210               1427                766
  4484PX                    1383          1217               1467                779
  45                        1195          1223               1547                898
  4584PX                    1200          1212               1527                877
  EPYC 4584PX amd           1196          1222               1542                871
  a                         1143          1190               1498                920
  px                        1386          1248               1527                798

QS8 model:
                     MobileNetV2
  41                         623
  41 b                       623
  4364P                      400
  4464p                      703
  4464p epyc                 703
  4484PX                     717
  45                         801
  4584PX                     809
  EPYC 4584PX amd            807
  a                          844
  px                         723

1. (CXX) g++ options: -O3 -lrt -lm
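One detail worth pulling out of the XNNPACK numbers is the FP16-versus-FP32 behaviour for MobileNetV1. The sketch below computes the FP16/FP32 time ratio per configuration from the two tables above; a ratio below 1.0 would mean FP16 was faster on that system.

    # Minimal sketch: FP16 vs FP32 inference time ratio for XNNPACK MobileNetV1
    # (microseconds, fewer is better), using the values from the tables above.
    fp32_mobilenet_v1 = {
        "41": 2367, "41 b": 2407, "4364P": 1056, "4464p": 1267, "4464p epyc": 1246,
        "4484PX": 1257, "45": 1189, "4584PX": 1191, "EPYC 4584PX amd": 1192,
        "a": 1252, "px": 1272,
    }
    fp16_mobilenet_v1 = {
        "41": 3512, "41 b": 3508, "4364P": 1578, "4464p": 1418, "4464p epyc": 1414,
        "4484PX": 1383, "45": 1195, "4584PX": 1200, "EPYC 4584PX amd": 1196,
        "a": 1143, "px": 1386,
    }

    for label, fp32_us in fp32_mobilenet_v1.items():
        print(f"{label:<16} FP16/FP32 = {fp16_mobilenet_v1[label] / fp32_us:.2f}")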
Phoronix Test Suite v10.8.5