Microsoft Azure EPYC 7003 HBv3 Benchmarks
Microsoft Azure HBv3 (AMD EPYC 7003) benchmarks compared against other Microsoft Azure HPC instance types. Benchmarks were run by Michael Larabel for a future article on Phoronix.
HTML result view exported from: https://openbenchmarking.org/result/2104126-PTS-MSAZURE208&grt&sor.
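A result file like this can generally be reproduced or compared against locally with the Phoronix Test Suite by passing its OpenBenchmarking.org result ID to the `benchmark` command. A minimal sketch, assuming the Phoronix Test Suite and the individual tests' dependencies are installed:

```shell
# Fetch the result file 2104126-PTS-MSAZURE208 from OpenBenchmarking.org
# and run the same test selection locally for a side-by-side comparison.
phoronix-test-suite benchmark 2104126-PTS-MSAZURE208
```

Individual tests from the list below (e.g. `pts/botan`, `pts/compress-zstd`) can also be run on their own via `phoronix-test-suite benchmark <test-name>`.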
Botan - Test: KASUMI
Botan - Test: KASUMI - Decrypt
Botan - Test: AES-256
Botan - Test: AES-256 - Decrypt
Botan - Test: Twofish
Botan - Test: Twofish - Decrypt
Botan - Test: Blowfish
Botan - Test: Blowfish - Decrypt
Botan - Test: CAST-256
Botan - Test: CAST-256 - Decrypt
Botan - Test: ChaCha20Poly1305
Botan - Test: ChaCha20Poly1305 - Decrypt
CloverLeaf - Lagrangian-Eulerian Hydrodynamics
FinanceBench - Benchmark: Repo OpenMP
FinanceBench - Benchmark: Bonds OpenMP
GNU GMP GMPbench - Total Time
GROMACS - Water Benchmark
High Performance Conjugate Gradient
LULESH
Mobile Neural Network - Model: SqueezeNetV1.0
Mobile Neural Network - Model: resnet-v2-50
Mobile Neural Network - Model: inception-v3
NAMD - ATPase Simulation - 327,506 Atoms
NAS Parallel Benchmarks - Test / Class: LU.C
oneDNN - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
Pennant - Test: sedovbig
Pennant - Test: leblancbig
PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU
PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
QuantLib
Rodinia - Test: OpenMP LavaMD
Rodinia - Test: OpenMP HotSpot3D
SVT-AV1 - Encoder Mode: Enc Mode 0 - Input: 1080p
SVT-AV1 - Encoder Mode: Enc Mode 4 - Input: 1080p
SVT-AV1 - Encoder Mode: Enc Mode 8 - Input: 1080p
SVT-HEVC - Tuning: 1 - Input: Bosphorus 1080p
SVT-HEVC - Tuning: 7 - Input: Bosphorus 1080p
SVT-HEVC - Tuning: 10 - Input: Bosphorus 1080p
SVT-VP9 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p
TensorFlow Lite - Model: SqueezeNet
Timed Linux Kernel Compilation - Time To Compile
Timed LLVM Compilation - Time To Compile
Timed MAFFT Alignment - Multiple Sequence Alignment - LSU RNA
Timed Node.js Compilation - Time To Compile
TNN - Target: CPU - Model: SqueezeNet v1.1
Xcompact3d Incompact3d - Input: X3D-benchmarking input.i3d
Zstd Compression - Compression Level: 8 - Compression Speed
Zstd Compression - Compression Level: 19 - Compression Speed
Zstd Compression - Compression Level: 19 - Decompression Speed
Zstd Compression - Compression Level: 8, Long Mode - Compression Speed
Zstd Compression - Compression Level: 8, Long Mode - Decompression Speed
Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed
Phoronix Test Suite v10.8.5