X3D Zen 4
Benchmarks for a future article.
HTML result view exported from: https://openbenchmarking.org/result/2303056-NE-X3DUPPER661&sro&grw
KTX-Software toktx
Settings: Zstd Compression 19
KTX-Software toktx
Settings: UASTC 3
KTX-Software toktx
Settings: UASTC 3 + Zstd Compression 19
KTX-Software toktx
Settings: UASTC 4 + Zstd Compression 19
Xmrig
Variant: Monero - Hash Count: 1M
Xmrig
Variant: Wownero - Hash Count: 1M
TensorFlow
Device: CPU - Batch Size: 16 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 32 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 32 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 32 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 64 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 256 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: GoogLeNet
LeelaChessZero
Backend: BLAS
LeelaChessZero
Backend: Eigen
CloverLeaf
Lagrangian-Eulerian Hydrodynamics
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Standard
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
NCNN
Target: CPU - Model: mobilenet
NCNN
Target: CPU-v2-v2 - Model: mobilenet-v2
NCNN
Target: CPU-v3-v3 - Model: mobilenet-v3
NCNN
Target: CPU - Model: shufflenet-v2
NCNN
Target: CPU - Model: mnasnet
NCNN
Target: CPU - Model: efficientnet-b0
NCNN
Target: CPU - Model: blazeface
NCNN
Target: CPU - Model: googlenet
NCNN
Target: CPU - Model: vgg16
NCNN
Target: CPU - Model: resnet18
NCNN
Target: CPU - Model: alexnet
NCNN
Target: CPU - Model: resnet50
NCNN
Target: CPU - Model: yolov4-tiny
NCNN
Target: CPU - Model: squeezenet_ssd
NCNN
Target: CPU - Model: regnety_400m
NCNN
Target: CPU - Model: vision_transformer
NCNN
Target: CPU - Model: FastestDet
GROMACS
Implementation: MPI CPU - Input: water_GMX50_bare
oneDNN
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
ASKAP
Test: tConvolve MPI - Degridding
ASKAP
Test: tConvolve MPI - Gridding
ASKAP
Test: tConvolve OpenMP - Gridding
ASKAP
Test: tConvolve OpenMP - Degridding
ASKAP
Test: tConvolve MT - Gridding
ASKAP
Test: tConvolve MT - Degridding
ASKAP
Test: Hogbom Clean OpenMP
ACES DGEMM
Sustained Floating-Point Rate
Pennant
Test: leblancbig
Pennant
Test: sedovbig
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing
LULESH
OpenFOAM
Input: drivaerFastback, Small Mesh Size - Mesh Time
OpenFOAM
Input: drivaerFastback, Small Mesh Size - Execution Time
OpenFOAM
Input: drivaerFastback, Medium Mesh Size - Mesh Time
OpenFOAM
Input: drivaerFastback, Medium Mesh Size - Execution Time
Xcompact3d Incompact3d
Input: input.i3d 129 Cells Per Direction
Xcompact3d Incompact3d
Input: input.i3d 193 Cells Per Direction
GPAW
Input: Carbon Nanotube
Zstd Compression
Compression Level: 3 - Compression Speed
Zstd Compression
Compression Level: 3 - Decompression Speed
Zstd Compression
Compression Level: 3, Long Mode - Compression Speed
Zstd Compression
Compression Level: 3, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 8 - Compression Speed
Zstd Compression
Compression Level: 8 - Decompression Speed
Zstd Compression
Compression Level: 8, Long Mode - Compression Speed
Zstd Compression
Compression Level: 8, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 12 - Compression Speed
Zstd Compression
Compression Level: 12 - Decompression Speed
Zstd Compression
Compression Level: 19 - Compression Speed
Zstd Compression
Compression Level: 19 - Decompression Speed
Zstd Compression
Compression Level: 19, Long Mode - Compression Speed
Zstd Compression
Compression Level: 19, Long Mode - Decompression Speed
Embree
Binary: Pathtracer - Model: Asian Dragon
Embree
Binary: Pathtracer - Model: Asian Dragon Obj
Embree
Binary: Pathtracer - Model: Crown
Embree
Binary: Pathtracer ISPC - Model: Asian Dragon
Embree
Binary: Pathtracer ISPC - Model: Asian Dragon Obj
Embree
Binary: Pathtracer ISPC - Model: Crown
srsRAN
Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM
srsRAN
Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM
srsRAN
Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM
srsRAN
Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM
srsRAN
Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM
srsRAN
Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM
srsRAN
Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM
srsRAN
Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM
srsRAN
Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM
srsRAN
Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM
srsRAN
Test: OFDM_Test
ClickHouse
100M Rows Hits Dataset, First Run / Cold Cache
ClickHouse
100M Rows Hits Dataset, Second Run
ClickHouse
100M Rows Hits Dataset, Third Run
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Standard
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Phoronix Test Suite v10.8.5