Tests for a future article. AMD Ryzen Threadripper PRO 7995WX 96-Cores testing with a HP Z6 G5 A Workstation 8B24 (U65 Ver. 01.01.04 BIOS) and NVIDIA RTX A4000 16GB on CachyOS rolling via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2402162-NE-COMPUTE0976
compute
,,"a","b"
Processor,,AMD Ryzen Threadripper PRO 7995WX 96-Cores @ 5.19GHz (96 Cores / 192 Threads),AMD Ryzen Threadripper PRO 7995WX 96-Cores @ 5.19GHz (96 Cores / 192 Threads)
Motherboard,,HP Z6 G5 A Workstation 8B24 (U65 Ver. 01.01.04 BIOS),HP Z6 G5 A Workstation 8B24 (U65 Ver. 01.01.04 BIOS)
Chipset,,AMD Device 14a4,AMD Device 14a4
Memory,,8 x 16GB DDR5-5200MT/s Hynix HMCG78AGBRA190N,8 x 16GB DDR5-5200MT/s Hynix HMCG78AGBRA190N
Disk,,2 x 1024GB SAMSUNG MZVL21T0HCLR-00BH1,2 x 1024GB SAMSUNG MZVL21T0HCLR-00BH1
Graphics,,NVIDIA RTX A4000 16GB,NVIDIA RTX A4000 16GB
Audio,,NVIDIA GA104 HD Audio,NVIDIA GA104 HD Audio
Monitor,,ASUS VP28U,ASUS VP28U
Network,,Realtek RTL8111/8168/8411,Realtek RTL8111/8168/8411
OS,,CachyOS rolling,CachyOS rolling
Kernel,,6.7.2-1-cachyos (x86_64),6.7.2-1-cachyos (x86_64)
Desktop,,GNOME Shell 45.3,GNOME Shell 45.3
Display Server,,X Server 1.21.1.11,X Server 1.21.1.11
Display Driver,,NVIDIA 545.29.06,NVIDIA 545.29.06
OpenGL,,4.6.0,4.6.0
Compiler,,GCC 13.2.1 20230801 + Clang 16.0.6 + LLVM 16.0.6,GCC 13.2.1 20230801 + Clang 16.0.6 + LLVM 16.0.6
File-System,,xfs,xfs
Screen Resolution,,3840x2160,3840x2160
,,"a","b"
"Intel Open Image Denoise - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,3.02,2.97
"Intel Open Image Denoise - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,3.03,2.95
"Intel Open Image Denoise - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec)",HIB,1.43,1.39
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,173.069,174.68
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,130.351,129.11
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,9.49305,9.4203
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,9.19506,9.02456
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,397.997,400.874
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences/sec)",HIB,184.302,189.93
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,12.8066,12.8465
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,13.5169,13.5743
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,567.433,552.898
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,563.581,562.793
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,1.17451,1.16383
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,5.38636,4.54438
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,28.3461,28.0855
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,37.2252,34.7603
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,183.782,185.252
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,209.294,241.412
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,122.589,117.686
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,142.569,141.266
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,33.471,34.1641
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,42.9724,50.5753
"GROMACS - Implementation: MPI CPU - Input: water_GMX50_bare (Ns/Day)",HIB,11.348,11.363
"NAMD - Input: ATPase with 327,506 Atoms (ns/day)",HIB,8.48033,8.91806
"NAMD - Input: STMV with 1,066,628 Atoms (ns/day)",HIB,2.49536,2.49445
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,5.7723,5.71852
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,7.66971,7.74347
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,105.337,106.149
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,108.75,110.805
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,2.5114,2.49335
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,5.42515,5.26412
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,78.0818,77.8393
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,73.9802,73.6673
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,1.76072,1.80697
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,1.77395,1.77643
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,851.415,859.227
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,185.652,220.05
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,35.2768,35.604
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,26.8616,28.7671
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,5.43976,5.39672
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,4.77742,4.14187
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,8.15595,8.49576
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,7.01381,7.07856
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,29.8745,29.2685
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,23.2679,19.7704
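In the rows above, the second column flags each metric's direction: HIB appears to mean "higher is better" (e.g. inferences/sec, Ns/Day) and LIB "lower is better" (e.g. inference time in ms), following the usual Phoronix Test Suite convention. A minimal sketch of how the two runs ("a" and "b") can be compared while honoring that flag, using a few rows copied from the table (the `compare` helper and its percent-gap definition are my own, not part of the PTS export):

```python
import csv
import io

# Sample rows taken verbatim from the results table above.
# Columns: test name, direction flag (HIB/LIB), value for "a", value for "b".
rows = """\
"Intel Open Image Denoise - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,3.02,2.97
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,5.7723,5.71852
"GROMACS - Implementation: MPI CPU - Input: water_GMX50_bare (Ns/Day)",HIB,11.348,11.363
"""

def compare(csv_text):
    """Yield (test, winner, pct_gap) per row, honoring the HIB/LIB flag.

    pct_gap is the absolute difference relative to run "a", in percent
    (an arbitrary but simple baseline choice for this sketch).
    """
    out = []
    for name, direction, a, b in csv.reader(io.StringIO(csv_text)):
        a, b = float(a), float(b)
        # HIB: larger value wins; LIB: smaller value wins.
        b_wins = b > a if direction == "HIB" else b < a
        gap = abs(a - b) / a * 100
        out.append((name, "b" if b_wins else "a", round(gap, 2)))
    return out

for name, winner, gap in compare(rows):
    print(f"{winner} leads by {gap}% - {name}")
```

Note that `csv.reader` is needed (rather than a plain `split(",")`) because some test names, such as the NAMD inputs, contain commas inside their quoted field.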