s76 compute hpc

Tests for a future article. AMD Ryzen Threadripper 7980X 64-Cores testing with a System76 Thelio Major (FA Z5 BIOS) and NAVI31 45GB on Pop 22.04 via the Phoronix Test Suite.

sa, b (identical configurations):

  Processor: AMD Ryzen Threadripper 7980X 64-Cores @ 7.79GHz (64 Cores / 128 Threads)
  Motherboard: System76 Thelio Major (FA Z5 BIOS)
  Chipset: AMD Device 14a4
  Memory: 4 x 32GB DRAM-4800MT/s Micron MTC20F1045S1RC48BA2
  Disk: 1000GB CT1000T700SSD5
  Graphics: NAVI31 45GB (1760/1124MHz)
  Audio: AMD Device 14cc
  Monitor: DELL P2415Q
  Network: Aquantia Device 14c0 + Realtek RTL8125 2.5GbE + Intel Wi-Fi 6 AX210/AX211/AX411

  OS: Pop 22.04
  Kernel: 6.7.0-060700daily20240120-generic (x86_64)
  Desktop: GNOME Shell 42.5
  Display Server: X Server 1.21.1.4
  OpenGL: 4.6 Mesa 23.3.2-1pop0~1704238321~22.04~36f1d0e (LLVM 15.0.7 DRM 3.57)
  Vulkan: 1.3.267
  Compiler: GCC 11.4.0
  File-System: ext4
  Screen Resolution: 1920x1080

NAMD 3.0b6
Input: ATPase with 327,506 Atoms
ns/day > Higher Is Better
sa . 6.48727 |=================================================================
b .. 6.51941 |=================================================================

NAMD 3.0b6
Input: STMV with 1,066,628 Atoms
ns/day > Higher Is Better
sa . 1.65389 |=================================================================
b .. 1.65429 |=================================================================

Intel Open Image Denoise 2.2
Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
sa . 2.21 |====================================================================
b .. 2.21 |====================================================================

Intel Open Image Denoise 2.2
Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
sa . 2.22 |====================================================================
b .. 2.22 |====================================================================

Intel Open Image Denoise 2.2
Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
Images / Sec > Higher Is Better
sa . 1.05 |====================================================================
b .. 1.05 |====================================================================

GROMACS 2024
Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day > Higher Is Better
sa . 7.572 |===================================================================
b .. 7.561 |===================================================================
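NAMD and GROMACS both report throughput as nanoseconds of simulated time per day of wall-clock time. For a feel of what these figures mean per step, a minimal conversion sketch, assuming a 2 fs integration timestep (a common choice; the timestep actually used by these test profiles is not recorded in this output):

    # Convert an ns/day throughput figure into wall-clock seconds per step.
    # ASSUMPTION: 2 fs timestep; not recorded in this result file.
    SECONDS_PER_DAY = 86_400
    TIMESTEP_FS = 2.0

    def seconds_per_step(ns_per_day: float) -> float:
        fs_per_day = ns_per_day * 1e6            # 1 ns = 1e6 fs
        steps_per_day = fs_per_day / TIMESTEP_FS
        return SECONDS_PER_DAY / steps_per_day

    # NAMD STMV (sa), 1.65389 ns/day -> roughly 0.104 s of wall time per step.
    print(seconds_per_step(1.65389))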
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 167.22 |==================================================================
b .. 164.21 |=================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 5.97364 |================================================================
b .. 6.08347 |=================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 104.86 |================================================================
b .. 108.24 |==================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 9.53425 |=================================================================
b .. 9.23622 |===============================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 5.39937 |=================================================================
b .. 5.34960 |================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 185.20 |=================================================================
b .. 186.92 |==================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 9.72674 |=================================================================
b .. 9.73582 |=================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 102.81 |==================================================================
b .. 102.70 |==================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 339.90 |==================================================================
b .. 342.33 |==================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 2.94058 |=================================================================
b .. 2.91968 |=================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 138.56 |=================================================================
b .. 140.01 |==================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 7.21687 |=================================================================
b .. 7.14188 |================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 6.21189 |===============================================================
b .. 6.40524 |=================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 160.98 |==================================================================
b .. 156.12 |================================================================
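The Parallel/Standard split in the ONNX Runtime results presumably maps to the runtime's two execution modes: ORT_PARALLEL lets independent subgraphs run concurrently, while ORT_SEQUENTIAL (the default) executes one operator at a time, with parallelism only inside each operator. A minimal sketch of selecting between them, assuming a local model file; the path and thread settings are illustrative, not the test profile's actual configuration:

    import onnxruntime as ort

    opts = ort.SessionOptions()
    # "Parallel" executor: independent graph branches may run concurrently.
    # Use ort.ExecutionMode.ORT_SEQUENTIAL for the "Standard" behavior.
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL
    opts.inter_op_num_threads = 0  # 0 = let the runtime choose
    opts.intra_op_num_threads = 0

    # ASSUMPTION: "model.onnx" stands in for whichever model is benchmarked.
    sess = ort.InferenceSession("model.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])

Each inference-time figure is also just the reciprocal of its paired throughput: for the GPT-2 Parallel run, 1000 / 5.97364 ms is about 167.4 inferences per second, agreeing with the 167.22 reported to within measurement noise.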
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 13.31 |===================================================================
b .. 13.29 |===================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 75.12 |===================================================================
b .. 75.25 |===================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 222.61 |==============================================================
b .. 237.88 |==================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 4.49040 |=================================================================
b .. 4.20204 |=============================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 445.48 |=================================================================
b .. 449.20 |==================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 2.24407 |=================================================================
b .. 2.22552 |================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 2.21477 |=================================================================
b .. 2.19799 |=================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 451.51 |=================================================================
b .. 454.96 |==================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 5.74038 |=================================================================
b .. 5.75273 |=================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 174.20 |==================================================================
b .. 173.83 |==================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 12.93 |===================================================================
b .. 12.84 |===================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 77.36 |===================================================================
b .. 77.89 |===================================================================
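Most sa-versus-b pairs in this comparison differ by well under one percent; the CaffeNet 12-int8 Parallel result above is the widest gap in the set. A small sketch for quantifying the spread between the two runs, with the numbers taken from the results above:

    def pct_delta(a: float, b: float) -> float:
        """Relative difference of run b versus run a, in percent."""
        return (b - a) / a * 100.0

    # CaffeNet 12-int8, Parallel executor: the largest gap in this set.
    print(f"{pct_delta(222.61, 237.88):+.2f}%")  # about +6.86%
    # GROMACS water_GMX50_bare: typical of the rest of the results.
    print(f"{pct_delta(7.572, 7.561):+.2f}%")    # about -0.15%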
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 40.11 |===================================================================
b .. 39.90 |===================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 24.93 |===================================================================
b .. 25.06 |===================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 87.63 |==================================================================
b .. 88.76 |===================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 11.41 |===================================================================
b .. 11.26 |==================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 206.01 |==================================================================
b .. 205.72 |==================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 4.85344 |=================================================================
b .. 4.86018 |=================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 135.76 |==================================================================
b .. 134.36 |=================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 7.36444 |================================================================
b .. 7.44156 |=================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 98.36 |===================================================================
b .. 98.34 |===================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 10.17 |===================================================================
b .. 10.17 |===================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
sa . 30.74 |===================================================================
b .. 30.40 |==================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
sa . 32.53 |==================================================================
b .. 32.90 |===================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
sa . 44.52 |===================================================================
b .. 42.99 |=================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
sa . 22.46 |=================================================================
b .. 23.26 |===================================================================
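To roll a comparison like this up into one figure, the usual summary is the geometric mean of per-test ratios, which is also what the Phoronix Test Suite uses for its composite scores. A sketch over a handful of the higher-is-better results above; the selection is illustrative, and a real summary would include every test:

    from statistics import geometric_mean

    # (sa, b) value pairs from higher-is-better results above.
    pairs = [
        (6.48727, 6.51941),  # NAMD ATPase, ns/day
        (7.572, 7.561),      # GROMACS water_GMX50_bare, Ns Per Day
        (167.22, 164.21),    # ONNX GPT-2, Parallel, inf/sec
        (222.61, 237.88),    # ONNX CaffeNet 12-int8, Parallel, inf/sec
        (44.52, 42.99),      # ONNX Faster R-CNN, Standard, inf/sec
    ]

    ratios = [b / a for a, b in pairs]
    # About 1.003 for this subset: the two runs are effectively a wash.
    print(f"geo-mean speedup of b over sa: {geometric_mean(ratios):.4f}")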