Tests for a future article. 2 x AMD EPYC 9684X 96-Core testing with an AMD Titanite_4G (RTI1007B BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite. In the result rows below, HIB marks metrics where higher is better and LIB marks metrics where lower is better.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2402188-NE-9684XNE0007
9684x-ne
,,"a","b","c"
Processor,,2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads),2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads),2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads)
Motherboard,,AMD Titanite_4G (RTI1007B BIOS),AMD Titanite_4G (RTI1007B BIOS),AMD Titanite_4G (RTI1007B BIOS)
Chipset,,AMD Device 14a4,AMD Device 14a4,AMD Device 14a4
Memory,,1520GB,1520GB,1520GB
Disk,,3201GB Micron_7450_MTFDKCB3T2TFS,3201GB Micron_7450_MTFDKCB3T2TFS,3201GB Micron_7450_MTFDKCB3T2TFS
Graphics,,ASPEED,ASPEED,ASPEED
Network,,Broadcom NetXtreme BCM5720 PCIe,Broadcom NetXtreme BCM5720 PCIe,Broadcom NetXtreme BCM5720 PCIe
OS,,Ubuntu 23.10,Ubuntu 23.10,Ubuntu 23.10
Kernel,,6.6.0-060600-generic (x86_64),6.6.0-060600-generic (x86_64),6.6.0-060600-generic (x86_64)
Compiler,,GCC 13.2.0,GCC 13.2.0,GCC 13.2.0
File-System,,ext4,ext4,ext4
Screen Resolution,,800x600,800x600,800x600
,,"a","b","c"
"GROMACS - Implementation: MPI CPU - Input: water_GMX50_bare (Ns/Day)",HIB,24.055,24.159,24.074
"Intel Open Image Denoise - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,3.49,3.49,3.49
"Intel Open Image Denoise - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,3.49,3.49,3.49
"Intel Open Image Denoise - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec)",HIB,1.68,1.65,1.67
"NAMD - Input: ATPase with 327,506 Atoms (ns/day)",HIB,20.90323,20.88550,20.88433
"NAMD - Input: STMV with 1,066,628 Atoms (ns/day)",HIB,6.36963,6.49427,6.47040
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,110.687,111.541,
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,9.02332,8.9547,
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,159.717,122.039,122.148
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,6.25834,8.19077,8.18389
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,6.36075,6.37602,
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,157.207,156.831,
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,7.03904,7.04966,7.0734
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,142.061,141.847,141.371
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,284.322,281.887,
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,3.51487,3.54536,
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences/sec)",HIB,235.262,237.889,235.807
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,4.2498,4.20289,4.23992
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,8.72812,8.73027,
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,114.566,114.538,
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,10.7281,10.6571,10.8732
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,93.2099,93.8308,91.9517
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,331.596,314.567,
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,3.01302,3.17604,
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,446.93,430.574,479.844
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,2.23695,2.32191,2.08344
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,0.880576,0.881007,
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,1135.61,1135.06,
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,5.05401,5.64928,5.03581
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,197.859,177.01,198.574
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,15.5773,15.4584,
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,64.1922,64.686,
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,23.4132,,24.0256
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,42.7093,,41.6199
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,101.632,,
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,9.8368,,
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,147.311,,
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,6.78768,,
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,91.8638,,
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,10.8828,,
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,97.3769,,
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,10.2686,,
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,22.9007,,
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,43.6621,,
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,44.5379,,
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,22.4507,,