onnx-114-runtime

2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G (RTI1004D BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

a, aa, aaa (identical configuration):

  Processor: 2 x AMD EPYC 9654 96-Core @ 3.71GHz (192 Cores / 384 Threads), Motherboard: AMD Titanite_4G (RTI1004D BIOS), Chipset: AMD Device 14a4, Memory: 1520GB, Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007, Graphics: ASPEED, Network: Broadcom NetXtreme BCM5720 PCIe

  OS: Ubuntu 23.04, Kernel: 5.19.0-21-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server 1.21.1.6, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1024x768

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a ... 1175.28 |================================================================
aa .. 1023.91 |========================================================
aaa . 1073.42 |==========================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a ... 0.850853 |=======================================================
aa .. 0.976642 |===============================================================
aaa . 0.931595 |============================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a ... 8.90281 |================================================================
aa .. 8.88600 |================================================================
aaa . 8.78441 |===============================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a ... 112.19 |================================================================
aa .. 112.41 |================================================================
aaa . 113.70 |=================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a ... 254.04 |=================================================================
aa .. 252.53 |=================================================================
aaa . 250.57 |================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a ... 3.93634 |===============================================================
aa .. 3.95975 |================================================================
aaa . 3.99078 |================================================================
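Each Inference Time Cost chart is paired with an Inferences Per Second chart for the same workload, and the two metrics are, to within run-to-run variation, reciprocals: for GPT-2 / Parallel on run a, 1000 / 8.90281 ms ≈ 112.3 inferences per second against the 112.19 reported. A minimal Python sketch of that conversion, using sample values from the charts above:

    # Convert a mean per-inference latency in milliseconds to throughput.
    def inferences_per_second(latency_ms: float) -> float:
        return 1000.0 / latency_ms

    print(inferences_per_second(8.90281))   # ~112.3  (GPT-2, Parallel, run a)
    print(inferences_per_second(1175.28))   # ~0.851  (fcn-resnet101-11, Parallel, run a)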
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a ... 106.72 |===========================================================
aa .. 117.57 |=================================================================
aaa . 105.20 |==========================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a ... 9.36997 |===============================================================
aa .. 8.50505 |=========================================================
aaa . 9.50538 |================================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a ... 9.94281 |================================================================
aa .. 9.91793 |================================================================
aaa . 9.66229 |==============================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a ... 100.55 |===============================================================
aa .. 100.80 |===============================================================
aaa . 103.47 |=================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a ... 216.89 |===============================================================
aa .. 222.64 |=================================================================
aaa . 217.60 |================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a ... 4.61049 |================================================================
aa .. 4.49152 |==============================================================
aaa . 4.59557 |================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a ... 113.79 |=================================================================
aa .. 111.91 |================================================================
aaa . 111.68 |================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a ... 8.78782 |===============================================================
aa .. 8.93539 |================================================================
aaa . 8.95369 |================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a ... 77.51 |==================================================================
aa .. 76.97 |=================================================================
aaa . 77.76 |==================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a ... 12.90 |==================================================================
aa .. 12.99 |==================================================================
aaa . 12.86 |=================================================================
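The Standard and Parallel executor labels correspond to ONNX Runtime's two execution modes: sequential (the default, operators run one at a time) and parallel (independent branches of the graph may run concurrently). A hedged sketch of configuring each mode through the onnxruntime Python API; the model path and thread count are illustrative placeholders, not settings taken from this test profile:

    import onnxruntime as ort

    opts = ort.SessionOptions()
    # "Standard" executor: operators run one after another (the default).
    opts.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL
    # "Parallel" executor: uncomment to let independent subgraphs run concurrently.
    # opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL
    # opts.inter_op_num_threads = 8  # illustrative value only

    # "model.onnx" is a placeholder path, not part of this result file.
    sess = ort.InferenceSession("model.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])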
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a ... 199.72 |===========================================================
aa .. 218.89 |=================================================================
aaa . 207.42 |==============================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a ... 5.00699 |================================================================
aa .. 4.56850 |==========================================================
aaa . 4.82110 |==============================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
aa .. 38.05 |==================================================================
aaa . 37.60 |=================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
aa .. 26.28 |=================================================================
aaa . 26.59 |==================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a ... 49.13 |=============================================================
aa .. 43.58 |======================================================
aaa . 53.02 |==================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a ... 20.35 |===========================================================
aa .. 22.94 |==================================================================
aaa . 18.86 |======================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
aa .. 23.91 |=====================================================
aaa . 29.59 |==================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
aa .. 41.82 |==================================================================
aaa . 33.79 |=====================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a ... 3.14498 |==========================================================
aa .. 3.13926 |==========================================================
aaa . 3.44266 |================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a ... 317.65 |=================================================================
aa .. 318.23 |=================================================================
aaa . 290.22 |===========================================================
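The models suffixed -int8 (Faster R-CNN R-50-FPN-int8 above, CaffeNet 12-int8, and ResNet50 v1-12-int8) are quantized variants that trade numeric precision for speed. The benchmark ships pre-quantized models; purely as an illustration of one way such a variant can be produced, here is a sketch using ONNX Runtime's dynamic quantization tooling (file names are placeholders, and this is not necessarily how the benchmark's models were made):

    from onnxruntime.quantization import quantize_dynamic, QuantType

    # Placeholder file names; derives an int8 model by quantizing weights.
    quantize_dynamic(
        model_input="resnet50-v1-12.onnx",
        model_output="resnet50-v1-12-int8.onnx",
        weight_type=QuantType.QInt8,
    )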
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a ... 2.30886 |================================================================
aa .. 2.19899 |=============================================================
aaa . 2.31710 |================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a ... 432.97 |==============================================================
aa .. 454.59 |=================================================================
aaa . 431.41 |==============================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
aa .. 9.44335 |==========================================================
aaa . 10.33380 |===============================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
aa .. 105.87 |=================================================================
aaa . 96.75 |===========================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
aa .. 6.83585 |===========================================================
aaa . 7.44990 |================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
aa .. 146.27 |=================================================================
aaa . 134.21 |============================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
aa .. 11.08 |=================================================================
aaa . 11.20 |==================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
aa .. 90.19 |==================================================================
aaa . 89.25 |=================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
aa .. 10.30520 |===============================================================
aaa . 7.90665 |================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
aa .. 97.03 |==================================================
aaa . 126.46 |=================================================================
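For reference, per-inference latency figures of the kind charted above can be approximated with a simple timing loop. The sketch below is illustrative only, not the Phoronix Test Suite's actual measurement harness; the model path and input shape are placeholders:

    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    name = sess.get_inputs()[0].name
    # Placeholder input; a real harness feeds model-appropriate data
    # and handles dynamic input dimensions.
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)

    for _ in range(10):            # warm-up runs, excluded from timing
        sess.run(None, {name: x})

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {name: x})
    latency_ms = (time.perf_counter() - start) / runs * 1000.0
    print(f"{latency_ms:.3f} ms/inference = {1000.0 / latency_ms:.2f} inferences/sec")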