onnx runtime 1.14 threadripper
AMD Ryzen Threadripper 3990X 64-Core testing with a Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 23.04 via the Phoronix Test Suite.

a, b, c:

  Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads), Motherboard: Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS), Chipset: AMD Starship/Matisse, Memory: 128GB, Disk: Samsung SSD 970 EVO Plus 500GB, Graphics: AMD Radeon RX 5700 8GB (1750/875MHz), Audio: AMD Navi 10 HDMI Audio, Monitor: DELL P2415Q, Network: Intel I211 + Intel Wi-Fi 6 AX200

  OS: Ubuntu 23.04, Kernel: 6.2.0-060200rc7daily20230206-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.5 (LLVM 15.0.6 DRM 3.49), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 108.78 |====================================================================
b .  80.59 |==================================================
c .  80.03 |==================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a .  9.18459 |==================================================
b . 12.40150 |====================================================================
c . 12.48760 |====================================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 85.99 |====================================================================
b . 63.90 |===================================================
c . 63.91 |===================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 11.62 |===================================================
b . 15.64 |====================================================================
c . 15.64 |====================================================================
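The Parallel and Standard executors above correspond to ONNX Runtime's two execution modes. A minimal sketch of how a session would be configured for each, assuming onnxruntime is installed and using a placeholder "gpt2.onnx" model path:

    import onnxruntime as ort

    so = ort.SessionOptions()
    # "Executor: Parallel" maps to ORT_PARALLEL, which may run independent
    # branches of the graph concurrently; "Executor: Standard" is the
    # default ORT_SEQUENTIAL mode, which executes nodes one at a time.
    so.execution_mode = ort.ExecutionMode.ORT_PARALLEL
    sess = ort.InferenceSession("gpt2.onnx", sess_options=so,
                                providers=["CPUExecutionProvider"])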
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 2.85784 |====================================================================
b . 2.77160 |==================================================================
c . 2.73687 |=================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 349.96 |=================================================================
b . 360.82 |===================================================================
c . 365.44 |====================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 4.81490 |====================================================================
b . 4.64766 |==================================================================
c . 4.65903 |==================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 207.69 |==================================================================
b . 215.16 |====================================================================
c . 214.63 |====================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 5.82423 |====================================================================
b . 5.74155 |===================================================================
c . 5.45685 |================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 171.92 |================================================================
b . 174.57 |=================================================================
c . 183.28 |====================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 8.26847 |====================================================================
b . 8.16566 |===================================================================
c . 8.18333 |===================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 120.94 |===================================================================
b . 122.46 |====================================================================
c . 122.20 |====================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 171.13 |====================================================================
b . 171.32 |====================================================================
c . 169.07 |===================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 5.84251 |===================================================================
b . 5.83457 |===================================================================
c . 5.91310 |====================================================================
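Each Inference Time Cost chart is the reciprocal of its Inferences Per Second counterpart, expressed in milliseconds. A quick sanity check in Python, using the yolov4 parallel-executor figures above:

    # Value copied from the yolov4 parallel chart above (run a).
    throughput = 2.85784            # inferences per second
    print(1000.0 / throughput)      # ~349.9 ms, matching the reported 349.96 ms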
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 235.29 |====================================================================
b . 223.59 |=================================================================
c . 222.36 |================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.24929 |================================================================
b . 4.47101 |====================================================================
c . 4.49502 |====================================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.814278 |====================================================================
b . 0.814379 |====================================================================
c . 0.805340 |===================================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 1228.37 |===================================================================
b . 1228.24 |===================================================================
c . 1242.17 |====================================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 2.86976 |====================================================================
b . 2.86004 |====================================================================
c . 2.85854 |====================================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 348.46 |====================================================================
b . 349.64 |====================================================================
c . 349.83 |====================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 7.24304 |===================================================================
b . 7.27339 |===================================================================
c . 7.33023 |====================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 138.20 |====================================================================
b . 137.55 |====================================================================
c . 136.46 |===================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 13.28 |====================================================================
b . 13.26 |====================================================================
c . 13.27 |====================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 75.32 |====================================================================
b . 75.43 |====================================================================
c . 75.38 |====================================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 51.48 |====================================================================
b . 51.65 |====================================================================
c . 51.08 |===================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 19.42 |===================================================================
b . 19.36 |===================================================================
c . 19.58 |====================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 115.48 |====================================================================
b . 115.66 |====================================================================
c . 114.71 |===================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 8.65766 |====================================================================
b . 8.64411 |===================================================================
c . 8.71611 |====================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 86.32 |===================================================================
b . 86.17 |===================================================================
c . 87.20 |====================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 11.58 |====================================================================
b . 11.60 |====================================================================
c . 11.47 |===================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 86.59 |====================================================================
b . 86.16 |====================================================================
c . 86.09 |====================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 11.55 |====================================================================
b . 11.61 |====================================================================
c . 11.62 |====================================================================
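The Inferences Per Second figures are timed-loop throughput measurements. A minimal sketch of the idea (hypothetical; not the actual Phoronix Test Suite harness), assuming a local single-input "model.onnx" file:

    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    # Substitute 1 for any symbolic (dynamic) dimensions in this toy example.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    x = np.random.rand(*shape).astype(np.float32)

    n = 100
    start = time.perf_counter()
    for _ in range(n):
        sess.run(None, {inp.name: x})
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:.2f} inferences/sec, {1000 * elapsed / n:.2f} ms each")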
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 21.43 |====================================================================
b . 21.49 |====================================================================
c . 21.50 |====================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 46.67 |====================================================================
b . 46.53 |====================================================================
c . 46.50 |====================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 25.72 |====================================================================
b . 25.83 |====================================================================
c . 25.73 |====================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 38.88 |====================================================================
b . 38.72 |====================================================================
c . 38.86 |====================================================================
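This layout follows the Phoronix Test Suite text-export format. The ONNX Runtime benchmark is published on OpenBenchmarking.org as the onnx test profile, so a comparable run can presumably be reproduced with:

    phoronix-test-suite benchmark pts/onnx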