onnx new

AMD Ryzen Threadripper 3990X 64-Core testing with a Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS) and AMD Radeon RX 5700 8GB on Pop 22.04 via the Phoronix Test Suite.

a, b, c, d (all four runs used an identical configuration):

  Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
  Motherboard: Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 4 x 32GB DDR4-3000MT/s CMK64GX4M2D3000C16
  Disk: Samsung SSD 970 EVO Plus 500GB
  Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: DELL P2415Q
  Network: Intel I211 + Intel Wi-Fi 6 AX200
  OS: Pop 22.04
  Kernel: 6.6.6-76060606-generic (x86_64)
  Desktop: GNOME Shell 42.5
  Display Server: X Server 1.21.1.4
  OpenGL: 4.6 Mesa 23.3.2-1pop0~1704238321~22.04~36f1d0e (LLVM 15.0.7 DRM 3.54)
  Vulkan: 1.3.267
  Compiler: GCC 11.4.0
  File-System: ext4
  Screen Resolution: 3840x2160

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 101.55
  b: 86.59
  c: 76.09
  d: 73.87

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 158.94
  b: 132.11
  c: 119.85
  d: 115.97

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 106.63
  b: 91.04
  c: 81.50
  d: 79.63

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 84.99
  b: 74.78
  c: 65.19
  d: 63.75

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 262.07
  b: 231.65
  c: 223.99
  d: 226.28

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 3.41805
  b: 3.30419
  c: 3.30267
  d: 3.25386

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 24.80
  b: 25.57
  c: 25.87
  d: 25.44

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 7.47450
  b: 7.31859
  c: 7.21143
  d: 7.16765

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 4.56634
  b: 4.42556
  c: 4.39522
  d: 4.40892

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 5.98899
  b: 5.87005
  c: 5.82904
  d: 5.81121
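The Standard and Parallel executor labels in these charts correspond to ONNX Runtime's sequential and parallel graph execution modes. As a minimal sketch of how that comparison maps onto the ONNX Runtime Python API (this is not the exact Phoronix Test Suite harness; the model path, input feed, and iteration counts below are illustrative assumptions):

  import time
  import onnxruntime as ort

  def make_session(model_path, parallel):
      opts = ort.SessionOptions()
      # "Standard" runs graph nodes sequentially; "Parallel" allows
      # independent branches of the graph to execute concurrently.
      opts.execution_mode = (ort.ExecutionMode.ORT_PARALLEL if parallel
                             else ort.ExecutionMode.ORT_SEQUENTIAL)
      return ort.InferenceSession(model_path, sess_options=opts,
                                  providers=["CPUExecutionProvider"])

  def measure_ips(session, feed, warmup=10, iters=100):
      # Warm up, then report steady-state inferences per second.
      for _ in range(warmup):
          session.run(None, feed)
      start = time.perf_counter()
      for _ in range(iters):
          session.run(None, feed)
      return iters / (time.perf_counter() - start)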
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 2.98135
  b: 2.91718
  c: 2.91581
  d: 2.90364

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 21.39
  b: 21.01
  c: 20.87
  d: 20.85

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 8.69295
  b: 8.63703
  c: 8.58440
  d: 8.48632

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 199.46
  b: 195.48
  c: 197.53
  d: 198.97

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 119.15
  b: 118.04
  c: 117.05
  d: 117.50

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 48.23
  b: 47.72
  c: 47.59
  d: 48.12

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 0.789445
  b: 0.798150
  c: 0.788713
  d: 0.795279

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 89.59
  b: 88.74
  c: 89.00
  d: 88.93

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 15.55
  b: 15.44
  c: 15.41
  d: 15.43

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 84.33
  b: 84.27
  c: 83.80
  d: 84.05

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 40.32
  b: 39.10
  c: 38.66
  d: 39.32

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 46.76
  b: 47.59
  c: 47.91
  d: 47.95

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 11.16
  b: 11.27
  c: 11.23
  d: 11.24

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 11.86
  b: 11.86
  c: 11.93
  d: 11.89

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 8.38855
  b: 8.46928
  c: 8.54047
  d: 8.50747

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 20.73
  b: 20.95
  c: 21.01
  d: 20.78
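The Inference Time Cost charts are the same measurements expressed as mean per-inference latency, so for these runs the two metrics are roughly reciprocal. A small sanity check against the Faster R-CNN R-50-FPN-int8 Standard numbers above (values copied from the charts, not re-measured):

  # Mean latency in ms from the Standard-executor chart above.
  frcnn_standard_ms = {"a": 40.32, "b": 39.10, "c": 38.66, "d": 39.32}
  for run, ms in frcnn_standard_ms.items():
      print(f"{run}: ~{1000.0 / ms:.2f} inferences/sec")
  # Prints ~24.80, ~25.58, ~25.87, ~25.43 -- matching the
  # 24.80 / 25.57 / 25.87 / 25.44 inferences-per-second chart within rounding.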
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 64.31
  b: 64.75
  c: 64.88
  d: 64.82

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 133.78
  b: 136.66
  c: 138.66
  d: 139.51

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 335.41
  b: 342.80
  c: 342.95
  d: 344.39

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 1266.70
  b: 1253.22
  c: 1267.89
  d: 1257.91

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 3.81235
  b: 4.31444
  c: 4.46068
  d: 4.41593

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 5.00960
  b: 5.11408
  c: 5.06250
  d: 5.02241

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 115.03
  b: 115.77
  c: 116.48
  d: 117.85

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 218.99
  b: 225.96
  c: 227.64
  d: 226.82

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 9.84382
  b: 11.54480
  c: 13.13890
  d: 13.53360

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 6.28880
  b: 7.56668
  c: 8.34118
  d: 8.62179

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 166.97
  b: 170.35
  c: 171.55
  d: 172.08

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 292.56
  b: 302.65
  c: 302.82
  d: 307.34

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 11.76
  b: 13.37
  c: 15.34
  d: 15.68

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 9.36878
  b: 10.97390
  c: 12.26020
  d: 12.54840
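This report matches the output of the Phoronix Test Suite's ONNX Runtime profile. Assuming the current pts/onnx test profile (the exact profile name and interactive prompts may differ between PTS versions), an equivalent comparison can be reproduced with:

  phoronix-test-suite benchmark onnx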