onnx 2024
AMD Ryzen 9 7950X 16-Core testing with a ASUS ROG STRIX X670E-E GAMING WIFI (1416 BIOS) and AMD Radeon RX 7900 XT 20GB on Ubuntu 23.10 via the Phoronix Test Suite.

All five runs (a, b, c, d, e) used the identical configuration:

  Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (1416 BIOS)
  Chipset: AMD Device 14d8
  Memory: 2 x 16 GB DRAM-6000MT/s G Skill F5-6000J3038F16G
  Disk: 2000GB Samsung SSD 980 PRO 2TB + 4001GB Western Digital WD_BLACK SN850X 4000GB
  Graphics: AMD Radeon RX 7900 XT 20GB (2025/1249MHz)
  Audio: AMD Navi 31 HDMI/DP
  Monitor: DELL U2723QE
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 23.10
  Kernel: 6.7.0-060700-generic (x86_64)
  Desktop: GNOME Shell 45.2
  Display Server: X Server 1.21.1.7 + Wayland
  OpenGL: 4.6 Mesa 24.1~git2401150600.33b77e~oibaf~m (git-33b77ec 2024-01-15 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.56)
  Compiler: GCC 13.2.0 + LLVM 16.0.6
  File-System: ext4
  Screen Resolution: 3840x2160
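The "Executor: Parallel" and "Executor: Standard" variants in the charts below correspond to ONNX Runtime's parallel and sequential execution modes. As a rough illustration of what each run configures (a minimal Python sketch, not the actual Phoronix Test Suite harness; the model path is a placeholder), a CPU inference session can be opened in either mode like this:

    import onnxruntime as ort

    # Placeholder path -- the PTS profile fetches its own ONNX model files.
    MODEL_PATH = "gpt2.onnx"

    def make_session(parallel: bool) -> ort.InferenceSession:
        opts = ort.SessionOptions()
        # "Executor: Parallel" maps to ORT_PARALLEL, which may execute
        # independent graph branches concurrently via the inter-op thread
        # pool; "Executor: Standard" is the default sequential mode.
        opts.execution_mode = (ort.ExecutionMode.ORT_PARALLEL if parallel
                               else ort.ExecutionMode.ORT_SEQUENTIAL)
        return ort.InferenceSession(MODEL_PATH, sess_options=opts,
                                    providers=["CPUExecutionProvider"])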
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 138.59 |==================================================================
b . 139.98 |===================================================================
c . 140.50 |===================================================================
d . 139.62 |==================================================================
e . 140.67 |===================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 7.21183 |==================================================================
b . 7.13996 |=================================================================
c . 7.11340 |=================================================================
d . 7.15826 |==================================================================
e . 7.10485 |=================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 162.77 |===================================================================
b . 159.02 |=================================================================
c . 163.10 |===================================================================
d . 158.35 |=================================================================
e . 161.01 |==================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 6.14139 |================================================================
b . 6.28555 |==================================================================
c . 6.12878 |================================================================
d . 6.32885 |==================================================================
e . 6.21146 |=================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 12.07 |===================================================================
b . 11.76 |==================================================================
c . 12.19 |====================================================================
d . 11.84 |==================================================================
e . 11.79 |==================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 82.84 |==================================================================
b . 85.03 |====================================================================
c . 82.04 |==================================================================
d . 84.49 |====================================================================
e . 84.87 |====================================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 13.13 |===================================================================
b . 10.86 |========================================================
c . 13.24 |====================================================================
d . 13.01 |===================================================================
e . 11.92 |=============================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 76.18 |========================================================
b . 92.06 |====================================================================
c . 75.51 |========================================================
d . 76.87 |=========================================================
e . 84.71 |===============================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 182.74 |==================================================================
b . 181.90 |==================================================================
c . 182.59 |==================================================================
d . 184.53 |===================================================================
e . 182.75 |==================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 5.47142 |==================================================================
b . 5.49686 |==================================================================
c . 5.47609 |==================================================================
d . 5.41860 |=================================================================
e . 5.47131 |==================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 194.40 |===================================================================
b . 195.09 |===================================================================
c . 175.70 |============================================================
d . 193.58 |==================================================================
e . 191.07 |==================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 5.14336 |============================================================
b . 5.12505 |===========================================================
c . 5.68968 |==================================================================
d . 5.16511 |============================================================
e . 5.23397 |=============================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 15.36 |================================================================
b . 16.43 |====================================================================
c . 16.15 |===================================================================
d . 15.60 |=================================================================
e . 15.76 |=================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 65.10 |====================================================================
b . 60.85 |================================================================
c . 61.93 |=================================================================
d . 64.11 |===================================================================
e . 63.47 |==================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 20.94 |====================================================================
b . 15.52 |==================================================
c . 20.98 |====================================================================
d . 20.88 |====================================================================
e . 18.39 |============================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 47.76 |==================================================
b . 64.44 |====================================================================
c . 47.66 |==================================================
d . 47.88 |===================================================
e . 55.55 |===========================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 909.70 |===================================================================
b . 903.54 |===================================================================
c . 887.45 |=================================================================
d . 894.71 |==================================================================
e . 892.47 |==================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 1.09815 |================================================================
b . 1.10576 |=================================================================
c . 1.12580 |==================================================================
d . 1.11678 |=================================================================
e . 1.11950 |==================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 1207.09 |==================================================================
b . 1126.85 |==============================================================
c . 973.69 |=====================================================
d . 1012.63 |=======================================================
e . 1206.81 |==================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 0.828027 |====================================================
b . 0.887052 |========================================================
c . 1.026520 |=================================================================
d . 0.995978 |===============================================================
e . 0.828290 |====================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 1.87559 |=================================================================
b . 1.82346 |===============================================================
c . 1.90846 |==================================================================
d . 1.86086 |================================================================
e . 1.84548 |================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 533.16 |=================================================================
b . 548.41 |===================================================================
c . 523.98 |================================================================
d . 537.39 |==================================================================
e . 542.06 |==================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 3.24523 |=================================================================
b . 3.29562 |==================================================================
c . 3.31084 |==================================================================
d . 3.28368 |=================================================================
e . 2.47468 |=================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 308.14 |=================================================
b . 303.43 |================================================
c . 302.04 |================================================
d . 304.54 |================================================
e . 423.31 |===================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 36.93 |====================================================================
b . 36.63 |===================================================================
c . 36.74 |====================================================================
d . 36.12 |===================================================================
e . 36.88 |====================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 27.08 |===================================================================
b . 27.30 |===================================================================
c . 27.21 |===================================================================
d . 27.68 |====================================================================
e . 27.11 |===================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 46.85 |====================================================================
b . 46.86 |====================================================================
c . 47.05 |====================================================================
d . 45.11 |=================================================================
e . 45.25 |=================================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 21.35 |=================================================================
b . 21.34 |=================================================================
c . 21.25 |=================================================================
d . 22.26 |====================================================================
e . 22.24 |====================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 382.83 |===================================================================
b . 380.35 |===================================================================
c . 376.53 |==================================================================
d . 375.29 |==================================================================
e . 372.40 |=================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 2.61147 |================================================================
b . 2.62849 |=================================================================
c . 2.65494 |=================================================================
d . 2.66409 |=================================================================
e . 2.68514 |==================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 469.56 |===================================================================
b . 470.28 |===================================================================
c . 387.27 |=======================================================
d . 456.82 |=================================================================
e . 413.53 |===========================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 2.12926 |======================================================
b . 2.12591 |======================================================
c . 2.58167 |==================================================================
d . 2.19267 |========================================================
e . 2.43146 |==============================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 129.14 |===================================================================
b . 129.64 |===================================================================
c . 128.39 |==================================================================
d . 128.83 |===================================================================
e . 128.98 |===================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 7.74287 |==================================================================
b . 7.71312 |=================================================================
c . 7.78802 |==================================================================
d . 7.76130 |==================================================================
e . 7.75298 |==================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 207.20 |===================================================================
b . 207.43 |===================================================================
c . 200.64 |=================================================================
d . 192.93 |==============================================================
e . 165.76 |======================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.82607 |===================================================
b . 4.82062 |===================================================
c . 4.98387 |====================================================
d . 5.33607 |========================================================
e . 6.27994 |==================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 47.92 |===================================================================
b . 47.66 |===================================================================
d . 48.42 |====================================================================
e . 47.90 |===================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 20.87 |====================================================================
b . 20.98 |====================================================================
d . 20.65 |===================================================================
e . 20.87 |====================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 65.78 |====================================================================
b . 65.48 |====================================================================
d . 61.41 |===============================================================
e . 60.99 |===============================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 15.20 |==============================================================
b . 15.27 |===============================================================
d . 16.47 |====================================================================
e . 16.58 |====================================================================
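For reference, the two metrics reported for each test are near-reciprocal views of the same measurement: mean inference time in milliseconds is approximately 1000 divided by inferences per second (e.g., GPT-2 Parallel on run a: 1000 / 138.59 ≈ 7.22 ms, close to the 7.21183 ms reported; the small gap reflects separate averaging). A minimal sketch of such a timing loop (not the PTS harness; the run count, zero-filled inputs, and float32 dtype are placeholder assumptions, and make_session refers to the earlier sketch) might look like:

    import time
    import numpy as np
    import onnxruntime as ort

    def benchmark(session: ort.InferenceSession, feeds: dict, runs: int = 100):
        # Warm up so one-time allocations don't skew the timing.
        for _ in range(5):
            session.run(None, feeds)
        start = time.perf_counter()
        for _ in range(runs):
            session.run(None, feeds)
        elapsed = time.perf_counter() - start
        ips = runs / elapsed                         # "Inferences Per Second"
        ms_per_inference = 1000.0 * elapsed / runs   # "Inference Time Cost (ms)"
        return ips, ms_per_inference

    session = make_session(parallel=True)
    # Build dummy feeds; dynamic dims (non-int) are pinned to 1. Real models
    # may require other dtypes (e.g., int64 token IDs for GPT-2).
    feeds = {i.name: np.zeros([d if isinstance(d, int) else 1 for d in i.shape],
                              dtype=np.float32)
             for i in session.get_inputs()}
    print(benchmark(session, feeds))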