onnx-new

AMD Ryzen 7 7840HS testing with a NB05 TUXEDO Pulse 14 Gen3 R14FA1 (8.06 BIOS) and AMD Phoenix1 4GB on Ubuntu 23.10 via the Phoronix Test Suite.

a, b, c, d (identical configurations):

  Processor: AMD Ryzen 7 7840HS @ 5.29GHz (8 Cores / 16 Threads)
  Motherboard: NB05 TUXEDO Pulse 14 Gen3 R14FA1 (8.06 BIOS)
  Chipset: AMD Device 14e8
  Memory: 4 x 8GB DRAM-6400MT/s Micron MT62F2G32D4DS-026 WT
  Disk: 2000GB Samsung SSD 980 PRO 2TB
  Graphics: AMD Phoenix1 4GB (2700/800MHz)
  Audio: AMD Rembrandt Radeon HD Audio
  Network: MEDIATEK MT7921K

  OS: Ubuntu 23.10
  Kernel: 6.7.0-060700-generic (x86_64)
  Desktop: GNOME Shell 45.2
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.1~git2401200600.ebcab1~oibaf~m (git-ebcab14 2024-01-20 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.56)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 2880x1800
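These results were gathered with the Phoronix Test Suite's ONNX Runtime test profile (typically installed and run with phoronix-test-suite benchmark onnx). Each model below is measured with both of ONNX Runtime's CPU executors, labeled "Parallel" and "Standard"; these are expected to correspond to ONNX Runtime's parallel and sequential (default) graph execution modes. The following is a minimal Python sketch, under that assumption, of how the two modes are selected through the onnxruntime API; the model path and input feed are placeholders, not the harness the test profile actually uses.

    # Minimal sketch, not the Phoronix Test Suite harness: selecting ONNX Runtime's
    # parallel vs. sequential CPU execution mode, which the "Executor: Parallel" /
    # "Executor: Standard" labels in the results are assumed to toggle.
    import time

    import onnxruntime as ort

    def make_session(model_path: str, parallel: bool) -> ort.InferenceSession:
        opts = ort.SessionOptions()
        # ORT_PARALLEL lets independent graph branches run concurrently;
        # ORT_SEQUENTIAL is the default and is assumed to match "Standard".
        opts.execution_mode = (ort.ExecutionMode.ORT_PARALLEL if parallel
                               else ort.ExecutionMode.ORT_SEQUENTIAL)
        return ort.InferenceSession(model_path, sess_options=opts,
                                    providers=["CPUExecutionProvider"])

    def inferences_per_second(sess: ort.InferenceSession, feed: dict, runs: int = 100) -> float:
        start = time.perf_counter()
        for _ in range(runs):
            sess.run(None, feed)  # run the model 'runs' times on the CPU
        return runs / (time.perf_counter() - start)

    # Hypothetical usage; "model.onnx" and the zero-filled feed are placeholders,
    # not the exact models or inputs the benchmark uses.
    # import numpy as np
    # sess = make_session("model.onnx", parallel=True)
    # feed = {sess.get_inputs()[0].name: np.zeros((1, 3, 224, 224), dtype=np.float32)}
    # print(inferences_per_second(sess, feed))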
ONNX Runtime 1.17

Model: GPT-2 - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 94.83, b: 94.57, c: 94.53, d: 95.05
  Inference Time Cost in ms (lower is better): a: 10.54, b: 10.57, c: 10.58, d: 10.52

Model: GPT-2 - Device: CPU - Executor: Standard
  Inferences Per Second (higher is better): a: 108.89, b: 108.77, c: 108.93, d: 106.68
  Inference Time Cost in ms (lower is better): a: 9.17895, b: 9.18769, c: 9.17392, d: 9.36708

Model: yolov4 - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 6.72099, b: 6.84331, c: 6.87218, d: 7.06379
  Inference Time Cost in ms (lower is better): a: 148.80, b: 146.13, c: 145.51, d: 141.57

Model: yolov4 - Device: CPU - Executor: Standard
  Inferences Per Second (higher is better): a: 8.79725, b: 8.87901, c: 6.61808, d: 9.90419
  Inference Time Cost in ms (lower is better): a: 117.23, b: 112.62, c: 151.10, d: 100.96

Model: T5 Encoder - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 119.32, b: 118.08, c: 118.74, d: 118.69
  Inference Time Cost in ms (lower is better): a: 8.37960, b: 8.46767, c: 8.42040, d: 8.42408

Model: T5 Encoder - Device: CPU - Executor: Standard
  Inferences Per Second (higher is better): a: 125.82, b: 127.49, c: 119.89, d: 127.27
  Inference Time Cost in ms (lower is better): a: 7.94738, b: 7.84187, c: 8.33812, d: 7.85559

Model: bertsquad-12 - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 9.42041, b: 9.75321, c: 9.21077, d: 9.29782
  Inference Time Cost in ms (lower is better): a: 106.20, b: 102.53, c: 108.57, d: 107.55

Model: bertsquad-12 - Device: CPU - Executor: Standard
  Inferences Per Second (higher is better): a: 14.46736, b: 15.33960, c: 9.22079, d: 15.35120
  Inference Time Cost in ms (lower is better): a: 71.04, b: 65.19, c: 108.45, d: 65.14

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 447.70, b: 446.21, c: 448.32, d: 443.00
  Inference Time Cost in ms (lower is better): a: 2.23189, b: 2.23918, c: 2.22863, d: 2.25552

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
  Inferences Per Second (higher is better): a: 476.54, b: 480.67, c: 451.05, d: 484.06
  Inference Time Cost in ms (lower is better): a: 2.09830, b: 2.07897, c: 2.21534, d: 2.06451

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 0.929886, b: 0.892587, c: 0.979780, d: 0.896585
  Inference Time Cost in ms (lower is better): a: 1075.42, b: 1120.34, c: 1020.63, d: 1115.34

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
  Inferences Per Second (higher is better): a: 1.58162, b: 1.58134, c: 1.21598, d: 1.58731
  Inference Time Cost in ms (lower is better): a: 632.26, b: 632.37, c: 822.38, d: 629.99

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 21.45, b: 21.48, c: 20.92, d: 21.22
  Inference Time Cost in ms (lower is better): a: 46.62, b: 46.55, c: 47.81, d: 47.12

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
  Inferences Per Second (higher is better): a: 32.36, b: 32.35, c: 32.19, d: 32.42
  Inference Time Cost in ms (lower is better): a: 30.90, b: 30.91, c: 31.06, d: 30.84

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
  Inferences Per Second (higher is better): a: 227.59, b: 226.45, c: 226.86, d: 226.40
  Inference Time Cost in ms (lower is better): a: 4.39319, b: 4.41529, c: 4.40731, d: 4.41638

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (no result for a)
  Inferences Per Second (higher is better): b: 228.14, c: 228.90, d: 227.21
  Inference Time Cost in ms (lower is better): b: 4.38183, c: 4.36729, d: 4.39957

Model: super-resolution-10 - Device: CPU - Executor: Parallel (no result for a)
  Inferences Per Second (higher is better): b: 70.75, c: 70.61, d: 70.59
  Inference Time Cost in ms (lower is better): b: 14.13, c: 14.16, d: 14.17

Model: super-resolution-10 - Device: CPU - Executor: Standard (no result for a)
  Inferences Per Second (higher is better): b: 107.06, c: 69.16, d: 106.59
  Inference Time Cost in ms (lower is better): b: 9.33894, c: 14.45610, d: 9.37953

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (no result for a)
  Inferences Per Second (higher is better): b: 39.92, c: 39.41, d: 39.57
  Inference Time Cost in ms (lower is better): b: 25.05, c: 25.37, d: 25.27

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (no result for a)
  Inferences Per Second (higher is better): b: 47.18, c: 52.17, d: 53.20
  Inference Time Cost in ms (lower is better): b: 21.19, c: 19.16, d: 18.80
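Note that the two metrics reported for each run are effectively reciprocals of one another: Inference Time Cost in ms ≈ 1000 / Inferences Per Second. For example, GPT-2 with the parallel executor on configuration a runs at 94.83 inferences per second, and 1000 / 94.83 ≈ 10.55 ms, in line with the reported 10.54 ms time cost.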