onnx 1.14 alderlake

Intel Core i7-1280P testing with a MSI MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 15GB on Ubuntu 22.10 via the Phoronix Test Suite.

Test configurations a, b, and c were identical:

  Processor: Intel Core i7-1280P @ 4.70GHz (14 Cores / 20 Threads), Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS), Chipset: Intel Alder Lake PCH, Memory: 16GB, Disk: 1024GB Micron_3400_MTFDKBA1T0TFH, Graphics: MSI Intel ADL GT2 15GB (1450MHz), Audio: Realtek ALC274, Network: Intel Alder Lake-P PCH CNVi WiFi

  OS: Ubuntu 22.10, Kernel: 5.19.0-31-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server 1.21.1.4 + Wayland, OpenGL: 4.6 Mesa 22.2.1, OpenCL: OpenCL 3.0, Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1920x1080

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 84.07
  b: 84.51
  c: 86.34

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 11.89
  b: 11.83
  c: 11.58

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 107.83
  b: 107.74
  c: 109.45

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 9.26843
  b: 9.27584
  c: 9.13069
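The Parallel and Standard executor labels used throughout these results correspond to ONNX Runtime's two execution modes. As a point of reference only (this is not the Phoronix Test Suite harness, and the model path below is a placeholder), a minimal Python sketch of how the two modes are selected through the onnxruntime API might look like this:

    import onnxruntime as ort

    def make_session(model_path: str, parallel: bool) -> ort.InferenceSession:
        opts = ort.SessionOptions()
        # "Executor: Parallel" maps to ORT_PARALLEL, which may execute
        # independent branches of the graph concurrently; "Executor: Standard"
        # is the default sequential mode, ORT_SEQUENTIAL.
        opts.execution_mode = (ort.ExecutionMode.ORT_PARALLEL if parallel
                               else ort.ExecutionMode.ORT_SEQUENTIAL)
        # CPU-only provider, matching the "Device: CPU" entries in these results.
        return ort.InferenceSession(model_path, sess_options=opts,
                                    providers=["CPUExecutionProvider"])

    parallel_session = make_session("gpt2.onnx", parallel=True)   # placeholder path
    standard_session = make_session("gpt2.onnx", parallel=False)  # placeholder path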
ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 5.04271
  b: 5.34531
  c: 5.29790

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 198.30
  b: 187.08
  c: 188.75

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 5.59971
  b: 5.67484
  c: 5.60044

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 178.58
  b: 176.21
  c: 178.55

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 7.23230
  b: 7.30387
  c: 7.04533

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 138.27
  b: 136.91
  c: 141.94

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 5.95587
  b: 6.55757
  c: 7.55227

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 167.90
  b: 152.49
  c: 132.41

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 148.99
  b: 163.67
  c: 155.04

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 6.70998
  b: 6.10795
  c: 6.44815
ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 416.96
  b: 417.66
  c: 419.60

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 2.39713
  b: 2.39311
  c: 2.38199

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 0.727839
  b: 0.736007
  c: 0.728000

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 1373.93
  b: 1358.68
  c: 1373.62

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 0.867360
  b: 0.869303
  c: 0.868698

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 1152.92
  b: 1150.34
  c: 1151.15

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 4.80406
  b: 4.82474
  c: 4.76665

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 208.16
  b: 207.26
  c: 209.79

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 4.97168
  b: 4.97715
  c: 4.96983

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 201.14
  b: 200.92
  c: 201.21
ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 93.74
  b: 92.53
  c: 93.48

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 10.67
  b: 10.81
  c: 10.70

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 118.65
  b: 125.96
  c: 125.33

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 8.42698
  b: 7.93792
  c: 7.97784

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 37.76
  b: 38.08
  c: 37.96

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 26.48
  b: 26.26
  c: 26.34

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 42.34
  b: 42.29
  c: 43.19

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 23.62
  b: 23.64
  c: 23.15

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 23.64
  b: 23.23
  c: 23.72

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 42.29
  b: 43.05
  c: 42.15
ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 32.74
  b: 29.18
  c: 28.97

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 30.54
  b: 34.26
  c: 34.52
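The two figures reported for every configuration are reciprocal views of the same measurement: Inferences Per Second and the per-inference Time Cost in milliseconds. A rough, hypothetical timing loop (the function name, feed dictionary, and run count below are illustrative, not the benchmark's actual measurement code) shows how the two relate:

    import time
    import onnxruntime as ort

    def measure(session: ort.InferenceSession, feeds: dict, runs: int = 100):
        start = time.perf_counter()
        for _ in range(runs):
            session.run(None, feeds)
        elapsed = time.perf_counter() - start
        ips = runs / elapsed                  # Inferences Per Second (higher is better)
        time_cost_ms = elapsed / runs * 1e3   # Inference Time Cost in ms (lower is better)
        return ips, time_cost_ms

For example, the GPT-2 Standard run on configuration c above reports roughly 109.45 inferences per second and 9.13 ms per inference, i.e. about 1000 / 9.13 ≈ 109.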