ONNX Runtime 1.14
Ryzen 9 7950X

AMD Ryzen 9 7950X 16-Core testing with a ASUS ROG CROSSHAIR X670E HERO (0805 BIOS) and NVIDIA GeForce RTX 2080 Ti 11GB on Ubuntu 22.10 via the Phoronix Test Suite.

a, b, c (identical system configurations):

  Processor: AMD Ryzen 9 7950X 16-Core @ 4.50GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR X670E HERO (0805 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB, Graphics: NVIDIA GeForce RTX 2080 Ti 11GB, Audio: NVIDIA TU102 HD Audio, Monitor: ASUS MG28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411

  OS: Ubuntu 22.10, Kernel: 6.2.0-060200rc7daily20230206-generic (x86_64), Desktop: GNOME Shell 43.1, Display Server: X Server 1.21.1.4, Display Driver: NVIDIA 525.89.02, OpenGL: 4.6.0, Vulkan: 1.3.224, Compiler: GCC 12.2.0 + Clang 15.0.6, File-System: ext4, Screen Resolution: 3840x2160

ONNX Runtime 1.14 - Device: CPU

Model: GPT-2 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 133.08     b: 133.08     c: 134.11
  Inference Time Cost in ms (lower is better):  a: 7.51177    b: 7.51215    c: 7.45343

Model: GPT-2 - Executor: Standard
  Inferences Per Second (higher is better):     a: 142.81     b: 134.89     c: 134.46
  Inference Time Cost in ms (lower is better):  a: 7.00025    b: 7.41112    c: 7.43464
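Each model is exercised under both of ONNX Runtime's execution modes: the "Standard" executor corresponds to sequential graph execution, while the "Parallel" executor allows independent graph nodes to run concurrently. As a minimal sketch of how those two modes are selected through the onnxruntime Python API (the model path and thread-count settings below are illustrative assumptions, not the values used by the Phoronix test profile):

    import onnxruntime as ort

    so = ort.SessionOptions()
    # "Parallel" executor: independent nodes in the graph may run concurrently.
    so.execution_mode = ort.ExecutionMode.ORT_PARALLEL
    # The "Standard" executor would instead use:
    # so.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL
    so.intra_op_num_threads = 0   # 0 lets ONNX Runtime choose (assumption, not the PTS setting)
    so.inter_op_num_threads = 0

    # "gpt2.onnx" is a placeholder path standing in for one of the benchmarked models.
    session = ort.InferenceSession("gpt2.onnx", sess_options=so,
                                   providers=["CPUExecutionProvider"])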
Model: yolov4 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 9.50301    b: 9.65791    c: 9.78889
  Inference Time Cost in ms (lower is better):  a: 105.23     b: 103.54     c: 102.15

Model: yolov4 - Executor: Standard
  Inferences Per Second (higher is better):     a: 10.59880   b: 9.18359    c: 9.21742
  Inference Time Cost in ms (lower is better):  a: 94.35      b: 108.89     c: 108.49

Model: bertsquad-12 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 15.92      b: 15.48      c: 16.37
  Inference Time Cost in ms (lower is better):  a: 62.82      b: 64.59      c: 61.08

Model: bertsquad-12 - Executor: Standard
  Inferences Per Second (higher is better):     a: 20.42      b: 18.89      c: 15.39
  Inference Time Cost in ms (lower is better):  a: 48.98      b: 52.93      c: 64.97

Model: CaffeNet 12-int8 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 828.16     b: 837.01     c: 837.06
  Inference Time Cost in ms (lower is better):  a: 1.20642    b: 1.19369    c: 1.19363
Model: CaffeNet 12-int8 - Executor: Standard
  Inferences Per Second (higher is better):     a: 1163.38    b: 1027.74    c: 1121.90
  Inference Time Cost in ms (lower is better):  a: 0.859083   b: 0.972543   c: 0.890870

Model: fcn-resnet101-11 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 1.95846    b: 1.93620    c: 1.90985
  Inference Time Cost in ms (lower is better):  a: 510.60     b: 516.47     c: 523.60

Model: fcn-resnet101-11 - Executor: Standard
  Inferences Per Second (higher is better):     a: 2.22091    b: 2.17180    c: 3.34519
  Inference Time Cost in ms (lower is better):  a: 450.27     b: 460.45     c: 298.93

Model: ArcFace ResNet-100 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 36.24      b: 35.81      c: 35.66
  Inference Time Cost in ms (lower is better):  a: 27.59      b: 27.92      c: 28.04

Model: ArcFace ResNet-100 - Executor: Standard
  Inferences Per Second (higher is better):     a: 46.16      b: 43.24      c: 42.69
  Inference Time Cost in ms (lower is better):  a: 21.66      b: 23.12      c: 23.42
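Several of the models in this comparison (CaffeNet 12-int8, ResNet50 v1-12-int8, Faster R-CNN R-50-FPN-int8) are int8-quantized variants of fp32 networks. How the pre-quantized model files used by the test profile were produced is not recorded in this result file; purely as an illustration, a comparable int8 model can be generated from an fp32 ONNX file with ONNX Runtime's quantization tooling, where the file names below are placeholders:

    from onnxruntime.quantization import quantize_dynamic, QuantType

    # Illustrative sketch only: dynamic, weight-only int8 quantization of an fp32 model.
    # The input and output file names are placeholders, not the actual model zoo artifacts.
    quantize_dynamic(
        model_input="resnet50-v1-12.onnx",
        model_output="resnet50-v1-12-int8.onnx",
        weight_type=QuantType.QInt8,
    )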
Model: ResNet50 v1-12-int8 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 385.40     b: 390.81     c: 405.07
  Inference Time Cost in ms (lower is better):  a: 2.59397    b: 2.55807    c: 2.46790

Model: ResNet50 v1-12-int8 - Executor: Standard
  Inferences Per Second (higher is better):     a: 448.24     b: 409.35     c: 404.95
  Inference Time Cost in ms (lower is better):  a: 2.23032    b: 2.44234    c: 2.46863

Model: super-resolution-10 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 137.48     b: 138.19     c: 137.20
  Inference Time Cost in ms (lower is better):  a: 7.27337    b: 7.23578    c: 7.28824

Model: super-resolution-10 - Executor: Standard
  Inferences Per Second (higher is better):     a: 162.76     b: 158.69     c: 221.56
  Inference Time Cost in ms (lower is better):  a: 6.14360    b: 6.30094    c: 4.51325

Model: Faster R-CNN R-50-FPN-int8 - Executor: Parallel
  Inferences Per Second (higher is better):     a: 50.85      b: 54.03      c: 50.44
  Inference Time Cost in ms (lower is better):  a: 19.66      b: 18.51      c: 19.82
Model: Faster R-CNN R-50-FPN-int8 - Executor: Standard
  Inferences Per Second (higher is better):     a: 67.90      b: 53.15      c: 68.84
  Inference Time Cost in ms (lower is better):  a: 14.73      b: 18.81      c: 14.52
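The two metrics reported for each test are effectively reciprocals of one another: the Inference Time Cost in milliseconds is roughly 1000 divided by the Inferences Per Second figure, with small differences due to run-to-run averaging and rounding. A quick sanity check, using the GPT-2 Standard result from run a as the example:

    def time_cost_ms(inferences_per_second: float) -> float:
        # Inference Time Cost (ms) = 1000 / Inferences Per Second
        return 1000.0 / inferences_per_second

    # GPT-2 - Executor: Standard, run a: 142.81 inferences per second
    print(time_cost_ms(142.81))  # ~7.00 ms, close to the reported 7.00025 ms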