xeon-8480-onednn
2 x Intel Xeon Platinum 8480+ testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

convolution-train-infer-all:

  Processor: 2 x Intel Xeon Platinum 8480+ @ 3.80GHz (112 Cores / 224 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS), Chipset: Intel Device 1bce, Memory: 1008GB, Disk: 8 x 1920GB Dell Ent NVMe AGN RI U.2 1.92TB + 1920GB INTEL SSDSC2KG01 + 800GB INTEL SSDSC2BA80 + 800GB INTEL SSDSC2BB80, Graphics: ASPEED, Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T

  OS: Ubuntu 22.04, Kernel: 5.15.0-76-generic (x86_64), Display Server: X Server, Vulkan: 1.3.224, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1024x768

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 854.07 |=========================================

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 852.34 |=========================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 0.219917 |=======================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 1221.95 |========================================

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 820.21 |=========================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 1264.50 |========================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 0.252611 |=======================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 0.412562 |=======================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
convolution-train-infer-all . 1246.31 |========================================
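
A comparable run can be launched through the Phoronix Test Suite itself; the sketch below assumes the oneDNN benchmarks are published as the pts/onednn test profile on OpenBenchmarking.org (the profile name is an assumption, not taken from this report):

  # prompts for the harness/data-type options and runs the oneDNN 3.1 tests on the local CPU
  phoronix-test-suite benchmark onednn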