xeon-8490h-onednn
2 x Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

convolution-training-inference-all:

  Processor: 2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS), Chipset: Intel Device 1bce, Memory: 1008GB, Disk: 8 x 1920GB Dell Ent NVMe AGN RI U.2 1.92TB + 1920GB INTEL SSDSC2KG01 + 800GB INTEL SSDSC2BA80 + 800GB INTEL SSDSC2BB80, Graphics: ASPEED, Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T

  OS: Ubuntu 22.04, Kernel: 5.15.0-76-generic (x86_64), Display Server: X Server, Vulkan: 1.3.224, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1024x768

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 875.26 |==================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 1284.44 |=================================

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 873.54 |==================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 0.221800 |================================

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 870.37 |==================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 0.255689 |================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 0.413429 |================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 1237.20 |=================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
ms < Lower Is Better
convolution-training-inference-all . 1258.94 |=================================
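
Every configuration above runs against oneDNN 3.1 with Engine: CPU; the timings themselves come from oneDNN's benchmark harness as driven by the Phoronix Test Suite, not from hand-written code. As a minimal sketch of what that CPU engine setup looks like through the oneDNN C++ API (assuming oneDNN is installed and the program is linked with -ldnnl; the tensor shape below is illustrative only and is not one of the benchmark problem sets), the following creates the engine, a stream, and a bf16 memory object like the bf16bf16bf16 configurations use:

  // Minimal sketch, not the benchmark itself. Assumes oneDNN 3.x headers
  // and library are available; build with: g++ example.cpp -ldnnl
  #include <iostream>
  #include "dnnl.hpp"

  int main() {
      // Report the oneDNN version actually linked at runtime (3.1 in this result file).
      const dnnl_version_t *v = dnnl_version();
      std::cout << "oneDNN " << v->major << "." << v->minor << "." << v->patch << "\n";

      // CPU engine and stream: the execution context every primitive
      // (convolution, RNN, ...) measured above runs against.
      dnnl::engine eng(dnnl::engine::kind::cpu, 0);
      dnnl::stream strm(eng);

      // A bf16 activation tensor in NCHW layout; the shape is an arbitrary example.
      dnnl::memory::desc md({1, 64, 56, 56},
                            dnnl::memory::data_type::bf16,
                            dnnl::memory::format_tag::nchw);
      dnnl::memory mem(md, eng);

      std::cout << "Allocated " << md.get_size() << " bytes for a bf16 tensor on CPU\n";
      return 0;
  }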