dnn Benchmarks for a future article.

AMD Ryzen AI 9 HX 370 testing with an ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS) and llvmpipe on Ubuntu 24.10 via the Phoronix Test Suite.

a, b (identical configurations):

  Processor: AMD Ryzen AI 9 HX 370 @ 4.37GHz (12 Cores / 24 Threads)
  Motherboard: ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS)
  Chipset: AMD Device 1507
  Memory: 4 x 8GB LPDDR5-7500MT/s Samsung K3KL9L90CM-MGCT
  Disk: 1024GB MTFDKBA1T0QFM-1BD1AABGB
  Graphics: llvmpipe
  Audio: AMD Rembrandt Radeon HD Audio
  Network: MEDIATEK Device 7925
  OS: Ubuntu 24.10
  Kernel: 6.11.0-rc6-phx (x86_64)
  Desktop: GNOME Shell 47.0
  Display Server: X Server + Wayland
  OpenGL: 4.5 Mesa 24.2.3-1ubuntu1 (LLVM 19.1.0 256 bits)
  Compiler: GCC 14.2.0
  File-System: ext4
  Screen Resolution: 2880x1800

oneDNN 3.6 - Engine: CPU (ms, lower is better)

  Harness                               a          b
  IP Shapes 1D                          2.76848    2.68462
  IP Shapes 3D                          3.56923    3.57302
  Convolution Batch Shapes Auto         8.49180    8.43828
  Deconvolution Batch shapes_1d         5.20977    5.17517
  Deconvolution Batch shapes_3d         6.37788    6.37874
  Recurrent Neural Network Training     3085.55    3030.55
  Recurrent Neural Network Inference    2194.27    1616.68

LiteRT 2024-10-15 (Microseconds, lower is better)

  Model                                 a          b
  DeepLab V3                            4268.97    3750.10
  SqueezeNet                            3784.36    3787.29
  Inception V4                          49706.2    50678.3
  NASNet Mobile                         12624.7    12537.6
  Mobilenet Float                       2260.57    2266.86
  Mobilenet Quant                       1976.79    1801.70
  Inception ResNet V2                   39825.1    37957.1
  Quantized COCO SSD MobileNet v1       2844.21    2983.71

XNNPACK b7b048 (us, lower is better)

  Model                                 a          b
  FP32MobileNetV1                       2387       2353
  FP32MobileNetV2                       1932       1908
  FP32MobileNetV3Large                  2267       2215
  FP32MobileNetV3Small                  1103       1084
  FP16MobileNetV1                       3214       3203
  FP16MobileNetV2                       2368       2358
  FP16MobileNetV3Large                  2583       2536
  FP16MobileNetV3Small                  1267       1248
  QS8MobileNetV2                        1160       1129
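For comparing the two runs, a minimal sketch like the one below can compute the relative delta between configurations a and b from the values listed above. The values are copied from the tables; the selection of results, the data layout, and the helper name are assumptions made for this example and are not part of the Phoronix Test Suite output.

# Illustrative sketch: how much faster (or slower) run "b" is than run "a"
# for a few of the results above. All metrics are "lower is better" times.
# The RESULTS layout and percent_change() are hypothetical helpers for this
# example, not anything emitted by the Phoronix Test Suite.

RESULTS = [
    ("oneDNN 3.6 / RNN Inference (ms)",  2194.27, 1616.68),
    ("LiteRT / DeepLab V3 (us)",         4268.97, 3750.10),
    ("LiteRT / Mobilenet Quant (us)",    1976.79, 1801.70),
    ("XNNPACK / QS8MobileNetV2 (us)",    1160.0,  1129.0),
]

def percent_change(a: float, b: float) -> float:
    """Relative change of b versus a; negative means b took less time."""
    return (b - a) / a * 100.0

if __name__ == "__main__":
    for name, a, b in RESULTS:
        print(f"{name:36s} a={a:10.2f}  b={b:10.2f}  delta={percent_change(a, b):+6.2f}%")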