dddd: AMD Ryzen AI 9 365 testing with an ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS) and AMD Radeon 512MB graphics on Ubuntu 24.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410162-NE-DDDD9587909&grs&rdt.
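To reproduce this comparison locally, the OpenBenchmarking.org result ID can be passed directly to the Phoronix Test Suite, which fetches the same test profiles and runs them side-by-side against the published results (standard PTS usage; assumes phoronix-test-suite is installed):

    phoronix-test-suite benchmark 2410162-NE-DDDD9587909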
dddd - system configuration (identical for runs a, b, and c):

Processor: AMD Ryzen AI 9 365 @ 4.31GHz (10 Cores / 20 Threads)
Motherboard: ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS)
Chipset: AMD Device 1507
Memory: 4 x 6GB LPDDR5-7500MT/s Micron MT62F1536M32D4DS-026
Disk: 1024GB MTFDKBA1T0QFM-1BD1AABGB
Graphics: AMD Radeon 512MB
Audio: AMD Rembrandt Radeon HD Audio
Network: MEDIATEK Device 7925
OS: Ubuntu 24.10
Kernel: 6.11.0-rc6-phx (x86_64)
Desktop: GNOME Shell 47.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 24.2.3-1ubuntu1 (LLVM 19.1.0 DRM 3.58)
Compiler: GCC 14.2.0
File-System: ext4
Screen Resolution: 2880x1800

Kernel Details: amdgpu.dcdebugmask=0x600 - Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2,rust --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate-epp performance (Boost: Enabled EPP: performance) - Platform Profile: performance - CPU Microcode: 0xb204011 - ACPI Profile: performance
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
dddd - results summary (all tests are time-based, fewer is better; LiteRT in Microseconds, oneDNN in ms, XNNPACK in us):

Test                                               a          b          c
litert: SqueezeNet                                 3395.84    3071.99    2824.06
onednn: IP Shapes 1D - CPU                         3.02556    2.69059    2.72287
litert: Mobilenet Float                            2168.31    2273.80    2415.31
litert: Inception V4                               44473.3    43747.7    40354.9
litert: NASNet Mobile                              10175.3    11035.8    11060.1
litert: Mobilenet Quant                            1668.50    1710.54    1799.12
litert: Quantized COCO SSD MobileNet v1            2664.35    2546.05    2477.80
litert: DeepLab V3                                 3316.22    3156.09    3121.29
xnnpack: FP32MobileNetV1                           2288       2213       2194
xnnpack: FP32MobileNetV2                           1847       1792       1773
xnnpack: FP32MobileNetV3Large                      2142       2147       2068
xnnpack: FP16MobileNetV2                           2371       2363       2441
xnnpack: FP32MobileNetV3Small                      988        1007       1010
xnnpack: QS8MobileNetV2                            1084       1099       1108
onednn: Recurrent Neural Network Training - CPU    3472.41    3521.24    3454.46
xnnpack: FP16MobileNetV1                           3055       3038       3087
litert: Inception ResNet V2                        36956.9    36371.8    36440.2
onednn: Convolution Batch Shapes Auto - CPU        8.47480    8.39592    8.47544
xnnpack: FP16MobileNetV3Large                      2527       2533       2547
onednn: IP Shapes 3D - CPU                         3.04251    3.02972    3.01948
onednn: Deconvolution Batch shapes_1d - CPU        5.44277    5.41282    5.40451
onednn: Recurrent Neural Network Inference - CPU   1810.44    1808.03    1797.94
onednn: Deconvolution Batch shapes_3d - CPU        6.25350    6.26640    6.28803
xnnpack: FP16MobileNetV3Small                      1205       1206       1207
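Since all three runs share identical hardware, one quick way to summarize them is a geometric mean of each run's times after normalizing every test to its fastest result. A minimal Python sketch, illustrative only and not how OpenBenchmarking.org computes its own summary; the dictionary shows two example rows copied from the table above, and the remaining tests would be added the same way:

    # Minimal sketch: summarize runs a/b/c with a geometric mean of
    # normalized lower-is-better results. Illustrative only; not the
    # Phoronix Test Suite's own summary code.
    from math import prod

    # Values copied from the results table above (a, b, c); the remaining
    # tests would be added the same way.
    results = {
        "litert: SqueezeNet": (3395.84, 3071.99, 2824.06),
        "onednn: IP Shapes 1D - CPU": (3.02556, 2.69059, 2.72287),
    }

    runs = ("a", "b", "c")
    for i, run in enumerate(runs):
        # Normalize each test's time against the fastest run for that test,
        # so every score is >= 1.0 and dimensionless.
        scores = [vals[i] / min(vals) for vals in results.values()]
        gmean = prod(scores) ** (1.0 / len(scores))
        print(f"run {run}: normalized geometric mean = {gmean:.4f}")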
LiteRT 2024-10-15, Model: SqueezeNet (Microseconds, Fewer Is Better): a: 3395.84, b: 3071.99, c: 2824.06
oneDNN 3.6, Harness: IP Shapes 1D - Engine: CPU (ms, Fewer Is Better): a: 3.02556 (MIN: 2.31), b: 2.69059 (MIN: 2.34), c: 2.72287 (MIN: 2.27). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
LiteRT 2024-10-15, Model: Mobilenet Float (Microseconds, Fewer Is Better): a: 2168.31, b: 2273.80, c: 2415.31
LiteRT 2024-10-15, Model: Inception V4 (Microseconds, Fewer Is Better): a: 44473.3, b: 43747.7, c: 40354.9
LiteRT 2024-10-15, Model: NASNet Mobile (Microseconds, Fewer Is Better): a: 10175.3, b: 11035.8, c: 11060.1
LiteRT 2024-10-15, Model: Mobilenet Quant (Microseconds, Fewer Is Better): a: 1668.50, b: 1710.54, c: 1799.12
LiteRT 2024-10-15, Model: Quantized COCO SSD MobileNet v1 (Microseconds, Fewer Is Better): a: 2664.35, b: 2546.05, c: 2477.80
LiteRT 2024-10-15, Model: DeepLab V3 (Microseconds, Fewer Is Better): a: 3316.22, b: 3156.09, c: 3121.29
XNNPACK b7b048, Model: FP32MobileNetV1 (us, Fewer Is Better): a: 2288, b: 2213, c: 2194. 1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048, Model: FP32MobileNetV2 (us, Fewer Is Better): a: 1847, b: 1792, c: 1773. 1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048, Model: FP32MobileNetV3Large (us, Fewer Is Better): a: 2142, b: 2147, c: 2068. 1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048, Model: FP16MobileNetV2 (us, Fewer Is Better): a: 2371, b: 2363, c: 2441. 1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048, Model: FP32MobileNetV3Small (us, Fewer Is Better): a: 988, b: 1007, c: 1010. 1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048, Model: QS8MobileNetV2 (us, Fewer Is Better): a: 1084, b: 1099, c: 1108. 1. (CXX) g++ options: -O3 -lrt -lm
oneDNN 3.6, Harness: Recurrent Neural Network Training - Engine: CPU (ms, Fewer Is Better): a: 3472.41 (MIN: 3449.8), b: 3521.24 (MIN: 3437.77), c: 3454.46 (MIN: 3431.05). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
XNNPACK b7b048, Model: FP16MobileNetV1 (us, Fewer Is Better): a: 3055, b: 3038, c: 3087. 1. (CXX) g++ options: -O3 -lrt -lm
LiteRT 2024-10-15, Model: Inception ResNet V2 (Microseconds, Fewer Is Better): a: 36956.9, b: 36371.8, c: 36440.2
oneDNN 3.6, Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, Fewer Is Better): a: 8.47480 (MIN: 8.28), b: 8.39592 (MIN: 8.17), c: 8.47544 (MIN: 8.28). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
XNNPACK b7b048, Model: FP16MobileNetV3Large (us, Fewer Is Better): a: 2527, b: 2533, c: 2547. 1. (CXX) g++ options: -O3 -lrt -lm
oneDNN 3.6, Harness: IP Shapes 3D - Engine: CPU (ms, Fewer Is Better): a: 3.04251 (MIN: 2.93), b: 3.02972 (MIN: 2.93), c: 3.01948 (MIN: 2.94). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN 3.6, Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, Fewer Is Better): a: 5.44277 (MIN: 4.5), b: 5.41282 (MIN: 4.7), c: 5.40451 (MIN: 4.64). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN 3.6, Harness: Recurrent Neural Network Inference - Engine: CPU (ms, Fewer Is Better): a: 1810.44 (MIN: 1787.4), b: 1808.03 (MIN: 1780.05), c: 1797.94 (MIN: 1780.76). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN 3.6, Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better): a: 6.25350 (MIN: 6.02), b: 6.26640 (MIN: 5.85), c: 6.28803 (MIN: 6.09). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
XNNPACK b7b048, Model: FP16MobileNetV3Small (us, Fewer Is Better): a: 1205, b: 1206, c: 1207. 1. (CXX) g++ options: -O3 -lrt -lm
Phoronix Test Suite v10.8.5