AMD Ryzen 7 PRO 5850U testing with a LENOVO ThinkPad T14s Gen 2a 20XF004WUS (R1NET57W 1.27 BIOS) and AMD Radeon Vega / Mobile 1GB on Fedora Linux 39 via the Phoronix Test Suite.

a, b, c:

  Processor: AMD Ryzen 7 PRO 5850U @ 4.51GHz (8 Cores / 16 Threads)
  Motherboard: LENOVO ThinkPad T14s Gen 2a 20XF004WUS (R1NET57W 1.27 BIOS)
  Chipset: AMD Renoir/Cezanne
  Memory: 2 x 16GB LPDDR4-4266MT/s Micron MT53E2G32D4NQ-046
  Disk: 1024GB SAMSUNG MZVLB1T0HBLR-000L7
  Graphics: AMD Radeon Vega / Mobile 1GB
  Audio: AMD Renoir Radeon HD Audio
  Network: Realtek RTL8111/8168/8411 + MEDIATEK MT7921 802.11ax PCI
  OS: Fedora Linux 39
  Kernel: 6.5.8-300.fc39.x86_64 (x86_64)
  Desktop: GNOME Shell 45.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 23.2.1 (LLVM 16.0.6 DRM 3.54)
  Compiler: GCC 13.2.1 20230918
  File-System: btrfs
  Screen Resolution: 3840x2160

VkFFT 1.3.4
Test: FFT + iFFT R2C / C2R
Benchmark Score > Higher Is Better
a . 3267 |==================================================================
b . 3346 |====================================================================
c . 3392 |=====================================================================

VkFFT 1.3.4
Test: FFT + iFFT C2C 1D batched in half precision
Benchmark Score > Higher Is Better
a . 11894 |===================================================================
b . 12032 |====================================================================
c . 12045 |====================================================================

VkFFT 1.3.4
Test: FFT + iFFT C2C Bluestein in single precision
Benchmark Score > Higher Is Better
a . 1540 |===================================================================
b . 1556 |====================================================================
c . 1579 |=====================================================================

VkFFT 1.3.4
Test: FFT + iFFT C2C 1D batched in double precision
Benchmark Score > Higher Is Better
a . 2683 |=====================================================================
b . 2610 |===================================================================
c . 2671 |=====================================================================

VkFFT 1.3.4
Test: FFT + iFFT C2C 1D batched in single precision
Benchmark Score > Higher Is Better
a . 6126 |=====================================================================
b . 6078 |====================================================================
c . 6141 |=====================================================================

VkFFT 1.3.4
Test: FFT + iFFT C2C multidimensional in single precision
Benchmark Score > Higher Is Better
a . 3308 |====================================================================
b . 3344 |=====================================================================
c . 3368 |=====================================================================

VkFFT 1.3.4
Test: FFT + iFFT C2C Bluestein benchmark in double precision
Benchmark Score > Higher Is Better
a . 912 |=====================================================================
b . 926 |======================================================================
c . 928 |======================================================================

VkFFT 1.3.4
Test: FFT + iFFT C2C 1D batched in single precision, no reshuffling
Benchmark Score > Higher Is Better
a . 6453 |====================================================================
b . 6525 |=====================================================================
c . 6465 |====================================================================

NAMD 3.0b6
Input: ATPase with 327,506 Atoms
ns/day > Higher Is Better
a . 0.32043 |==================================================================
b . 0.30829 |===============================================================
c . 0.30993 |================================================================

NAMD 3.0b6
Input: STMV with 1,066,628 Atoms
ns/day > Higher Is Better
a . 0.09658 |==================================================================
b . 0.09286 |===============================================================
c . 0.09314 |================================================================

LZ4 Compression 1.9.4
Compression Level: 1 - Compression Speed
MB/s > Higher Is Better
a . 779.42 |===================================================================
b . 724.19 |==============================================================
c . 718.46 |==============================================================

LZ4 Compression 1.9.4
Compression Level: 1 - Decompression Speed
MB/s > Higher Is Better
a . 4566.3 |===================================================================
b . 4200.9 |==============================================================
c . 4218.9 |==============================================================

LZ4 Compression 1.9.4
Compression Level: 3 - Compression Speed
MB/s > Higher Is Better
a . 119.47 |===================================================================
b . 110.84 |==============================================================
c . 110.79 |==============================================================

LZ4 Compression 1.9.4
Compression Level: 3 - Decompression Speed
MB/s > Higher Is Better
a . 4203.1 |===================================================================
b . 3836.5 |=============================================================
c . 3895.6 |==============================================================

LZ4 Compression 1.9.4
Compression Level: 9 - Compression Speed
MB/s > Higher Is Better
a . 39.06 |====================================================================
b . 37.65 |==================================================================
c . 37.32 |=================================================================

LZ4 Compression 1.9.4
Compression Level: 9 - Decompression Speed
MB/s > Higher Is Better
a . 4258.6 |===================================================================
b . 4006.1 |===============================================================
c . 4027.6 |===============================================================

dav1d 1.4
Video Input: Chimera 1080p
FPS > Higher Is Better
a . 373.16 |===================================================================
b . 346.73 |==============================================================
c . 348.76 |===============================================================

dav1d 1.4
Video Input: Summer Nature 4K
FPS > Higher Is Better
a . 124.80 |===================================================================
b . 119.84 |================================================================
c . 120.36 |=================================================================

dav1d 1.4
Video Input: Summer Nature 1080p
FPS > Higher Is Better
a . 529.59 |===================================================================
b . 498.53 |===============================================================
c . 499.42 |===============================================================

dav1d 1.4
Video Input: Chimera 1080p 10-bit
FPS > Higher Is Better
a . 306.07 |===================================================================
b . 279.41 |=============================================================
c . 284.84 |==============================================================

Intel Open Image Denoise 2.2
Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.21 |=====================================================================
b . 0.20 |==================================================================
c . 0.20 |==================================================================

Intel Open Image Denoise 2.2
Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.21 |=====================================================================
b . 0.20 |==================================================================
c . 0.20 |==================================================================

Intel Open Image Denoise 2.2
Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.10 |=====================================================================
b . 0.10 |=====================================================================
c . 0.10 |=====================================================================

GROMACS 2024
Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day > Higher Is Better
a . 0.595 |====================================================================
b . 0.583 |===================================================================
c . 0.585 |===================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 73.93 |====================================================================
b . 73.11 |===================================================================
c . 72.84 |===================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 13.52 |===================================================================
b . 13.67 |====================================================================
c . 13.72 |====================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 4.72981 |==================================================================
b . 4.52820 |===============================================================
c . 4.53743 |===============================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 211.42 |================================================================
b . 220.83 |===================================================================
c . 220.38 |===================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 89.05 |====================================================================
b . 88.74 |====================================================================
c . 88.72 |====================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 11.23 |====================================================================
b . 11.27 |====================================================================
c . 11.27 |====================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 7.80252 |==================================================================
b . 7.46483 |===============================================================
c . 7.48038 |===============================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 128.16 |================================================================
b . 133.96 |===================================================================
c . 133.68 |===================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 233.20 |===================================================================
b . 223.24 |================================================================
c . 221.48 |================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.28653 |===============================================================
b . 4.47781 |=================================================================
c . 4.51337 |==================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 0.784697 |=================================================================
b . 0.754855 |===============================================================
c . 0.755065 |===============================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1274.37 |===============================================================
b . 1324.75 |==================================================================
c . 1324.38 |==================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 17.54 |====================================================================
b . 17.30 |===================================================================
c . 17.26 |===================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 57.01 |===================================================================
b . 57.80 |====================================================================
c . 57.92 |====================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 97.13 |====================================================================
b . 93.24 |=================================================================
c . 94.04 |==================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 10.29 |=================================================================
b . 10.72 |====================================================================
c . 10.63 |===================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 57.33 |====================================================================
b . 54.16 |================================================================
c . 53.91 |================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 17.44 |================================================================
b . 18.46 |====================================================================
c . 18.55 |====================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 21.66 |====================================================================
b . 17.77 |========================================================
c . 17.66 |=======================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 46.15 |=======================================================
b . 56.37 |====================================================================
c . 56.62 |====================================================================