feb compute
AMD Ryzen 7 PRO 6850U testing with a LENOVO ThinkPad X13 Gen 3 21CM0001US (R22ET51W 1.21 BIOS) and AMD Radeon 680M 1GB on Fedora Linux 39 via the Phoronix Test Suite.

a, b, c:

  Processor: AMD Ryzen 7 PRO 6850U @ 4.77GHz (8 Cores / 16 Threads), Motherboard: LENOVO ThinkPad X13 Gen 3 21CM0001US (R22ET51W 1.21 BIOS), Chipset: AMD 17h-19h PCIe Root Complex, Memory: 4 x 4GB DRAM-6400MT/s Micron MT62F1G32D4DR-031 WT, Disk: 512GB Micron MTFDKBA512TFK, Graphics: AMD Radeon 680M 1GB, Audio: AMD Rembrandt Radeon HD Audio, Network: Qualcomm QCNFA765

  OS: Fedora Linux 39, Kernel: 6.5.7-300.fc39.x86_64 (x86_64), Desktop: GNOME Shell 45.0, Display Server: X Server 1.20.14 + Wayland, OpenGL: 4.6 Mesa 23.2.1 (LLVM 16.0.6 DRM 3.54), Compiler: GCC 13.2.1 20230918, File-System: btrfs, Screen Resolution: 1920x1200

NAMD 3.0b6
Input: ATPase with 327,506 Atoms
ns/day > Higher Is Better
a . 0.37418 |==================================================================
b . 0.37699 |==================================================================
c . 0.37550 |==================================================================

NAMD 3.0b6
Input: STMV with 1,066,628 Atoms
ns/day > Higher Is Better
a . 0.10784 |===============================================================
b . 0.11155 |=================================================================
c . 0.11332 |==================================================================

dav1d 1.4
Video Input: Chimera 1080p
FPS > Higher Is Better
a . 440.16 |================================================================
b . 440.37 |================================================================
c . 458.10 |===================================================================

dav1d 1.4
Video Input: Summer Nature 4K
FPS > Higher Is Better
a . 148.44 |==================================================================
b . 148.55 |==================================================================
c . 150.79 |===================================================================

dav1d 1.4
Video Input: Summer Nature 1080p
FPS > Higher Is Better
a . 611.34 |=================================================================
b . 614.45 |=================================================================
c . 628.79 |===================================================================

dav1d 1.4
Video Input: Chimera 1080p 10-bit
FPS > Higher Is Better
a . 366.39 |===================================================================
b . 360.20 |==================================================================
c . 357.16 |=================================================================

Intel Open Image Denoise 2.2
Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.25 |=====================================================================
b . 0.25 |=====================================================================
c . 0.25 |=====================================================================

Intel Open Image Denoise 2.2
Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.25 |=====================================================================
b . 0.25 |=====================================================================
c . 0.25 |=====================================================================

Intel Open Image Denoise 2.2
Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.12 |=====================================================================
b . 0.12 |=====================================================================
c . 0.12 |=====================================================================

GROMACS 2024
Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day > Higher Is Better
a . 0.726 |====================================================================
b . 0.725 |====================================================================
c . 0.725 |====================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 75.00 |====================================================================
b . 75.03 |====================================================================
c . 74.67 |====================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 13.33 |====================================================================
b . 13.32 |====================================================================
c . 13.38 |====================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 76.76 |============================================================
b . 86.41 |====================================================================
c . 86.17 |====================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 13.01 |====================================================================
b . 11.56 |============================================================
c . 11.59 |=============================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 4.56321 |==================================================================
b . 4.47998 |================================================================
c . 4.59463 |==================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 219.14 |==================================================================
b . 223.21 |===================================================================
c . 217.64 |=================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 4.42743 |===============================================
b . 4.43764 |===============================================
c . 6.25362 |==================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 225.86 |===================================================================
b . 225.34 |===================================================================
c . 159.90 |===============================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 96.26 |====================================================================
b . 96.42 |====================================================================
c . 96.32 |====================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 10.39 |====================================================================
b . 10.37 |====================================================================
c . 10.38 |====================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 95.52 |===============================================================
b . 101.07 |===================================================================
c . 100.85 |===================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 10.46320 |=================================================================
b . 9.88722 |=============================================================
c . 9.91203 |==============================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 6.79852 |==================================================================
b . 6.82285 |==================================================================
c . 6.70092 |=================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 147.09 |==================================================================
b . 146.56 |==================================================================
c . 149.23 |===================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 10.07 |====================================================================
b . 10.10 |====================================================================
c . 10.05 |====================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 99.31 |====================================================================
b . 99.01 |====================================================================
c . 99.54 |====================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 260.16 |=================================================================
b . 266.71 |===================================================================
c . 263.44 |==================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 3.84125 |==================================================================
b . 3.74602 |================================================================
c . 3.79350 |=================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 309.70 |===================================================================
b . 308.33 |===================================================================
c . 308.77 |===================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 3.22604 |==================================================================
b . 3.24057 |==================================================================
c . 3.23610 |==================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.666164 |=================================================================
b . 0.652160 |================================================================
c . 0.581199 |=========================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 1501.12 |==========================================================
b . 1533.36 |===========================================================
c . 1720.57 |==================================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 0.992048 |=================================================================
b . 0.646323 |==========================================
c . 0.645364 |==========================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1008.01 |===========================================
b . 1547.21 |==================================================================
c . 1549.50 |==================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 15.26 |==================================================================
b . 15.68 |====================================================================
c . 14.45 |===============================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 65.52 |================================================================
b . 63.75 |===============================================================
c . 69.22 |====================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 14.91 |=============================================
b . 14.91 |=============================================
c . 22.39 |====================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 67.05 |====================================================================
b . 67.06 |====================================================================
c . 44.65 |=============================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 102.93 |===================================================================
b . 102.73 |===================================================================
c . 102.52 |===================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 9.71292 |==================================================================
b . 9.73161 |==================================================================
c . 9.75122 |==================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 105.62 |===============================================================
b . 113.15 |===================================================================
c . 112.96 |===================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 9.46487 |==================================================================
b . 8.83440 |==============================================================
c . 8.84917 |==============================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 47.99 |===================================================================
b . 48.41 |====================================================================
c . 48.11 |====================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 20.84 |====================================================================
b . 20.65 |===================================================================
c . 20.78 |====================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 67.80 |====================================================================
b . 66.64 |===================================================================
c . 47.84 |================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 14.74 |================================================
b . 15.00 |=================================================
c . 20.90 |====================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 22.93 |====================================================================
b . 22.98 |====================================================================
c . 22.90 |====================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 43.62 |====================================================================
b . 43.51 |====================================================================
c . 43.67 |====================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 23.68 |=========================================================
b . 28.39 |====================================================================
c . 23.69 |=========================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 42.21 |====================================================================
b . 35.22 |=========================================================
c . 42.21 |====================================================================
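The three runs were configured identically, so any spread between a, b, and c reflects run-to-run variance rather than a hardware or software change. One quick way to quantify that spread from the numbers above is to compute, per test, the range as a percentage of the mean. The Python sketch below does this for a few of the results shown; the values are copied from the graphs above, and the spread_pct helper is an illustrative name for this report, not part of the Phoronix Test Suite.

from statistics import mean

def spread_pct(values):
    """Range of a set of run results as a percentage of their mean."""
    return (max(values) - min(values)) / mean(values) * 100.0

# Results for runs a, b, c copied from the report above.
results = {
    "NAMD 3.0b6 - STMV (ns/day)":                      [0.10784, 0.11155, 0.11332],
    "dav1d 1.4 - Chimera 1080p (FPS)":                 [440.16, 440.37, 458.10],
    "ONNX 1.17 - fcn-resnet101-11 Standard (inf/s)":   [0.992048, 0.646323, 0.645364],
    "ONNX 1.17 - ArcFace ResNet-100 Standard (inf/s)": [14.91, 14.91, 22.39],
}

for name, runs in results.items():
    print(f"{name}: {spread_pct(runs):.1f}% spread across a/b/c")

By this measure most tests land within roughly five percent across the three runs, while several ONNX Runtime Standard-executor results (fcn-resnet101-11, ArcFace ResNet-100, super-resolution-10) differ by 30% or more between otherwise identical runs, which is worth keeping in mind when reading those particular graphs.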