AMD Ryzen 7 7840HS testing with a Framework Laptop 16 (AMD Ryzen 7040 ) FRANMZCP07 (03.01 BIOS) and AMD Radeon 780M 512MB on Ubuntu 24.04 via the Phoronix Test Suite.

a, b, c:

  Processor: AMD Ryzen 7 7840HS @ 5.29GHz (8 Cores / 16 Threads), Motherboard: Framework Laptop 16 (AMD Ryzen 7040 ) FRANMZCP07 (03.01 BIOS), Chipset: AMD Device 14e8, Memory: 2 x 8GB DDR5-5600MT/s A-DATA AD5S56008G-B, Disk: 512GB Western Digital PC SN810 SDCPNRY-512G, Graphics: AMD Radeon 780M 512MB, Audio: AMD Navi 31 HDMI/DP, Network: MEDIATEK MT7922 802.11ax PCI

  OS: Ubuntu 24.04, Kernel: 6.8.0-49-generic (x86_64), Desktop: GNOME Shell 46.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.2~git2406200600.0ac0fb~oibaf~n (git-0ac0fbc 2024-06-20 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 2560x1600

Llamafile 0.8.16
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16
Tokens Per Second > Higher Is Better
a . 20.26 |=================================================================
b . 21.14 |====================================================================
c . 20.31 |=================================================================

Llamafile 0.8.16
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128
Tokens Per Second > Higher Is Better
a . 21.17 |====================================================================
b . 21.12 |====================================================================
c . 21.17 |====================================================================

Llamafile 0.8.16
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256
Tokens Per Second > Higher Is Better
a . 4096 |=====================================================================
b . 4096 |=====================================================================
c . 4096 |=====================================================================

Llamafile 0.8.16
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512
Tokens Per Second > Higher Is Better
a . 8192 |=====================================================================
b . 8192 |=====================================================================
c . 8192 |=====================================================================

Llamafile 0.8.16
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16
Tokens Per Second > Higher Is Better
a . 26.06 |================================================================
b . 25.94 |================================================================
c . 27.75 |====================================================================

Llamafile 0.8.16
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024
Tokens Per Second > Higher Is Better
a . 16384 |====================================================================
b . 16384 |====================================================================
c . 16384 |====================================================================

Llamafile 0.8.16
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048
Tokens Per Second > Higher Is Better
a . 32768 |====================================================================
b . 32768 |====================================================================
c . 32768 |====================================================================

Llamafile 0.8.16
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128
Tokens Per Second > Higher Is Better
a . 26.99 |==================================================================
b . 27.02 |==================================================================
c . 27.81 |====================================================================

Llamafile 0.8.16
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16
Tokens Per Second > Higher Is Better
a . 10.50 |================================================================
b . 11.03 |===================================================================
c . 11.17 |====================================================================

Llamafile 0.8.16
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256
Tokens Per Second > Higher Is Better
a . 4096 |=====================================================================
b . 4096 |=====================================================================
c . 4096 |=====================================================================

Llamafile 0.8.16
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512
Tokens Per Second > Higher Is Better
a . 8192 |=====================================================================
b . 8192 |=====================================================================
c . 8192 |=====================================================================

Llamafile 0.8.16
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128
Tokens Per Second > Higher Is Better
a . 11.03 |===================================================================
b . 11.17 |====================================================================
c . 11.17 |====================================================================

Llamafile 0.8.16
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024
Tokens Per Second > Higher Is Better
a . 16384 |====================================================================
b . 16384 |====================================================================
c . 16384 |====================================================================

Llamafile 0.8.16
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048
Tokens Per Second > Higher Is Better
a . 32768 |====================================================================
b . 32768 |====================================================================
c . 32768 |====================================================================

Llamafile 0.8.16
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256
Tokens Per Second > Higher Is Better
a . 4096 |=====================================================================
b . 4096 |=====================================================================
c . 4096 |=====================================================================

Llamafile 0.8.16
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512
Tokens Per Second > Higher Is Better
a . 8192 |=====================================================================
b . 8192 |=====================================================================
c . 8192 |=====================================================================

Llamafile 0.8.16
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024
Tokens Per Second > Higher Is Better
a . 16384 |====================================================================
b . 16384 |====================================================================
c . 16384 |====================================================================

Llamafile 0.8.16
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048
Tokens Per Second > Higher Is Better
a . 32768 |====================================================================
b . 32768 |====================================================================
c . 32768 |====================================================================

x265 4.1
Video Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 14.16 |===================================================================
b . 13.72 |=================================================================
c . 14.43 |====================================================================

x265 4.1
Video Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 68.52 |===================================================================
b . 65.95 |================================================================
c . 69.63 |====================================================================
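The run-to-run variation visible in the bars can be quantified directly from the reported values. A minimal Python sketch, using the x265 Bosphorus 1080p numbers from the table above (the calculation itself is an illustration, not part of the Phoronix Test Suite output):

```python
# Frames-per-second results for the three runs of x265 Bosphorus 1080p,
# copied from the result table above.
fps = {"a": 68.52, "b": 65.95, "c": 69.63}

best = max(fps.values())
worst = min(fps.values())
mean = sum(fps.values()) / len(fps)

# Relative spread between the fastest and slowest run, as a percentage
# of the slowest run.
spread_pct = (best - worst) / worst * 100

print(f"mean: {mean:.2f} FPS, spread: {spread_pct:.1f}%")
```

A spread on the order of a few percent between otherwise identical runs is typical for a laptop-class APU, where thermal and power limits shift clock speeds between passes.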