Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1662 BIOS) and ASUS Intel RPL-S 16GB on Ubuntu 24.04 via the Phoronix Test Suite.

a, b: Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads), Motherboard: ASUS PRIME Z790-P WIFI (1662 BIOS), Chipset: Intel Raptor Lake-S PCH, Memory: 2 x 16GB DDR5-6000MT/s Corsair CMK32GX5M2B6000C36, Disk: Western Digital WD_BLACK SN850X 2000GB, Graphics: ASUS Intel RPL-S 16GB, Audio: Realtek ALC897, Monitor: ASUS VP28U, OS: Ubuntu 24.04, Kernel: 6.10.0-061000rc6daily20240706-generic (x86_64), Desktop: GNOME Shell 46.0, Display Server: X Server 1.21.1.11 + Wayland, OpenGL: 4.6 Mesa 24.2~git2407080600.801ed4~oibaf~n (git-801ed4d 2024-07-08 noble-oibaf-ppa), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 3840x2160

Mobile Neural Network 2.9.b11b7037d
Model: nasnet
ms < Lower Is Better
a . 6.746 |====================================================================
b . 6.698 |====================================================================

Mobile Neural Network 2.9.b11b7037d
Model: mobilenetV3
ms < Lower Is Better
a . 0.973 |====================================================================
b . 0.887 |==============================================================

Mobile Neural Network 2.9.b11b7037d
Model: squeezenetv1.1
ms < Lower Is Better
a . 1.818 |====================================================================
b . 1.594 |============================================================

Mobile Neural Network 2.9.b11b7037d
Model: resnet-v2-50
ms < Lower Is Better
a . 12.63 |====================================================================
b . 12.44 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: SqueezeNetV1.0
ms < Lower Is Better
a . 2.720 |====================================================================
b . 2.697 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: MobileNetV2_224
ms < Lower Is Better
a . 1.815 |================================================================
b . 1.930 |====================================================================

Mobile Neural Network 2.9.b11b7037d
Model: mobilenet-v1-1.0
ms < Lower Is Better
a . 2.061 |====================================================================
b . 2.063 |====================================================================

Mobile Neural Network 2.9.b11b7037d
Model: inception-v3
ms < Lower Is Better
a . 15.83 |====================================================================
b . 15.89 |====================================================================

LiteRT 2024-10-15
Model: Mobilenet Float
Microseconds < Lower Is Better
a . 986.15 |===================================================================
b . 992.64 |===================================================================

LiteRT 2024-10-15
Model: Quantized COCO SSD MobileNet v1
Microseconds < Lower Is Better
a . 3200.84 |==================================================================
b . 3059.11 |===============================================================

LiteRT 2024-10-15
Model: Mobilenet Quant
Microseconds < Lower Is Better
a . 3256.23 |==================================================================
b . 2963.55 |============================================================

LiteRT 2024-10-15
Model: DeepLab V3
Microseconds < Lower Is Better
a . 4619.23 |===================================================
b . 5977.32 |==================================================================

LiteRT 2024-10-15
Model: NASNet Mobile
Microseconds < Lower Is Better
a . 19510.7 |==================================================================
b . 13872.3 |===============================================

LiteRT 2024-10-15
Model: SqueezeNet
Microseconds < Lower Is Better
a . 1382.17 |==================================================================
b . 1371.34 |=================================================================

LiteRT 2024-10-15
Model: Inception V4
Microseconds < Lower Is Better
a . 17993.1 |==================================================================
b . 17757.9 |=================================================================

LiteRT 2024-10-15
Model: Inception ResNet V2
Microseconds < Lower Is Better
a . 28238.6 |=============================================================
b . 30330.4 |==================================================================

XNNPACK b7b048
Model: FP32MobileNetV1
us < Lower Is Better
a . 1052 |========================================================
b . 1293 |=====================================================================

XNNPACK b7b048
Model: FP32MobileNetV2
us < Lower Is Better
a . 1035 |=====================================================================
b . 1016 |====================================================================

XNNPACK b7b048
Model: FP32MobileNetV3Large
us < Lower Is Better
a . 1285 |===================================================================
b . 1321 |=====================================================================

XNNPACK b7b048
Model: FP32MobileNetV3Small
us < Lower Is Better
a . 728 |======================================================================
b . 716 |=====================================================================

XNNPACK b7b048
Model: FP16MobileNetV1
us < Lower Is Better
a . 1667 |===========================================================
b . 1957 |=====================================================================

XNNPACK b7b048
Model: FP16MobileNetV2
us < Lower Is Better
a . 1361 |=====================================================================
b . 1358 |=====================================================================

XNNPACK b7b048
Model: FP16MobileNetV3Large
us < Lower Is Better
a . 1574 |=====================================================================
b . 1496 |==================================================================

XNNPACK b7b048
Model: FP16MobileNetV3Small
us < Lower Is Better
a . 841 |======================================================================
b . 828 |=====================================================================

XNNPACK b7b048
Model: QS8MobileNetV2
us < Lower Is Better
a . 786 |======================================================================
b . 776 |=====================================================================

NAMD 3.0
Input: ATPase with 327,506 Atoms
ns/day > Higher Is Better
a . 1.52846 |==================================================================
b . 1.45419 |===============================================================

NAMD 3.0
Input: STMV with 1,066,628 Atoms
ns/day > Higher Is Better
a . 0.45061 |==================================================================
b . 0.44767 |==================================================================

oneDNN 3.6
Harness: IP Shapes 1D - Engine: CPU
ms < Lower Is Better
a . 1.99276 |==================================================================
b . 1.98251 |==================================================================

oneDNN 3.6
Harness: IP Shapes 3D - Engine: CPU
ms < Lower Is Better
a . 6.67614 |==================================================================
b . 6.65975 |==================================================================

oneDNN 3.6
Harness: Convolution Batch Shapes Auto - Engine: CPU
ms < Lower Is Better
a . 6.18577 |==================================================================
b . 6.19464 |==================================================================

oneDNN 3.6
Harness: Deconvolution Batch shapes_1d - Engine: CPU
ms < Lower Is Better
a . 3.68079 |==================================================================
b . 3.67048 |==================================================================

oneDNN 3.6
Harness: Deconvolution Batch shapes_3d - Engine: CPU
ms < Lower Is Better
a . 4.03162 |==================================================================
b . 4.01431 |==================================================================

oneDNN 3.6
Harness: Recurrent Neural Network Training - Engine: CPU
ms < Lower Is Better
a . 2118.22 |==================================================================
b . 2116.90 |==================================================================

oneDNN 3.6
Harness: Recurrent Neural Network Inference - Engine: CPU
ms < Lower Is Better
a . 1116.11 |==================================================================
b . 1085.07 |================================================================

Build2 0.17
Time To Compile
Seconds < Lower Is Better
a . 88.38 |====================================================================
b . 87.53 |===================================================================

Apache Cassandra 5.0
Test: Writes
Op/s > Higher Is Better
a . 306186 |===================================================================
b . 293824 |================================================================

BYTE Unix Benchmark 5.1.3-git
Computational Test: Pipe
LPS > Higher Is Better
a . 72384778.6 |===============================================================
b . 71939326.6 |===============================================================

BYTE Unix Benchmark 5.1.3-git
Computational Test: Dhrystone 2
LPS > Higher Is Better
a . 1899605032.6 |============================================================
b . 1917381047.3 |=============================================================

BYTE Unix Benchmark 5.1.3-git
Computational Test: System Call
LPS > Higher Is Better
a . 65518076.8 |===============================================================
b . 65688838.3 |===============================================================

BYTE Unix Benchmark 5.1.3-git
Computational Test: Whetstone Double
MWIPS > Higher Is Better
a . 307795.5 |===============================================================
b . 319189.9 |=================================================================
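The run-to-run deltas implied by the bars above can be computed directly from the reported values. A minimal sketch (the three sample entries are copied from the LiteRT results above; for these lower-is-better tests, a negative delta means run b finished faster):

```python
def pct_delta(a, b):
    """Relative change of run b versus run a, in percent."""
    return (b - a) / a * 100.0

# (a, b) times in microseconds, taken from the LiteRT entries above.
results = {
    "NASNet Mobile": (19510.7, 13872.3),
    "DeepLab V3": (4619.23, 5977.32),
    "Mobilenet Quant": (3256.23, 2963.55),
}

for name, (a, b) in results.items():
    print(f"{name}: {pct_delta(a, b):+.1f}%")
# NASNet Mobile: -28.9%
# DeepLab V3: +29.4%
# Mobilenet Quant: -9.0%
```

These three tests show the largest a-versus-b swings in the report; most of the other results (oneDNN, NAMD, BYTE) differ by well under 5%, which is within typical run-to-run variance for this kind of benchmark.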