Intel Core i7-1165G7 testing with a Dell 0GG9PT (3.15.0 BIOS) and Intel Xe TGL GT2 15GB on Ubuntu 23.10 via the Phoronix Test Suite.

a: Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads), Motherboard: Dell 0GG9PT (3.15.0 BIOS), Chipset: Intel Tiger Lake-LP, Memory: 16GB, Disk: Kioxia KBG40ZNS256G NVMe 256GB, Graphics: Intel Xe TGL GT2 15GB (1300MHz), Audio: Realtek ALC289, Network: Intel Wi-Fi 6 AX201, OS: Ubuntu 23.10, Kernel: 6.5.0-14-generic (x86_64), Desktop: GNOME Shell 45.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 23.2.1-1ubuntu3, OpenCL: OpenCL 3.0, Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1200

b: Same configuration as a.
c: Same configuration as a.

Quicksilver 20230818 - Input: CTS2
Figure Of Merit > Higher Is Better
  a: 3768000    b: 3748000    c: 3803000

Quicksilver 20230818 - Input: CORAL2 P1
Figure Of Merit > Higher Is Better
  a: 3924000    b: 3934000    c: 3979000

Quicksilver 20230818 - Input: CORAL2 P2
Figure Of Merit > Higher Is Better
  a: 6967000    b: 6909000    c: 7033000

NAMD 3.0b6 - Input: ATPase with 327,506 Atoms
ns/day > Higher Is Better
  a: 0.50294    b: 0.49416    c: 0.50756

NAMD 3.0b6 - Input: STMV with 1,066,628 Atoms
ns/day > Higher Is Better
  a: 0.14600    b: 0.15515    c: 0.14742

CacheBench - Test: Read
MB/s > Higher Is Better
  a: 8879.89    b: 8883.91    c: 8891.68

CacheBench - Test: Write
MB/s > Higher Is Better
  a: 107023.35    b: 107972.86    c: 108930.33

CacheBench - Test: Read / Modify / Write
MB/s > Higher Is Better
  a: 104486.53    b: 105003.55    c: 103956.94

LZ4 Compression 1.9.4 - Compression Level: 1 - Compression Speed
MB/s > Higher Is Better
  a: 747.67    b: 748.52    c: 748.04

LZ4 Compression 1.9.4 - Compression Level: 1 - Decompression Speed
MB/s > Higher Is Better
  a: 4126.5    b: 4158.3    c: 4164.7

LZ4 Compression 1.9.4 - Compression Level: 3 - Compression Speed
MB/s > Higher Is Better
  a: 115.84    b: 115.91    c: 119.11

LZ4 Compression 1.9.4 - Compression Level: 3 - Decompression Speed
MB/s > Higher Is Better
  a: 3813.8    b: 3836.7    c: 3911.2

LZ4 Compression 1.9.4 - Compression Level: 9 - Compression Speed
MB/s > Higher Is Better
  a: 40.49    b: 40.86    c: 40.73

LZ4 Compression 1.9.4 - Compression Level: 9 - Decompression Speed
MB/s > Higher Is Better
  a: 4049.1    b: 4086.1    c: 4063.8

dav1d 1.4 - Video Input: Chimera 1080p
FPS > Higher Is Better
  a: 372.97    b: 376.79    c: 375.86

dav1d 1.4 - Video Input: Summer Nature 4K
FPS > Higher Is Better
  a: 91.30    b: 89.01    c: 94.83

dav1d 1.4 - Video Input: Summer Nature 1080p
FPS > Higher Is Better
  a: 355.03    b: 353.19    c: 372.70

dav1d 1.4 - Video Input: Chimera 1080p 10-bit
FPS > Higher Is Better
  a: 282.22    b: 292.17    c: 281.72

Intel Open Image Denoise 2.2 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
  a: 0.14    b: 0.14    c: 0.14

Intel Open Image Denoise 2.2 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
  a: 0.14    b: 0.14    c: 0.15

Intel Open Image Denoise 2.2 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
Images / Sec > Higher Is Better
  a: 0.07    b: 0.07    c: 0.07

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M
Seconds < Lower Is Better
  a: 28.54    b: 28.52    c: 28.40

GROMACS 2024 - Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day > Higher Is Better
  a: 0.451    b: 0.529    c: 0.532

ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 79.69    b: 71.34    c: 71.36

ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 12.54    b: 14.01    c: 14.01

ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 104.43    b: 104.63    c: 75.79

ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 9.56824    b: 9.55056    c: 13.18590

ONNX Runtime 1.17 - Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 2.67726    b: 2.99139    c: 3.20647

ONNX Runtime 1.17 - Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 373.51    b: 334.29    c: 311.87

ONNX Runtime 1.17 - Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 4.74083    b: 5.09687    c: 4.40278

ONNX Runtime 1.17 - Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 210.93    b: 196.20    c: 227.13

ONNX Runtime 1.17 - Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 83.73    b: 99.92    c: 100.99

ONNX Runtime 1.17 - Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 11.93970    b: 10.00570    c: 9.89992

ONNX Runtime 1.17 - Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 147.00    b: 86.25    c: 143.06

ONNX Runtime 1.17 - Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 6.80058    b: 11.59150    c: 6.98771

ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 3.57545    b: 4.55577    c: 4.39843

ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 279.68    b: 219.50    c: 227.35

ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 6.59505    b: 6.58223    c: 5.99361

ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 151.62    b: 151.92    c: 166.84

ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 211.62    b: 240.54    c: 214.67

ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 4.72355    b: 4.15476    c: 4.65663

ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 323.59    b: 324.41    c: 330.56

ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 3.08876    b: 3.08090    c: 3.02359

ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 0.381473    b: 0.381990    c: 0.389855

ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 2621.41    b: 2617.86    c: 2565.05

ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 0.701410    b: 0.655317    c: 0.714792

ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 1425.70    b: 1525.97    c: 1399.00

ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 8.50049    b: 8.46368    c: 8.34714

ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 117.64    b: 118.15    c: 119.80

ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 15.15    b: 13.98    c: 15.36

ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 66.00    b: 71.52    c: 65.09

ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 82.38    b: 90.57    c: 92.86

ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 12.14    b: 11.04    c: 10.77

ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 122.13    b: 122.21    c: 114.46

ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 8.18569    b: 8.18057    c: 8.73424

ONNX Runtime 1.17 - Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 25.79    b: 25.25    c: 29.22

ONNX Runtime 1.17 - Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 38.77    b: 39.61    c: 34.22

ONNX Runtime 1.17 - Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 41.43    b: 41.75    c: 24.55

ONNX Runtime 1.17 - Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 24.13    b: 23.95    c: 40.73

ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
  a: 20.50    b: 20.77    c: 24.03

ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
  a: 48.77    b: 48.14    c: 41.60

ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
  a: 33.16    b: 32.54    c: 29.12

ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
  a: 30.15    b: 30.72    c: 34.34
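Since a, b, and c are three runs of the same machine, the interesting question for any single result is run-to-run spread. A minimal sketch of that check, using the Quicksilver CTS2 figures reported above (the dict literal and variable names are illustrative, not part of any PTS tooling):

```python
# Run-to-run variance check for one result from the report above:
# Quicksilver 20230818, Input: CTS2 (Figure Of Merit, higher is better).
runs = {"a": 3768000, "b": 3748000, "c": 3803000}

values = list(runs.values())
mean = sum(values) / len(values)
# Spread between the best and worst run, relative to the worst run.
spread = (max(values) - min(values)) / min(values)

print(f"mean   = {mean:.0f}")    # 3773000
print(f"spread = {spread:.2%}")  # 1.47%
```

A spread under roughly 2%, as here, is ordinary noise on a thermally constrained laptop CPU; the much larger gaps in some ONNX Runtime results (e.g. GPT-2 Standard, T5 Encoder Standard, super-resolution-10 Standard) are the ones worth a second look.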