Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus ICL GT2 16GB on Ubuntu 23.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2402176-NE-DDD34232280
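That single command prompts to install any missing test profiles and then runs this same comparison locally, merging your numbers into the charts below. A minimal sketch of the surrounding workflow, assuming the Phoronix Test Suite is installed and this result ID is still available on OpenBenchmarking.org (show-result and result-file-to-text are standard PTS result options; the local result name is assumed to match the downloaded ID):

    # Run the same test selection and merge your numbers into this comparison
    phoronix-test-suite benchmark 2402176-NE-DDD34232280
    # Review the locally saved results afterwards
    phoronix-test-suite show-result 2402176-NE-DDD34232280
    # Or dump the saved result file to plain text like this document
    phoronix-test-suite result-file-to-text 2402176-NE-DDD34232280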
ddd
a:
Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads), Motherboard: Dell 06CDVY (1.0.9 BIOS), Chipset: Intel Ice Lake-LP DRAM, Memory: 16GB, Disk: Toshiba KBG40ZPZ512G NVMe 512GB, Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz), Audio: Realtek ALC289, Network: Intel Ice Lake-LP PCH CNVi WiFi
OS: Ubuntu 23.10, Kernel: 6.7.0-060700rc5-generic (x86_64), Desktop: GNOME Shell 45.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.0~git2312230600.551924~oibaf~m (git-551924a 2023-12-23 mantic-oibaf-ppa), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1200
b:
Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads), Motherboard: Dell 06CDVY (1.0.9 BIOS), Chipset: Intel Ice Lake-LP DRAM, Memory: 16GB, Disk: Toshiba KBG40ZPZ512G NVMe 512GB, Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz), Audio: Realtek ALC289, Network: Intel Ice Lake-LP PCH CNVi WiFi
OS: Ubuntu 23.10, Kernel: 6.7.0-060700rc5-generic (x86_64), Desktop: GNOME Shell 45.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.0~git2312230600.551924~oibaf~m (git-551924a 2023-12-23 mantic-oibaf-ppa), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1200
c:
Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads), Motherboard: Dell 06CDVY (1.0.9 BIOS), Chipset: Intel Ice Lake-LP DRAM, Memory: 16GB, Disk: Toshiba KBG40ZPZ512G NVMe 512GB, Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz), Audio: Realtek ALC289, Network: Intel Ice Lake-LP PCH CNVi WiFi
OS: Ubuntu 23.10, Kernel: 6.7.0-060700rc5-generic (x86_64), Desktop: GNOME Shell 45.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.0~git2312230600.551924~oibaf~m (git-551924a 2023-12-23 mantic-oibaf-ppa), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1200
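Runs a, b, and c were executed on the identical hardware/software configuration listed above, so the deltas between them reflect run-to-run variance rather than configuration changes. To confirm what your own system's table will look like before comparing against these results, the suite's built-in probe can be used (system-info is a standard PTS option):

    # Print the same hardware/software summary PTS embeds in result files
    phoronix-test-suite system-info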
CacheBench
Test: Read
MB/s > Higher Is Better
a . 7352.49 |==================================================================
b . 7275.83 |=================================================================
c . 7348.69 |==================================================================
CacheBench
Test: Write
MB/s > Higher Is Better
a . 89660.24 |=================================================================
b . 89703.91 |=================================================================
c . 90114.46 |=================================================================
CacheBench
Test: Read / Modify / Write
MB/s > Higher Is Better
a . 87101.10 |=================================================================
b . 86658.60 |=================================================================
c . 87037.95 |=================================================================
dav1d 1.4
Video Input: Chimera 1080p
FPS > Higher Is Better
a . 308.90 |===================================================================
b . 308.37 |===================================================================
c . 308.61 |===================================================================
dav1d 1.4
Video Input: Summer Nature 4K
FPS > Higher Is Better
a . 72.82 |====================================================================
b . 64.05 |============================================================
c . 71.64 |===================================================================
dav1d 1.4
Video Input: Summer Nature 1080p
FPS > Higher Is Better
a . 288.67 |===================================================================
b . 284.16 |==================================================================
c . 285.25 |==================================================================
dav1d 1.4
Video Input: Chimera 1080p 10-bit
FPS > Higher Is Better
a . 198.52 |===================================================================
b . 197.05 |===================================================================
c . 198.05 |===================================================================
GROMACS 2024
Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day > Higher Is Better
a . 0.378 |===================================================================
b . 0.386 |====================================================================
c . 0.386 |====================================================================
Intel Open Image Denoise 2.2
Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.10 |=====================================================================
b . 0.10 |=====================================================================
c . 0.10 |=====================================================================
Intel Open Image Denoise 2.2
Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.10 |=====================================================================
b . 0.10 |=====================================================================
c . 0.10 |=====================================================================
Intel Open Image Denoise 2.2
Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
Images / Sec > Higher Is Better
a . 0.05 |=====================================================================
b . 0.05 |=====================================================================
c . 0.05 |=====================================================================
Llamafile 0.6
Test: llava-v1.5-7b-q4 - Acceleration: CPU
Tokens Per Second > Higher Is Better
a . 5.99 |=====================================================================
b . 5.99 |=====================================================================
c . 5.97 |=====================================================================
Llamafile 0.6
Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
Tokens Per Second > Higher Is Better
a . 3.84 |=====================================================================
b . 3.72 |===================================================================
c . 3.83 |=====================================================================
LZ4 Compression 1.9.4
Compression Level: 1 - Compression Speed
MB/s > Higher Is Better
a . 626.46 |===================================================================
b . 627.46 |===================================================================
c . 626.25 |===================================================================
LZ4 Compression 1.9.4
Compression Level: 1 - Decompression Speed
MB/s > Higher Is Better
a . 3640.2 |===================================================================
b . 3631.1 |===================================================================
c . 3641.2 |===================================================================
LZ4 Compression 1.9.4
Compression Level: 3 - Compression Speed
MB/s > Higher Is Better
a . 99.18 |====================================================================
b . 99.06 |====================================================================
c . 98.49 |====================================================================
LZ4 Compression 1.9.4
Compression Level: 3 - Decompression Speed
MB/s > Higher Is Better
a . 3344.2 |===================================================================
b . 3354.8 |===================================================================
c . 3338.6 |===================================================================
LZ4 Compression 1.9.4
Compression Level: 9 - Compression Speed
MB/s > Higher Is Better
a . 34.47 |====================================================================
b . 34.25 |====================================================================
c . 34.23 |====================================================================
LZ4 Compression 1.9.4
Compression Level: 9 - Decompression Speed
MB/s > Higher Is Better
a . 3529.6 |===================================================================
b . 3488.2 |==================================================================
c . 3534.8 |===================================================================
NAMD 3.0b6
Input: ATPase with 327,506 Atoms
ns/day > Higher Is Better
a . 0.43368 |==================================================================
b . 0.32486 |=================================================
c . 0.42965 |=================================================================
NAMD 3.0b6
Input: STMV with 1,066,628 Atoms
ns/day > Higher Is Better
a . 0.10854 |==================================================================
b . 0.10144 |==============================================================
c . 0.10559 |================================================================
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 61.61 |==================================================================
b . 63.22 |====================================================================
c . 61.51 |==================================================================
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 16.22 |====================================================================
b . 15.81 |==================================================================
c . 16.25 |====================================================================
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 80.19 |====================================================================
b . 71.61 |=============================================================
c . 79.34 |===================================================================
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 12.46 |=============================================================
b . 13.96 |====================================================================
c . 12.59 |=============================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 2.10689 |=============================================================
b . 2.28464 |==================================================================
c . 2.27521 |==================================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 474.62 |===================================================================
b . 437.70 |==============================================================
c . 439.51 |==============================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 3.15455 |==================================================================
b . 3.15637 |==================================================================
c . 3.14672 |==================================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 317.00 |===================================================================
b . 316.81 |===================================================================
c . 317.79 |===================================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 89.29 |===================================================================
b . 89.62 |===================================================================
c . 90.46 |====================================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 11.20 |====================================================================
b . 11.16 |====================================================================
c . 11.05 |===================================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 118.10 |===================================================================
b . 118.74 |===================================================================
c . 87.90 |==================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 8.46498 |================================================
b . 8.41923 |================================================
c . 11.37430 |=================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 2.99698 |================================================================
b . 3.09320 |==================================================================
c . 3.04906 |=================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 333.66 |===================================================================
b . 323.28 |=================================================================
c . 327.96 |==================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 4.12523 |==================================================================
b . 4.13691 |==================================================================
c . 3.38264 |======================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 242.40 |=======================================================
b . 241.72 |=======================================================
c . 295.62 |===================================================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 203.38 |==================================================================
b . 204.92 |===================================================================
c . 205.61 |===================================================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 4.91418 |==================================================================
b . 4.87760 |==================================================================
c . 4.86090 |=================================================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 244.25 |==================================================================
b . 246.19 |===================================================================
c . 247.08 |===================================================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.09193 |==================================================================
b . 4.05954 |=================================================================
c . 4.04501 |=================================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.303962 |================================================================
b . 0.310804 |=================================================================
c . 0.293851 |=============================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 3289.87 |================================================================
b . 3217.45 |==============================================================
c . 3403.08 |==================================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 0.459805 |=================================================================
b . 0.459073 |=================================================================
c . 0.298242 |==========================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 2174.83 |===========================================
b . 2178.30 |===========================================
c . 3352.97 |==================================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 6.71417 |=============================================================
b . 6.99840 |===============================================================
c . 7.31540 |==================================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 148.94 |===================================================================
b . 142.89 |================================================================
c . 136.70 |=============================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 9.94812 |==================================================================
b . 9.94281 |==================================================================
c . 9.97734 |==================================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 100.52 |===================================================================
b . 100.57 |===================================================================
c . 100.22 |===================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 73.90 |===================================================================
b . 73.47 |==================================================================
c . 75.56 |====================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 13.53 |====================================================================
b . 13.61 |====================================================================
c . 13.23 |==================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 87.12 |===================================================================
b . 87.09 |===================================================================
c . 88.85 |====================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 11.48 |====================================================================
b . 11.48 |====================================================================
c . 11.25 |===================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 19.91 |=================================================================
b . 20.14 |==================================================================
c . 20.75 |====================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 50.22 |====================================================================
b . 49.64 |===================================================================
c . 48.20 |=================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 26.52 |===================================================================
b . 26.81 |====================================================================
c . 26.65 |====================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 37.71 |====================================================================
b . 37.30 |===================================================================
c . 37.52 |====================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 19.69 |================================================================
b . 20.22 |==================================================================
c . 20.95 |====================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 50.79 |====================================================================
b . 49.44 |==================================================================
c . 47.72 |================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 26.01 |====================================================================
b . 26.04 |====================================================================
c . 26.01 |====================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 38.44 |====================================================================
b . 38.40 |====================================================================
c . 38.45 |====================================================================
Y-Cruncher 0.8.3
Pi Digits To Calculate: 500M
Seconds < Lower Is Better
a . 48.15 |====================================================================
b . 48.38 |====================================================================
c . 48.39 |====================================================================
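Most results above agree across the three runs to within a few percent, but a few (e.g. dav1d Summer Nature 4K, NAMD ATPase, and the ONNX Runtime Standard-executor results on run c) show a much larger spread. As a hypothetical quick check, the spread can be quantified directly from the printed values; here using the dav1d Summer Nature 4K numbers from above (72.82 / 64.05 / 71.64 FPS) with plain awk:

    awk 'BEGIN { a=72.82; b=64.05; c=71.64;   # FPS for runs a/b/c above
                 mean=(a+b+c)/3; lo=a; hi=a;
                 if (b<lo) lo=b; if (c<lo) lo=c;
                 if (b>hi) hi=b; if (c>hi) hi=c;
                 printf "mean=%.2f FPS, spread=%.1f%%\n", mean, 100*(hi-lo)/mean }'

This prints mean=69.50 FPS, spread=12.6%: run b trails the other two runs by roughly 11-12% on that test, which is worth keeping in mind when reading single-run deltas elsewhere in this file.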