Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus ICL GT2 16GB on Ubuntu 23.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2402176-NE-DDD34232280
ddd
,,"a","b","c"
Processor,,Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads),Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads),Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
Motherboard,,Dell 06CDVY (1.0.9 BIOS),Dell 06CDVY (1.0.9 BIOS),Dell 06CDVY (1.0.9 BIOS)
Chipset,,Intel Ice Lake-LP DRAM,Intel Ice Lake-LP DRAM,Intel Ice Lake-LP DRAM
Memory,,16GB,16GB,16GB
Disk,,Toshiba KBG40ZPZ512G NVMe 512GB,Toshiba KBG40ZPZ512G NVMe 512GB,Toshiba KBG40ZPZ512G NVMe 512GB
Graphics,,Intel Iris Plus ICL GT2 16GB (1100MHz),Intel Iris Plus ICL GT2 16GB (1100MHz),Intel Iris Plus ICL GT2 16GB (1100MHz)
Audio,,Realtek ALC289,Realtek ALC289,Realtek ALC289
Network,,Intel Ice Lake-LP PCH CNVi WiFi,Intel Ice Lake-LP PCH CNVi WiFi,Intel Ice Lake-LP PCH CNVi WiFi
OS,,Ubuntu 23.10,Ubuntu 23.10,Ubuntu 23.10
Kernel,,6.7.0-060700rc5-generic (x86_64),6.7.0-060700rc5-generic (x86_64),6.7.0-060700rc5-generic (x86_64)
Desktop,,GNOME Shell 45.1,GNOME Shell 45.1,GNOME Shell 45.1
Display Server,,X Server + Wayland,X Server + Wayland,X Server + Wayland
OpenGL,,4.6 Mesa 24.0~git2312230600.551924~oibaf~m (git-551924a 2023-12-23 mantic-oibaf-ppa),4.6 Mesa 24.0~git2312230600.551924~oibaf~m (git-551924a 2023-12-23 mantic-oibaf-ppa),4.6 Mesa 24.0~git2312230600.551924~oibaf~m (git-551924a 2023-12-23 mantic-oibaf-ppa)
Compiler,,GCC 13.2.0,GCC 13.2.0,GCC 13.2.0
File-System,,ext4,ext4,ext4
Screen Resolution,,1920x1200,1920x1200,1920x1200
,,"a","b","c"
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,0.459805,0.459073,0.298242
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences/sec)",HIB,118.098,118.739,87.8986
"NAMD - Input: ATPase with 327,506 Atoms (ns/day)",HIB,0.43368,0.32486,0.42965
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,4.12523,4.13691,3.38264
"dav1d - Video Input: Summer Nature 4K (FPS)",HIB,72.82,64.05,71.64
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,80.193,71.608,79.3431
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,6.71417,6.9984,7.3154
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,2.10689,2.28464,2.27521
"NAMD - Input: STMV with 1,066,628 Atoms (ns/day)",HIB,0.10854,0.10144,0.10559
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,19.6888,20.2226,20.9517
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,0.303962,0.310804,0.293851
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,19.9104,20.1428,20.7456
"Llamafile - Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU (Tokens/sec)",HIB,3.84,3.72,3.83
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,2.99698,3.0932,3.04906
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,73.8968,73.4737,75.5616
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,61.6062,63.2175,61.5112
"GROMACS - Implementation: MPI CPU - Input: water_GMX50_bare (Ns/Day)",HIB,0.378,0.386,0.386
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,87.1154,87.0943,88.8547
"dav1d - Video Input: Summer Nature 1080p (FPS)",HIB,288.67,284.16,285.25
"LZ4 Compression - Compression Level: 9 - Decompression Speed (MB/s)",HIB,3529.6,3488.2,3534.8
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,89.2923,89.6151,90.4617
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,244.253,246.188,247.077
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,203.384,204.918,205.606
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,26.5166,26.8058,26.6476
"CacheBench - Test: Read (MB/s)",HIB,7352.493417,7275.828585,7348.691314
"dav1d - Video Input: Chimera 1080p 10-bit (FPS)",HIB,198.52,197.05,198.05
"LZ4 Compression - Compression Level: 9 - Compression Speed (MB/s)",HIB,34.47,34.25,34.23
"LZ4 Compression - Compression Level: 3 - Compression Speed (MB/s)",HIB,99.18,99.06,98.49
"CacheBench - Test: Read / Modify / Write (MB/s)",HIB,87101.098264,86658.599503,87037.946523
"CacheBench - Test: Write (MB/s)",HIB,89660.243566,89703.908685,90114.456817
"Y-Cruncher - Pi Digits To Calculate: 500M (sec)",LIB,48.151,48.382,48.385
"LZ4 Compression - Compression Level: 3 - Decompression Speed (MB/s)",HIB,3344.2,3354.8,3338.6
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,9.94812,9.94281,9.97734
"Llamafile - Test: llava-v1.5-7b-q4 - Acceleration: CPU (Tokens/sec)",HIB,5.99,5.99,5.97
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,3.15455,3.15637,3.14672
"LZ4 Compression - Compression Level: 1 - Decompression Speed (MB/s)",HIB,3640.2,3631.1,3641.2
"LZ4 Compression - Compression Level: 1 - Compression Speed (MB/s)",HIB,626.46,627.46,626.25
"dav1d - Video Input: Chimera 1080p (FPS)",HIB,308.9,308.37,308.61
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,26.0134,26.0424,26.0086
"Intel Open Image Denoise - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec)",HIB,0.05,0.05,0.05
"Intel Open Image Denoise - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,0.10,0.10,0.10
"Intel Open Image Denoise - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,0.10,0.10,0.10
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,38.4377,38.3951,38.4455
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,50.785,49.4447,47.7241
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,37.7094,37.3024,37.5238
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,50.221,49.641,48.1984
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,11.477,11.4794,11.252
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,13.5296,13.6078,13.2315
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,100.519,100.573,100.224
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,148.936,142.886,136.695
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,2174.83,2178.3,3352.97
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,3289.87,3217.45,3403.08
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,4.09193,4.05954,4.04501
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,4.91418,4.8776,4.8609
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,242.404,241.72,295.622
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,333.661,323.283,327.962
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,8.46498,8.41923,11.3743
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,11.1969,11.1565,11.0521
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,316.996,316.814,317.786
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,474.624,437.697,439.512
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,12.458,13.9552,12.5921
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,16.2214,15.8062,16.2456