Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus ICL GT2 16GB on Ubuntu 22.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2304013-NE-ICELAKE3183
HTML result view exported from: https://openbenchmarking.org/result/2304013-NE-ICELAKE3183&sro
icelake 31 march

System configuration (runs a, b, c, d; identical configuration):

Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
Motherboard: Dell 06CDVY (1.0.9 BIOS)
Chipset: Intel Ice Lake-LP
Memory: DRAM 16GB
Disk: Toshiba KBG40ZPZ512G NVMe 512GB
Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz)
Audio: Realtek ALC289
Network: Intel Ice Lake-LP PCH CNVi WiFi
OS: Ubuntu 22.04
Kernel: 5.19.0-38-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.3.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1920x1200

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xb8 - Thermald 2.4.9
Security Details - itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Mitigation of Microcode + tsx_async_abort: Not affected
icelake 31 march - results summary (oneDNN in ms, Blender in seconds; fewer is better):

onednn: IP Shapes 1D - f32 - CPU | a: 9.82185 | b: 10.3437 | c: 10.1213 | d: 9.84325
onednn: IP Shapes 3D - f32 - CPU | a: 6.81592 | b: 6.66296 | c: 6.65945 | d: 6.68831
onednn: IP Shapes 1D - u8s8f32 - CPU | a: 3.56346 | b: 2.16527 | c: 2.11974 | d: 2.08567
onednn: IP Shapes 3D - u8s8f32 - CPU | a: 4.22282 | b: 2.65931 | c: 2.64073 | d: 2.81508
onednn: IP Shapes 1D - bf16bf16bf16 - CPU | a: 35.5552 | b: 24.3916 | c: 24.0965 | d: 24.6786
onednn: IP Shapes 3D - bf16bf16bf16 - CPU | a: 11.7245 | b: 7.59195 | c: 7.62148 | d: 8.35993
onednn: Convolution Batch Shapes Auto - f32 - CPU | a: 17.0713 | b: 12.7224 | c: 12.9216 | d: 12.9928
onednn: Deconvolution Batch shapes_1d - f32 - CPU | a: 23.4941 | b: 15.2277 | c: 15.3118 | d: 20.207
onednn: Deconvolution Batch shapes_3d - f32 - CPU | a: 14.9885 | b: 13.5345 | c: 13.5698 | d: 13.1189
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU | a: 12.2324 | b: 11.2797 | c: 11.2517 | d: 11.4215
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU | a: 4.54904 | b: 2.67147 | c: 2.6843 | d: 3.91928
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU | a: 4.09180 | b: 3.21993 | c: 3.22719 | d: 3.1538
onednn: Recurrent Neural Network Training - f32 - CPU | a: 14822.5 | b: 8683.31 | c: 8829.67 | d: 11688.6
onednn: Recurrent Neural Network Inference - f32 - CPU | a: 6396.75 | b: 5454.56 | c: 5902.15 | d: 6177.94
onednn: Recurrent Neural Network Training - u8s8f32 - CPU | a: 12541.9 | b: 10488.7 | c: 11586.7 | d: 12187.5
onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU | a: 70.3356 | b: 47.8999 | c: 47.9896 | d: 50.8979
onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU | a: 88.9025 | b: 72.4512 | c: 61.6623 | d: 81.9067
onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU | a: 54.5552 | b: 48.9143 | c: 48.6731 | d: 48.5824
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU | a: 6414.18 | b: 5859.75 | c: 5899.34 | d: 6059.37
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU | a: 12525.1 | b: 11598.2 | c: 11588.9 | d: 12146.1
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU | a: 6410.87 | b: 5909 | c: 5904.98 | d: 6148
blender: BMW27 - CPU-Only | a: 667.82 | b: 607.66 | c: 610.25 | d: 629.05
blender: Fishy Cat - CPU-Only | a: 853.93 | b: 787.19 | c: 800.37 | d: 790.54
blender: Pabellon Barcelona - CPU-Only | a: 2258.20 | b: 2127.09 | c: 2170.73 | d: 2102.53
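Since every metric here is "fewer is better", the four runs are easiest to compare as ratios against a baseline run. A minimal Python sketch, using the Blender values from this result file (the `relative_to_a` helper name is illustrative, not part of the Phoronix Test Suite):

```python
# Compare benchmark runs: express each run's time as a fraction of run "a".
# Values are the Blender render times (seconds, fewer is better) from this result file.
blender = {
    "BMW27":              {"a": 667.82,  "b": 607.66,  "c": 610.25,  "d": 629.05},
    "Fishy Cat":          {"a": 853.93,  "b": 787.19,  "c": 800.37,  "d": 790.54},
    "Pabellon Barcelona": {"a": 2258.20, "b": 2127.09, "c": 2170.73, "d": 2102.53},
}

def relative_to_a(results: dict) -> dict:
    """Return each run's time as a fraction of run a (values below 1.0 are faster)."""
    base = results["a"]
    return {run: round(t / base, 3) for run, t in results.items()}

for scene, results in blender.items():
    print(scene, relative_to_a(results))
```

For example, run b finishes the BMW27 scene in about 91% of run a's time, consistent with run a being the slowest configuration across most tests above.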
oneDNN 3.1 (OpenBenchmarking.org; ms, fewer is better; all oneDNN tests built with: 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl)

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
  SE +/- 0.00684, N = 3
  a: 9.82185 (MIN: 8.89) | b: 10.34370 (MIN: 9.48) | c: 10.12130 (MIN: 9.49) | d: 9.84325 (MIN: 9.13)

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
  SE +/- 0.01784, N = 3
  a: 6.81592 (MIN: 6.46) | b: 6.66296 (MIN: 6.36) | c: 6.65945 (MIN: 6.33) | d: 6.68831 (MIN: 6.35)

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
  SE +/- 0.09702, N = 15
  a: 3.56346 (MIN: 2.09) | b: 2.16527 (MIN: 1.95) | c: 2.11974 (MIN: 1.97) | d: 2.08567 (MIN: 1.83)

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
  SE +/- 0.08023, N = 12
  a: 4.22282 (MIN: 2.53) | b: 2.65931 (MIN: 2.55) | c: 2.64073 (MIN: 2.51) | d: 2.81508 (MIN: 2.51)

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
  SE +/- 0.81, N = 12
  a: 35.56 (MIN: 23.87) | b: 24.39 (MIN: 23.78) | c: 24.10 (MIN: 23.5) | d: 24.68 (MIN: 23.99)

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
  SE +/- 0.13472, N = 12
  a: 11.72450 (MIN: 6.27) | b: 7.59195 (MIN: 6.3) | c: 7.62148 (MIN: 6.08) | d: 8.35993 (MIN: 6.28)

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
  SE +/- 0.24, N = 12
  a: 17.07 (MIN: 11.85) | b: 12.72 (MIN: 12.51) | c: 12.92 (MIN: 12.59) | d: 12.99 (MIN: 11.94)

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
  SE +/- 0.06, N = 3
  a: 23.49 (MIN: 19.95) | b: 15.23 (MIN: 14.43) | c: 15.31 (MIN: 14.12) | d: 20.21 (MIN: 18.47)

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
  SE +/- 0.19, N = 15
  a: 14.99 (MIN: 12.74) | b: 13.53 (MIN: 13.28) | c: 13.57 (MIN: 13.3) | d: 13.12 (MIN: 12.87)

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
  SE +/- 0.10, N = 9
  a: 12.23 (MIN: 11.02) | b: 11.28 (MIN: 11.04) | c: 11.25 (MIN: 11.03) | d: 11.42 (MIN: 11.11)

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
  SE +/- 0.05741, N = 3
  a: 4.54904 (MIN: 3.95) | b: 2.67147 (MIN: 2.6) | c: 2.68430 (MIN: 2.55) | d: 3.91928 (MIN: 3.44)

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
  SE +/- 0.07272, N = 14
  a: 4.09180 (MIN: 3.03) | b: 3.21993 (MIN: 3.14) | c: 3.22719 (MIN: 3.14) | d: 3.15380 (MIN: 3.04)

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
  SE +/- 1893.68, N = 12
  a: 14822.50 (MIN: 12309.7) | b: 8683.31 (MIN: 8479.89) | c: 8829.67 (MIN: 8485.03) | d: 11688.60 (MIN: 11403.4)

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
  SE +/- 6.24, N = 3
  a: 6396.75 (MIN: 6202.66) | b: 5454.56 (MIN: 5266.44) | c: 5902.15 (MIN: 5729.58) | d: 6177.94 (MIN: 5867.3)

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
  SE +/- 4.77, N = 3
  a: 12541.9 (MIN: 12352.8) | b: 10488.7 (MIN: 9390.43) | c: 11586.7 (MIN: 11430.2) | d: 12187.5 (MIN: 11857.4)

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
  SE +/- 1.76, N = 12
  a: 70.34 (MIN: 45.92) | b: 47.90 (MIN: 47.32) | c: 47.99 (MIN: 47.42) | d: 50.90 (MIN: 47.33)

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
  SE +/- 1.11, N = 3
  a: 88.90 (MIN: 81.31) | b: 72.45 (MIN: 68.46) | c: 61.66 (MIN: 58.41) | d: 81.91 (MIN: 72.94)

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
  SE +/- 0.74, N = 15
  a: 54.56 (MIN: 46.76) | b: 48.91 (MIN: 48.35) | c: 48.67 (MIN: 48.15) | d: 48.58 (MIN: 48.09)

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
  SE +/- 3.77, N = 3
  a: 6414.18 (MIN: 6217.32) | b: 5859.75 (MIN: 5713.97) | c: 5899.34 (MIN: 5743.16) | d: 6059.37 (MIN: 5802.87)

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
  SE +/- 3.99, N = 3
  a: 12525.1 (MIN: 12322) | b: 11598.2 (MIN: 11435) | c: 11588.9 (MIN: 11421.2) | d: 12146.1 (MIN: 11556)

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
  SE +/- 5.00, N = 3
  a: 6410.87 (MIN: 6212.61) | b: 5909.00 (MIN: 5746.69) | c: 5904.98 (MIN: 5742.42) | d: 6148.00 (MIN: 5862.24)
Blender 3.5 (OpenBenchmarking.org; Seconds, fewer is better)

Blend File: BMW27 - Compute: CPU-Only
  SE +/- 0.39, N = 3
  a: 667.82 | b: 607.66 | c: 610.25 | d: 629.05

Blend File: Fishy Cat - Compute: CPU-Only
  SE +/- 2.14, N = 3
  a: 853.93 | b: 787.19 | c: 800.37 | d: 790.54

Blend File: Pabellon Barcelona - Compute: CPU-Only
  SE +/- 3.83, N = 3
  a: 2258.20 | b: 2127.09 | c: 2170.73 | d: 2102.53
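Each entry above reports an SE (standard error) alongside N, the trial count. Assuming the usual definition (sample standard deviation divided by the square root of N), the figure can be reproduced from raw per-trial times; the trial values below are hypothetical, since the export publishes only the aggregate:

```python
import statistics
from math import sqrt

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / sqrt(len(samples))

# Hypothetical three trials of one test (the export only publishes mean and SE).
trials = [9.81, 9.82, 9.84]
print(f"mean = {statistics.mean(trials):.4f}, "
      f"SE +/- {standard_error(trials):.5f}, N = {len(trials)}")
```

A small SE relative to the mean (as in most N = 3 entries here) indicates stable trials, while large SEs such as the RNN Training f32 run suggest significant run-to-run variance.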
Phoronix Test Suite v10.8.4