oneDNN 1185G7

Intel Core i7-1185G7 testing with a Dell 0DXP1F (3.0.3 BIOS) and Intel Xe TGL GT2 3GB on Ubuntu 21.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2107150-IB-ONEDNN11860&sor.
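
This is a comparison of three runs (labeled 1, 2, and 3) of the pts/onednn test profile on the same machine. If the result remains public on OpenBenchmarking.org, it should be possible to reproduce the comparison locally by installing the Phoronix Test Suite and running: phoronix-test-suite benchmark 2107150-IB-ONEDNN11860 - which fetches the saved test configuration and offers to run the same tests alongside these numbers.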

System Details (identical for runs 1, 2, and 3)

  Processor:          Intel Core i7-1185G7 @ 4.80GHz (4 Cores / 8 Threads)
  Motherboard:        Dell 0DXP1F (3.0.3 BIOS)
  Chipset:            Intel Tiger Lake-LP
  Memory:             16GB
  Disk:               Micron 2300 NVMe 512GB
  Graphics:           Intel Xe TGL GT2 3GB (1350MHz)
  Audio:              Realtek ALC289
  Network:            Intel Wi-Fi 6 AX201
  OS:                 Ubuntu 21.04
  Kernel:             5.13.0-051300-generic (x86_64)
  Desktop:            GNOME Shell 3.38.4
  Display Server:     X Server + Wayland
  OpenGL:             4.6 Mesa 21.2.0-devel (git-dd98918 2021-07-12 hirsute-oibaf-ppa)
  Vulkan:             1.2.182
  Compiler:           GCC 10.3.0
  File-System:        ext4
  Screen Resolution:  1920x1200

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x88

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
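
The Processor Details and Security Details above come from standard Linux sysfs entries, so they can be spot-checked on a comparable machine. A minimal sketch of reading them (assuming a Linux system with the usual sysfs layout; build with g++ -std=c++17):

    // Print the CPU frequency scaling governor and the kernel's reported
    // mitigation status for each known CPU vulnerability, as PTS records them.
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <string>

    static std::string first_line(const std::filesystem::path &p) {
        std::ifstream f(p);
        std::string line;
        std::getline(f, line);
        return line;
    }

    int main() {
        std::cout << "Scaling governor: "
                  << first_line("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor") << "\n";
        for (const auto &entry :
             std::filesystem::directory_iterator("/sys/devices/system/cpu/vulnerabilities")) {
            std::cout << entry.path().filename().string() << ": "
                      << first_line(entry.path()) << "\n";
        }
        return 0;
    }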

oneDNN 2.1.2 Results Summary (all tests run with Engine: CPU; values in ms, fewer is better)

  Harness - Data Type                                        Run 1      Run 2      Run 3
  IP Shapes 1D - f32                                         11.5137    11.5231    11.5417
  IP Shapes 3D - f32                                         6.45882    6.48767    7.24078
  IP Shapes 1D - u8s8f32                                     2.41309    2.41590    2.59401
  IP Shapes 3D - u8s8f32                                     2.78481    2.78597    3.11593
  IP Shapes 1D - bf16bf16bf16                                25.3795    25.3887    25.8559
  IP Shapes 3D - bf16bf16bf16                                7.73083    7.82006    25.98626
  Convolution Batch Shapes Auto - f32                        13.1185    13.1397    13.0745
  Deconvolution Batch shapes_1d - f32                        14.2097    13.9111    13.8849
  Deconvolution Batch shapes_3d - f32                        11.35733   11.25034   11.26379
  Convolution Batch Shapes Auto - u8s8f32                    9.65443    9.64037    9.61514
  Deconvolution Batch shapes_1d - u8s8f32                    2.99331    2.98337    2.99665
  Deconvolution Batch shapes_3d - u8s8f32                    2.70143    2.69884    2.69537
  Recurrent Neural Network Training - f32                    8885.95    8875.65    8884.92
  Recurrent Neural Network Inference - f32                   4560.81    4915.80    4562.59
  Recurrent Neural Network Training - u8s8f32                8879.39    9309.74    8885.72
  Convolution Batch Shapes Auto - bf16bf16bf16               51.2639    52.1071    51.2288
  Deconvolution Batch shapes_1d - bf16bf16bf16               56.6140    58.5189    56.7043
  Deconvolution Batch shapes_3d - bf16bf16bf16               41.1308    41.5411    40.8950
  Recurrent Neural Network Inference - u8s8f32               4560.69    4855.10    4564.07
  Matrix Multiply Batch Shapes Transformer - f32             4.79333    5.12274    4.80231
  Recurrent Neural Network Training - bf16bf16bf16           8872.45    9294.00    8881.77
  Recurrent Neural Network Inference - bf16bf16bf16          4561.54    4850.57    4566.99
  Matrix Multiply Batch Shapes Transformer - u8s8f32         2.79809    3.00737    2.76633
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16    11.8601    12.5508    11.8980

Compiler notes (all tests): (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

  Run 1: 11.51  (SE +/- 0.42, N = 12, MIN: 6.1)
  Run 2: 11.52  (SE +/- 0.43, N = 12, MIN: 6.05)
  Run 3: 11.54  (SE +/- 0.42, N = 12, MIN: 6.12)
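
For context, the IP (inner product) harnesses time oneDNN's fully connected primitive. Below is a minimal sketch of creating and executing such a primitive with the oneDNN 2.x C++ API; the shapes are illustrative placeholders, not the benchmark's actual "IP Shapes" problem sizes (build roughly as: g++ -std=c++11 ip_sketch.cpp -ldnnl):

    #include <unordered_map>
    #include "dnnl.hpp"

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);   // CPU engine, matching the "Engine: CPU" results
        stream strm(eng);

        // Hypothetical fully connected layer: batch 32, 128 inputs, 64 outputs, all f32.
        const memory::dim N = 32, IC = 128, OC = 64;
        memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::nc);
        memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
        memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::nc);

        // oneDNN 2.x flow: operation descriptor -> primitive descriptor -> primitive.
        inner_product_forward::desc ip_d(prop_kind::forward_inference, src_md, wei_md, dst_md);
        inner_product_forward::primitive_desc ip_pd(ip_d, eng);
        inner_product_forward ip(ip_pd);

        memory src(src_md, eng), wei(wei_md, eng), dst(dst_md, eng);
        ip.execute(strm, {{DNNL_ARG_SRC, src}, {DNNL_ARG_WEIGHTS, wei}, {DNNL_ARG_DST, dst}});
        strm.wait();
        return 0;
    }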

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

  Run 1: 6.45882  (SE +/- 0.05308, N = 9, MIN: 5.93)
  Run 2: 6.48767  (SE +/- 0.05056, N = 10, MIN: 5.92)
  Run 3: 7.24078  (SE +/- 0.06555, N = 7, MIN: 5.91)

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

  Run 1: 2.41309  (SE +/- 0.03627, N = 12, MIN: 1.47)
  Run 2: 2.41590  (SE +/- 0.03595, N = 12, MIN: 1.48)
  Run 3: 2.59401  (SE +/- 0.03672, N = 12, MIN: 1.47)

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

  Run 1: 2.78481  (SE +/- 0.03750, N = 12, MIN: 2.29)
  Run 2: 2.78597  (SE +/- 0.04040, N = 12, MIN: 2.29)
  Run 3: 3.11593  (SE +/- 0.04428, N = 12, MIN: 2.28)
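
The data-type labels follow oneDNN's benchdnn naming: f32 runs everything in single precision; u8s8f32 is the int8 inference configuration with an unsigned 8-bit source, signed 8-bit weights, and a 32-bit floating point destination; bf16bf16bf16 keeps source, weights, and destination all in bfloat16.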

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

  Run 1: 25.38  (SE +/- 0.45, N = 12, MIN: 17.98)
  Run 2: 25.39  (SE +/- 0.47, N = 12, MIN: 17.92)
  Run 3: 25.86  (SE +/- 0.47, N = 12, MIN: 18.09)

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

  Run 1: 7.73083   (SE +/- 0.05002, N = 14, MIN: 5.15)
  Run 2: 7.82006   (SE +/- 0.05152, N = 15, MIN: 5.2)
  Run 3: 25.98626  (SE +/- 17.73871, N = 12, MIN: 5.05)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

  Run 3: 13.07  (SE +/- 0.18, N = 12, MIN: 8.15)
  Run 1: 13.12  (SE +/- 0.16, N = 12, MIN: 8.17)
  Run 2: 13.14  (SE +/- 0.15, N = 12, MIN: 8.19)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

  Run 3: 13.88  (SE +/- 0.16, N = 4, MIN: 12.06)
  Run 2: 13.91  (SE +/- 0.13, N = 7, MIN: 11.89)
  Run 1: 14.21  (SE +/- 0.31, N = 12, MIN: 12)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

  Run 2: 11.25  (SE +/- 0.15, N = 15, MIN: 9.38)
  Run 3: 11.26  (SE +/- 0.15, N = 15, MIN: 9.35)
  Run 1: 11.36  (SE +/- 0.16, N = 15, MIN: 9.31)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

  Run 3: 9.61514  (SE +/- 0.09506, N = 12, MIN: 7.96)
  Run 2: 9.64037  (SE +/- 0.09806, N = 12, MIN: 7.98)
  Run 1: 9.65443  (SE +/- 0.09318, N = 13, MIN: 7.96)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

  Run 2: 2.98337  (SE +/- 0.04108, N = 3, MIN: 2.53)
  Run 1: 2.99331  (SE +/- 0.03341, N = 4, MIN: 2.53)
  Run 3: 2.99665  (SE +/- 0.03271, N = 4, MIN: 2.53)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

  Run 3: 2.69537  (SE +/- 0.03678, N = 15, MIN: 2.16)
  Run 2: 2.69884  (SE +/- 0.03612, N = 15, MIN: 2.16)
  Run 1: 2.70143  (SE +/- 0.04242, N = 13, MIN: 2.16)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

  Run 2: 8875.65  (SE +/- 4.17, N = 3, MIN: 8829.39)
  Run 3: 8884.92  (SE +/- 4.95, N = 3, MIN: 8839.33)
  Run 1: 8885.95  (SE +/- 7.23, N = 3, MIN: 8838.59)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

  Run 1: 4560.81  (SE +/- 4.43, N = 3, MIN: 4515.14)
  Run 3: 4562.59  (SE +/- 3.31, N = 3, MIN: 4508.54)
  Run 2: 4915.80  (SE +/- 74.40, N = 12, MIN: 4463.56)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

  Run 1: 8879.39  (SE +/- 1.63, N = 3, MIN: 8840.26)
  Run 3: 8885.72  (SE +/- 7.49, N = 3, MIN: 8841.26)
  Run 2: 9309.74  (SE +/- 1.39, N = 3, MIN: 9241.24)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

  Run 3: 51.23  (SE +/- 0.99, N = 12, MIN: 35.24)
  Run 1: 51.26  (SE +/- 0.93, N = 12, MIN: 35.24)
  Run 2: 52.11  (SE +/- 0.98, N = 12, MIN: 35.32)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

  Run 1: 56.61  (SE +/- 0.59, N = 3, MIN: 49.22)
  Run 3: 56.70  (SE +/- 0.66, N = 3, MIN: 49.2)
  Run 2: 58.52  (SE +/- 0.47, N = 3, MIN: 49.59)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

  Run 3: 40.90  (SE +/- 0.52, N = 15, MIN: 35.35)
  Run 1: 41.13  (SE +/- 0.53, N = 15, MIN: 35.33)
  Run 2: 41.54  (SE +/- 0.54, N = 15, MIN: 35.44)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

  Run 1: 4560.69  (SE +/- 3.94, N = 3, MIN: 4506.97)
  Run 3: 4564.07  (SE +/- 6.26, N = 3, MIN: 4510.06)
  Run 2: 4855.10  (SE +/- 6.33, N = 3, MIN: 4784.73)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

  Run 1: 4.79333  (SE +/- 0.02021, N = 3, MIN: 3.84)
  Run 3: 4.80231  (SE +/- 0.00999, N = 3, MIN: 3.78)
  Run 2: 5.12274  (SE +/- 0.01628, N = 3, MIN: 3.78)
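
The matrix multiply harnesses report an average latency in milliseconds. As a rough analogue (not the actual benchdnn harness, and with made-up GEMM sizes rather than the transformer batch shapes), the sketch below builds an f32 matmul with the oneDNN 2.x C++ API and averages its wall-clock time over repeated executions:

    // Build roughly as: g++ -std=c++11 matmul_timing.cpp -ldnnl
    #include <chrono>
    #include <iostream>
    #include <unordered_map>
    #include "dnnl.hpp"

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // Illustrative GEMM sizes only; the real "Batch Shapes Transformer" problems differ.
        const memory::dim M = 256, K = 512, N = 256;
        memory::desc a_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc b_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc c_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

        matmul::desc mm_d(a_md, b_md, c_md);              // oneDNN 2.x op descriptor
        matmul::primitive_desc mm_pd(mm_d, eng);
        matmul mm(mm_pd);

        memory a(a_md, eng), b(b_md, eng), c(c_md, eng);
        std::unordered_map<int, memory> args{
            {DNNL_ARG_SRC, a}, {DNNL_ARG_WEIGHTS, b}, {DNNL_ARG_DST, c}};

        mm.execute(strm, args);                           // warm-up
        strm.wait();

        const int iters = 100;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < iters; ++i) mm.execute(strm, args);
        strm.wait();
        auto t1 = std::chrono::steady_clock::now();
        std::cout << std::chrono::duration<double, std::milli>(t1 - t0).count() / iters
                  << " ms per matmul\n";
        return 0;
    }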

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

  Run 1: 8872.45  (SE +/- 4.93, N = 3, MIN: 8836.53)
  Run 3: 8881.77  (SE +/- 3.90, N = 3, MIN: 8834.32)
  Run 2: 9294.00  (SE +/- 14.08, N = 3, MIN: 9208.24)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

  Run 1: 4561.54  (SE +/- 6.20, N = 3, MIN: 4510.17)
  Run 3: 4566.99  (SE +/- 2.24, N = 3, MIN: 4513.2)
  Run 2: 4850.57  (SE +/- 8.59, N = 3, MIN: 4759.8)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

  Run 3: 2.76633  (SE +/- 0.00568, N = 3, MIN: 2.16)
  Run 1: 2.79809  (SE +/- 0.00528, N = 3, MIN: 2.19)
  Run 2: 3.00737  (SE +/- 0.01235, N = 3, MIN: 2.18)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

  Run 1: 11.86  (SE +/- 0.02, N = 3, MIN: 11)
  Run 3: 11.90  (SE +/- 0.00, N = 3, MIN: 10.98)
  Run 2: 12.55  (SE +/- 0.02, N = 3, MIN: 11.05)


Phoronix Test Suite v10.8.5