onednn 1185G7

Intel Core i7-1185G7 testing with a Dell 0DXP1F (3.0.3 BIOS) and Intel Xe TGL GT2 3GB on Ubuntu 21.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2107150-IB-ONEDNN11860.

onednn 1185G7 - System Configuration (identical for runs 1, 2, and 3)

  Processor:          Intel Core i7-1185G7 @ 4.80GHz (4 Cores / 8 Threads)
  Motherboard:        Dell 0DXP1F (3.0.3 BIOS)
  Chipset:            Intel Tiger Lake-LP
  Memory:             16GB
  Disk:               Micron 2300 NVMe 512GB
  Graphics:           Intel Xe TGL GT2 3GB (1350MHz)
  Audio:              Realtek ALC289
  Network:            Intel Wi-Fi 6 AX201
  OS:                 Ubuntu 21.04
  Kernel:             5.13.0-051300-generic (x86_64)
  Desktop:            GNOME Shell 3.38.4
  Display Server:     X Server + Wayland
  OpenGL:             4.6 Mesa 21.2.0-devel (git-dd98918 2021-07-12 hirsute-oibaf-ppa)
  Vulkan:             1.2.182
  Compiler:           GCC 10.3.0
  File-System:        ext4
  Screen Resolution:  1920x1200

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x88

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

onednn 1185G7 - Result Overview (oneDNN 2.1.2, Engine: CPU)
All values in ms (fewer is better), listed as Run 1 / Run 2 / Run 3:

  IP Shapes 1D - f32: 11.51375 / 11.52319 / 11.54177
  IP Shapes 3D - f32: 6.45882 / 6.48767 / 7.24078
  IP Shapes 1D - u8s8f32: 2.41309 / 2.41590 / 2.59401
  IP Shapes 3D - u8s8f32: 2.78481 / 2.78597 / 3.11593
  IP Shapes 1D - bf16bf16bf16: 25.3795 / 25.3887 / 25.8559
  IP Shapes 3D - bf16bf16bf16: 7.73083 / 7.82006 / 25.98626
  Convolution Batch Shapes Auto - f32: 13.1185 / 13.1397 / 13.0745
  Deconvolution Batch shapes_1d - f32: 14.2097 / 13.9111 / 13.8849
  Deconvolution Batch shapes_3d - f32: 11.35733 / 11.25034 / 11.26379
  Convolution Batch Shapes Auto - u8s8f32: 9.65443 / 9.64037 / 9.61514
  Deconvolution Batch shapes_1d - u8s8f32: 2.99331 / 2.98337 / 2.99665
  Deconvolution Batch shapes_3d - u8s8f32: 2.70143 / 2.69884 / 2.69537
  Recurrent Neural Network Training - f32: 8885.95 / 8875.65 / 8884.92
  Recurrent Neural Network Inference - f32: 4560.81 / 4915.80 / 4562.59
  Recurrent Neural Network Training - u8s8f32: 8879.39 / 9309.74 / 8885.72
  Convolution Batch Shapes Auto - bf16bf16bf16: 51.2639 / 52.1071 / 51.2288
  Deconvolution Batch shapes_1d - bf16bf16bf16: 56.6140 / 58.5189 / 56.7043
  Deconvolution Batch shapes_3d - bf16bf16bf16: 41.1308 / 41.5411 / 40.8950
  Recurrent Neural Network Inference - u8s8f32: 4560.69 / 4855.10 / 4564.07
  Matrix Multiply Batch Shapes Transformer - f32: 4.79333 / 5.12274 / 4.80231
  Recurrent Neural Network Training - bf16bf16bf16: 8872.45 / 9294.00 / 8881.77
  Recurrent Neural Network Inference - bf16bf16bf16: 4561.54 / 4850.57 / 4566.99
  Matrix Multiply Batch Shapes Transformer - u8s8f32: 2.79809 / 3.00737 / 2.76633
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16: 11.8601 / 12.5508 / 11.8980
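Each per-test result below is an average over N trial runs, reported together with a standard error (SE) and the fastest observed time (MIN). As a point of reference only, here is a minimal C++ sketch of how such an average and standard error (sample standard deviation divided by the square root of N) can be computed from raw per-trial timings; the trial values are hypothetical and the exact aggregation performed by the Phoronix Test Suite may differ.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        // Hypothetical per-trial timings in ms (not taken from this result file).
        std::vector<double> trials = {11.4, 11.6, 11.5, 11.7, 11.3};
        const double n = static_cast<double>(trials.size());

        double sum = 0.0;
        for (double t : trials) sum += t;
        const double mean = sum / n;

        // Sample standard deviation (n - 1 denominator), then SE = stddev / sqrt(n).
        double ss = 0.0;
        for (double t : trials) ss += (t - mean) * (t - mean);
        const double stddev = std::sqrt(ss / (n - 1.0));
        const double se = stddev / std::sqrt(n);

        std::printf("Average: %.2f ms (SE +/- %.2f, N = %zu)\n", mean, se, trials.size());
        return 0;
    }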

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 11.51  (SE +/- 0.42, N = 12, MIN: 6.1)
  Run 2: 11.52  (SE +/- 0.43, N = 12, MIN: 6.05)
  Run 3: 11.54  (SE +/- 0.42, N = 12, MIN: 6.12)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl (this footnote applies to every oneDNN result below)
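For context, the "IP Shapes" harnesses exercise oneDNN's inner-product (fully connected) primitive. Below is a minimal sketch of an f32 inner product on the CPU engine using the oneDNN 2.x C++ API, matching the library version benchmarked here; the problem size is hypothetical and this is not the benchmark harness itself.

    #include <unordered_map>
    #include "oneapi/dnnl/dnnl.hpp"

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);   // the "Engine: CPU" side of the harness label
        stream strm(eng);

        // Hypothetical problem size: batch 32, 512 inputs, 1024 outputs (not the harness shapes).
        const memory::dim N = 32, IC = 512, OC = 1024;
        memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::nc);
        memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
        memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::nc);

        // oneDNN 2.x pattern: op descriptor -> primitive descriptor -> primitive.
        inner_product_forward::desc ip_d(prop_kind::forward_inference, src_md, wei_md, dst_md);
        inner_product_forward::primitive_desc ip_pd(ip_d, eng);
        inner_product_forward ip(ip_pd);

        memory src_mem(src_md, eng), wei_mem(wei_md, eng), dst_mem(dst_md, eng);
        ip.execute(strm, {{DNNL_ARG_SRC, src_mem},
                          {DNNL_ARG_WEIGHTS, wei_mem},
                          {DNNL_ARG_DST, dst_mem}});
        strm.wait();
        return 0;
    }

The actual numbers in this file come from oneDNN's own benchdnn-based harness driven by the Phoronix Test Suite; the sketch above only illustrates which primitive and data type the "IP Shapes ... f32 ... CPU" label refers to.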

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 6.45882  (SE +/- 0.05308, N = 9, MIN: 5.93)
  Run 2: 6.48767  (SE +/- 0.05056, N = 10, MIN: 5.92)
  Run 3: 7.24078  (SE +/- 0.06555, N = 7, MIN: 5.91)

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 2.41309  (SE +/- 0.03627, N = 12, MIN: 1.47)
  Run 2: 2.41590  (SE +/- 0.03595, N = 12, MIN: 1.48)
  Run 3: 2.59401  (SE +/- 0.03672, N = 12, MIN: 1.47)
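The "u8s8f32" label follows oneDNN's benchdnn naming for an int8 configuration: unsigned 8-bit source, signed 8-bit weights, and 32-bit float destination. A minimal sketch of the corresponding memory descriptors follows; the shapes are hypothetical and only the data types illustrate the label.

    #include "oneapi/dnnl/dnnl.hpp"

    int main() {
        using namespace dnnl;
        using dt  = memory::data_type;
        using tag = memory::format_tag;

        // Hypothetical shapes; only the data types illustrate the "u8s8f32" label.
        const memory::dim N = 32, IC = 512, OC = 1024;
        memory::desc src_md({N, IC}, dt::u8,  tag::nc);  // unsigned 8-bit activations
        memory::desc wei_md({OC, IC}, dt::s8, tag::oi);  // signed 8-bit weights
        memory::desc dst_md({N, OC}, dt::f32, tag::nc);  // 32-bit float output
        (void)src_md; (void)wei_md; (void)dst_md;
        return 0;
    }

The "bf16bf16bf16" harnesses further down use the same naming scheme with bfloat16 for source, weights, and destination.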

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 2.78481  (SE +/- 0.03750, N = 12, MIN: 2.29)
  Run 2: 2.78597  (SE +/- 0.04040, N = 12, MIN: 2.29)
  Run 3: 3.11593  (SE +/- 0.04428, N = 12, MIN: 2.28)

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 25.38  (SE +/- 0.45, N = 12, MIN: 17.98)
  Run 2: 25.39  (SE +/- 0.47, N = 12, MIN: 17.92)
  Run 3: 25.86  (SE +/- 0.47, N = 12, MIN: 18.09)

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 7.73083   (SE +/- 0.05002, N = 14, MIN: 5.15)
  Run 2: 7.82006   (SE +/- 0.05152, N = 15, MIN: 5.2)
  Run 3: 25.98626  (SE +/- 17.73871, N = 12, MIN: 5.05)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 13.12  (SE +/- 0.16, N = 12, MIN: 8.17)
  Run 2: 13.14  (SE +/- 0.15, N = 12, MIN: 8.19)
  Run 3: 13.07  (SE +/- 0.18, N = 12, MIN: 8.15)
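The "Convolution Batch Shapes Auto" harness times oneDNN's convolution primitive across a batch of layer shapes and lets the library select the implementation. As a rough illustration only, here is a single hypothetical f32 forward convolution on the CPU engine with the oneDNN 2.x C++ API; the shapes, stride, and padding are invented and the real harness iterates many shapes and repetitions.

    #include <unordered_map>
    #include "oneapi/dnnl/dnnl.hpp"

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // Hypothetical 3x3 convolution: batch 32, 64 -> 128 channels, 56x56 spatial, stride 1, pad 1.
        memory::desc src_md({32, 64, 56, 56},  memory::data_type::f32, memory::format_tag::any);
        memory::desc wei_md({128, 64, 3, 3},   memory::data_type::f32, memory::format_tag::any);
        memory::desc dst_md({32, 128, 56, 56}, memory::data_type::f32, memory::format_tag::any);

        convolution_forward::desc conv_d(prop_kind::forward_inference,
                                         algorithm::convolution_auto,
                                         src_md, wei_md, dst_md,
                                         {1, 1},   // strides
                                         {1, 1},   // padding (left/top)
                                         {1, 1});  // padding (right/bottom)
        convolution_forward::primitive_desc conv_pd(conv_d, eng);

        // format_tag::any lets the primitive descriptor pick blocked layouts;
        // real code would reorder user tensors into conv_pd.src_desc() etc.
        memory src_mem(conv_pd.src_desc(), eng);
        memory wei_mem(conv_pd.weights_desc(), eng);
        memory dst_mem(conv_pd.dst_desc(), eng);

        convolution_forward(conv_pd).execute(strm, {{DNNL_ARG_SRC, src_mem},
                                                    {DNNL_ARG_WEIGHTS, wei_mem},
                                                    {DNNL_ARG_DST, dst_mem}});
        strm.wait();
        return 0;
    }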

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 14.21  (SE +/- 0.31, N = 12, MIN: 12)
  Run 2: 13.91  (SE +/- 0.13, N = 7, MIN: 11.89)
  Run 3: 13.88  (SE +/- 0.16, N = 4, MIN: 12.06)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 11.36  (SE +/- 0.16, N = 15, MIN: 9.31)
  Run 2: 11.25  (SE +/- 0.15, N = 15, MIN: 9.38)
  Run 3: 11.26  (SE +/- 0.15, N = 15, MIN: 9.35)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 9.65443  (SE +/- 0.09318, N = 13, MIN: 7.96)
  Run 2: 9.64037  (SE +/- 0.09806, N = 12, MIN: 7.98)
  Run 3: 9.61514  (SE +/- 0.09506, N = 12, MIN: 7.96)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 2.99331  (SE +/- 0.03341, N = 4, MIN: 2.53)
  Run 2: 2.98337  (SE +/- 0.04108, N = 3, MIN: 2.53)
  Run 3: 2.99665  (SE +/- 0.03271, N = 4, MIN: 2.53)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 2.70143  (SE +/- 0.04242, N = 13, MIN: 2.16)
  Run 2: 2.69884  (SE +/- 0.03612, N = 15, MIN: 2.16)
  Run 3: 2.69537  (SE +/- 0.03678, N = 15, MIN: 2.16)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 8885.95  (SE +/- 7.23, N = 3, MIN: 8838.59)
  Run 2: 8875.65  (SE +/- 4.17, N = 3, MIN: 8829.39)
  Run 3: 8884.92  (SE +/- 4.95, N = 3, MIN: 8839.33)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 4560.81  (SE +/- 4.43, N = 3, MIN: 4515.14)
  Run 2: 4915.80  (SE +/- 74.40, N = 12, MIN: 4463.56)
  Run 3: 4562.59  (SE +/- 3.31, N = 3, MIN: 4508.54)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 8879.39  (SE +/- 1.63, N = 3, MIN: 8840.26)
  Run 2: 9309.74  (SE +/- 1.39, N = 3, MIN: 9241.24)
  Run 3: 8885.72  (SE +/- 7.49, N = 3, MIN: 8841.26)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 51.26  (SE +/- 0.93, N = 12, MIN: 35.24)
  Run 2: 52.11  (SE +/- 0.98, N = 12, MIN: 35.32)
  Run 3: 51.23  (SE +/- 0.99, N = 12, MIN: 35.24)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 56.61  (SE +/- 0.59, N = 3, MIN: 49.22)
  Run 2: 58.52  (SE +/- 0.47, N = 3, MIN: 49.59)
  Run 3: 56.70  (SE +/- 0.66, N = 3, MIN: 49.2)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 41.13  (SE +/- 0.53, N = 15, MIN: 35.33)
  Run 2: 41.54  (SE +/- 0.54, N = 15, MIN: 35.44)
  Run 3: 40.90  (SE +/- 0.52, N = 15, MIN: 35.35)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 4560.69  (SE +/- 3.94, N = 3, MIN: 4506.97)
  Run 2: 4855.10  (SE +/- 6.33, N = 3, MIN: 4784.73)
  Run 3: 4564.07  (SE +/- 6.26, N = 3, MIN: 4510.06)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 4.79333  (SE +/- 0.02021, N = 3, MIN: 3.84)
  Run 2: 5.12274  (SE +/- 0.01628, N = 3, MIN: 3.78)
  Run 3: 4.80231  (SE +/- 0.00999, N = 3, MIN: 3.78)
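The "Matrix Multiply Batch Shapes Transformer" harness drives oneDNN's matmul primitive over transformer-style shapes. A minimal sketch of a hypothetical f32 matmul on the CPU engine with the oneDNN 2.x C++ API follows; the dimensions are illustrative and are not the shapes actually benchmarked here.

    #include <unordered_map>
    #include "oneapi/dnnl/dnnl.hpp"

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // Hypothetical transformer-like GEMM: (M x K) * (K x N), all f32.
        const memory::dim M = 128, K = 768, N = 768;
        memory::desc src_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc wei_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc dst_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

        matmul::desc mm_d(src_md, wei_md, dst_md);
        matmul::primitive_desc mm_pd(mm_d, eng);

        memory src_mem(src_md, eng), wei_mem(wei_md, eng), dst_mem(dst_md, eng);
        matmul(mm_pd).execute(strm, {{DNNL_ARG_SRC, src_mem},
                                     {DNNL_ARG_WEIGHTS, wei_mem},
                                     {DNNL_ARG_DST, dst_mem}});
        strm.wait();
        return 0;
    }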

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 8872.45  (SE +/- 4.93, N = 3, MIN: 8836.53)
  Run 2: 9294.00  (SE +/- 14.08, N = 3, MIN: 9208.24)
  Run 3: 8881.77  (SE +/- 3.90, N = 3, MIN: 8834.32)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 4561.54  (SE +/- 6.20, N = 3, MIN: 4510.17)
  Run 2: 4850.57  (SE +/- 8.59, N = 3, MIN: 4759.8)
  Run 3: 4566.99  (SE +/- 2.24, N = 3, MIN: 4513.2)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 2.79809  (SE +/- 0.00528, N = 3, MIN: 2.19)
  Run 2: 3.00737  (SE +/- 0.01235, N = 3, MIN: 2.18)
  Run 3: 2.76633  (SE +/- 0.00568, N = 3, MIN: 2.16)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.1.2 (ms, fewer is better)
  Run 1: 11.86  (SE +/- 0.02, N = 3, MIN: 11)
  Run 2: 12.55  (SE +/- 0.02, N = 3, MIN: 11.05)
  Run 3: 11.90  (SE +/- 0.00, N = 3, MIN: 10.98)


Phoronix Test Suite v10.8.4