onednn 5500U

AMD Ryzen 5 5500U testing with a LENOVO LNVNB161216 (GLCN22WW BIOS) and AMD Lucienne 2GB on Ubuntu 21.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2203306-PTS-ONEDNN5567&sor&gru.

onednn 5500U - System Configuration (identical across runs A, B and C)

Processor: AMD Ryzen 5 5500U @ 4.06GHz (6 Cores / 12 Threads)
Motherboard: LENOVO LNVNB161216 (GLCN22WW BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 6GB
Disk: 256GB SAMSUNG MZALQ256HBJD-00BL2
Graphics: AMD Lucienne 2GB (1800/400MHz)
Audio: AMD Renoir Radeon HD Audio
Network: Qualcomm Atheros QCA6174 802.11ac
OS: Ubuntu 21.10
Kernel: 5.17.0-051700-generic (x86_64)
Desktop: GNOME Shell 40.5
Display Server: X Server 1.20.13 + Wayland
OpenGL: 4.6 Mesa 22.1.0-devel (git-729f95a 2022-03-24 impish-oibaf-ppa) (LLVM 13.0.1 DRM 3.44)
Vulkan: 1.3.207
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate schedutil (Boost: Enabled) - CPU Microcode: 0x8608102
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

onednn 5500U - Results Summary (all values in ms, fewer is better)

Harness - Data Type - Engine                                        A           B           C
IP Shapes 1D - f32 - CPU                                        12.1054     12.0876     12.0864
IP Shapes 3D - f32 - CPU                                        14.5003     11.3258     11.3864
IP Shapes 1D - u8s8f32 - CPU                                     3.72999     3.79690     3.79445
IP Shapes 3D - u8s8f32 - CPU                                     4.55304     3.75469     3.76327
Convolution Batch Shapes Auto - f32 - CPU                       34.4526     34.1244     34.1586
Deconvolution Batch shapes_1d - f32 - CPU                       13.6222     13.6417     13.3253
Deconvolution Batch shapes_3d - f32 - CPU                       11.7557     11.7185     11.7205
Convolution Batch Shapes Auto - u8s8f32 - CPU                   37.7711     33.7365     33.7210
Deconvolution Batch shapes_1d - u8s8f32 - CPU                    5.62842     5.61911     5.70647
Deconvolution Batch shapes_3d - u8s8f32 - CPU                    8.03215     8.02814     8.03124
Recurrent Neural Network Training - f32 - CPU                 7127.16     7087.03     7043.60
Recurrent Neural Network Inference - f32 - CPU                4688.21     4690.55     4690.50
Recurrent Neural Network Training - u8s8f32 - CPU             7099.28     7098.27     7066.77
Recurrent Neural Network Inference - u8s8f32 - CPU            4694.72     4701.50     4676.62
Matrix Multiply Batch Shapes Transformer - f32 - CPU             7.90632     7.89074     7.85276
Recurrent Neural Network Training - bf16bf16bf16 - CPU        7073.68     7067.16     7094.51
Recurrent Neural Network Inference - bf16bf16bf16 - CPU       4687.06     4683.43     4662.02
Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU         5.11790     5.11311     5.11709
Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU   (no result recorded)
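For readers who want to quantify the run-to-run variation visible in the summary above, the short Python sketch below computes the spread between the fastest and slowest of the three runs for a few representative tests. The numbers are copied directly from the results table; the script itself (test selection, names) is only an illustration and is not part of the Phoronix Test Suite output.

# Minimal sketch: spread between the fastest and slowest of runs A, B, C
# for a subset of the tests above. Values are copied from the results table
# (ms, fewer is better).
results = {
    "IP Shapes 3D - f32":                      {"A": 14.5003, "B": 11.3258, "C": 11.3864},
    "Convolution Batch Shapes Auto - u8s8f32": {"A": 37.7711, "B": 33.7365, "C": 33.7210},
    "Deconvolution Batch shapes_1d - f32":     {"A": 13.6222, "B": 13.6417, "C": 13.3253},
    "Recurrent Neural Network Training - f32": {"A": 7127.16, "B": 7087.03, "C": 7043.60},
    "Matrix Multiply Batch Shapes Transformer - u8s8f32": {"A": 5.11790, "B": 5.11311, "C": 5.11709},
}

for test, runs in results.items():
    best, worst = min(runs.values()), max(runs.values())
    spread_pct = (worst - best) / best * 100.0
    print(f"{test}: best {best:g} ms, worst {worst:g} ms, spread {spread_pct:.1f}%")

Run as-is, this shows, for example, that the three runs of IP Shapes 3D (f32) differ by roughly 28%, while the Matrix Multiply u8s8f32 runs agree to within about 0.1%.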

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 12.09  (SE +/- 0.02, N = 3, MIN: 11.63)
  B: 12.09  (SE +/- 0.02, N = 3, MIN: 11.57)
  A: 12.11  (SE +/- 0.03, N = 3, MIN: 11.64)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
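Each result above is the mean of N trials, annotated with a standard error (SE) and the fastest trial (MIN). As a rough illustration of that arithmetic, the Python sketch below assumes SE is the sample standard deviation divided by the square root of N; this may not match the Phoronix Test Suite's exact computation, and the three trial times are invented placeholders, not data from this result file.

import math
import statistics

# Placeholder trial times (ms) for one run; the individual trials behind the
# reported means are not published, so these values are purely illustrative.
trials = [12.08, 12.13, 12.12]  # N = 3

mean = statistics.fmean(trials)                         # reported result
se = statistics.stdev(trials) / math.sqrt(len(trials))  # assumed SE definition
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(trials)}, MIN: {min(trials):.2f})")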

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 11.33  (SE +/- 0.06, N = 3, MIN: 11.07)
  C: 11.39  (SE +/- 0.04, N = 3, MIN: 11.16)
  A: 14.50  (SE +/- 0.00, N = 3, MIN: 14.37)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  A: 3.72999  (SE +/- 0.03657, N = 3, MIN: 3.5)
  C: 3.79445  (SE +/- 0.03002, N = 3, MIN: 3.4)
  B: 3.79690  (SE +/- 0.02821, N = 3, MIN: 3.57)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 3.75469  (SE +/- 0.04023, N = 5, MIN: 3.55)
  C: 3.76327  (SE +/- 0.05002, N = 3, MIN: 3.58)
  A: 4.55304  (SE +/- 0.00367, N = 3, MIN: 4.49)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 34.12  (SE +/- 0.01, N = 3, MIN: 33.66)
  C: 34.16  (SE +/- 0.03, N = 3, MIN: 33.59)
  A: 34.45  (SE +/- 0.01, N = 3, MIN: 34.09)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 13.33  (SE +/- 0.17, N = 15, MIN: 9.03)
  A: 13.62  (SE +/- 0.14, N = 15, MIN: 8.96)
  B: 13.64  (SE +/- 0.13, N = 15, MIN: 9)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 11.72  (SE +/- 0.02, N = 3, MIN: 11.49)
  C: 11.72  (SE +/- 0.01, N = 3, MIN: 11.51)
  A: 11.76  (SE +/- 0.01, N = 3, MIN: 11.4)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 33.72  (SE +/- 0.01, N = 3, MIN: 33.44)
  B: 33.74  (SE +/- 0.04, N = 3, MIN: 33.42)
  A: 37.77  (SE +/- 2.09, N = 12, MIN: 33.4)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 5.61911  (SE +/- 0.06253, N = 5, MIN: 4.89)
  A: 5.62842  (SE +/- 0.05563, N = 6, MIN: 5.07)
  C: 5.70647  (SE +/- 0.05850, N = 12, MIN: 4.88)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 8.02814  (SE +/- 0.00293, N = 3, MIN: 7.79)
  C: 8.03124  (SE +/- 0.01365, N = 3, MIN: 7.55)
  A: 8.03215  (SE +/- 0.00982, N = 3, MIN: 7.71)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 7043.60  (SE +/- 14.12, N = 3, MIN: 6995.5)
  B: 7087.03  (SE +/- 9.99, N = 3, MIN: 7040.89)
  A: 7127.16  (SE +/- 8.00, N = 3, MIN: 7078.19)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  A: 4688.21  (SE +/- 13.87, N = 3, MIN: 4643)
  C: 4690.50  (SE +/- 23.16, N = 3, MIN: 4623.54)
  B: 4690.55  (SE +/- 11.37, N = 3, MIN: 4647.66)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 7066.77  (SE +/- 7.85, N = 3, MIN: 7019.15)
  B: 7098.27  (SE +/- 10.05, N = 3, MIN: 7058.13)
  A: 7099.28  (SE +/- 16.66, N = 3, MIN: 7031.56)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 4676.62  (SE +/- 15.17, N = 3, MIN: 4628.39)
  A: 4694.72  (SE +/- 12.56, N = 3, MIN: 4650.38)
  B: 4701.50  (SE +/- 20.21, N = 3, MIN: 4642.31)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 7.85276  (SE +/- 0.00587, N = 3, MIN: 7.73)
  B: 7.89074  (SE +/- 0.01478, N = 3, MIN: 7.75)
  A: 7.90632  (SE +/- 0.01256, N = 3, MIN: 7.78)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 7067.16  (SE +/- 12.42, N = 3, MIN: 7017.84)
  A: 7073.68  (SE +/- 5.15, N = 3, MIN: 7037.09)
  C: 7094.51  (SE +/- 12.08, N = 3, MIN: 7042.1)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  C: 4662.02  (SE +/- 5.70, N = 3, MIN: 4631.35)
  B: 4683.43  (SE +/- 9.44, N = 3, MIN: 4644.98)
  A: 4687.06  (SE +/- 8.81, N = 3, MIN: 4647.19)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.6 - ms, Fewer Is Better
  B: 5.11311  (SE +/- 0.00391, N = 3, MIN: 4.9)
  C: 5.11709  (SE +/- 0.00500, N = 3, MIN: 4.92)
  A: 5.11790  (SE +/- 0.00093, N = 3, MIN: 4.83)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread


Phoronix Test Suite v10.8.4