oneDNN Apple M2

Apple M2 testing with an Apple MacBook Air (13" M2 2022) and llvmpipe graphics on Arch rolling via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2209289-NE-ONEDNNAPP07&sor&grt.

oneDNN Apple M2 - System Details (configurations A, B and C)

Processor: Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
Motherboard: Apple MacBook Air (13" M2 2022)
Memory: 8GB
Disk: 251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
Graphics: llvmpipe
Network: Broadcom Device 4433 + Broadcom Device 5f71
OS: Arch rolling
Kernel: 5.19.0-rc7-asahi-2-1-ARCH (aarch64)
Desktop: KDE Plasma 5.25.4
Display Server: X Server 1.21.1.4
OpenGL: 4.5 Mesa 22.1.6 (LLVM 14.0.6 128 bits)
Compiler: GCC 12.1.0 + Clang 14.0.6
File-System: ext4
Screen Resolution: 2560x1600

Compiler Details: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu

Processor Details: Scaling Governor: apple-cpufreq schedutil

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

oneDNN Apple M2 - Result Summary (all values in ms; fewer is better)

Harness - Data Type - Engine                                      A           B           C
IP Shapes 1D - f32 - CPU                                          27.1466     27.2114     27.1955
IP Shapes 3D - f32 - CPU                                          34.1858     34.1274     34.1341
IP Shapes 1D - u8s8f32 - CPU                                      58.1984     58.3264     58.3723
IP Shapes 3D - u8s8f32 - CPU                                      109.7479    95.1137     94.8531
Convolution Batch Shapes Auto - f32 - CPU                         42.2236     42.3861     42.4022
Deconvolution Batch shapes_1d - f32 - CPU                         267.756     266.958     264.621
Deconvolution Batch shapes_3d - f32 - CPU                         36.5644     37.23       36.8228
Convolution Batch Shapes Auto - u8s8f32 - CPU                     175.803     175.511     175.674
Deconvolution Batch shapes_1d - u8s8f32 - CPU                     174.198     174.55      174.012
Deconvolution Batch shapes_3d - u8s8f32 - CPU                     48.5806     48.5401     48.5529
Recurrent Neural Network Training - f32 - CPU                     32230.2     32237.6     32226.7
Recurrent Neural Network Inference - f32 - CPU                    16519.6     16493.3     16513.1
Recurrent Neural Network Training - u8s8f32 - CPU                 32236.5     32211.7     32214.7
Recurrent Neural Network Inference - u8s8f32 - CPU                16505.9     16512.6     16498.2
Matrix Multiply Batch Shapes Transformer - f32 - CPU              16.9357     16.9305     16.937
Recurrent Neural Network Training - bf16bf16bf16 - CPU            32230.4     32239       32211.4
Recurrent Neural Network Inference - bf16bf16bf16 - CPU           16512.7     16511.2     16515.5
Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU          38.6654     38.7703     38.7967
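
For a quick read on how close the three configurations are, the spread between the fastest and slowest result of each test can be computed directly from the summary table. The short Python sketch below does this for a few rows; the dictionary keys and values are copied from the table above, while the loop itself is only an illustrative helper, not part of the exported result. Most tests land within roughly 1% of each other, with IP Shapes 3D - u8s8f32 showing an outlier of about 16% driven by configuration A.

    # Sketch: run-to-run spread across configurations A, B and C,
    # using values from the result summary (times in ms, lower is better).
    results = {
        "IP Shapes 1D - f32 - CPU": (27.1466, 27.2114, 27.1955),
        "IP Shapes 3D - u8s8f32 - CPU": (109.7479, 95.1137, 94.8531),
        "Deconvolution Batch shapes_1d - f32 - CPU": (267.756, 266.958, 264.621),
        "Recurrent Neural Network Training - f32 - CPU": (32230.2, 32237.6, 32226.7),
    }

    for test, runs in results.items():
        # Percentage gap between the slowest and fastest configuration.
        spread = (max(runs) - min(runs)) / min(runs) * 100.0
        print(f"{test}: {spread:.2f}% between fastest and slowest configuration")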

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.02, N = 3)
A: 27.15 (MIN: 26.6)
C: 27.20 (MIN: 26.72)
B: 27.21 (MIN: 26.57)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.00, N = 3)
B: 34.13 (MIN: 33.99)
C: 34.13 (MIN: 34.01)
A: 34.19 (MIN: 34.04)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.07, N = 3)
A: 58.20 (MIN: 57.4)
B: 58.33 (MIN: 57.89)
C: 58.37 (MIN: 57.91)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 14.08, N = 15)
C: 94.85 (MIN: 94.43)
B: 95.11 (MIN: 94.69)
A: 109.75 (MIN: 94.84)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl
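
The SE +/- 14.08 and N = 15 on this test stand out against the N = 3 of the other results; the Phoronix Test Suite typically runs extra trials when run-to-run variance exceeds its threshold, which fits the gap between configuration A's mean (109.75) and its MIN (94.84). As a minimal sketch of how a standard error of the mean is derived (sample standard deviation divided by the square root of N), the Python below uses hypothetical per-run samples, since the export only carries the aggregate mean, SE and MIN values.

    import math
    import statistics

    # Hypothetical per-run samples in ms; the 15 individual runs are not
    # included in the exported result, only the aggregate figures.
    samples = [94.9, 95.3, 96.1, 210.0, 95.0, 94.8, 95.5, 95.2,
               94.6, 95.8, 95.1, 94.9, 95.4, 95.0, 94.7]

    mean = statistics.mean(samples)
    # Standard error of the mean: sample standard deviation / sqrt(N).
    se = statistics.stdev(samples) / math.sqrt(len(samples))
    print(f"N = {len(samples)}, mean = {mean:.2f} ms, SE +/- {se:.2f} ms")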

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.03, N = 3)
A: 42.22 (MIN: 41.93)
B: 42.39 (MIN: 42.02)
C: 42.40 (MIN: 42.18)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 1.08, N = 3)
C: 264.62 (MIN: 257.54)
B: 266.96 (MIN: 260.98)
A: 267.76 (MIN: 260.11)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.12, N = 3)
A: 36.56 (MIN: 36.27)
C: 36.82 (MIN: 36.46)
B: 37.23 (MIN: 36.45)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.07, N = 3)
B: 175.51 (MIN: 175.46)
C: 175.67 (MIN: 175.58)
A: 175.80 (MIN: 175.59)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.20, N = 3)
C: 174.01 (MIN: 173.94)
A: 174.20 (MIN: 173.84)
B: 174.55 (MIN: 174.43)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.04, N = 3)
B: 48.54 (MIN: 47.71)
C: 48.55 (MIN: 47.79)
A: 48.58 (MIN: 47.77)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 1.97, N = 3)
C: 32226.7 (MIN: 32208.1)
A: 32230.2 (MIN: 32211.1)
B: 32237.6 (MIN: 32229.2)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 11.60, N = 3)
B: 16493.3 (MIN: 16488.1)
C: 16513.1 (MIN: 16496.6)
A: 16519.6 (MIN: 16500.6)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 3.13, N = 3)
B: 32211.7 (MIN: 32201.3)
C: 32214.7 (MIN: 32208.2)
A: 32236.5 (MIN: 32214.9)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 3.03, N = 3)
C: 16498.2 (MIN: 16493.1)
A: 16505.9 (MIN: 16494.2)
B: 16512.6 (MIN: 16507.1)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.00, N = 3)
B: 16.93 (MIN: 16.89)
A: 16.94 (MIN: 16.89)
C: 16.94 (MIN: 16.9)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 1.76, N = 3)
C: 32211.4 (MIN: 32200.3)
A: 32230.4 (MIN: 32211.1)
B: 32239.0 (MIN: 32223.8)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 9.56, N = 3)
B: 16511.2 (MIN: 16503.1)
A: 16512.7 (MIN: 16496)
C: 16515.5 (MIN: 16501.6)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, fewer is better (SE +/- 0.05, N = 3)
A: 38.67 (MIN: 38.56)
B: 38.77 (MIN: 38.75)
C: 38.80 (MIN: 38.79)
(CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl


Phoronix Test Suite v10.8.5