oneDNN Apple M2

Apple M2 testing with an Apple MacBook Air (13-inch M2 2022) and llvmpipe graphics on Arch rolling via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209289-NE-ONEDNNAPP07

Result Identifier    Date Run             Test Duration
A                    September 28 2022    1 Hour, 6 Minutes
B                    September 28 2022    22 Minutes
C                    September 28 2022    22 Minutes
Average test duration: 36 Minutes

oneDNN Apple M2 Benchmarks

Processor:          Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
Motherboard:        Apple MacBook Air (13-inch M2 2022)
Memory:             8GB
Disk:               251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
Graphics:           llvmpipe
Network:            Broadcom Device 4433 + Broadcom Device 5f71
OS:                 Arch rolling
Kernel:             5.19.0-rc7-asahi-2-1-ARCH (aarch64)
Desktop:            KDE Plasma 5.25.4
Display Server:     X Server 1.21.1.4
OpenGL:             4.5 Mesa 22.1.6 (LLVM 14.0.6 128 bits)
Compiler:           GCC 12.1.0 + Clang 14.0.6
File-System:        ext4
Screen Resolution:  2560x1600

System Logs:
- Compiler configuration: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu
- Scaling Governor: apple-cpufreq schedutil
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview graph: relative performance of runs A, B, and C across all 18 oneDNN sub-tests, normalized to 100% (scale up to 116%).]

oneDNN Apple M2 - Result Summary (oneDNN 2.7; all values in ms, fewer is better)

Harness - Data Type - Engine                                       A          B          C
Recurrent Neural Network Training - f32 - CPU                32230.2    32237.6    32226.7
Recurrent Neural Network Training - bf16bf16bf16 - CPU       32230.4    32239.0    32211.4
Recurrent Neural Network Training - u8s8f32 - CPU            32236.5    32211.7    32214.7
Recurrent Neural Network Inference - bf16bf16bf16 - CPU      16512.7    16511.2    16515.5
Recurrent Neural Network Inference - u8s8f32 - CPU           16505.9    16512.6    16498.2
Recurrent Neural Network Inference - f32 - CPU               16519.6    16493.3    16513.1
IP Shapes 3D - u8s8f32 - CPU                                109.7479    95.1137    94.8531
Deconvolution Batch shapes_1d - f32 - CPU                    267.756    266.958    264.621
Deconvolution Batch shapes_1d - u8s8f32 - CPU                174.198     174.55    174.012
IP Shapes 1D - u8s8f32 - CPU                                 58.1984    58.3264    58.3723
IP Shapes 1D - f32 - CPU                                     27.1466    27.2114    27.1955
Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU     38.6654    38.7703    38.7967
Matrix Multiply Batch Shapes Transformer - f32 - CPU         16.9357    16.9305     16.937
IP Shapes 3D - f32 - CPU                                     34.1858    34.1274    34.1341
Convolution Batch Shapes Auto - u8s8f32 - CPU                175.803    175.511    175.674
Convolution Batch Shapes Auto - f32 - CPU                    42.2236    42.3861    42.4022
Deconvolution Batch shapes_3d - u8s8f32 - CPU                48.5806    48.5401    48.5529
Deconvolution Batch shapes_3d - f32 - CPU                    36.5644      37.23    36.8228
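
Each harness above times a single oneDNN primitive (inner product for the "IP Shapes" cases, convolution, deconvolution, matrix multiply, or a recurrent cell) at fixed problem shapes and data types on the CPU engine, and reports the time per run in milliseconds. As a rough, hypothetical sketch of the kind of work being timed (not the actual benchmark harness), the following C++ program builds and times an f32 inner-product primitive with the oneDNN 2.x API; the file name, shapes, iteration count, and build command are illustrative assumptions only.

// Minimal sketch, assuming oneDNN 2.x is installed as libdnnl; shapes are hypothetical.
// Build example: g++ -O3 -std=c++11 ip_sketch.cpp -ldnnl
#include <chrono>
#include <iostream>
#include <unordered_map>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);  // "Engine: CPU" in the tables above
    stream strm(eng);

    // Hypothetical inner-product ("IP") shape: batch 128, 1024 inputs, 1024 outputs.
    const memory::dim N = 128, IC = 1024, OC = 1024;
    auto src_md = memory::desc({N, IC}, memory::data_type::f32, memory::format_tag::nc);
    auto wei_md = memory::desc({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
    auto dst_md = memory::desc({N, OC}, memory::data_type::f32, memory::format_tag::nc);
    // Buffers are allocated by oneDNN and left uninitialized; fine for a timing sketch.
    memory src_mem(src_md, eng), wei_mem(wei_md, eng), dst_mem(dst_md, eng);

    // oneDNN 2.x flow: operation descriptor -> primitive descriptor -> primitive.
    inner_product_forward::desc ip_d(prop_kind::forward_inference, src_md, wei_md, dst_md);
    inner_product_forward::primitive_desc ip_pd(ip_d, eng);
    inner_product_forward ip(ip_pd);

    std::unordered_map<int, memory> args = {
        {DNNL_ARG_SRC, src_mem}, {DNNL_ARG_WEIGHTS, wei_mem}, {DNNL_ARG_DST, dst_mem}};

    ip.execute(strm, args);  // warm-up run
    strm.wait();

    const int iters = 100;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) ip.execute(strm, args);
    strm.wait();
    auto t1 = std::chrono::steady_clock::now();

    std::cout << "avg ms per run: "
              << std::chrono::duration<double, std::milli>(t1 - t0).count() / iters << "\n";
    return 0;
}

The u8s8f32 and bf16bf16bf16 rows run the same problem families with lower-precision data types (int8 sources and weights with f32 output, or bfloat16 throughout), so they exercise different oneDNN kernels than the f32 runs.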

oneDNN 2.7

All results below are in milliseconds (ms); fewer is better. Every test was built with: (CXX) g++ options: -O3 -march=native -fopenmp -mcpu=native -fPIC -pie -ldl

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
  A: 32230.2  (SE +/- 1.97, N = 3; MIN: 32211.1; Min / Avg / Max: 32227.2 / 32230.2 / 32233.9)
  B: 32237.6  (MIN: 32229.2)
  C: 32226.7  (MIN: 32208.1)

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
  A: 32230.4  (SE +/- 1.76, N = 3; MIN: 32211.1; Min / Avg / Max: 32226.9 / 32230.4 / 32232.5)
  B: 32239.0  (MIN: 32223.8)
  C: 32211.4  (MIN: 32200.3)

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
  A: 32236.5  (SE +/- 3.13, N = 3; MIN: 32214.9; Min / Avg / Max: 32232.9 / 32236.47 / 32242.7)
  B: 32211.7  (MIN: 32201.3)
  C: 32214.7  (MIN: 32208.2)

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
  A: 16512.7  (SE +/- 9.56, N = 3; MIN: 16496; Min / Avg / Max: 16501.3 / 16512.7 / 16531.7)
  B: 16511.2  (MIN: 16503.1)
  C: 16515.5  (MIN: 16501.6)

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
  A: 16505.9  (SE +/- 3.03, N = 3; MIN: 16494.2; Min / Avg / Max: 16500.8 / 16505.9 / 16511.3)
  B: 16512.6  (MIN: 16507.1)
  C: 16498.2  (MIN: 16493.1)

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
  A: 16519.6  (SE +/- 11.60, N = 3; MIN: 16500.6; Min / Avg / Max: 16506.4 / 16519.57 / 16542.7)
  B: 16493.3  (MIN: 16488.1)
  C: 16513.1  (MIN: 16496.6)

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
  A: 109.75  (SE +/- 14.08, N = 15; MIN: 94.84; Min / Avg / Max: 95.38 / 109.75 / 306.8)
  B: 95.11   (MIN: 94.69)
  C: 94.85   (MIN: 94.43)

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
  A: 267.76  (SE +/- 1.08, N = 3; MIN: 260.11; Min / Avg / Max: 265.6 / 267.76 / 268.91)
  B: 266.96  (MIN: 260.98)
  C: 264.62  (MIN: 257.54)

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
  A: 174.20  (SE +/- 0.20, N = 3; MIN: 173.84; Min / Avg / Max: 173.96 / 174.2 / 174.59)
  B: 174.55  (MIN: 174.43)
  C: 174.01  (MIN: 173.94)

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
  A: 58.20  (SE +/- 0.07, N = 3; MIN: 57.4; Min / Avg / Max: 58.09 / 58.2 / 58.32)
  B: 58.33  (MIN: 57.89)
  C: 58.37  (MIN: 57.91)

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
  A: 27.15  (SE +/- 0.02, N = 3; MIN: 26.6; Min / Avg / Max: 27.11 / 27.15 / 27.18)
  B: 27.21  (MIN: 26.57)
  C: 27.20  (MIN: 26.72)

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
  A: 38.67  (SE +/- 0.05, N = 3; MIN: 38.56; Min / Avg / Max: 38.58 / 38.67 / 38.75)
  B: 38.77  (MIN: 38.75)
  C: 38.80  (MIN: 38.79)

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
  A: 16.94  (SE +/- 0.00, N = 3; MIN: 16.89; Min / Avg / Max: 16.93 / 16.94 / 16.94)
  B: 16.93  (MIN: 16.89)
  C: 16.94  (MIN: 16.9)

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
  A: 34.19  (SE +/- 0.00, N = 3; MIN: 34.04; Min / Avg / Max: 34.18 / 34.19 / 34.19)
  B: 34.13  (MIN: 33.99)
  C: 34.13  (MIN: 34.01)

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
  A: 175.80  (SE +/- 0.07, N = 3; MIN: 175.59; Min / Avg / Max: 175.68 / 175.8 / 175.91)
  B: 175.51  (MIN: 175.46)
  C: 175.67  (MIN: 175.58)

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
  A: 42.22  (SE +/- 0.03, N = 3; MIN: 41.93; Min / Avg / Max: 42.16 / 42.22 / 42.28)
  B: 42.39  (MIN: 42.02)
  C: 42.40  (MIN: 42.18)

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
  A: 48.58  (SE +/- 0.04, N = 3; MIN: 47.77; Min / Avg / Max: 48.52 / 48.58 / 48.66)
  B: 48.54  (MIN: 47.71)
  C: 48.55  (MIN: 47.79)

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
  A: 36.56  (SE +/- 0.12, N = 3; MIN: 36.27; Min / Avg / Max: 36.33 / 36.56 / 36.74)
  B: 37.23  (MIN: 36.45)
  C: 36.82  (MIN: 36.46)