apple-macmini-2020-hpc

Apple M1 testing with an Apple Mac mini and Apple M1 on macOS 12.0.1 via the Phoronix Test Suite.

Apple M1:

  Processor: Apple M1 (8 Cores), Motherboard: Apple Mac mini, Memory: 8GB,
  Disk: 229GB, Graphics: Apple M1, Monitor: Evanlak4K60

  OS: macOS 12.0.1, Kernel: 21.1.0 (arm64), Compiler: GCC 13.0.0 + Clang 13.0.0
  + Xcode 13.1, File-System: APFS, Screen Resolution: 3840x2160

Tests listed without a reported result value:

  PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
    (Examples Per Second > Higher Is Better)
  PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
    (Examples Per Second > Higher Is Better)
  ACES DGEMM 1.0 (GFLOP/s > Higher Is Better)
  Himeno Benchmark 3.0 (MFLOPS > Higher Is Better)
  GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare
    (Ns Per Day > Higher Is Better)
  LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms
    (ns/day > Higher Is Better)
  LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein
    (ns/day > Higher Is Better)
  Numpy Benchmark (Score > Higher Is Better)
  AI Benchmark Alpha 0.1.2 (Score > Higher Is Better)

NAMD 2.14 - ATPase Simulation - 327,506 Atoms
days/ns < Lower Is Better
Apple M1 . 3.68461 |===========================================================

oneDNN 2.7 - Engine: CPU (ms < Lower Is Better), listed without a reported
result value; each harness below was run with Data Types f32, u8s8f32, and
bf16bf16bf16 except where noted:

  Harness: IP Shapes 1D
  Harness: IP Shapes 3D
  Harness: Convolution Batch Shapes Auto
  Harness: Deconvolution Batch shapes_1d
  Harness: Deconvolution Batch shapes_3d
  Harness: Recurrent Neural Network Training (f32, u8s8f32)
  Harness: Recurrent Neural Network Inference (f32, u8s8f32)
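The NAMD result above is reported in days/ns (lower is better), while GROMACS and LAMMPS in this report use ns/day (higher is better); the two units are reciprocals, so results can be put on a common scale. A small illustrative conversion in Python (the helper name is mine, not part of any benchmark tool):

```python
def days_per_ns_to_ns_per_day(days_per_ns: float) -> float:
    """Convert a days/ns figure (NAMD-style, lower is better)
    to ns/day (GROMACS/LAMMPS-style, higher is better)."""
    return 1.0 / days_per_ns

# The NAMD result from this report: 3.68461 days/ns
print(round(days_per_ns_to_ns_per_day(3.68461), 4))  # about 0.2714 ns/day
```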
oneDNN 2.7 - Engine: CPU (ms < Lower Is Better), listed without a reported
result value:

  Harness: Matrix Multiply Batch Shapes Transformer - Data Types: f32,
    u8s8f32, bf16bf16bf16
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16

Mobile Neural Network 2.1 (ms < Lower Is Better), Apple M1:

  Model: nasnet ............. 15.66
  Model: mobilenetV3 ........ 2.500
  Model: squeezenetv1.1 ..... 4.304
  Model: resnet-v2-50 ....... 33.33
  Model: SqueezeNetV1.0 ..... 7.445
  Model: MobileNetV2_224 .... 4.930
  Model: mobilenet-v1-1.0 ... 6.154
  Model: inception-v3 ....... 46.75

NCNN 20220729 - Target: CPU (ms < Lower Is Better), Apple M1:

  Model: mobilenet ....................... 20.04
  Model: mobilenet-v2 (CPU-v2-v2) ......... 5.12
  Model: mobilenet-v3 (CPU-v3-v3) ......... 4.15
  Model: shufflenet-v2 .................... 3.22
  Model: mnasnet .......................... 5.18
  Model: efficientnet-b0 .................. 8.38
  Model: blazeface ........................ 1.58
  Model: googlenet ....................... 23.51
  Model: vgg16 ........................... 71.47
  Model: resnet18 ........................ 15.75
  Model: alexnet ......................... 29.15
  Model: resnet50 ........................ 43.04
  Model: yolov4-tiny ..................... 29.35
  Model: squeezenet_ssd .................. 16.70
  Model: regnety_400m ..................... 7.02
  Model: vision_transformer ............ 1626.05
  Model: FastestDet ....................... 2.52

NCNN 20220729 - Target: Vulkan GPU (ms < Lower Is Better), Apple M1:

  Model: mobilenet ....................... 20.02
  Model: mobilenet-v2 (Vulkan GPU-v2-v2) .. 5.11
  Model: mobilenet-v3 (Vulkan GPU-v3-v3) .. 4.15
  Model: shufflenet-v2 .................... 3.22
  Model: mnasnet .......................... 5.17
  Model: efficientnet-b0 .................. 8.37
  Model: blazeface ........................ 1.58
  Model: googlenet ....................... 23.51
  Model: vgg16 ........................... 71.38
  Model: resnet18 ........................ 15.73
  Model: alexnet ......................... 29.13
  Model: resnet50 ........................ 43.06
  Model: yolov4-tiny ..................... 29.39
  Model: squeezenet_ssd .................. 16.70
  Model: regnety_400m ..................... 7.01
  Model: vision_transformer ............ 1625.83
  Model: FastestDet ....................... 2.52

Tests listed without a reported result value:

  Nebular Empirical Analysis Tool 2.3 (Seconds < Lower Is Better)
  Timed MrBayes Analysis 3.2.7 (Seconds < Lower Is Better)
  Timed HMMer Search 3.3.2 (Seconds < Lower Is Better)

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA
Seconds < Lower Is Better
Apple M1 . 11.54 |=============================================================

Tests listed without a reported result value:

  DeepSpeech 0.6 - Acceleration: CPU (Seconds < Lower Is Better)
  GNU Octave Benchmark (Seconds < Lower Is Better)
  Numenta Anomaly Benchmark 1.1 - Detectors: EXPoSE, Relative Entropy,
    Windowed Gaussian, Earthgecko Skyline, Bayesian Changepoint
    (Seconds < Lower Is Better)

PyHPC Benchmarks 3.0 - Device: CPU (Seconds < Lower Is Better), listed without
a reported result value:

  Backend: JAX - Project Sizes: 16384, 65536, 262144, 1048576, 4194304
    (Equation of State and Isoneutral Mixing at each size)
  Backend: Numba - Project Size: 16384 - Benchmark: Equation of State
PyHPC Benchmarks 3.0 - Device: CPU (Seconds < Lower Is Better), listed without
a reported result value; each backend/size pair covers both the Equation of
State and Isoneutral Mixing benchmarks except where noted:

  Backend: Numba - Project Sizes: 16384 (Isoneutral Mixing only), 65536,
    262144, 1048576, 4194304
  Backend: Numpy - Project Sizes: 16384, 65536, 262144, 1048576, 4194304
  Backend: Aesara - Project Sizes: 16384, 65536, 262144, 1048576, 4194304
  Backend: PyTorch - Project Sizes: 16384, 65536, 262144, 1048576, 4194304
  Backend: TensorFlow - Project Sizes: 16384, 65536, 262144, 1048576, 4194304
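One pattern worth noting in the results: the NCNN CPU and Vulkan GPU timings agree to within roughly 0.2% on every model, which may indicate that the Vulkan target silently fell back to CPU execution on this macOS setup (a guess, not something the report states). A small sanity check in Python over a few of the reported pairs:

```python
# (model, cpu_ms, vulkan_ms) triples copied from the NCNN results in this report
PAIRS = [
    ("mobilenet", 20.04, 20.02),
    ("vgg16", 71.47, 71.38),
    ("resnet50", 43.04, 43.06),
    ("vision_transformer", 1626.05, 1625.83),
    ("FastestDet", 2.52, 2.52),
]

for model, cpu_ms, vk_ms in PAIRS:
    rel = abs(cpu_ms - vk_ms) / cpu_ms  # relative difference vs. the CPU time
    print(f"{model}: {rel:.4%}")
```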