pts-hpc
Apple M2 Pro testing with an Apple Mac mini and Apple M2 Pro on macOS 13.5 via the Phoronix Test Suite.

pts-hpc:
  Processor: Apple M2 Pro (10 Cores), Motherboard: Apple Mac mini, Memory: 16GB, Disk: 461GB, Graphics: Apple M2 Pro, Monitor: Apple M2 Pro
  OS: macOS 13.5, Kernel: 22.6.0 (arm64), Compiler: GCC 14.0.3 + Clang 17.0.6 + LLVM 17.0.6 + Xcode 14.3.1, File-System: APFS, Screen Resolution: x

Results recorded for the pts-hpc system (one line per test; entries marked "no result" reported no value):

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns; lower is better): 2.37121, 2.36706
Nebular Empirical Analysis Tool 2.3 (Seconds; lower is better): no result
Timed MrBayes Analysis 3.2.7 (Seconds; lower is better): no result
Timed HMMer Search 3.3.2 (Seconds; lower is better): no result
Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds; lower is better): 8.315, 8.166
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day; higher is better): no result
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day; higher is better): no result
ACES DGEMM 1.0 (GFLOP/s; higher is better): no result
Himeno Benchmark 3.0 (MFLOPS; higher is better): no result
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; lower is better): 198.54
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; lower is better): 127.93
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms; lower is better): 266.30
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; lower is better): 341.08
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms; lower is better): no result
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms; lower is better): no result
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; lower is better): 349.58
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; lower is better): 888.78
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; lower is better): 348.91
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; lower is better): 771.37
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; lower is better): 612.43
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms; lower is better): 375.87
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; lower is better): 216031
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; lower is better): 132806
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; lower is better): 215461
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms; lower is better): no result
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms; lower is better): no result
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms; lower is better): no result
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; lower is better): 132819
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; lower is better): 215425
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; lower is better): 132815
Numpy Benchmark (Score; higher is better): no result
DeepSpeech 0.6 - Acceleration: CPU (Seconds; lower is better): no result
GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day; higher is better): no result
GNU Octave Benchmark (Seconds; lower is better): no result
Mobile Neural Network 2.1 - Model: nasnet (ms; lower is better): 11.52
Mobile Neural Network 2.1 - Model: mobilenetV3 (ms; lower is better): 1.825
Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms; lower is better): 3.199
Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms; lower is better): 25.08
Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms; lower is better): 5.527
Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms; lower is better): 3.629
Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms; lower is better): 4.664
Mobile Neural Network 2.1 - Model: inception-v3 (ms; lower is better): 35.28
NCNN 20230517 - Target: CPU - Model: mobilenet (ms; lower is better): 18.07
NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; lower is better): 4.64
NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; lower is better): 3.77
NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms; lower is better): 2.69
NCNN 20230517 - Target: CPU - Model: mnasnet (ms; lower is better): 4.73
NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms; lower is better): 7.59
NCNN 20230517 - Target: CPU - Model: blazeface (ms; lower is better): 0.88
NCNN 20230517 - Target: CPU - Model: googlenet (ms; lower is better): 22.11
NCNN 20230517 - Target: CPU - Model: vgg16 (ms; lower is better): 66.25
NCNN 20230517 - Target: CPU - Model: resnet18 (ms; lower is better): 14.85
NCNN 20230517 - Target: CPU - Model: alexnet (ms; lower is better): 16.75
NCNN 20230517 - Target: CPU - Model: resnet50 (ms; lower is better): 40.62
NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms; lower is better): 26.77
NCNN 20230517 - Target: CPU - Model: squeezenet_ssd (ms; lower is better): 13.80
NCNN 20230517 - Target: CPU - Model: regnety_400m (ms; lower is better): 6.44
NCNN 20230517 - Target: CPU - Model: vision_transformer (ms; lower is better): 1187.73
NCNN 20230517 - Target: CPU - Model: FastestDet (ms; lower is better): 2.11
NCNN 20230517 - Target: Vulkan GPU - Model: mobilenet (ms; lower is better): 18.01
NCNN 20230517 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms; lower is better): 4.59
NCNN 20230517 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms; lower is better): 3.73
NCNN 20230517 - Target: Vulkan GPU - Model: shufflenet-v2 (ms; lower is better): 2.66
NCNN 20230517 - Target: Vulkan GPU - Model: mnasnet (ms; lower is better): 4.69
NCNN 20230517 - Target: Vulkan GPU - Model: efficientnet-b0 (ms; lower is better): 7.51
NCNN 20230517 - Target: Vulkan GPU - Model: blazeface (ms; lower is better): 0.87
NCNN 20230517 - Target: Vulkan GPU - Model: googlenet (ms; lower is better): 21.91
NCNN 20230517 - Target: Vulkan GPU - Model: vgg16 (ms; lower is better): 65.60
NCNN 20230517 - Target: Vulkan GPU - Model: resnet18 (ms; lower is better): 14.72
NCNN 20230517 - Target: Vulkan GPU - Model: alexnet (ms; lower is better): 16.59
NCNN 20230517 - Target: Vulkan GPU - Model: resnet50 (ms; lower is better): 40.27
NCNN 20230517 - Target: Vulkan GPU - Model: yolov4-tiny (ms; lower is better): 26.71
NCNN 20230517 - Target: Vulkan GPU - Model: squeezenet_ssd (ms; lower is better): 13.69
NCNN 20230517 - Target: Vulkan GPU - Model: regnety_400m (ms; lower is better): 6.38
NCNN 20230517 - Target: Vulkan GPU - Model: vision_transformer (ms; lower is better): 1187.42
NCNN 20230517 - Target: Vulkan GPU - Model: FastestDet (ms; lower is better): 2.11
PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (Examples Per Second; higher is better): no result
PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (Examples Per Second; higher is better): no result
Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds; lower is better): no result
Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds; lower is better): no result
Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds; lower is better): no result
Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds; lower is better): no result
Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds; lower is better): no result
Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds; lower is better): no result
AI Benchmark Alpha 0.1.2 (Score; higher is better): no result
PyHPC Benchmarks 3.0 - Device: CPU (Seconds; lower is better) - no result for any of the following configurations:
  Backend: JAX - Project Size: 16384 - Benchmark: Equation of State
  Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing
  Backend: JAX - Project Size: 65536 - Benchmark: Equation of State
  Backend: JAX - Project Size: 65536 - Benchmark: Isoneutral Mixing
  Backend: JAX - Project Size: 262144 - Benchmark: Equation of State
  Backend: JAX - Project Size: 262144 - Benchmark: Isoneutral Mixing
  Backend: JAX - Project Size: 1048576 - Benchmark: Equation of State
  Backend: JAX - Project Size: 1048576 - Benchmark: Isoneutral Mixing
  Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State
  Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  Backend: Numba - Project Size: 16384 - Benchmark: Equation of State
  Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing
  Backend: Numba - Project Size: 65536 - Benchmark: Equation of State
  Backend: Numba - Project Size: 65536 - Benchmark: Isoneutral Mixing
  Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State
  Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing
  Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State
  Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing
  Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State
  Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing
  Backend: Aesara - Project Size: 65536 - Benchmark: Equation of State
  Backend: Aesara - Project Size: 65536 - Benchmark: Isoneutral Mixing
  Backend: Numba - Project Size: 262144 - Benchmark: Equation of State
  Backend: Numba - Project Size: 262144 - Benchmark: Isoneutral Mixing
  Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State
  Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing
  Backend: Aesara - Project Size: 262144 - Benchmark: Equation of State
  Backend: Aesara - Project Size: 262144 - Benchmark: Isoneutral Mixing
  Backend: Numba - Project Size: 1048576 - Benchmark: Equation of State
  Backend: Numba - Project Size: 1048576 - Benchmark: Isoneutral Mixing
  Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State
  Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State
  Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing
  Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State
  Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  Backend: PyTorch - Project Size: 16384 - Benchmark: Equation of State
  Backend: PyTorch - Project Size: 16384 - Benchmark: Isoneutral Mixing
  Backend: PyTorch - Project Size: 65536 - Benchmark: Equation of State
  Backend: PyTorch - Project Size: 65536 - Benchmark: Isoneutral Mixing
  Backend: Aesara - Project Size: 1048576 - Benchmark: Equation of State
  Backend: Aesara - Project Size: 1048576 - Benchmark: Isoneutral Mixing
  Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State
  Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  Backend: PyTorch - Project Size: 262144 - Benchmark: Equation of State
  Backend: PyTorch - Project Size: 262144 - Benchmark: Isoneutral Mixing
  Backend: PyTorch - Project Size: 1048576 - Benchmark: Equation of State
  Backend: PyTorch - Project Size: 1048576 - Benchmark: Isoneutral Mixing
  Backend: PyTorch - Project Size: 4194304 - Benchmark: Equation of State
  Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  Backend: TensorFlow - Project Size: 16384 - Benchmark: Equation of State
  Backend: TensorFlow - Project Size: 16384 - Benchmark: Isoneutral Mixing
  Backend: TensorFlow - Project Size: 65536 - Benchmark: Equation of State
  Backend: TensorFlow - Project Size: 65536 - Benchmark: Isoneutral Mixing
  Backend: TensorFlow - Project Size: 262144 - Benchmark: Equation of State
  Backend: TensorFlow - Project Size: 262144 - Benchmark: Isoneutral Mixing
  Backend: TensorFlow - Project Size: 1048576 - Benchmark: Equation of State
  Backend: TensorFlow - Project Size: 1048576 - Benchmark: Isoneutral Mixing
  Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State
  Backend: TensorFlow - Project Size: 4194304 - Benchmark: Isoneutral Mixing
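To reproduce a run like the one above, the Phoronix Test Suite can be driven non-interactively from a short script. The following Python sketch is only an illustration, not the exact invocation used for these results: it assumes phoronix-test-suite is installed and on the PATH, that batch mode has already been configured once with "phoronix-test-suite batch-setup", and that "pts/hpc" is the suite identifier being run; the result name and description strings are likewise assumptions chosen to match the identifier above.

    # Minimal sketch for re-running an HPC suite with the Phoronix Test Suite.
    # Assumptions: phoronix-test-suite is on PATH, batch mode is configured,
    # and "pts/hpc" is the intended suite identifier.
    import os
    import subprocess

    env = os.environ.copy()
    env.update({
        # Batch-mode variables that pre-answer the usual interactive prompts.
        "TEST_RESULTS_NAME": "pts-hpc",          # assumed result name
        "TEST_RESULTS_IDENTIFIER": "pts-hpc",    # assumed system identifier
        "TEST_RESULTS_DESCRIPTION": "Apple M2 Pro testing on macOS 13.5",
    })

    # Install (if needed) and run the suite without interactive prompts.
    subprocess.run(["phoronix-test-suite", "batch-benchmark", "pts/hpc"],
                   env=env, check=True)

    # Dump the saved result file as a plain-text listing similar to the one above.
    subprocess.run(["phoronix-test-suite", "result-file-to-text", "pts-hpc"],
                   check=True)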