MBP M1 Max Machine Learning, sys76-kudu-ML

MBP M1 Max Machine Learning: Apple M1 Max testing with an Apple MacBook Pro and Apple M1 Max on macOS 12.1 via the Phoronix Test Suite.

sys76-kudu-ML: AMD Ryzen 9 5900HX testing with a System76 Kudu (1.07.09RSA1 BIOS) and AMD Cezanne on Pop 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2202161-NE-MBPM1MAXM40
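
If the Phoronix Test Suite is not already installed, reproducing this comparison might look roughly like the following sketch (the package name is an assumption and varies by distribution; the result ID is the one quoted above):

    # Install the Phoronix Test Suite (package name assumed; check your distribution)
    sudo apt install phoronix-test-suite

    # Fetch this result file, run the same tests locally, and compare the numbers
    phoronix-test-suite benchmark 2202161-NE-MBPM1MAXM40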

Tests in this result file by category:

BLAS (Basic Linear Algebra Sub-Routine) Tests: 2
CPU Massive: 7
Creator Workloads: 4
HPC - High Performance Computing: 20
Machine Learning: 20
Multi-Core: 2
NVIDIA GPU Compute: 4
Intel oneAPI: 2
Python: 3
Server CPU Tests: 3
Single-Threaded: 3
Speech: 2
Telephony: 2

Run Management

Result Identifier           | Date Run         | Test Duration
MBP M1 Max Machine Learning | February 16 2022 | 6 Hours, 21 Minutes
ML Tests                    | February 15 2022 | 7 Hours, 15 Minutes
(unlabeled run)             |                  | 6 Hours, 48 Minutes


MBP M1 Max Machine Learning, sys76-kudu-ML Suite 1.0.0 (System)
Test suite extracted from MBP M1 Max Machine Learning, sys76-kudu-ML. The suite consists of the following test profiles and options:

pts/tensorflow-1.1.0 [cifar10_train.py --max_steps 1000] - Build: Cifar10
pts/plaidml-1.0.4 [--no-fp16 --no-train vgg16 CPU] - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
pts/plaidml-1.0.4 [--no-fp16 --no-train resnet50 CPU] - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
pts/lczero-1.6.0 [-b blas] - Backend: BLAS
pts/numenta-nab-1.1.0 [-d expose] - Detector: EXPoSE
pts/numenta-nab-1.1.0 [-d relativeEntropy] - Detector: Relative Entropy
pts/numenta-nab-1.1.0 [-d windowedGaussian] - Detector: Windowed Gaussian
pts/numenta-nab-1.1.0 [-d earthgeckoSkyline] - Detector: Earthgecko Skyline
pts/numenta-nab-1.1.0 [-d bayesChangePt] - Detector: Bayesian Changepoint
pts/rbenchmark-1.0.3
pts/numpy-1.2.1
pts/deepspeech-1.0.0 [CPU] - Acceleration: CPU
pts/rnnoise-1.0.2
pts/ai-benchmark-1.0.1
pts/ecp-candle-1.1.0 [P1B2] - Benchmark: P1B2
pts/ecp-candle-1.1.0 [P3B1] - Benchmark: P3B1
pts/ecp-candle-1.1.0 [P3B2] - Benchmark: P3B2
pts/mnn-1.3.0 - Model: mobilenetV3
pts/mnn-1.3.0 - Model: squeezenetv1.1
pts/mnn-1.3.0 - Model: resnet-v2-50
pts/mnn-1.3.0 - Model: SqueezeNetV1.0
pts/mnn-1.3.0 - Model: MobileNetV2_224
pts/mnn-1.3.0 - Model: mobilenet-v1-1.0
pts/mnn-1.3.0 - Model: inception-v3
pts/onnx-1.3.0 [yolov4/yolov4.onnx -e cpu] - Model: yolov4 - Device: CPU
pts/onnx-1.3.0 [fcn-resnet101-11/model.onnx -e cpu] - Model: fcn-resnet101-11 - Device: CPU
pts/onnx-1.3.0 [model/test_shufflenetv2/model.onnx -e cpu] - Model: shufflenet-v2-10 - Device: CPU
pts/onnx-1.3.0 [super_resolution/super_resolution.onnx -e cpu] - Model: super-resolution-10 - Device: CPU
pts/opencv-1.1.0 [dnn] - Test: DNN - Deep Neural Network
pts/tensorflow-lite-1.0.0 [--graph=squeezenet.tflite] - Model: SqueezeNet
pts/tensorflow-lite-1.0.0 [--graph=inception_v4.tflite] - Model: Inception V4
pts/tensorflow-lite-1.0.0 [--graph=nasnet_mobile.tflite] - Model: NASNet Mobile
pts/tensorflow-lite-1.0.0 [--graph=mobilenet_v1_1.0_224.tflite] - Model: Mobilenet Float
pts/tensorflow-lite-1.0.0 [--graph=mobilenet_v1_1.0_224_quant.tflite] - Model: Mobilenet Quant
pts/tensorflow-lite-1.0.0 [--graph=inception_resnet_v2.tflite] - Model: Inception ResNet V2
pts/tnn-1.1.0 [-dt NAIVE -mp ../benchmark/benchmark-model/densenet.tnnproto] - Target: CPU - Model: DenseNet
pts/tnn-1.1.0 [-dt NAIVE -mp ../benchmark/benchmark-model/mobilenet_v2.tnnproto] - Target: CPU - Model: MobileNet v2
pts/tnn-1.1.0 [-dt NAIVE -mp ../benchmark/benchmark-model/shufflenet_v2.tnnproto] - Target: CPU - Model: SqueezeNet v2
pts/tnn-1.1.0 [-dt NAIVE -mp ../benchmark/benchmark-model/squeezenet_v1.1.tnnproto] - Target: CPU - Model: SqueezeNet v1.1
pts/caffe-1.5.0 [--model=../models/bvlc_alexnet/deploy.prototxt -iterations 100] - Model: AlexNet - Acceleration: CPU - Iterations: 100
pts/caffe-1.5.0 [--model=../models/bvlc_alexnet/deploy.prototxt -iterations 200] - Model: AlexNet - Acceleration: CPU - Iterations: 200
pts/caffe-1.5.0 [--model=../models/bvlc_alexnet/deploy.prototxt -iterations 1000] - Model: AlexNet - Acceleration: CPU - Iterations: 1000
pts/caffe-1.5.0 [--model=../models/bvlc_googlenet/deploy.prototxt -iterations 100] - Model: GoogleNet - Acceleration: CPU - Iterations: 100
pts/caffe-1.5.0 [--model=../models/bvlc_googlenet/deploy.prototxt -iterations 200] - Model: GoogleNet - Acceleration: CPU - Iterations: 200
pts/caffe-1.5.0 [--model=../models/bvlc_googlenet/deploy.prototxt -iterations 1000] - Model: GoogleNet - Acceleration: CPU - Iterations: 1000
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: mobilenet
pts/ncnn-1.3.0 [-1] - Target: CPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.3.0 [-1] - Target: CPU-v3-v3 - Model: mobilenet-v3
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: shufflenet-v2
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: mnasnet
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: efficientnet-b0
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: blazeface
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: googlenet
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: vgg16
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: resnet18
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: alexnet
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: resnet50
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: yolov4-tiny
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: squeezenet_ssd
pts/ncnn-1.3.0 [-1] - Target: CPU - Model: regnety_400m
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: mobilenet
pts/ncnn-1.3.0 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.3.0 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: shufflenet-v2
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: mnasnet
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: efficientnet-b0
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: blazeface
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: googlenet
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: vgg16
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: resnet18
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: alexnet
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: resnet50
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: yolov4-tiny
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: squeezenet_ssd
pts/ncnn-1.3.0 - Target: Vulkan GPU - Model: regnety_400m
pts/mlpack-1.0.2 [SCIKIT_ICA] - Benchmark: scikit_ica
pts/mlpack-1.0.2 [SCIKIT_QDA] - Benchmark: scikit_qda
pts/mlpack-1.0.2 [SCIKIT_SVM] - Benchmark: scikit_svm
pts/mlpack-1.0.2 [SCIKIT_LINEARRIDGEREGRESSION] - Benchmark: scikit_linearridgeregression
pts/onednn-1.7.0 [--ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu] - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu] - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--ip --batch=inputs/ip/shapes_1d --cfg=u8s8f32 --engine=cpu] - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--ip --batch=inputs/ip/shapes_3d --cfg=u8s8f32 --engine=cpu] - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--ip --batch=inputs/ip/shapes_1d --cfg=bf16bf16bf16 --engine=cpu] - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.7.0 [--ip --batch=inputs/ip/shapes_3d --cfg=bf16bf16bf16 --engine=cpu] - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.7.0 [--conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu] - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu] - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu] - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--conv --batch=inputs/conv/shapes_auto --cfg=u8s8f32 --engine=cpu] - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--deconv --batch=inputs/deconv/shapes_1d --cfg=u8s8f32 --engine=cpu] - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--deconv --batch=inputs/deconv/shapes_3d --cfg=u8s8f32 --engine=cpu] - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu] - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu] - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--rnn --batch=inputs/rnn/perf_rnn_training --cfg=u8s8f32 --engine=cpu] - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--conv --batch=inputs/conv/shapes_auto --cfg=bf16bf16bf16 --engine=cpu] - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.7.0 [--deconv --batch=inputs/deconv/shapes_1d --cfg=bf16bf16bf16 --engine=cpu] - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.7.0 [--deconv --batch=inputs/deconv/shapes_3d --cfg=bf16bf16bf16 --engine=cpu] - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.7.0 [--rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=u8s8f32 --engine=cpu] - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--matmul --batch=inputs/matmul/shapes_transformer --cfg=f32 --engine=cpu] - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
pts/onednn-1.7.0 [--rnn --batch=inputs/rnn/perf_rnn_training --cfg=bf16bf16bf16 --engine=cpu] - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.7.0 [--rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=bf16bf16bf16 --engine=cpu] - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.7.0 [--matmul --batch=inputs/matmul/shapes_transformer --cfg=u8s8f32 --engine=cpu] - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.7.0 [--matmul --batch=inputs/matmul/shapes_transformer --cfg=bf16bf16bf16 --engine=cpu] - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
pts/openvino-1.0.4 [-m models/intel/face-detection-0106/FP16/face-detection-0106.xml -d CPU] - Model: Face Detection 0106 FP16 - Device: CPU
pts/openvino-1.0.4 [-m models/intel/face-detection-0106/FP32/face-detection-0106.xml -d CPU] - Model: Face Detection 0106 FP32 - Device: CPU
pts/openvino-1.0.4 [-m models/intel/person-detection-0106/FP16/person-detection-0106.xml -d CPU] - Model: Person Detection 0106 FP16 - Device: CPU
pts/openvino-1.0.4 [-m models/intel/person-detection-0106/FP32/person-detection-0106.xml -d CPU] - Model: Person Detection 0106 FP32 - Device: CPU
pts/openvino-1.0.4 [-m models/intel/face-detection-0106/FP16/face-detection-0106.xml -d GPU] - Model: Face Detection 0106 FP16 - Device: Intel GPU
pts/openvino-1.0.4 [-m models/intel/face-detection-0106/FP32/face-detection-0106.xml -d GPU] - Model: Face Detection 0106 FP32 - Device: Intel GPU
pts/openvino-1.0.4 [-m models/intel/person-detection-0106/FP16/person-detection-0106.xml -d GPU] - Model: Person Detection 0106 FP16 - Device: Intel GPU
pts/openvino-1.0.4 [-m models/intel/person-detection-0106/FP32/person-detection-0106.xml -d GPU] - Model: Person Detection 0106 FP32 - Device: Intel GPU
pts/openvino-1.0.4 [-m models/intel/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d CPU] - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
pts/openvino-1.0.4 [-m models/intel/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.xml -d CPU] - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU
pts/openvino-1.0.4 [-m models/intel/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d GPU] - Model: Age Gender Recognition Retail 0013 FP16 - Device: Intel GPU
pts/openvino-1.0.4 [-m models/intel/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.xml -d GPU] - Model: Age Gender Recognition Retail 0013 FP32 - Device: Intel GPU
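
Individual test profiles from this suite can also be run on their own with the Phoronix Test Suite. A rough sketch (the exact prompts for model, target, and data-type options depend on the test profile and your system):

    # Install and run a single test profile, e.g. the oneDNN benchmarks
    phoronix-test-suite install pts/onednn-1.7.0
    phoronix-test-suite run pts/onednn-1.7.0

    # Or install and run in one step
    phoronix-test-suite benchmark pts/mnn-1.3.0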