mtl-aa
Intel Core Ultra 7 155H testing with a MTL Swift SFG14-72T Coral_MTH (V1.01 BIOS) and Intel Arc MTL 8GB on Ubuntu 24.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410173-NE-MTLAA480961&gru&sor.
LiteRT - Model: DeepLab V3
LiteRT - Model: SqueezeNet
LiteRT - Model: Inception V4
LiteRT - Model: NASNet Mobile
LiteRT - Model: Mobilenet Float
LiteRT - Model: Mobilenet Quant
LiteRT - Model: Inception ResNet V2
LiteRT - Model: Quantized COCO SSD MobileNet v1
oneDNN - Harness: IP Shapes 1D - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Engine: CPU
CP2K Molecular Dynamics - Input: H2O-64
CP2K Molecular Dynamics - Input: H2O-256
CP2K Molecular Dynamics - Input: Fayalite-FIST
Epoch - Epoch3D Deck: Cone
WarpX - Input: Uniform Plasma
WarpX - Input: Plasma Acceleration
Apache CouchDB - Bulk Size: 100 - Inserts: 1000 - Rounds: 30
Apache CouchDB - Bulk Size: 100 - Inserts: 3000 - Rounds: 30
Apache CouchDB - Bulk Size: 300 - Inserts: 1000 - Rounds: 30
XNNPACK - Model: FP32MobileNetV1
XNNPACK - Model: FP32MobileNetV2
XNNPACK - Model: FP32MobileNetV3Large
XNNPACK - Model: FP32MobileNetV3Small
XNNPACK - Model: FP16MobileNetV1
XNNPACK - Model: FP16MobileNetV2
XNNPACK - Model: FP16MobileNetV3Large
XNNPACK - Model: FP16MobileNetV3Small
XNNPACK - Model: QS8MobileNetV2
Phoronix Test Suite v10.8.5
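The exported view above lists each benchmark as a test-profile name paired with its configuration string. A minimal Python sketch of collapsing such flattened name/config pairs into a grouped structure (the alternating-line layout is an assumption about this export format, and the sample entries are taken from the listing above):

```python
from collections import OrderedDict

# Sample of the flattened export: test name on one line,
# its configuration on the next (alternating pairs).
lines = [
    "LiteRT", "Model: DeepLab V3",
    "LiteRT", "Model: SqueezeNet",
    "oneDNN", "Harness: IP Shapes 1D - Engine: CPU",
]

# Collapse successive (test, configuration) pairs into
# {test name: [configurations...]}, preserving order.
grouped = OrderedDict()
for test, config in zip(lines[0::2], lines[1::2]):
    grouped.setdefault(test, []).append(config)

for test, configs in grouped.items():
    print(f"{test}: {len(configs)} configuration(s)")
```

For this listing, the sketch would report 8 LiteRT, 7 oneDNN, 3 CP2K, 1 Epoch, 2 WarpX, 3 CouchDB, and 9 XNNPACK configurations.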