mtl-aa
Intel Core Ultra 7 155H testing with an MTL Swift SFG14-72T Coral_MTH (V1.01 BIOS) and Intel Arc MTL 8GB on Ubuntu 24.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410173-NE-MTLAA480961&grs.
Apache CouchDB
Bulk Size: 300 - Inserts: 1000 - Rounds: 30
WarpX
Input: Uniform Plasma
LiteRT
Model: DeepLab V3
Apache CouchDB
Bulk Size: 100 - Inserts: 1000 - Rounds: 30
XNNPACK
Model: FP32MobileNetV1
Apache CouchDB
Bulk Size: 100 - Inserts: 3000 - Rounds: 30
XNNPACK
Model: FP16MobileNetV1
LiteRT
Model: NASNet Mobile
CP2K Molecular Dynamics
Input: Fayalite-FIST
XNNPACK
Model: FP16MobileNetV3Small
XNNPACK
Model: FP16MobileNetV2
LiteRT
Model: Quantized COCO SSD MobileNet v1
LiteRT
Model: Mobilenet Quant
XNNPACK
Model: FP16MobileNetV3Large
XNNPACK
Model: FP32MobileNetV3Large
XNNPACK
Model: FP32MobileNetV2
LiteRT
Model: Mobilenet Float
oneDNN
Harness: Deconvolution Batch shapes_1d - Engine: CPU
WarpX
Input: Plasma Acceleration
Epoch
Epoch3D Deck: Cone
oneDNN
Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Engine: CPU
LiteRT
Model: SqueezeNet
LiteRT
Model: Inception ResNet V2
oneDNN
Harness: IP Shapes 1D - Engine: CPU
CP2K Molecular Dynamics
Input: H2O-64
LiteRT
Model: Inception V4
XNNPACK
Model: FP32MobileNetV3Small
oneDNN
Harness: IP Shapes 3D - Engine: CPU
CP2K Molecular Dynamics
Input: H2O-256
XNNPACK
Model: QS8MobileNetV2
oneDNN
Harness: Deconvolution Batch shapes_3d - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Engine: CPU
Phoronix Test Suite v10.8.5