litert-onednn-xnnpack
2 x AMD EPYC 9575F 64-Core testing with an AMD VOLCANO (RVOT1000D BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410169-NE-LITERTONE51&grs&sor.
Benchmarks included in this result:

XNNPACK (Model):
  FP32MobileNetV1, FP32MobileNetV2, FP32MobileNetV3Small, FP32MobileNetV3Large,
  FP16MobileNetV1, FP16MobileNetV2, FP16MobileNetV3Small, FP16MobileNetV3Large,
  QS8MobileNetV2

oneDNN (Harness, Engine: CPU):
  IP Shapes 1D, IP Shapes 3D, Convolution Batch Shapes Auto,
  Deconvolution Batch shapes_1d, Deconvolution Batch shapes_3d,
  Recurrent Neural Network Training, Recurrent Neural Network Inference

LiteRT (Model):
  DeepLab V3, Inception V4, Inception ResNet V2, Mobilenet Float,
  Mobilenet Quant, NASNet Mobile, SqueezeNet, Quantized COCO SSD MobileNet v1
Phoronix Test Suite v10.8.5