tgl onnx onednn
Intel Core i7-1185G7 testing with a Dell 0DXP1F (3.4.0 BIOS) and Intel Xe TGL GT2 3GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2203319-PTS-TGLONNXO37&grs.
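For reference, a comparable run can typically be reproduced with the Phoronix Test Suite CLI; the commands below are a minimal sketch, assuming the pts/onnx and pts/onednn test profiles and the result ID taken from the URL above:

    phoronix-test-suite benchmark 2203319-PTS-TGLONNXO37    (re-run this result's tests and compare against the posted numbers)
    phoronix-test-suite benchmark pts/onnx pts/onednn       (or run the two test profiles directly)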
ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
oneDNN - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
oneDNN - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel
oneDNN - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
oneDNN - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel
oneDNN - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel
ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard
oneDNN - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel
oneDNN - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
Phoronix Test Suite v10.8.5