a40-ml
KVM testing on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2412124-NE-A40ML481010&grt.
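The listing below is the set of tests contained in that result. As a reproduction sketch (assuming the Phoronix Test Suite is installed and the OpenBenchmarking.org result ID above is public), PTS can fetch a published result by its ID and run the same test selection locally for a side-by-side comparison:

```shell
# Hypothetical reproduction run -- the result ID is taken from the URL above.
# Passing an OpenBenchmarking.org ID to `benchmark` downloads that result's
# test selection, runs it on the local system, and merges the numbers for
# comparison against the published data.
phoronix-test-suite benchmark 2412124-NE-A40ML481010
```

Individual tests from the list (for example the oneDNN or LiteRT entries) can also be run on their own via their test-profile names, e.g. `phoronix-test-suite benchmark onednn`.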
DeepSpeech - Acceleration: CPU
LiteRT - Model: DeepLab V3
LiteRT - Model: SqueezeNet
LiteRT - Model: Inception V4
LiteRT - Model: NASNet Mobile
LiteRT - Model: Mobilenet Float
LiteRT - Model: Mobilenet Quant
LiteRT - Model: Inception ResNet V2
LiteRT - Model: Quantized COCO SSD MobileNet v1
Numpy Benchmark
oneDNN - Harness: IP Shapes 1D - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Engine: CPU
R Benchmark
RNNoise - Input: 26 Minute Long Talking Sample
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Max SP Flops
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Bus Speed Download
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Bus Speed Readback
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Texture Read Bandwidth
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: S3D
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Triad
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: FFT SP
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: MD5 Hash
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Reduction
SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: GEMM SGEMM_N
TensorFlow Lite - Model: SqueezeNet
TensorFlow Lite - Model: Inception V4
TensorFlow Lite - Model: NASNet Mobile
TensorFlow Lite - Model: Mobilenet Float
TensorFlow Lite - Model: Mobilenet Quant
TensorFlow Lite - Model: Inception ResNet V2
Phoronix Test Suite v10.8.5