sle.hpc-wk1-ML-29aug2020
VMware testing on SUSE Linux Enterprise High Performance Computing 15 SP2 (15.2) via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2008293-NI-SLEHPCWK139.
oneDNN - Harness: IP Batch 1D - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Batch All - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
oneDNN - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
oneDNN - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
Numpy Benchmark
DeepSpeech
TensorFlow Lite - Model: SqueezeNet
TensorFlow Lite - Model: Inception V4
TensorFlow Lite - Model: NASNet Mobile
TensorFlow Lite - Model: Mobilenet Float
TensorFlow Lite - Model: Mobilenet Quant
TensorFlow Lite - Model: Inception ResNet V2
PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
Numenta Anomaly Benchmark - Detector: EXPoSE
Numenta Anomaly Benchmark - Detector: Relative Entropy
Numenta Anomaly Benchmark - Detector: Windowed Gaussian
Numenta Anomaly Benchmark - Detector: Earthgecko Skyline
Numenta Anomaly Benchmark - Detector: Bayesian Changepoint
Phoronix Test Suite v10.8.4