f33-machine-learning
Intel Core i7-6700 testing with an ASUS Z170I PRO GAMING (0806 BIOS) motherboard and Intel HD 530 3GB graphics on Fedora 33 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2012012-AS-F33MACHIN68&grt.
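The individual benchmarks listed below could, in principle, be re-run locally with the Phoronix Test Suite CLI. A minimal sketch follows; the `pts/` profile identifiers are assumptions inferred from the test names in this result and should be verified with `phoronix-test-suite list-available-tests` before use. The script only prints the commands rather than executing them.

```shell
#!/bin/sh
# Sketch: print the phoronix-test-suite commands that would re-run each
# benchmark from this result file. The pts/ profile names are assumptions
# based on the test titles below, not confirmed identifiers.
for profile in pts/deepspeech pts/mlpack pts/mnn pts/numenta-nab \
               pts/numpy pts/onednn pts/plaidml pts/rbenchmark \
               pts/rnnoise pts/scikit-learn pts/tensorflow-lite; do
    echo "phoronix-test-suite benchmark $profile"
done
```

Piping the output to `sh` would launch the runs interactively, one test profile at a time.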
DeepSpeech
  Acceleration: CPU
Mlpack Benchmark
  Benchmark: scikit_ica
  Benchmark: scikit_qda
  Benchmark: scikit_svm
  Benchmark: scikit_linearridgeregression
Mobile Neural Network
  Model: SqueezeNetV1.0
  Model: resnet-v2-50
  Model: MobileNetV2_224
  Model: mobilenet-v1-1.0
  Model: inception-v3
Numenta Anomaly Benchmark
  Detector: EXPoSE
  Detector: Relative Entropy
  Detector: Windowed Gaussian
  Detector: Earthgecko Skyline
  Detector: Bayesian Changepoint
Numpy Benchmark
oneDNN
  Harness: IP Batch 1D - Data Type: f32 - Engine: CPU
  Harness: IP Batch All - Data Type: f32 - Engine: CPU
  Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU
  Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
  Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU
  Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
  Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU
  Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
PlaidML
  FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
  FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
R Benchmark
RNNoise
Scikit-Learn
TensorFlow Lite
  Model: SqueezeNet
  Model: Inception V4
  Model: NASNet Mobile
  Model: Mobilenet Float
  Model: Mobilenet Quant
  Model: Inception ResNet V2
Phoronix Test Suite v10.8.4