HDR3-A44000-1
AMD A4-5300 APU testing with an ASRock FM2A88M-HD+ R3.0 (P1.50 BIOS) and AMD Radeon HD 7480D 256MB on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2402144-HERT-HDR3A4427&grw.
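The comparison above can be reproduced locally against the published numbers; a minimal sketch, assuming the phoronix-test-suite CLI is installed and using the result ID taken from the URL above:

```shell
# Fetch this public OpenBenchmarking.org result and run the same
# test selection on the local machine, merging the new numbers into
# the comparison (the command is interactive and prompts for options).
phoronix-test-suite benchmark 2402144-HERT-HDR3A4427
```

Individual tests from the list below can also be run on their own, e.g. `phoronix-test-suite benchmark pts/tensorflow`, though exact test-profile names should be checked against `phoronix-test-suite list-available-tests`.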
TensorFlow
Device: CPU - Batch Size: 1 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 1 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 1 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 32 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 64 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 1 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 16 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 32 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 64 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 32 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 32 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 64 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 1 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 1 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 256 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 512 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 1 - Model: GoogLeNet
TensorFlow
Device: GPU - Batch Size: 1 - Model: ResNet-50
TensorFlow
Device: GPU - Batch Size: 256 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 512 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 32 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 32 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 64 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: ResNet-50
TensorFlow
Device: GPU - Batch Size: 16 - Model: GoogLeNet
TensorFlow
Device: GPU - Batch Size: 16 - Model: ResNet-50
TensorFlow
Device: GPU - Batch Size: 32 - Model: GoogLeNet
TensorFlow
Device: GPU - Batch Size: 32 - Model: ResNet-50
TensorFlow
Device: GPU - Batch Size: 64 - Model: GoogLeNet
TensorFlow
Device: GPU - Batch Size: 64 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 256 - Model: GoogLeNet
TensorFlow
Device: GPU - Batch Size: 256 - Model: GoogLeNet
PlaidML
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
PlaidML
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
Numenta Anomaly Benchmark
Detector: KNN CAD
Numenta Anomaly Benchmark
Detector: Relative Entropy
Numenta Anomaly Benchmark
Detector: Windowed Gaussian
Numenta Anomaly Benchmark
Detector: Earthgecko Skyline
Numenta Anomaly Benchmark
Detector: Bayesian Changepoint
Numenta Anomaly Benchmark
Detector: Contextual Anomaly Detector OSE
Scikit-Learn
Benchmark: GLM
Scikit-Learn
Benchmark: SAGA
Scikit-Learn
Benchmark: Tree
Scikit-Learn
Benchmark: Lasso
Scikit-Learn
Benchmark: Sparsify
Scikit-Learn
Benchmark: Plot Ward
Scikit-Learn
Benchmark: MNIST Dataset
Scikit-Learn
Benchmark: Plot Neighbors
Scikit-Learn
Benchmark: SGD Regression
Scikit-Learn
Benchmark: Plot Lasso Path
Scikit-Learn
Benchmark: Text Vectorizers
Scikit-Learn
Benchmark: Plot Hierarchical
Scikit-Learn
Benchmark: Plot OMP vs. LARS
Scikit-Learn
Benchmark: Feature Expansions
Scikit-Learn
Benchmark: LocalOutlierFactor
Scikit-Learn
Benchmark: TSNE MNIST Dataset
Scikit-Learn
Benchmark: Plot Incremental PCA
Scikit-Learn
Benchmark: Hist Gradient Boosting
Scikit-Learn
Benchmark: Sample Without Replacement
Scikit-Learn
Benchmark: Covertype Dataset Benchmark
Scikit-Learn
Benchmark: Hist Gradient Boosting Adult
Scikit-Learn
Benchmark: Hist Gradient Boosting Threading
Scikit-Learn
Benchmark: Plot Singular Value Decomposition
Scikit-Learn
Benchmark: Hist Gradient Boosting Higgs Boson
Scikit-Learn
Benchmark: 20 Newsgroups / Logistic Regression
Scikit-Learn
Benchmark: Plot Polynomial Kernel Approximation
Scikit-Learn
Benchmark: Hist Gradient Boosting Categorical Only
Scikit-Learn
Benchmark: Kernel PCA Solvers / Time vs. N Samples
Scikit-Learn
Benchmark: Kernel PCA Solvers / Time vs. N Components
Scikit-Learn
Benchmark: Sparse Random Projections / 100 Iterations
R Benchmark
Numpy Benchmark
DeepSpeech
Acceleration: CPU
RNNoise
Mobile Neural Network
Model: nasnet
Mobile Neural Network
Model: mobilenetV3
Mobile Neural Network
Model: squeezenetv1.1
Mobile Neural Network
Model: resnet-v2-50
Mobile Neural Network
Model: SqueezeNetV1.0
Mobile Neural Network
Model: MobileNetV2_224
Mobile Neural Network
Model: mobilenet-v1-1.0
Mobile Neural Network
Model: inception-v3
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
PyTorch
Device: CPU - Batch Size: 1 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 1 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 16 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 32 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 64 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 16 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 256 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 32 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 512 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 64 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 256 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 512 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
TensorFlow Lite
Model: SqueezeNet
TensorFlow Lite
Model: Inception V4
TensorFlow Lite
Model: NASNet Mobile
TensorFlow Lite
Model: Mobilenet Float
TensorFlow Lite
Model: Mobilenet Quant
TensorFlow Lite
Model: Inception ResNet V2
TNN
Target: CPU - Model: DenseNet
TNN
Target: CPU - Model: MobileNet v2
TNN
Target: CPU - Model: SqueezeNet v2
TNN
Target: CPU - Model: SqueezeNet v1.1
Whisper.cpp
Model: ggml-base.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-small.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-medium.en - Input: 2016 State of the Union
Caffe
Model: AlexNet - Acceleration: CPU - Iterations: 100
Caffe
Model: AlexNet - Acceleration: CPU - Iterations: 200
Caffe
Model: AlexNet - Acceleration: CPU - Iterations: 1000
Caffe
Model: GoogleNet - Acceleration: CPU - Iterations: 100
Caffe
Model: GoogleNet - Acceleration: CPU - Iterations: 200
Caffe
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
NCNN
Target: CPU - Model: mobilenet
NCNN
Target: CPU-v2-v2 - Model: mobilenet-v2
NCNN
Target: CPU-v3-v3 - Model: mobilenet-v3
NCNN
Target: CPU - Model: shufflenet-v2
NCNN
Target: CPU - Model: mnasnet
NCNN
Target: CPU - Model: efficientnet-b0
NCNN
Target: CPU - Model: blazeface
NCNN
Target: CPU - Model: googlenet
NCNN
Target: CPU - Model: vgg16
NCNN
Target: CPU - Model: resnet18
NCNN
Target: CPU - Model: alexnet
NCNN
Target: CPU - Model: resnet50
NCNN
Target: CPU - Model: yolov4-tiny
NCNN
Target: CPU - Model: squeezenet_ssd
NCNN
Target: CPU - Model: regnety_400m
NCNN
Target: CPU - Model: vision_transformer
NCNN
Target: CPU - Model: FastestDet
NCNN
Target: Vulkan GPU - Model: mobilenet
NCNN
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
NCNN
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
NCNN
Target: Vulkan GPU - Model: shufflenet-v2
NCNN
Target: Vulkan GPU - Model: mnasnet
NCNN
Target: Vulkan GPU - Model: efficientnet-b0
NCNN
Target: Vulkan GPU - Model: blazeface
NCNN
Target: Vulkan GPU - Model: googlenet
NCNN
Target: Vulkan GPU - Model: vgg16
NCNN
Target: Vulkan GPU - Model: resnet18
NCNN
Target: Vulkan GPU - Model: alexnet
NCNN
Target: Vulkan GPU - Model: resnet50
NCNN
Target: Vulkan GPU - Model: yolov4-tiny
NCNN
Target: Vulkan GPU - Model: squeezenet_ssd
NCNN
Target: Vulkan GPU - Model: regnety_400m
NCNN
Target: Vulkan GPU - Model: vision_transformer
NCNN
Target: Vulkan GPU - Model: FastestDet
Mlpack Benchmark
Benchmark: scikit_ica
Mlpack Benchmark
Benchmark: scikit_svm
oneDNN
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
oneDNN
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16-INT8 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
Phoronix Test Suite v10.8.5