H610-i312100-1
Intel Core i3-12100 testing with an ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS) and Intel ADL-S GT1 3GB on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408228-HERT-H610I3171.
oneDNN
  Harness: IP Shapes 1D - Engine: CPU
  Harness: IP Shapes 3D - Engine: CPU
  Harness: Convolution Batch Shapes Auto - Engine: CPU
  Harness: Deconvolution Batch shapes_1d - Engine: CPU
  Harness: Deconvolution Batch shapes_3d - Engine: CPU
  Harness: Recurrent Neural Network Training - Engine: CPU
  Harness: Recurrent Neural Network Inference - Engine: CPU
Numpy Benchmark
DeepSpeech
  Acceleration: CPU
R Benchmark
TensorFlow Lite
  Model: SqueezeNet
  Model: Inception V4
  Model: NASNet Mobile
  Model: Mobilenet Float
  Model: Mobilenet Quant
  Model: Inception ResNet V2
PyTorch
  Device: CPU - Batch Size: 1 - Model: ResNet-50
  Device: CPU - Batch Size: 1 - Model: ResNet-152
  Device: CPU - Batch Size: 16 - Model: ResNet-50
  Device: CPU - Batch Size: 32 - Model: ResNet-50
  Device: CPU - Batch Size: 64 - Model: ResNet-50
  Device: CPU - Batch Size: 16 - Model: ResNet-152
  Device: CPU - Batch Size: 256 - Model: ResNet-50
  Device: CPU - Batch Size: 32 - Model: ResNet-152
  Device: CPU - Batch Size: 512 - Model: ResNet-50
  Device: CPU - Batch Size: 64 - Model: ResNet-152
  Device: CPU - Batch Size: 256 - Model: ResNet-152
  Device: CPU - Batch Size: 512 - Model: ResNet-152
  Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
  Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
  Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
  Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
  Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
  Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
Neural Magic DeepSparse
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
  Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
  Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
  Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
  Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
  Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
  Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
  Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
  Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Caffe
  Model: AlexNet - Acceleration: CPU - Iterations: 100
  Model: AlexNet - Acceleration: CPU - Iterations: 200
  Model: AlexNet - Acceleration: CPU - Iterations: 1000
  Model: GoogleNet - Acceleration: CPU - Iterations: 100
  Model: GoogleNet - Acceleration: CPU - Iterations: 200
  Model: GoogleNet - Acceleration: CPU - Iterations: 1000
Mobile Neural Network
  Model: nasnet
  Model: mobilenetV3
  Model: squeezenetv1.1
  Model: resnet-v2-50
  Model: SqueezeNetV1.0
  Model: MobileNetV2_224
  Model: mobilenet-v1-1.0
  Model: inception-v3
NCNN
  Target: CPU - Model: mobilenet
  Target: CPU-v2-v2 - Model: mobilenet-v2
  Target: CPU-v3-v3 - Model: mobilenet-v3
  Target: CPU - Model: shufflenet-v2
  Target: CPU - Model: mnasnet
  Target: CPU - Model: efficientnet-b0
  Target: CPU - Model: blazeface
  Target: CPU - Model: googlenet
  Target: CPU - Model: vgg16
  Target: CPU - Model: resnet18
  Target: CPU - Model: alexnet
  Target: CPU - Model: resnet50
  Target: CPU - Model: yolov4-tiny
  Target: CPU - Model: squeezenet_ssd
  Target: CPU - Model: regnety_400m
  Target: CPU - Model: vision_transformer
  Target: CPU - Model: FastestDet
  Target: Vulkan GPU - Model: mobilenet
  Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
  Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
  Target: Vulkan GPU - Model: shufflenet-v2
  Target: Vulkan GPU - Model: mnasnet
  Target: Vulkan GPU - Model: efficientnet-b0
  Target: Vulkan GPU - Model: blazeface
  Target: Vulkan GPU - Model: googlenet
  Target: Vulkan GPU - Model: vgg16
  Target: Vulkan GPU - Model: resnet18
  Target: Vulkan GPU - Model: alexnet
  Target: Vulkan GPU - Model: resnet50
  Target: Vulkan GPU - Model: yolov4-tiny
  Target: Vulkan GPU - Model: squeezenet_ssd
  Target: Vulkan GPU - Model: regnety_400m
  Target: Vulkan GPU - Model: vision_transformer
  Target: Vulkan GPU - Model: FastestDet
TNN
  Target: CPU - Model: DenseNet
  Target: CPU - Model: MobileNet v2
  Target: CPU - Model: SqueezeNet v2
  Target: CPU - Model: SqueezeNet v1.1
XNNPACK
  Model: FP32MobileNetV2
  Model: FP32MobileNetV3Large
  Model: FP32MobileNetV3Small
  Model: FP16MobileNetV2
  Model: FP16MobileNetV3Large
  Model: FP16MobileNetV3Small
  Model: QU8MobileNetV2
  Model: QU8MobileNetV3Large
  Model: QU8MobileNetV3Small
PlaidML
  FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
  FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
OpenVINO
  Model: Face Detection FP16 - Device: CPU
  Model: Person Detection FP16 - Device: CPU
  Model: Person Detection FP32 - Device: CPU
  Model: Vehicle Detection FP16 - Device: CPU
  Model: Face Detection FP16-INT8 - Device: CPU
  Model: Face Detection Retail FP16 - Device: CPU
  Model: Road Segmentation ADAS FP16 - Device: CPU
  Model: Vehicle Detection FP16-INT8 - Device: CPU
  Model: Weld Porosity Detection FP16 - Device: CPU
  Model: Face Detection Retail FP16-INT8 - Device: CPU
  Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
  Model: Machine Translation EN To DE FP16 - Device: CPU
  Model: Weld Porosity Detection FP16-INT8 - Device: CPU
  Model: Person Vehicle Bike Detection FP16 - Device: CPU
  Model: Noise Suppression Poconet-Like FP16 - Device: CPU
  Model: Handwritten English Recognition FP16 - Device: CPU
  Model: Person Re-Identification Retail FP16 - Device: CPU
  Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
  Model: Handwritten English Recognition FP16-INT8 - Device: CPU
  Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
Numenta Anomaly Benchmark
  Detector: KNN CAD
  Detector: Relative Entropy
  Detector: Windowed Gaussian
  Detector: Earthgecko Skyline
  Detector: Bayesian Changepoint
  Detector: Contextual Anomaly Detector OSE
ONNX Runtime
  Model: GPT-2 - Device: CPU - Executor: Parallel
  Model: GPT-2 - Device: CPU - Executor: Standard
  Model: T5 Encoder - Device: CPU - Executor: Parallel
  Model: T5 Encoder - Device: CPU - Executor: Standard
  Model: bertsquad-12 - Device: CPU - Executor: Parallel
  Model: bertsquad-12 - Device: CPU - Executor: Standard
  Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
  Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
  Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
  Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
  Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
  Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
  Model: super-resolution-10 - Device: CPU - Executor: Parallel
  Model: super-resolution-10 - Device: CPU - Executor: Standard
  Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
  Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Mlpack Benchmark
  Benchmark: scikit_ica
  Benchmark: scikit_svm
Scikit-Learn
  Benchmark: GLM
  Benchmark: SAGA
  Benchmark: Tree
  Benchmark: Lasso
  Benchmark: Sparsify
  Benchmark: Plot Ward
  Benchmark: MNIST Dataset
  Benchmark: Plot Neighbors
  Benchmark: SGD Regression
  Benchmark: Plot Lasso Path
  Benchmark: Text Vectorizers
  Benchmark: Plot Hierarchical
  Benchmark: Plot OMP vs. LARS
  Benchmark: Feature Expansions
  Benchmark: LocalOutlierFactor
  Benchmark: TSNE MNIST Dataset
  Benchmark: Plot Incremental PCA
  Benchmark: Hist Gradient Boosting
  Benchmark: Sample Without Replacement
  Benchmark: Covertype Dataset Benchmark
  Benchmark: Hist Gradient Boosting Adult
  Benchmark: Hist Gradient Boosting Threading
  Benchmark: Plot Singular Value Decomposition
  Benchmark: Hist Gradient Boosting Higgs Boson
  Benchmark: 20 Newsgroups / Logistic Regression
  Benchmark: Plot Polynomial Kernel Approximation
  Benchmark: Hist Gradient Boosting Categorical Only
  Benchmark: Kernel PCA Solvers / Time vs. N Samples
  Benchmark: Kernel PCA Solvers / Time vs. N Components
  Benchmark: Sparse Random Projections / 100 Iterations
Whisper.cpp
  Model: ggml-base.en - Input: 2016 State of the Union
  Model: ggml-small.en - Input: 2016 State of the Union
  Model: ggml-medium.en - Input: 2016 State of the Union
OpenCV
  Test: DNN - Deep Neural Network
Phoronix Test Suite v10.8.5