H610-i312100-1

Intel Core i3-12100 testing with an ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS) motherboard and Intel ADL-S GT1 3GB graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2408228-HERT-H610I3171
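The comparison command can also be wrapped in a small script. This is a minimal sketch; `build_compare_cmd` is a hypothetical helper that only prints the command, and actually executing it requires the Phoronix Test Suite to be installed and on PATH:

```shell
#!/bin/sh
# Hypothetical helper: build the Phoronix Test Suite comparison
# command for a given public result ID (prints it, does not run it).
build_compare_cmd() {
    printf 'phoronix-test-suite benchmark %s\n' "$1"
}

# Result ID of this file, as given above.
build_compare_cmd "2408228-HERT-H610I3171"
# To actually run the comparison (requires phoronix-test-suite installed):
#   eval "$(build_compare_cmd 2408228-HERT-H610I3171)"
```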
Result Identifier: Intel ADL-S GT1 - Intel Core i3-12100
Date: August 20
Run Test Duration: 2 Days, 17 Minutes


Intel ADL-S GT1 - Intel Core i3-12100

Processor: Intel Core i3-12100 @ 4.30GHz (4 Cores / 8 Threads)
Motherboard: ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS)
Chipset: Intel Device 7aa7
Memory: 4096MB
Disk: 1000GB Western Digital WDS100T2B0A
Graphics: Intel ADL-S GT1 3GB (1400MHz)
Audio: Realtek ALC897
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04
Kernel: 5.15.0-89-generic (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
OpenGL: 4.6 Mesa 21.2.6
Vulkan: 1.2.182
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1366x768

Results (HIB = higher is better, LIB = lower is better; a blank value means the test did not produce a result):

"TensorFlow - Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 1 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 16 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 32 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 64 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 1 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 16 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 32 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 64 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 256 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 512 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 16 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 256 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 32 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 512 - Model: VGG-16 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 64 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 1 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 1 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 256 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 512 - Model: AlexNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 16 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 16 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 32 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 32 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 64 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 64 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: CPU - Batch Size: 512 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 256 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 256 - Model: ResNet-50 (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 512 - Model: GoogLeNet (images/sec)",HIB,
"TensorFlow - Device: GPU - Batch Size: 512 - Model: ResNet-50 (images/sec)",HIB,
"PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS)",HIB,8.86
"PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS)",HIB,4.62
"LeelaChessZero - Backend: BLAS (Nodes/s)",HIB,
"Numenta Anomaly Benchmark - Detector: KNN CAD (sec)",LIB,245.088
"Numenta Anomaly Benchmark - Detector: Relative Entropy (sec)",LIB,20.596
"Numenta Anomaly Benchmark - Detector: Windowed Gaussian (sec)",LIB,11.696
"Numenta Anomaly Benchmark - Detector: Earthgecko Skyline (sec)",LIB,209.161
"Numenta Anomaly Benchmark - Detector: Bayesian Changepoint (sec)",LIB,47.477
"Numenta Anomaly Benchmark - Detector: Contextual Anomaly Detector OSE (sec)",LIB,52.860
"Scikit-Learn - Benchmark: GLM (sec)",LIB,611.356
"Scikit-Learn - Benchmark: SAGA (sec)",LIB,728.144
"Scikit-Learn - Benchmark: Tree (sec)",LIB,39.661
"Scikit-Learn - Benchmark: Lasso (sec)",LIB,492.674
"Scikit-Learn - Benchmark: Glmnet (sec)",LIB,
"Scikit-Learn - Benchmark: Sparsify (sec)",LIB,74.982
"Scikit-Learn - Benchmark: Plot Ward (sec)",LIB,52.219
"Scikit-Learn - Benchmark: MNIST Dataset (sec)",LIB,64.562
"Scikit-Learn - Benchmark: Plot Neighbors (sec)",LIB,105.444
"Scikit-Learn - Benchmark: SGD Regression (sec)",LIB,119.971
"Scikit-Learn - Benchmark: SGDOneClassSVM (sec)",LIB,
"Scikit-Learn - Benchmark: Plot Lasso Path (sec)",LIB,226.078
"Scikit-Learn - Benchmark: Isolation Forest (sec)",LIB,
"Scikit-Learn - Benchmark: Plot Fast KMeans (sec)",LIB,
"Scikit-Learn - Benchmark: Text Vectorizers (sec)",LIB,51.999
"Scikit-Learn - Benchmark: Plot Hierarchical (sec)",LIB,186.554
"Scikit-Learn - Benchmark: Plot OMP vs. LARS (sec)",LIB,134.664
"Scikit-Learn - Benchmark: Feature Expansions (sec)",LIB,241.475
"Scikit-Learn - Benchmark: LocalOutlierFactor (sec)",LIB,83.873
"Scikit-Learn - Benchmark: TSNE MNIST Dataset (sec)",LIB,324.552
"Scikit-Learn - Benchmark: Isotonic / Logistic (sec)",LIB,
"Scikit-Learn - Benchmark: Plot Incremental PCA (sec)",LIB,45.043
"Scikit-Learn - Benchmark: Hist Gradient Boosting (sec)",LIB,96.579
"Scikit-Learn - Benchmark: Plot Parallel Pairwise (sec)",LIB,
"Scikit-Learn - Benchmark: Isotonic / Pathological (sec)",LIB,
"Scikit-Learn - Benchmark: RCV1 Logreg Convergencet (sec)",LIB,
"Scikit-Learn - Benchmark: Sample Without Replacement (sec)",LIB,95.571
"Scikit-Learn - Benchmark: Covertype Dataset Benchmark (sec)",LIB,446.723
"Scikit-Learn - Benchmark: Hist Gradient Boosting Adult (sec)",LIB,60.214
"Scikit-Learn - Benchmark: Isotonic / Perturbed Logarithm (sec)",LIB,
"Scikit-Learn - Benchmark: Hist Gradient Boosting Threading (sec)",LIB,336.099
"Scikit-Learn - Benchmark: Plot Singular Value Decomposition (sec)",LIB,199.818
"Scikit-Learn - Benchmark: Hist Gradient Boosting Higgs Boson (sec)",LIB,138.505
"Scikit-Learn - Benchmark: 20 Newsgroups / Logistic Regression (sec)",LIB,49.524
"Scikit-Learn - Benchmark: Plot Polynomial Kernel Approximation (sec)",LIB,233.561
"Scikit-Learn - Benchmark: Plot Non-Negative Matrix Factorization (sec)",LIB,
"Scikit-Learn - Benchmark: Hist Gradient Boosting Categorical Only (sec)",LIB,14.585
"Scikit-Learn - Benchmark: Kernel PCA Solvers / Time vs. N Samples (sec)",LIB,276.878
"Scikit-Learn - Benchmark: Kernel PCA Solvers / Time vs. N Components (sec)",LIB,319.472
"Scikit-Learn - Benchmark: Sparse Random Projections / 100 Iterations (sec)",LIB,905.082
"R Benchmark - (sec)",LIB,0.1847
"Numpy Benchmark - (Score)",HIB,463.17
"DeepSpeech - Acceleration: CPU (sec)",LIB,87.51840
"RNNoise - Input: 26 Minute Long Talking Sample (sec)",LIB,
"AI Benchmark Alpha - (Score)",HIB,
"ECP-CANDLE - Benchmark: P1B2 (sec)",LIB,
"ECP-CANDLE - Benchmark: P3B1 (sec)",LIB,
"ECP-CANDLE - Benchmark: P3B2 (sec)",LIB,
"Mobile Neural Network - Model: nasnet (ms)",LIB,9.072
"Mobile Neural Network - Model: mobilenetV3 (ms)",LIB,1.188
"Mobile Neural Network - Model: squeezenetv1.1 (ms)",LIB,2.900
"Mobile Neural Network - Model: resnet-v2-50 (ms)",LIB,26.312
"Mobile Neural Network - Model: SqueezeNetV1.0 (ms)",LIB,4.744
"Mobile Neural Network - Model: MobileNetV2_224 (ms)",LIB,2.705
"Mobile Neural Network - Model: mobilenet-v1-1.0 (ms)",LIB,3.373
"Mobile Neural Network - Model: inception-v3 (ms)",LIB,33.507
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,4.0749
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,490.7977
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec)",HIB,4.3138
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch)",LIB,231.8064
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,164.6514
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,12.1290
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,145.2740
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,6.8777
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,51.6867
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,38.6775
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec)",HIB,47.5516
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch)",LIB,21.0243
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,369.3583
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,5.3924
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,300.1121
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,3.3220
"Neural Magic DeepSparse - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,0.0070
"Neural Magic DeepSparse - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,211288.5780
"Neural Magic DeepSparse - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (items/sec)",HIB,0.0092
"Neural Magic DeepSparse - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (ms/batch)",LIB,108889.8452
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,51.5732
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,38.7596
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec)",HIB,47.4739
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch)",LIB,21.0587
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,23.8766
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,83.7416
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,23.4544
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,42.6304
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,34.4105
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,58.0973
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec)",HIB,32.2416
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch)",LIB,31.0112
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,5.0039
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,399.5003
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec)",HIB,4.8149
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch)",LIB,207.6729
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,78.1891
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,25.5563
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,64.8531
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,15.4107
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,4.1179
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,485.3757
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec)",HIB,4.3321
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,230.8279
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,50.3149
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,51.1188
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,56.0251
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences/sec)",HIB,57.5920
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,5.34494
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,7.91502
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,209.447
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,233.254
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,0.512476
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,0.742532
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,11.5402
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,17.3121
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,122.008
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,142.738
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,38.0422
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,53.2627
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,4.43575
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,5.10110
"OpenCV - Test: DNN - Deep Neural Network (ms)",LIB,27185
"PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec)",HIB,24.89
"PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec)",HIB,11.34
"PyTorch - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec)",HIB,12.57
"PyTorch - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec)",HIB,12.56
"PyTorch - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec)",HIB,12.69
"PyTorch - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec)",HIB,6.33
"PyTorch - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec)",HIB,12.91
"PyTorch - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec)",HIB,6.24
"PyTorch - Device: CPU - Batch Size: 512 - Model: ResNet-50 (batches/sec)",HIB,12.76
"PyTorch - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec)",HIB,6.23
"PyTorch - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec)",HIB,6.22
"PyTorch - Device: CPU - Batch Size: 512 - Model: ResNet-152 (batches/sec)",HIB,6.17
"PyTorch - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec)",HIB,8.10
"PyTorch - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec)",HIB,4.51
"PyTorch - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec)",HIB,4.55
"PyTorch - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec)",HIB,4.52
"PyTorch - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec)",HIB,4.50
"PyTorch - Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l (batches/sec)",HIB,4.48
"spaCy - (tokens/sec)",HIB,
"TensorFlow Lite - Model: SqueezeNet (us)",LIB,4322.71
"TensorFlow Lite - Model: Inception V4 (us)",LIB,64931.8
"TensorFlow Lite - Model: NASNet Mobile (us)",LIB,12005.2
"TensorFlow Lite - Model: Mobilenet Float (us)",LIB,3574.97
"TensorFlow Lite - Model: Mobilenet Quant (us)",LIB,5198.07
"TensorFlow Lite - Model: Inception ResNet V2 (us)",LIB,60949.0
"TNN - Target: CPU - Model: DenseNet (ms)",LIB,2446.578
"TNN - Target: CPU - Model: MobileNet v2 (ms)",LIB,201.109
"TNN - Target: CPU - Model: SqueezeNet v2 (ms)",LIB,46.205
"TNN - Target: CPU - Model: SqueezeNet v1.1 (ms)",LIB,166.131
"Whisper.cpp - Model: ggml-base.en - Input: 2016 State of the Union (sec)",LIB,246.28971
"Whisper.cpp - Model: ggml-small.en - Input: 2016 State of the Union (sec)",LIB,753.36065
"Whisper.cpp - Model: ggml-medium.en - Input: 2016 State of the Union (sec)",LIB,2280.50283
"XNNPACK - Model: FP32MobileNetV2 (us)",LIB,5545
"XNNPACK - Model: FP32MobileNetV3Large (us)",LIB,5484
"XNNPACK - Model: FP32MobileNetV3Small (us)",LIB,1302
"XNNPACK - Model: FP16MobileNetV2 (us)",LIB,3444
"XNNPACK - Model: FP16MobileNetV3Large (us)",LIB,3396
"XNNPACK - Model: FP16MobileNetV3Small (us)",LIB,965
"XNNPACK - Model: QU8MobileNetV2 (us)",LIB,2163
"XNNPACK - Model: QU8MobileNetV3Large (us)",LIB,1974
"XNNPACK - Model: QU8MobileNetV3Small (us)",LIB,728
"Llama.cpp - Model: llama-2-7b.Q4_0.gguf (Tokens/sec)",HIB,
"Llama.cpp - Model: llama-2-13b.Q4_0.gguf (Tokens/sec)",HIB,
"Llama.cpp - Model: llama-2-70b-chat.Q5_0.gguf (Tokens/sec)",HIB,
"Llamafile - Test: llava-v1.5-7b-q4 - Acceleration: CPU (Tokens/sec)",HIB,
"Llamafile - Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU (Tokens/sec)",HIB,
"Llamafile - Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU (Tokens/sec)",HIB,
"Caffe - Model: AlexNet - Acceleration: CPU - Iterations: 100 (ms)",LIB,44622
"Caffe - Model: AlexNet - Acceleration: CPU - Iterations: 200 (ms)",LIB,90229
"Caffe - Model: AlexNet - Acceleration: CPU - Iterations: 1000 (ms)",LIB,445933
"Caffe - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (ms)",LIB,103411
"Caffe - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (ms)",LIB,206462
"Caffe - Model: GoogleNet - Acceleration: CPU - Iterations: 1000 (ms)",LIB,1038320
"NCNN - Target: CPU - Model: mobilenet (ms)",LIB,21.50
"NCNN - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms)",LIB,5.35
"NCNN - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms)",LIB,3.18
"NCNN - Target: CPU - Model: shufflenet-v2 (ms)",LIB,2.25
"NCNN - Target: CPU - Model: mnasnet (ms)",LIB,3.53
"NCNN - Target: CPU - Model: efficientnet-b0 (ms)",LIB,8.15
"NCNN - Target: CPU - Model: blazeface (ms)",LIB,0.62
"NCNN - Target: CPU - Model: googlenet (ms)",LIB,16.13
"NCNN - Target: CPU - Model: vgg16 (ms)",LIB,109.92
"NCNN - Target: CPU - Model: resnet18 (ms)",LIB,12.34
"NCNN - Target: CPU - Model: alexnet (ms)",LIB,12.63
"NCNN - Target: CPU - Model: resnet50 (ms)",LIB,26.71
"NCNN - Target: CPU - Model: yolov4-tiny (ms)",LIB,40.12
"NCNN - Target: CPU - Model: squeezenet_ssd (ms)",LIB,17.42
"NCNN - Target: CPU - Model: regnety_400m (ms)",LIB,6.57
"NCNN - Target: CPU - Model: vision_transformer (ms)",LIB,135.49
"NCNN - Target: CPU - Model: FastestDet (ms)",LIB,3.31
"NCNN - Target: Vulkan GPU - Model: mobilenet (ms)",LIB,21.05
"NCNN - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms)",LIB,5.29
"NCNN - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms)",LIB,3.19
"NCNN - Target: Vulkan GPU - Model: shufflenet-v2 (ms)",LIB,2.21
"NCNN - Target: Vulkan GPU - Model: mnasnet (ms)",LIB,3.19
"NCNN - Target: Vulkan GPU - Model: efficientnet-b0 (ms)",LIB,7.99
"NCNN - Target: Vulkan GPU - Model: blazeface (ms)",LIB,0.61
"NCNN - Target: Vulkan GPU - Model: googlenet (ms)",LIB,15.83
"NCNN - Target: Vulkan GPU - Model: vgg16 (ms)",LIB,111.12
"NCNN - Target: Vulkan GPU - Model: resnet18 (ms)",LIB,12.25
"NCNN - Target: Vulkan GPU - Model: alexnet (ms)",LIB,12.54
"NCNN - Target: Vulkan GPU - Model: resnet50 (ms)",LIB,26.97
"NCNN - Target: Vulkan GPU - Model: yolov4-tiny (ms)",LIB,39.80
"NCNN - Target: Vulkan GPU - Model: squeezenet_ssd (ms)",LIB,17.49
"NCNN - Target: Vulkan GPU - Model: regnety_400m (ms)",LIB,6.53
"NCNN - Target: Vulkan GPU - Model: vision_transformer (ms)",LIB,136.23
"NCNN - Target: Vulkan GPU - Model: FastestDet (ms)",LIB,3.30
"Mlpack Benchmark - Benchmark: scikit_ica (sec)",LIB,64.74
"Mlpack Benchmark - Benchmark: scikit_qda (sec)",LIB,
"Mlpack Benchmark - Benchmark: scikit_svm (sec)",LIB,14.51
"Mlpack Benchmark - Benchmark: scikit_linearridgeregression (sec)",LIB,
"oneDNN - Harness: IP Shapes 1D - Engine: CPU (ms)",LIB,6.00601
"oneDNN - Harness: IP Shapes 3D - Engine: CPU (ms)",LIB,26.6998
"oneDNN - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms)",LIB,44.1438
"oneDNN - Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms)",LIB,10.7533
"oneDNN - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms)",LIB,10.3718
"oneDNN - Harness: Recurrent Neural Network Training - Engine: CPU (ms)",LIB,5708.97
"oneDNN - Harness: Recurrent Neural Network Inference - Engine: CPU (ms)",LIB,3071.31
"OpenVINO - Model: Face Detection FP16 - Device: CPU (FPS)",HIB,1.24
"OpenVINO - Model: Face Detection FP16 - Device: CPU (ms)",LIB,3213.28
"OpenVINO - Model: Person Detection FP16 - Device: CPU (FPS)",HIB,10.06
"OpenVINO - Model: Person Detection FP16 - Device: CPU (ms)",LIB,397.47
"OpenVINO - Model: Person Detection FP32 - Device: CPU (FPS)",HIB,10.04
"OpenVINO - Model: Person Detection FP32 - Device: CPU (ms)",LIB,398.50
"OpenVINO - Model: Vehicle Detection FP16 - Device: CPU (FPS)",HIB,62.61
"OpenVINO - Model: Vehicle Detection FP16 - Device: CPU (ms)",LIB,63.85
"OpenVINO - Model: Face Detection FP16-INT8 - Device: CPU (FPS)",HIB,5.26
"OpenVINO - Model: Face Detection FP16-INT8 - Device: CPU (ms)",LIB,759.24
"OpenVINO - Model: Face Detection Retail FP16 - Device: CPU (FPS)",HIB,330.42
"OpenVINO - Model: Face Detection Retail FP16 - Device: CPU (ms)",LIB,12.09
"OpenVINO - Model: Road Segmentation ADAS FP16 - Device: CPU (FPS)",HIB,18.24
"OpenVINO - Model: Road Segmentation ADAS FP16 - Device: CPU (ms)",LIB,219.10
"OpenVINO - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS)",HIB,214.21
"OpenVINO - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms)",LIB,18.66
"OpenVINO - Model: Weld Porosity Detection FP16 - Device: CPU (FPS)",HIB,136.74
"OpenVINO - Model: Weld Porosity Detection FP16 - Device: CPU (ms)",LIB,29.24
"OpenVINO - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS)",HIB,833.34
"OpenVINO - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms)",LIB,4.80
"OpenVINO - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS)",HIB,63.19
"OpenVINO - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms)",LIB,63.27
"OpenVINO - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS)",HIB,14.95
"OpenVINO - Model: Machine Translation EN To DE FP16 - Device: CPU (ms)",LIB,267.44
"OpenVINO - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS)",HIB,513.04
"OpenVINO - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms)",LIB,7.79
"OpenVINO - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS)",HIB,166.71
"OpenVINO - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms)",LIB,23.99
"OpenVINO - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (FPS)",HIB,208.84
"OpenVINO - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (ms)",LIB,19.10
"OpenVINO - Model: Handwritten English Recognition FP16 - Device: CPU (FPS)",HIB,66.40
"OpenVINO - Model: Handwritten English Recognition FP16 - Device: CPU (ms)",LIB,60.21
"OpenVINO - Model: Person Re-Identification Retail FP16 - Device: CPU (FPS)",HIB,201.90
"OpenVINO - Model: Person Re-Identification Retail FP16 - Device: CPU (ms)",LIB,19.80
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS)",HIB,3481.92
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms)",LIB,1.14
"OpenVINO - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS)",HIB,88.41
"OpenVINO - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms)",LIB,45.22
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS)",HIB,10592.67
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms)",LIB,0.37
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,19.8719
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,19.5609
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,17.8485
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,17.3627
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,187.689
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,130.814
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,4.77472
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,4.28771
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,1975.36
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,1353.24
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,86.6977
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,57.7900
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,8.19737
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,7.00536
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,26.2946
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,19.2123
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,225.465
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,196.432