h510-i310100-1: Intel Core i3-10100 testing with an ASRock H510M-HVS (P1.60 BIOS) and Intel UHD 630 CML GT2 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Test system:
  Processor: Intel Core i3-10100 @ 4.30GHz (4 Cores / 8 Threads)
  Motherboard: ASRock H510M-HVS (P1.60 BIOS)
  Chipset: Intel Device 43ef
  Memory: 3584MB
  Disk: 1000GB Western Digital WDS100T2B0A
  Graphics: Intel UHD 630 CML GT2 3GB (1100MHz)
  Audio: Realtek ALC897
  Monitor: G185BGEL01
  Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 20.04
  Kernel: 5.15.0-88-generic (x86_64)
  Desktop: GNOME Shell 3.36.9
  Display Server: X Server 1.20.13
  OpenGL: 4.6 Mesa 21.2.6
  Vulkan: 1.2.182
  Compiler: GCC 9.4.0
  File-System: ext4
  Screen Resolution: 1368x768

All results below are from this single configuration (labeled "Intel UHD 630 CML GT2" in the original bar charts); entries marked "no result" were listed in the report without a value.

TensorFlow 2.12 - Device: CPU (images/sec; higher is better):
  Batch 16 - VGG-16: 1.85
  Batch 32 - VGG-16: 1.58
  Batch 64 - VGG-16: 1.05
  Batch 16 - AlexNet: 21.65
  Batch 256 - VGG-16: no result
  Batch 32 - AlexNet: 28.12
  Batch 512 - VGG-16: no result
  Batch 64 - AlexNet:
    32.89
  Batch 256 - AlexNet: 36.80
  Batch 512 - AlexNet: 36.83
  Batch 16 - GoogLeNet: 11.17
  Batch 16 - ResNet-50: 3.84
  Batch 32 - GoogLeNet: 11.45
  Batch 32 - ResNet-50: 2.95
  Batch 64 - GoogLeNet: 11.50
  Batch 64 - ResNet-50: 1.74
  Batch 256 - GoogLeNet:
    4.84
  Batch 256 - ResNet-50: no result
  Batch 512 - GoogLeNet: no result
  Batch 512 - ResNet-50: no result

PlaidML - FP16: No - Mode: Inference - Device: CPU (FPS; higher is better):
  VGG16: 6.09
  ResNet 50: 3.37

LeelaChessZero 0.28 - Backend: BLAS (nodes/sec; higher is better): 147

Numenta Anomaly Benchmark 1.1 (seconds; lower is better):
  KNN CAD: 401.29
  Relative Entropy: 39.53
  Windowed Gaussian: 18.37
  Earthgecko Skyline: 395.73
  Bayesian Changepoint: 126.13
  Contextual Anomaly Detector OSE: 99.65

Scikit-Learn 1.2.2 (seconds; lower is better):
  GLM:
    1042.34
  SAGA: 1013.79
  Tree: 45.33
  Lasso: 560.62
  Glmnet: no result
  Sparsify: 100.83
  Plot Ward: 82.99
  MNIST Dataset: 82.15
  Plot Neighbors: 174.83
  SGD Regression: 152.30
  SGDOneClassSVM: no result
  Plot Lasso Path: 357.73
  Isolation Forest: no result
  Plot Fast KMeans: no result
  Text Vectorizers: 74.61
  Plot Hierarchical:
    256.95
  Plot OMP vs. LARS: 192.00
  Feature Expansions: 195.64
  LocalOutlierFactor: 127.76
  TSNE MNIST Dataset: 467.53
  Isotonic / Logistic: no result
  Plot Incremental PCA: 54.07
  Hist Gradient Boosting: 158.50
  Plot Parallel Pairwise: no result
  Isotonic / Pathological: no result
  RCV1 Logreg Convergence: no result
  Sample Without Replacement: 137.89
  Covertype Dataset Benchmark: 556.13
  Hist Gradient Boosting Adult: 94.91
  Isotonic / Perturbed Logarithm: no result
  Hist Gradient Boosting Threading:
    361.51
  Plot Singular Value Decomposition: 330.98
  Hist Gradient Boosting Higgs Boson: 170.74
  20 Newsgroups / Logistic Regression: 62.13
  Plot Polynomial Kernel Approximation: 275.14
  Plot Non-Negative Matrix Factorization: no result
  Hist Gradient Boosting Categorical Only: 23.30
  Kernel PCA Solvers / Time vs. N Samples: 430.92
  Kernel PCA Solvers / Time vs. N Components: 355.19
  Sparse Random Projections / 100 Iterations: 1736.46

R Benchmark (seconds; lower is better): 0.2675

Numpy Benchmark (score; higher is better): 326.47

DeepSpeech 0.6 - Acceleration: CPU (seconds; lower is better): 106.13

RNNoise 2020-06-28 (seconds; lower is better):
    25.32

AI Benchmark Alpha 0.1.2 (score; higher is better): no result

ECP-CANDLE 0.4 (seconds; lower is better):
  P1B2: no result
  P3B1: no result
  P3B2: no result

Mobile Neural Network 2.1 (ms; lower is better):
  nasnet: 15.10
  mobilenetV3: 2.141
  squeezenetv1.1: 4.132
  resnet-v2-50: 49.75
  SqueezeNetV1.0: 9.364
  MobileNetV2_224: 5.768
  mobilenet-v1-1.0: 5.891
  inception-v3: 57.66

Neural Magic DeepSparse 1.5 (items/sec: higher is better; ms/batch: lower is better):
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: 2.9599 items/sec,
    675.69 ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream: 2.8978 items/sec, 345.08 ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream: 67.13 items/sec, 29.77 ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream: 64.01 items/sec,
    15.61 ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream: no result
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream: no result
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: no result
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream: no result
  ResNet-50, Baseline - Asynchronous Multi-Stream: 43.81 items/sec, 45.63 ms/batch
  ResNet-50, Baseline - Synchronous Single-Stream: 38.59 items/sec, 25.93 ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream: 245.34 items/sec,
    8.1295 ms/batch
  ResNet-50, Sparse INT8 - Synchronous Single-Stream: 222.30 items/sec, 4.4891 ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream: 18.84 items/sec, 106.14 ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream: 18.65 items/sec, 53.61 ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream: 4.1684 items/sec, 479.28 ms/batch
  BERT-Large, NLP Question Answering - Synchronous Single-Stream:
    4.1704 items/sec, 239.75 ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: 41.87 items/sec, 47.74 ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream: 39.78 items/sec, 25.13 ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream: 19.31 items/sec, 103.57 ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream: 18.86 items/sec,
    53.01 ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: 27.56 items/sec, 72.55 ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream: 24.11 items/sec, 41.47 ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream: 3.9479 items/sec, 506.57 ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream: 3.7043 items/sec,
    269.94 ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream: 32.09 items/sec, 62.31 ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream: 31.42 items/sec, 31.82 ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream: no result
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream: no result
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: 2.9270 items/sec, 683.27 ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    2.8786 items/sec, 347.38 ms/batch

ONNX Runtime 1.14 - Device: CPU (inferences/sec; higher is better):
  GPT-2 - Parallel: 44.84
  GPT-2 - Standard: 47.65
  yolov4 - Parallel: no result
  yolov4 - Standard: no result
  bertsquad-12 - Parallel: 3.91378
  bertsquad-12 - Standard: 5.24740
  CaffeNet 12-int8 - Parallel: 139.50
  CaffeNet 12-int8 - Standard: 158.59
  fcn-resnet101-11 - Parallel:
    0.414443
  fcn-resnet101-11 - Standard: 0.574477
  ArcFace ResNet-100 - Parallel: 9.89471
  ArcFace ResNet-100 - Standard: 14.11
  ResNet50 v1-12-int8 - Parallel: 59.75
  ResNet50 v1-12-int8 - Standard: 75.89
  super-resolution-10 - Parallel: 28.33
  super-resolution-10 - Standard: 42.70
  Faster R-CNN R-50-FPN-int8 - Parallel: 3.18997
  Faster R-CNN R-50-FPN-int8 - Standard: 3.96508

OpenCV 4.7 - Test: DNN - Deep Neural Network (ms; lower is better):
    40614

PyTorch 2.1 - Device: CPU (batches/sec; higher is better):
  Batch 1 - ResNet-50: 12.82
  Batch 1 - ResNet-152: 5.99
  Batch 16 - ResNet-50: 6.95
  Batch 32 - ResNet-50: 6.93
  Batch 64 - ResNet-50: 7.07
  Batch 16 - ResNet-152: 3.41
  Batch 256 - ResNet-50: 7.13
  Batch 32 - ResNet-152: 3.42
  Batch 512 - ResNet-50: 7.13
  Batch 64 - ResNet-152: 3.50
  Batch 256 - ResNet-152:
    3.44
  Batch 512 - ResNet-152: 3.40
  Batch 1 - Efficientnet_v2_l: 4.43
  Batch 16 - Efficientnet_v2_l: 2.21
  Batch 32 - Efficientnet_v2_l: 2.27
  Batch 64 - Efficientnet_v2_l: 2.27
  Batch 256 - Efficientnet_v2_l: 2.25
  Batch 512 - Efficientnet_v2_l: 2.27

spaCy 3.4.1 (tokens/sec; higher is better): no result

TensorFlow Lite 2022-05-18 (microseconds; lower is better):
  SqueezeNet: 7050.26
  Inception V4: 98713.2
  NASNet Mobile: 17768.7
  Mobilenet Float:
    5753.91
  Mobilenet Quant: 6821.92
  Inception ResNet V2: 88427.7

TNN 0.3 - Target: CPU (ms; lower is better):
  DenseNet: 4010.89
  MobileNet v2: 346.04
  SqueezeNet v2: 70.17
  SqueezeNet v1.1: 313.97

Whisper.cpp 1.4 - Input: 2016 State of the Union (seconds; lower is better):
  ggml-base.en: 419.51
  ggml-small.en: 1335.21
  ggml-medium.en: 4481.69

Caffe 2020-02-13 - Acceleration: CPU (milliseconds; lower is better):
  AlexNet - 100 iterations: 65490
  AlexNet - 200 iterations: 131136
  AlexNet - 1000 iterations:
662266 |=============================================== Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 100 Milli-Seconds < Lower Is Better Intel UHD 630 CML GT2 . 154104 |=============================================== Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 200 Milli-Seconds < Lower Is Better Intel UHD 630 CML GT2 . 309367 |=============================================== Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 1000 Milli-Seconds < Lower Is Better Intel UHD 630 CML GT2 . 1546367 |============================================== NCNN 20230517 Target: CPU - Model: mobilenet ms < Lower Is Better Intel UHD 630 CML GT2 . 40.47 |================================================ NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 ms < Lower Is Better Intel UHD 630 CML GT2 . 10.90 |================================================ NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 ms < Lower Is Better Intel UHD 630 CML GT2 . 6.53 |================================================= NCNN 20230517 Target: CPU - Model: shufflenet-v2 ms < Lower Is Better Intel UHD 630 CML GT2 . 3.17 |================================================= NCNN 20230517 Target: CPU - Model: mnasnet ms < Lower Is Better Intel UHD 630 CML GT2 . 6.75 |================================================= NCNN 20230517 Target: CPU - Model: efficientnet-b0 ms < Lower Is Better Intel UHD 630 CML GT2 . 13.84 |================================================ NCNN 20230517 Target: CPU - Model: blazeface ms < Lower Is Better Intel UHD 630 CML GT2 . 0.97 |================================================= NCNN 20230517 Target: CPU - Model: googlenet ms < Lower Is Better Intel UHD 630 CML GT2 . 26.08 |================================================ NCNN 20230517 Target: CPU - Model: vgg16 ms < Lower Is Better Intel UHD 630 CML GT2 . 
148.39 |=============================================== NCNN 20230517 Target: CPU - Model: resnet18 ms < Lower Is Better Intel UHD 630 CML GT2 . 22.09 |================================================ NCNN 20230517 Target: CPU - Model: alexnet ms < Lower Is Better Intel UHD 630 CML GT2 . 18.08 |================================================ NCNN 20230517 Target: CPU - Model: resnet50 ms < Lower Is Better Intel UHD 630 CML GT2 . 51.46 |================================================ NCNN 20230517 Target: CPU - Model: yolov4-tiny ms < Lower Is Better Intel UHD 630 CML GT2 . 59.47 |================================================ NCNN 20230517 Target: CPU - Model: squeezenet_ssd ms < Lower Is Better Intel UHD 630 CML GT2 . 26.02 |================================================ NCNN 20230517 Target: CPU - Model: regnety_400m ms < Lower Is Better Intel UHD 630 CML GT2 . 10.28 |================================================ NCNN 20230517 Target: CPU - Model: vision_transformer ms < Lower Is Better Intel UHD 630 CML GT2 . 168.97 |=============================================== NCNN 20230517 Target: CPU - Model: FastestDet ms < Lower Is Better Intel UHD 630 CML GT2 . 6.06 |================================================= NCNN 20230517 Target: Vulkan GPU - Model: mobilenet ms < Lower Is Better Intel UHD 630 CML GT2 . 40.45 |================================================ NCNN 20230517 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 ms < Lower Is Better Intel UHD 630 CML GT2 . 10.90 |================================================ NCNN 20230517 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 ms < Lower Is Better Intel UHD 630 CML GT2 . 6.55 |================================================= NCNN 20230517 Target: Vulkan GPU - Model: shufflenet-v2 ms < Lower Is Better Intel UHD 630 CML GT2 . 3.18 |================================================= NCNN 20230517 Target: Vulkan GPU - Model: mnasnet ms < Lower Is Better Intel UHD 630 CML GT2 . 
6.77 |================================================= NCNN 20230517 Target: Vulkan GPU - Model: efficientnet-b0 ms < Lower Is Better Intel UHD 630 CML GT2 . 13.91 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: blazeface ms < Lower Is Better Intel UHD 630 CML GT2 . 0.99 |================================================= NCNN 20230517 Target: Vulkan GPU - Model: googlenet ms < Lower Is Better Intel UHD 630 CML GT2 . 26.11 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: vgg16 ms < Lower Is Better Intel UHD 630 CML GT2 . 148.83 |=============================================== NCNN 20230517 Target: Vulkan GPU - Model: resnet18 ms < Lower Is Better Intel UHD 630 CML GT2 . 22.04 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: alexnet ms < Lower Is Better Intel UHD 630 CML GT2 . 18.12 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: resnet50 ms < Lower Is Better Intel UHD 630 CML GT2 . 51.51 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: yolov4-tiny ms < Lower Is Better Intel UHD 630 CML GT2 . 59.60 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: squeezenet_ssd ms < Lower Is Better Intel UHD 630 CML GT2 . 25.97 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: regnety_400m ms < Lower Is Better Intel UHD 630 CML GT2 . 10.31 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: vision_transformer ms < Lower Is Better Intel UHD 630 CML GT2 . 169.32 |=============================================== NCNN 20230517 Target: Vulkan GPU - Model: FastestDet ms < Lower Is Better Intel UHD 630 CML GT2 . 6.02 |================================================= Mlpack Benchmark Benchmark: scikit_ica Seconds < Lower Is Better Intel UHD 630 CML GT2 . 
84.73 |================================================ Mlpack Benchmark Benchmark: scikit_qda Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_svm Seconds < Lower Is Better Intel UHD 630 CML GT2 . 25.55 |================================================ Mlpack Benchmark Benchmark: scikit_linearridgeregression Seconds < Lower Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 12.78 |================================================ oneDNN 3.3 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 32.19 |================================================ oneDNN 3.3 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 3.10155 |============================================== oneDNN 3.3 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 5.50061 |============================================== oneDNN 3.3 Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 59.67 |================================================ oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 10.44 |================================================ oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 15.06 |================================================ oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 
52.10 |================================================ oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 4.34929 |============================================== oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 7.88714 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 10295.8 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 6656.38 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 10313.9 |============================================== oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 6632.54 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 10350.7 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 
6626.14 |============================================== OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 1.09 |================================================= OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 3660.46 |============================================== OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 7.96 |================================================= OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 502.35 |=============================================== OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 7.96 |================================================= OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 502.32 |=============================================== OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 43.49 |================================================ OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 91.92 |================================================ OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 2.18 |================================================= OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 1829.80 |============================================== OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 138.04 |=============================================== OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 
28.95 |================================================ OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 10.30 |================================================ OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 388.22 |=============================================== OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 101.81 |=============================================== OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 39.27 |================================================ OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 102.52 |=============================================== OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 38.99 |================================================ OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 387.43 |=============================================== OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 10.31 |================================================ OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 38.69 |================================================ OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 103.35 |=============================================== OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 
10.24 |================================================ OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 390.31 |=============================================== OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 215.06 |=============================================== OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 18.58 |================================================ OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 85.89 |================================================ OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 46.56 |================================================ OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 39.30 |================================================ OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 101.72 |=============================================== OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 1937.85 |============================================== OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 2.04 |================================================= OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 47.69 |================================================ OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 
83.84 |================================================ OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 4197.73 |============================================== OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 0.94 |================================================= ONNX Runtime 1.14 Model: GPT-2 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 22.30 |================================================ ONNX Runtime 1.14 Model: GPT-2 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 20.99 |================================================ ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 255.55 |=============================================== ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 190.62 |=============================================== ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 7.16796 |============================================== ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 6.30439 |============================================== ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 2437.66 |============================================== ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 
1740.74 |============================================== ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 101.07 |=============================================== ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 70.86 |================================================ ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 16.74 |================================================ ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 13.18 |================================================ ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 35.38 |================================================ ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 23.42 |================================================ ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 313.48 |=============================================== ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 630 CML GT2 . 252.20 |===============================================
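The latency-style results above (microseconds or milliseconds per inference) can be converted to throughput for easier side-by-side reading with the images/sec figures elsewhere in this report. A minimal sketch in plain Python, using values transcribed from the results above; the helper names are illustrative, not part of the Phoronix Test Suite:

```python
# Convert per-inference latencies reported above into throughput,
# and compare ONNX Runtime's Standard vs Parallel executors.

def us_to_per_sec(microseconds):
    """Microseconds per inference -> inferences per second."""
    return 1_000_000 / microseconds

def speedup(parallel_ms, standard_ms):
    """Ratio by which the Standard executor beats Parallel (> 1 means faster)."""
    return parallel_ms / standard_ms

# TensorFlow Lite latencies (microseconds, transcribed from the results above)
tflite_us = {
    "SqueezeNet": 7050.26,
    "Inception V4": 98713.2,
    "Mobilenet Float": 5753.91,
}
for model, us in tflite_us.items():
    print(f"{model}: {us_to_per_sec(us):.1f} inferences/sec")

# ONNX Runtime inference time cost (ms): Parallel vs Standard executor
onnx_ms = {
    "bertsquad-12": (255.55, 190.62),
    "fcn-resnet101-11": (2437.66, 1740.74),
    "ArcFace ResNet-100": (101.07, 70.86),
}
for model, (par, std) in onnx_ms.items():
    print(f"{model}: Standard is {speedup(par, std):.2f}x faster than Parallel")
```

This makes two patterns in the data easier to see: TensorFlow Lite's small mobile models sustain well over 100 inferences/sec on this four-core CPU while the large Inception V4 manages roughly ten, and the Standard executor consistently outruns the Parallel executor on this hardware by roughly 1.3x to 1.4x.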