wxl2pay-h510-ml

Intel Core i3-10105 testing with an ASRock H510M-HDV/M.2 SE (P1.60 BIOS) and Intel UHD 630 CML GT2 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

System under test ("Intel UHD 630 CML GT2"):

  Processor:         Intel Core i3-10105 @ 4.40GHz (4 Cores / 8 Threads)
  Motherboard:       ASRock H510M-HDV/M.2 SE (P1.60 BIOS)
  Chipset:           Intel Comet Lake PCH
  Memory:            3584MB
  Disk:              1000GB Western Digital WDS100T2B0A
  Graphics:          Intel UHD 630 CML GT2 3GB (1100MHz)
  Audio:             Realtek ALC897
  Monitor:           G185BGEL01
  Network:           Realtek RTL8111/8168/8411
  OS:                Ubuntu 20.04
  Kernel:            5.15.0-83-generic (x86_64)
  Desktop:           GNOME Shell 3.36.9
  Display Server:    X Server 1.20.13
  OpenGL:            4.6 Mesa 21.2.6
  Vulkan:            1.2.182
  Compiler:          GCC 9.4.0
  File-System:       ext4
  Screen Resolution: 1368x768

Benchmark Results

LeelaChessZero 0.28 (Nodes Per Second; higher is better)
  Backend: BLAS: 150

oneDNN 3.1 (Engine: CPU; ms; lower is better)
  Harness: IP Shapes 1D - Data Type: f32: 12.58
  Harness: IP Shapes 3D - Data Type: f32: 28.14
  Harness: IP Shapes 1D - Data Type: u8s8f32: 3.06986
  Harness: IP Shapes 3D - Data Type: u8s8f32: 5.21109
  Harness: IP Shapes 1D - Data Type: bf16bf16bf16: no result
  Harness: IP Shapes 3D - Data Type: bf16bf16bf16: no result
  Harness: Convolution Batch Shapes Auto - Data Type: f32: 59.20
  Harness: Deconvolution Batch shapes_1d - Data Type: f32: 11.97
  Harness: Deconvolution Batch shapes_3d - Data Type: f32: 14.72
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32: 51.15
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32: 4.27840
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32: 7.61768
  Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16: no result
  Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16: no result
  Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16: no result
  Harness: Recurrent Neural Network Training - Data Type: f32: 9905.23
  Harness: Recurrent Neural Network Inference - Data Type: f32: 6419.16
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32: 9995.81
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32: 6376.18
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16: 9956.01
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16: 6433.01

Numpy Benchmark (Score; higher is better): 335.66

DeepSpeech 0.6 (Acceleration: CPU; Seconds; lower is better): 110.86

R Benchmark (Seconds; lower is better): 0.2640

RNNoise 2020-06-28 (Seconds; lower is better): 24.62

TensorFlow Lite 2022-05-18 (Microseconds; lower is better)
  Model: SqueezeNet: 7954.00
  Model: Inception V4: 94854.7
  Model: NASNet Mobile: 16980.6
  Model: Mobilenet Float: 5296.02
  Model: Mobilenet Quant: 6530.93
  Model: Inception ResNet V2: 179365.3

TensorFlow 2.12 (Device: CPU; images/sec; higher is better)
  Batch Size: 16 - Model: VGG-16: 1.85
  Batch Size: 32 - Model: VGG-16: 1.57
  Batch Size: 64 - Model: VGG-16: 1.01
  Batch Size: 256 - Model: VGG-16: no result
  Batch Size: 512 - Model: VGG-16: no result
  Batch Size: 16 - Model: AlexNet: 21.64
  Batch Size: 32 - Model: AlexNet: 28.20
  Batch Size: 64 - Model: AlexNet: 32.96
  Batch Size: 256 - Model: AlexNet: 37.11
  Batch Size: 512 - Model: AlexNet: 37.35
  Batch Size: 16 - Model: GoogLeNet: 11.17
  Batch Size: 32 - Model: GoogLeNet: 11.49
  Batch Size: 64 - Model: GoogLeNet: 11.58
  Batch Size: 256 - Model: GoogLeNet: 4.86
  Batch Size: 512 - Model: GoogLeNet: no result
  Batch Size: 16 - Model: ResNet-50: 3.85
  Batch Size: 32 - Model: ResNet-50: 2.76
  Batch Size: 64 - Model: ResNet-50: 1.67
  Batch Size: 256 - Model: ResNet-50: no result
  Batch Size: 512 - Model: ResNet-50: no result

Neural Magic DeepSparse 1.5 (items/sec higher is better; ms/batch lower is better)
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: 2.9679 items/sec; 673.65 ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream: 2.9456 items/sec; 339.48 ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream: 68.00 items/sec; 29.38 ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream: 66.22 items/sec; 15.09 ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream: 29.30 items/sec; 68.23 ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream: 22.57 items/sec; 44.30 ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: 7.0357 items/sec; 284.16 ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream: 7.6194 items/sec; 131.24 ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream: 43.02 items/sec; 46.48 ms/batch
  ResNet-50, Baseline - Synchronous Single-Stream: 40.24 items/sec; 24.84 ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream: 248.74 items/sec; 8.0184 ms/batch
  ResNet-50, Sparse INT8 - Synchronous Single-Stream: 225.54 items/sec; 4.4246 ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream: 19.28 items/sec; 103.73 ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream: 19.06 items/sec; 52.46 ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream: 4.2921 items/sec; 465.67 ms/batch
  BERT-Large, NLP Question Answering - Synchronous Single-Stream: 4.2085 items/sec; 237.58 ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: 44.54 items/sec; 44.89 ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream: 40.40 items/sec; 24.74 ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream: 19.68 items/sec; 101.61 ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream: 19.37 items/sec; 51.60 ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: 27.26 items/sec; 73.34 ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream: 24.41 items/sec; 40.97 ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream: 4.0056 items/sec; 499.27 ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream: 3.7437 items/sec; 267.10 ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream: 32.69 items/sec; 61.15 ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream: 32.37 items/sec; 30.89 ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream: 14.35 items/sec; 139.37 ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream: 12.45 items/sec; 80.31 ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: 2.9720 items/sec; 672.92 ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream: 2.9162 items/sec; 342.90 ms/batch

spaCy 3.4.1 (tokens/sec; higher is better): no result

Caffe 2020-02-13 (Acceleration: CPU; Milli-Seconds; lower is better)
  Model: AlexNet - Iterations: 100: 64887
  Model: AlexNet - Iterations: 200: 129888
  Model: AlexNet - Iterations: 1000: 653167
  Model: GoogleNet - Iterations: 100: 150069
  Model: GoogleNet - Iterations: 200: 299544
  Model: GoogleNet - Iterations: 1000: 1500340

Mobile Neural Network 2.1 (ms; lower is better)
  Model: nasnet: 14.23
  Model: mobilenetV3: 2.113
  Model: squeezenetv1.1: 4.082
  Model: resnet-v2-50: 48.91
  Model: SqueezeNetV1.0: 9.145
  Model: MobileNetV2_224: 5.587
  Model: mobilenet-v1-1.0: 5.690
  Model: inception-v3: 56.89

NCNN 20230517 - Target: CPU (ms; lower is better)
  Model: mobilenet: 40.34
  Target: CPU-v2-v2 - Model: mobilenet-v2: 10.84
  Target: CPU-v3-v3 - Model: mobilenet-v3: 6.51
  Model: shufflenet-v2: 3.13
  Model: mnasnet: 6.71
  Model: efficientnet-b0: 13.66
  Model: blazeface: 0.96
  Model: googlenet: 26.17
  Model: vgg16: 148.72
  Model: resnet18: 22.06
  Model: alexnet: 18.01
  Model: resnet50: 51.44
  Model: yolov4-tiny: 59.59
  Model: squeezenet_ssd: 25.87
  Model: regnety_400m: 10.14
  Model: vision_transformer: 165.49
  Model: FastestDet: 5.97

NCNN 20230517 - Target: Vulkan GPU (ms; lower is better)
  Model: mobilenet: 40.43
  Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2: 10.82
  Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: 6.50
  Model: shufflenet-v2: 3.13
  Model: mnasnet: 6.75
  Model: efficientnet-b0: 13.68
  Model: blazeface: 0.98
  Model: googlenet: 26.05
  Model: vgg16: 148.53
  Model: resnet18: 22.02
  Model: alexnet: 18.11
  Model: resnet50: 51.57
  Model: yolov4-tiny: 59.52
  Model: squeezenet_ssd: 25.97
  Model: regnety_400m: 10.12
  Model: vision_transformer: 166.32
  Model: FastestDet: 5.94

TNN 0.3 - Target: CPU (ms; lower is better)
  Model: DenseNet: 3918.63
  Model: MobileNet v2: 337.29
  Model: SqueezeNet v2: 68.01
  Model: SqueezeNet v1.1: 305.52

PlaidML (FP16: No - Mode: Inference - Device: CPU; FPS; higher is better)
  Network: VGG16: 6.36
  Network: ResNet 50: 3.42

OpenVINO 2023.1 (Device: CPU; FPS higher is better; ms lower is better)
  Model: Face Detection FP16: 1.12 FPS; 3524.39 ms
  Model: Person Detection FP16: 8.04 FPS; 497.31 ms
  Model: Person Detection FP32: 8.01 FPS; 499.32 ms
  Model: Vehicle Detection FP16: 43.69 FPS; 91.50 ms
  Model: Face Detection FP16-INT8: 2.34 FPS; 1706.58 ms
  Model: Face Detection Retail FP16: 140.31 FPS; 28.47 ms
  Model: Road Segmentation ADAS FP16: 10.34 FPS; 386.61 ms
  Model: Vehicle Detection FP16-INT8: 110.72 FPS; 36.09 ms
  Model: Weld Porosity Detection FP16: 107.92 FPS; 37.04 ms
  Model: Face Detection Retail FP16-INT8: 415.02 FPS; 9.62 ms
  Model: Road Segmentation ADAS FP16-INT8: 39.34 FPS; 101.64 ms
  Model: Machine Translation EN To DE FP16: 10.57 FPS; 378.26 ms
  Model: Weld Porosity Detection FP16-INT8: 230.89 FPS; 17.31 ms
  Model: Person Vehicle Bike Detection FP16: 88.37 FPS; 45.23 ms
  Model: Handwritten English Recognition FP16: 39.75 FPS; ms result truncated in source
100.60 |=============================================== OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 1970.08 |============================================== OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 2.01 |================================================= OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 48.58 |================================================ OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 82.37 |================================================ OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 630 CML GT2 . 4859.57 |============================================== OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 630 CML GT2 . 0.81 |================================================= ECP-CANDLE 0.4 Benchmark: P1B2 Seconds < Lower Is Better ECP-CANDLE 0.4 Benchmark: P3B1 Seconds < Lower Is Better ECP-CANDLE 0.4 Benchmark: P3B2 Seconds < Lower Is Better Numenta Anomaly Benchmark 1.1 Detector: KNN CAD Seconds < Lower Is Better Intel UHD 630 CML GT2 . 383.71 |=============================================== Numenta Anomaly Benchmark 1.1 Detector: Relative Entropy Seconds < Lower Is Better Intel UHD 630 CML GT2 . 38.97 |================================================ Numenta Anomaly Benchmark 1.1 Detector: Windowed Gaussian Seconds < Lower Is Better Intel UHD 630 CML GT2 . 17.92 |================================================ Numenta Anomaly Benchmark 1.1 Detector: Earthgecko Skyline Seconds < Lower Is Better Intel UHD 630 CML GT2 . 
389.30 |=============================================== Numenta Anomaly Benchmark 1.1 Detector: Bayesian Changepoint Seconds < Lower Is Better Intel UHD 630 CML GT2 . 125.56 |=============================================== Numenta Anomaly Benchmark 1.1 Detector: Contextual Anomaly Detector OSE Seconds < Lower Is Better Intel UHD 630 CML GT2 . 97.57 |================================================ ONNX Runtime 1.14 Model: GPT-2 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: GPT-2 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: yolov4 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: yolov4 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard Inferences Per Second > 
Higher Is Better ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better AI Benchmark Alpha 0.1.2 Score > Higher Is Better Mlpack Benchmark Benchmark: scikit_ica Seconds < Lower Is Better Intel UHD 630 CML GT2 . 82.06 |================================================ Mlpack Benchmark Benchmark: scikit_qda Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_svm Seconds < Lower Is Better Intel UHD 630 CML GT2 . 24.89 |================================================ Mlpack Benchmark Benchmark: scikit_linearridgeregression Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: GLM Seconds < Lower Is Better Intel UHD 630 CML GT2 . 1053.25 |============================================== Scikit-Learn 1.2.2 Benchmark: SAGA Seconds < Lower Is Better Intel UHD 630 CML GT2 . 995.70 |=============================================== Scikit-Learn 1.2.2 Benchmark: Tree Seconds < Lower Is Better Intel UHD 630 CML GT2 . 100.99 |=============================================== Scikit-Learn 1.2.2 Benchmark: Lasso Seconds < Lower Is Better Intel UHD 630 CML GT2 . 547.02 |=============================================== Scikit-Learn 1.2.2 Benchmark: Glmnet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sparsify Seconds < Lower Is Better Intel UHD 630 CML GT2 . 98.94 |================================================ Scikit-Learn 1.2.2 Benchmark: Plot Ward Seconds < Lower Is Better Intel UHD 630 CML GT2 . 
81.29 |================================================ Scikit-Learn 1.2.2 Benchmark: MNIST Dataset Seconds < Lower Is Better Intel UHD 630 CML GT2 . 97.60 |================================================ Scikit-Learn 1.2.2 Benchmark: Plot Neighbors Seconds < Lower Is Better Intel UHD 630 CML GT2 . 332.56 |=============================================== Scikit-Learn 1.2.2 Benchmark: SGD Regression Seconds < Lower Is Better Intel UHD 630 CML GT2 . 151.87 |=============================================== Scikit-Learn 1.2.2 Benchmark: SGDOneClassSVM Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Lasso Path Seconds < Lower Is Better Intel UHD 630 CML GT2 . 355.54 |=============================================== Scikit-Learn 1.2.2 Benchmark: Isolation Forest Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Fast KMeans Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Text Vectorizers Seconds < Lower Is Better Intel UHD 630 CML GT2 . 72.48 |================================================ Scikit-Learn 1.2.2 Benchmark: Plot Hierarchical Seconds < Lower Is Better Intel UHD 630 CML GT2 . 261.03 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot OMP vs. LARS Seconds < Lower Is Better Intel UHD 630 CML GT2 . 191.59 |=============================================== Scikit-Learn 1.2.2 Benchmark: Feature Expansions Seconds < Lower Is Better Intel UHD 630 CML GT2 . 198.21 |=============================================== Scikit-Learn 1.2.2 Benchmark: LocalOutlierFactor Seconds < Lower Is Better Intel UHD 630 CML GT2 . 132.59 |=============================================== Scikit-Learn 1.2.2 Benchmark: TSNE MNIST Dataset Seconds < Lower Is Better Intel UHD 630 CML GT2 . 477.72 |=============================================== Scikit-Learn 1.2.2 Benchmark: Isotonic / Logistic Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Incremental PCA Seconds < Lower Is Better Intel UHD 630 CML GT2 . 
53.96 |================================================ Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Seconds < Lower Is Better Intel UHD 630 CML GT2 . 196.39 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot Parallel Pairwise Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Pathological Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: RCV1 Logreg Convergencet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sample Without Replacement Seconds < Lower Is Better Intel UHD 630 CML GT2 . 135.98 |=============================================== Scikit-Learn 1.2.2 Benchmark: Covertype Dataset Benchmark Seconds < Lower Is Better Intel UHD 630 CML GT2 . 551.06 |=============================================== Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Adult Seconds < Lower Is Better Intel UHD 630 CML GT2 . 163.81 |=============================================== Scikit-Learn 1.2.2 Benchmark: Isotonic / Perturbed Logarithm Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Threading Seconds < Lower Is Better Intel UHD 630 CML GT2 . 372.86 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot Singular Value Decomposition Seconds < Lower Is Better Intel UHD 630 CML GT2 . 330.74 |=============================================== Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Higgs Boson Seconds < Lower Is Better Intel UHD 630 CML GT2 . 150.76 |=============================================== Scikit-Learn 1.2.2 Benchmark: 20 Newsgroups / Logistic Regression Seconds < Lower Is Better Intel UHD 630 CML GT2 . 60.51 |================================================ Scikit-Learn 1.2.2 Benchmark: Plot Polynomial Kernel Approximation Seconds < Lower Is Better Intel UHD 630 CML GT2 . 
270.16 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot Non-Negative Matrix Factorization Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Categorical Only Seconds < Lower Is Better Intel UHD 630 CML GT2 . 44.04 |================================================ Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Samples Seconds < Lower Is Better Intel UHD 630 CML GT2 . 431.65 |=============================================== Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Components Seconds < Lower Is Better Intel UHD 630 CML GT2 . 337.11 |=============================================== Scikit-Learn 1.2.2 Benchmark: Sparse Random Projections / 100 Iterations Seconds < Lower Is Better Intel UHD 630 CML GT2 . 1685.83 |============================================== Whisper.cpp 1.4 Model: ggml-base.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel UHD 630 CML GT2 . 416.37 |=============================================== Whisper.cpp 1.4 Model: ggml-small.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel UHD 630 CML GT2 . 1317.34 |============================================== Whisper.cpp 1.4 Model: ggml-medium.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel UHD 630 CML GT2 . 4440.69 |============================================== OpenCV 4.7 Test: DNN - Deep Neural Network ms < Lower Is Better Intel UHD 630 CML GT2 . 43113 |================================================
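For anyone post-processing exports like this one: the raw Phoronix Test Suite text output identifies each result with a bar-chart line of the form "SYSTEM . VALUE |====". A small sketch of pulling those numbers out follows; the `parse_results` helper is hypothetical (not part of PTS), and it assumes the line shape shown above.

```python
import re

# A PTS text-export result line looks like:
#   "Intel UHD 630 CML GT2 . 12.58 |=========================="
# i.e. system identifier, " . ", numeric value, then an ASCII bar.
RESULT_LINE = re.compile(r"^(?P<system>.+?) \. (?P<value>\d+(?:\.\d+)?) \|=+\s*$")

def parse_results(lines):
    """Yield (system, value) pairs for every bar-chart line found.

    Header lines (test names, "ms < Lower Is Better", etc.) do not
    match the pattern and are skipped.
    """
    for line in lines:
        m = RESULT_LINE.match(line.strip())
        if m:
            yield m.group("system"), float(m.group("value"))

sample = [
    "Intel UHD 630 CML GT2 . 12.58 |==================",
    "ms < Lower Is Better",  # skipped: not a result line
]
print(list(parse_results(sample)))  # [('Intel UHD 630 CML GT2', 12.58)]
```

Entries with no recorded result (e.g. the ONNX Runtime and ECP-CANDLE tests here) simply produce no bar line, so they are skipped rather than reported as zero.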