h510-g6405-1 Intel Pentium Gold G6405 testing with a ASRock H510M-HDV/M.2 SE (P1.60 BIOS) and Intel UHD 610 CML GT1 3GB on Ubuntu 20.04 via the Phoronix Test Suite. Intel UHD 610 CML GT1: Processor: Intel Pentium Gold G6405 @ 4.10GHz (2 Cores / 4 Threads), Motherboard: ASRock H510M-HDV/M.2 SE (P1.60 BIOS), Chipset: Intel Comet Lake PCH, Memory: 3584MB, Disk: 1000GB Western Digital WDS100T2B0A, Graphics: Intel UHD 610 CML GT1 3GB (1050MHz), Audio: Realtek ALC897, Monitor: G185BGEL01, Network: Realtek RTL8111/8168/8411 OS: Ubuntu 20.04, Kernel: 5.15.0-88-generic (x86_64), Desktop: GNOME Shell 3.36.9, Display Server: X Server 1.20.13, OpenGL: 4.6 Mesa 21.2.6, Vulkan: 1.2.182, Compiler: GCC 9.4.0, File-System: ext4, Screen Resolution: 1368x768 Whisper.cpp 1.4 Model: ggml-medium.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel UHD 610 CML GT1 . 39940.56 |============================================= Whisper.cpp 1.4 Model: ggml-small.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel UHD 610 CML GT1 . 11358.50 |============================================= Whisper.cpp 1.4 Model: ggml-base.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel UHD 610 CML GT1 . 3116.24 |============================================== Scikit-Learn 1.2.2 Benchmark: Sparse Random Projections / 100 Iterations Seconds < Lower Is Better Intel UHD 610 CML GT1 . 3027.19 |============================================== Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Components Seconds < Lower Is Better Intel UHD 610 CML GT1 . 324.22 |=============================================== Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Samples Seconds < Lower Is Better Intel UHD 610 CML GT1 . 456.28 |=============================================== Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Categorical Only Seconds < Lower Is Better Intel UHD 610 CML GT1 . 
27.02 |================================================ Scikit-Learn 1.2.2 Benchmark: Plot Polynomial Kernel Approximation Seconds < Lower Is Better Intel UHD 610 CML GT1 . 295.98 |=============================================== Scikit-Learn 1.2.2 Benchmark: 20 Newsgroups / Logistic Regression Seconds < Lower Is Better Intel UHD 610 CML GT1 . 60.84 |================================================ Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Higgs Boson Seconds < Lower Is Better Intel UHD 610 CML GT1 . 185.74 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot Singular Value Decomposition Seconds < Lower Is Better Intel UHD 610 CML GT1 . 352.77 |=============================================== Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Threading Seconds < Lower Is Better Intel UHD 610 CML GT1 . 449.44 |=============================================== Scikit-Learn 1.2.2 Benchmark: Covertype Dataset Benchmark Seconds < Lower Is Better Intel UHD 610 CML GT1 . 581.60 |=============================================== Scikit-Learn 1.2.2 Benchmark: Sample Without Replacement Seconds < Lower Is Better Intel UHD 610 CML GT1 . 147.24 |=============================================== Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Seconds < Lower Is Better Intel UHD 610 CML GT1 . 194.73 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot Incremental PCA Seconds < Lower Is Better Intel UHD 610 CML GT1 . 82.86 |================================================ Scikit-Learn 1.2.2 Benchmark: TSNE MNIST Dataset Seconds < Lower Is Better Intel UHD 610 CML GT1 . 796.41 |=============================================== Scikit-Learn 1.2.2 Benchmark: LocalOutlierFactor Seconds < Lower Is Better Intel UHD 610 CML GT1 . 448.57 |=============================================== Scikit-Learn 1.2.2 Benchmark: Feature Expansions Seconds < Lower Is Better Intel UHD 610 CML GT1 . 
212.15 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot OMP vs. LARS Seconds < Lower Is Better Intel UHD 610 CML GT1 . 249.52 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot Hierarchical Seconds < Lower Is Better Intel UHD 610 CML GT1 . 273.30 |=============================================== Scikit-Learn 1.2.2 Benchmark: Text Vectorizers Seconds < Lower Is Better Intel UHD 610 CML GT1 . 79.73 |================================================ Scikit-Learn 1.2.2 Benchmark: Plot Lasso Path Seconds < Lower Is Better Intel UHD 610 CML GT1 . 465.91 |=============================================== Scikit-Learn 1.2.2 Benchmark: SGD Regression Seconds < Lower Is Better Intel UHD 610 CML GT1 . 219.60 |=============================================== Scikit-Learn 1.2.2 Benchmark: Plot Neighbors Seconds < Lower Is Better Intel UHD 610 CML GT1 . 262.92 |=============================================== Scikit-Learn 1.2.2 Benchmark: MNIST Dataset Seconds < Lower Is Better Intel UHD 610 CML GT1 . 86.74 |================================================ Scikit-Learn 1.2.2 Benchmark: Plot Ward Seconds < Lower Is Better Intel UHD 610 CML GT1 . 93.20 |================================================ Scikit-Learn 1.2.2 Benchmark: Sparsify Seconds < Lower Is Better Intel UHD 610 CML GT1 . 104.00 |=============================================== Scikit-Learn 1.2.2 Benchmark: Lasso Seconds < Lower Is Better Intel UHD 610 CML GT1 . 1060.63 |============================================== Scikit-Learn 1.2.2 Benchmark: Tree Seconds < Lower Is Better Intel UHD 610 CML GT1 . 47.10 |================================================ Scikit-Learn 1.2.2 Benchmark: SAGA Seconds < Lower Is Better Intel UHD 610 CML GT1 . 1055.94 |============================================== Scikit-Learn 1.2.2 Benchmark: GLM Seconds < Lower Is Better Intel UHD 610 CML GT1 . 
1113.27 |============================================== Mlpack Benchmark Benchmark: scikit_svm Seconds < Lower Is Better Intel UHD 610 CML GT1 . 27.15 |================================================ Mlpack Benchmark Benchmark: scikit_ica Seconds < Lower Is Better Intel UHD 610 CML GT1 . 114.79 |=============================================== ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 1.36375 |============================================== ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 9.33438 |============================================== ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 8.29467 |============================================== ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 10.71 |================================================ ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 9.03460 |============================================== ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 2.23139 |============================================== ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 0.108787 |============================================= ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 
0.0548757 |============================================ ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 31.99 |================================================ ONNX Runtime 1.14 Model: GPT-2 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 30.01 |================================================ ONNX Runtime 1.14 Model: GPT-2 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 28.76 |================================================ Numenta Anomaly Benchmark 1.1 Detector: Contextual Anomaly Detector OSE Seconds < Lower Is Better Intel UHD 610 CML GT1 . 137.72 |=============================================== Numenta Anomaly Benchmark 1.1 Detector: Bayesian Changepoint Seconds < Lower Is Better Intel UHD 610 CML GT1 . 164.50 |=============================================== Numenta Anomaly Benchmark 1.1 Detector: Earthgecko Skyline Seconds < Lower Is Better Intel UHD 610 CML GT1 . 528.35 |=============================================== Numenta Anomaly Benchmark 1.1 Detector: Windowed Gaussian Seconds < Lower Is Better Intel UHD 610 CML GT1 . 33.81 |================================================ Numenta Anomaly Benchmark 1.1 Detector: Relative Entropy Seconds < Lower Is Better Intel UHD 610 CML GT1 . 66.19 |================================================ Numenta Anomaly Benchmark 1.1 Detector: KNN CAD Seconds < Lower Is Better Intel UHD 610 CML GT1 . 704.21 |=============================================== OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 1.49 |================================================= OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 
1342.06 |============================================== OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 123.65 |=============================================== OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 16.17 |================================================ OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 3.80 |================================================= OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 525.03 |=============================================== OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 155.38 |=============================================== OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 12.87 |================================================ OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 37.49 |================================================ OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 53.33 |================================================ OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 804.02 |=============================================== OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 2.49 |================================================= OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 
119.36 |=============================================== OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 16.76 |================================================ OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 18.67 |================================================ OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 107.05 |=============================================== OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 117.47 |=============================================== OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 17.02 |================================================ OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 59.24 |================================================ OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 33.75 |================================================ OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 236.97 |=============================================== OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 8.44 |================================================= OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 38.71 |================================================ OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 
51.65 |================================================ OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 3927.81 |============================================== OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 0.51 |================================================= OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 127.43 |=============================================== OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 15.69 |================================================ OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 932.30 |=============================================== OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 2.15 |================================================= OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 957.97 |=============================================== OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 2.09 |================================================= OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 11527.95 |============================================= OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 0.17 |================================================= PlaidML FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 2.42 |================================================= PlaidML FP16: No - Mode: Inference - Network: VGG16 - Device: CPU FPS > Higher Is Better Intel UHD 610 CML GT1 . 
1.62 |================================================= TNN 0.3 Target: CPU - Model: SqueezeNet v1.1 ms < Lower Is Better Intel UHD 610 CML GT1 . 327.08 |=============================================== TNN 0.3 Target: CPU - Model: SqueezeNet v2 ms < Lower Is Better Intel UHD 610 CML GT1 . 72.92 |================================================ TNN 0.3 Target: CPU - Model: MobileNet v2 ms < Lower Is Better Intel UHD 610 CML GT1 . 365.27 |=============================================== TNN 0.3 Target: CPU - Model: DenseNet ms < Lower Is Better Intel UHD 610 CML GT1 . 5253.38 |============================================== NCNN 20230517 Target: Vulkan GPU - Model: FastestDet ms < Lower Is Better Intel UHD 610 CML GT1 . 10.09 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: vision_transformer ms < Lower Is Better Intel UHD 610 CML GT1 . 892.53 |=============================================== NCNN 20230517 Target: Vulkan GPU - Model: regnety_400m ms < Lower Is Better Intel UHD 610 CML GT1 . 23.04 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: squeezenet_ssd ms < Lower Is Better Intel UHD 610 CML GT1 . 37.98 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: yolov4-tiny ms < Lower Is Better Intel UHD 610 CML GT1 . 91.87 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: resnet50 ms < Lower Is Better Intel UHD 610 CML GT1 . 122.76 |=============================================== NCNN 20230517 Target: Vulkan GPU - Model: alexnet ms < Lower Is Better Intel UHD 610 CML GT1 . 37.16 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: resnet18 ms < Lower Is Better Intel UHD 610 CML GT1 . 45.11 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: vgg16 ms < Lower Is Better Intel UHD 610 CML GT1 . 
265.05 |=============================================== NCNN 20230517 Target: Vulkan GPU - Model: blazeface ms < Lower Is Better Intel UHD 610 CML GT1 . 2.5 |================================================== NCNN 20230517 Target: Vulkan GPU - Model: efficientnet-b0 ms < Lower Is Better Intel UHD 610 CML GT1 . 28.55 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: shufflenet-v2 ms < Lower Is Better Intel UHD 610 CML GT1 . 8.40 |================================================= NCNN 20230517 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 ms < Lower Is Better Intel UHD 610 CML GT1 . 14.79 |================================================ NCNN 20230517 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 ms < Lower Is Better Intel UHD 610 CML GT1 . 20.35 |================================================ NCNN 20230517 Target: Vulkan GPU - Model: mobilenet ms < Lower Is Better Intel UHD 610 CML GT1 . 74.28 |================================================ NCNN 20230517 Target: CPU - Model: FastestDet ms < Lower Is Better Intel UHD 610 CML GT1 . 10.07 |================================================ NCNN 20230517 Target: CPU - Model: vision_transformer ms < Lower Is Better Intel UHD 610 CML GT1 . 893.07 |=============================================== NCNN 20230517 Target: CPU - Model: regnety_400m ms < Lower Is Better Intel UHD 610 CML GT1 . 22.51 |================================================ NCNN 20230517 Target: CPU - Model: squeezenet_ssd ms < Lower Is Better Intel UHD 610 CML GT1 . 37.95 |================================================ NCNN 20230517 Target: CPU - Model: yolov4-tiny ms < Lower Is Better Intel UHD 610 CML GT1 . 92.22 |================================================ NCNN 20230517 Target: CPU - Model: resnet50 ms < Lower Is Better Intel UHD 610 CML GT1 . 122.56 |=============================================== NCNN 20230517 Target: CPU - Model: alexnet ms < Lower Is Better Intel UHD 610 CML GT1 . 
37.21 |================================================ NCNN 20230517 Target: CPU - Model: resnet18 ms < Lower Is Better Intel UHD 610 CML GT1 . 45.14 |================================================ NCNN 20230517 Target: CPU - Model: vgg16 ms < Lower Is Better Intel UHD 610 CML GT1 . 264.68 |=============================================== NCNN 20230517 Target: CPU - Model: googlenet ms < Lower Is Better Intel UHD 610 CML GT1 . 53.87 |================================================ NCNN 20230517 Target: CPU - Model: blazeface ms < Lower Is Better Intel UHD 610 CML GT1 . 2.4 |================================================== NCNN 20230517 Target: CPU - Model: efficientnet-b0 ms < Lower Is Better Intel UHD 610 CML GT1 . 28.50 |================================================ NCNN 20230517 Target: CPU - Model: mnasnet ms < Lower Is Better Intel UHD 610 CML GT1 . 17.06 |================================================ NCNN 20230517 Target: CPU - Model: shufflenet-v2 ms < Lower Is Better Intel UHD 610 CML GT1 . 8.38 |================================================= NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 ms < Lower Is Better Intel UHD 610 CML GT1 . 14.78 |================================================ NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 ms < Lower Is Better Intel UHD 610 CML GT1 . 20.36 |================================================ NCNN 20230517 Target: CPU - Model: mobilenet ms < Lower Is Better Intel UHD 610 CML GT1 . 74.20 |================================================ Mobile Neural Network 2.1 Model: inception-v3 ms < Lower Is Better Intel UHD 610 CML GT1 . 154.33 |=============================================== Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 ms < Lower Is Better Intel UHD 610 CML GT1 . 20.03 |================================================ Mobile Neural Network 2.1 Model: MobileNetV2_224 ms < Lower Is Better Intel UHD 610 CML GT1 . 
12.96 |================================================ Mobile Neural Network 2.1 Model: SqueezeNetV1.0 ms < Lower Is Better Intel UHD 610 CML GT1 . 22.00 |================================================ Mobile Neural Network 2.1 Model: resnet-v2-50 ms < Lower Is Better Intel UHD 610 CML GT1 . 115.58 |=============================================== Mobile Neural Network 2.1 Model: squeezenetv1.1 ms < Lower Is Better Intel UHD 610 CML GT1 . 11.87 |================================================ Mobile Neural Network 2.1 Model: mobilenetV3 ms < Lower Is Better Intel UHD 610 CML GT1 . 3.743 |================================================ Mobile Neural Network 2.1 Model: nasnet ms < Lower Is Better Intel UHD 610 CML GT1 . 29.08 |================================================ Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 1000 Milli-Seconds < Lower Is Better Intel UHD 610 CML GT1 . 2089050 |============================================== Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 200 Milli-Seconds < Lower Is Better Intel UHD 610 CML GT1 . 417906 |=============================================== Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 100 Milli-Seconds < Lower Is Better Intel UHD 610 CML GT1 . 208988 |=============================================== Caffe 2020-02-13 Model: AlexNet - Acceleration: CPU - Iterations: 1000 Milli-Seconds < Lower Is Better Intel UHD 610 CML GT1 . 967170 |=============================================== Caffe 2020-02-13 Model: AlexNet - Acceleration: CPU - Iterations: 200 Milli-Seconds < Lower Is Better Intel UHD 610 CML GT1 . 193587 |=============================================== Caffe 2020-02-13 Model: AlexNet - Acceleration: CPU - Iterations: 100 Milli-Seconds < Lower Is Better Intel UHD 610 CML GT1 . 96808 |================================================ TensorFlow Lite 2022-05-18 Model: Inception ResNet V2 Microseconds < Lower Is Better Intel UHD 610 CML GT1 . 
386539 |=============================================== TensorFlow Lite 2022-05-18 Model: Mobilenet Quant Microseconds < Lower Is Better Intel UHD 610 CML GT1 . 560906 |=============================================== TensorFlow Lite 2022-05-18 Model: Mobilenet Float Microseconds < Lower Is Better Intel UHD 610 CML GT1 . 21944.0 |============================================== TensorFlow Lite 2022-05-18 Model: NASNet Mobile Microseconds < Lower Is Better Intel UHD 610 CML GT1 . 40066.8 |============================================== TensorFlow Lite 2022-05-18 Model: Inception V4 Microseconds < Lower Is Better Intel UHD 610 CML GT1 . 415157 |=============================================== TensorFlow Lite 2022-05-18 Model: SqueezeNet Microseconds < Lower Is Better Intel UHD 610 CML GT1 . 30349.5 |============================================== RNNoise 2020-06-28 Seconds < Lower Is Better Intel UHD 610 CML GT1 . 26.35 |================================================ R Benchmark Seconds < Lower Is Better Intel UHD 610 CML GT1 . 0.3482 |=============================================== Numpy Benchmark Score > Higher Is Better Intel UHD 610 CML GT1 . 294.94 |=============================================== oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 20613.0 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 41031.8 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 20604.1 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 
41046.8 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 20608.4 |============================================== oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 41032.5 |============================================== oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 26.17 |================================================ oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 19.16 |================================================ oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 49.27 |================================================ oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 90.10 |================================================ oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 76.01 |================================================ oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 58.72 |================================================ oneDNN 3.3 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 5.82157 |============================================== oneDNN 3.3 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 11.93 |================================================ oneDNN 3.3 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 
37.72 |================================================ oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU ms < Lower Is Better Intel UHD 610 CML GT1 . 36.82 |================================================ LeelaChessZero 0.28 Backend: BLAS Nodes Per Second > Higher Is Better Intel UHD 610 CML GT1 . 141 |================================================== OpenCV 4.7 Test: DNN - Deep Neural Network ms < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Non-Negative Matrix Factorization Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Perturbed Logarithm Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Adult Seconds < Lower Is Better Intel UHD 610 CML GT1 . 108.15 |=============================================== Scikit-Learn 1.2.2 Benchmark: RCV1 Logreg Convergencet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Pathological Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Parallel Pairwise Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Logistic Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Fast KMeans Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isolation Forest Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: SGDOneClassSVM Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Glmnet Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_linearridgeregression Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_qda Seconds < Lower Is Better AI Benchmark Alpha 0.1.2 Score > Higher Is Better ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 733.30 |=============================================== ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 
1022.70 |============================================== ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 0.993387 |============================================= ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 107.13 |=============================================== ONNX Runtime 1.14 Model: super-resolution-10 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 120.56 |=============================================== ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 93.34 |================================================ ONNX Runtime 1.14 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 110.71 |=============================================== ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 430.82 |=============================================== ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 2.39975 |============================================== ONNX Runtime 1.14 Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 448.15 |=============================================== ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 
9192.22 |============================================== ONNX Runtime 1.14 Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 18223.2 |============================================== ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 28.35 |================================================ ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 36.58 |================================================ ONNX Runtime 1.14 Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 31.35 |================================================ ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 835.68 |=============================================== ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 1.20176 |============================================== ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 953.42 |=============================================== ONNX Runtime 1.14 Model: bertsquad-12 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Intel UHD 610 CML GT1 . 1.062983 |============================================= ONNX Runtime 1.14 Model: yolov4 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: yolov4 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: GPT-2 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better Intel UHD 610 CML GT1 . 
33.32 |================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
Intel UHD 610 CML GT1 . 34.77 |================================================

NCNN 20230517
Target: Vulkan GPU - Model: googlenet
ms < Lower Is Better
Intel UHD 610 CML GT1 . 56.04 |================================================

NCNN 20230517
Target: Vulkan GPU - Model: mnasnet
ms < Lower Is Better
Intel UHD 610 CML GT1 . 27.55 |================================================

The following tests did not produce a result on this system:

ONNX Runtime 1.14 (Inferences Per Second > Higher Is Better):
Model: yolov4 - Device: CPU - Executor: Standard
Model: yolov4 - Device: CPU - Executor: Parallel

ECP-CANDLE 0.4 (Seconds < Lower Is Better):
Benchmark: P3B2
Benchmark: P3B1
Benchmark: P1B2

OpenVINO 2023.2.dev (FPS > Higher Is Better):
Model: Person Vehicle Bike Detection FP16 - Device: CPU

spaCy 3.4.1 (tokens/sec > Higher Is Better)

Neural Magic DeepSparse 1.5 (items/sec > Higher Is Better), each in both the Synchronous Single-Stream and Asynchronous Multi-Stream scenarios:
Model: NLP Token Classification, BERT base uncased conll2003
Model: NLP Text Classification, BERT base uncased SST2
Model: BERT-Large, NLP Question Answering, Sparse INT8
Model: CV Segmentation, 90% Pruned YOLACT Pruned
Model: NLP Text Classification, DistilBERT mnli
Model: CV Detection, YOLOv5s COCO, Sparse INT8
Model: CV Classification, ResNet-50 ImageNet
Model: BERT-Large, NLP Question Answering
Model: CV Detection, YOLOv5s COCO
Model: ResNet-50, Sparse INT8
Model: ResNet-50, Baseline
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8
Model: NLP Document Classification, oBERT base uncased on IMDB

TensorFlow 2.12 (Device: CPU, images/sec > Higher Is Better), each at batch sizes 16, 32, 64, 256, and 512:
Model: ResNet-50
Model: GoogLeNet
Model: AlexNet
Model: VGG-16

DeepSpeech 0.6 (Seconds < Lower Is Better):
Acceleration: CPU

oneDNN 3.3 (Engine: CPU, ms < Lower Is Better):
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16
Harness: IP Shapes 3D - Data Type: bf16bf16bf16
Harness: IP Shapes 1D - Data Type: bf16bf16bf16