h510-g6405-1

Intel Pentium Gold G6405 testing with an ASRock H510M-HDV/M.2 SE (P1.60 BIOS) motherboard and Intel UHD 610 CML GT1 3GB graphics on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2311104-HERT-H510G6457&grr.
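
A result like this can usually be re-run locally for a side-by-side comparison by passing the OpenBenchmarking.org result ID to the Phoronix Test Suite. A minimal Python sketch, assuming the phoronix-test-suite CLI is installed and on the PATH:

    # Minimal sketch: re-run this public OpenBenchmarking.org result locally.
    # Assumes the phoronix-test-suite CLI is installed and on the PATH.
    import subprocess

    RESULT_ID = "2311104-HERT-H510G6457"  # taken from the result URL above
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)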

h510-g6405-1 - System Details (Intel UHD 610 CML GT1 configuration)

Processor: Intel Pentium Gold G6405 @ 4.10GHz (2 Cores / 4 Threads)
Motherboard: ASRock H510M-HDV/M.2 SE (P1.60 BIOS)
Chipset: Intel Comet Lake PCH
Memory: 3584MB
Disk: 1000GB Western Digital WDS100T2B0A
Graphics: Intel UHD 610 CML GT1 3GB (1050MHz)
Audio: Realtek ALC897
Monitor: G185BGEL01
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04
Kernel: 5.15.0-88-generic (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
OpenGL: 4.6 Mesa 21.2.6
Vulkan: 1.2.182
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1368x768

Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-9QDOt0/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf8 - Thermald 1.9.1 - Python 3.8.10
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Mitigation of Microcode + tsx_async_abort: Not affected

[Results-overview table omitted from this export: the flattened run-together summary of every test and value could not be reliably reconstructed. The individual test results below carry the same data.]

Whisper.cpp

Model: ggml-medium.en - Input: 2016 State of the Union

Whisper.cpp 1.4 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 39940.56 (SE +/- 132.56, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread
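
Throughout this report each value is the mean of N runs, and SE is the standard error of that mean (sample standard deviation divided by sqrt(N)). A minimal Python sketch of the arithmetic; the three run times below are hypothetical, chosen only to be consistent with the mean and SE reported above:

    # Standard error of the mean, as reported in the "SE +/-" figures.
    import statistics

    runs = [39710.96, 39940.56, 40170.16]           # hypothetical per-run seconds
    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / len(runs) ** 0.5  # sample stdev / sqrt(N)
    print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")
    # -> 39940.56 (SE +/- 132.56, N = 3)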

Whisper.cpp

Model: ggml-small.en - Input: 2016 State of the Union

Whisper.cpp 1.4 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 11358.50 (SE +/- 62.26, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

Scikit-Learn

Benchmark: Sparse Random Projections / 100 Iterations

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 3027.19 (SE +/- 2.34, N = 3)
1. (F9X) gfortran options: -O0

Whisper.cpp

Model: ggml-base.en - Input: 2016 State of the Union

Whisper.cpp 1.4 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 3116.24 (SE +/- 0.81, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 1000

Caffe 2020-02-13 - Milli-Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 2089050 (SE +/- 548.57, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Scikit-Learn

Benchmark: GLM

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 1113.27 (SE +/- 4.22, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Lasso

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 1060.63 (SE +/- 0.24, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SAGA

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 1055.94 (SE +/- 1.27, N = 3)
1. (F9X) gfortran options: -O0

PlaidML

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU

PlaidML - FPS, More Is Better
Intel UHD 610 CML GT1: 1.62 (SE +/- 0.01, N = 3)

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Components

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 324.22 (SE +/- 4.78, N = 9)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: TSNE MNIST Dataset

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 796.41 (SE +/- 0.17, N = 3)
1. (F9X) gfortran options: -O0

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 1000

Caffe 2020-02-13 - Milli-Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 967170 (SE +/- 207.18, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

PlaidML

FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU

PlaidML - FPS, More Is Better
Intel UHD 610 CML GT1: 2.42 (SE +/- 0.00, N = 3)

Mobile Neural Network

Model: inception-v3

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 154.33 (SE +/- 0.24, N = 3) MIN: 153.49 / MAX: 202.91
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 20.03 (SE +/- 0.02, N = 3) MIN: 19.9 / MAX: 63.44
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: MobileNetV2_224

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 12.96 (SE +/- 0.03, N = 3) MIN: 12.84 / MAX: 26.57
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: SqueezeNetV1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 22.00 (SE +/- 0.02, N = 3) MIN: 21.89 / MAX: 23.87
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: resnet-v2-50

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 115.58 (SE +/- 0.34, N = 3) MIN: 114.94 / MAX: 158.65
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: squeezenetv1.1

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 11.87 (SE +/- 0.01, N = 3) MIN: 11.8 / MAX: 13.78
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 3.743 (SE +/- 0.017, N = 3) MIN: 3.69 / MAX: 6.53
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: nasnet

Mobile Neural Network 2.1 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 29.08 (SE +/- 0.04, N = 3) MIN: 28.92 / MAX: 42.65
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Scikit-Learn

Benchmark: Covertype Dataset Benchmark

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 581.60 (SE +/- 0.92, N = 3)
1. (F9X) gfortran options: -O0

Numenta Anomaly Benchmark

Detector: KNN CAD

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 704.21 (SE +/- 5.84, N = 3)

Scikit-Learn

Benchmark: Plot Lasso Path

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 465.91 (SE +/- 0.95, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Samples

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 456.28 (SE +/- 0.35, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Threading

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 449.44 (SE +/- 1.33, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: LocalOutlierFactor

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 448.57 (SE +/- 2.17, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Adult

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 108.15 (SE +/- 6.84, N = 15)
1. (F9X) gfortran options: -O0

NCNN

Target: Vulkan GPU - Model: FastestDet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 10.09 (SE +/- 0.00, N = 3) MIN: 10.04 / MAX: 11.61
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: vision_transformer

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 892.53 (SE +/- 0.30, N = 3) MIN: 890.76 / MAX: 915.85
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: regnety_400m

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 23.04 (SE +/- 0.50, N = 3) MIN: 22.46 / MAX: 45.18
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: squeezenet_ssd

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 37.98 (SE +/- 0.04, N = 3) MIN: 37.77 / MAX: 38.58
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: yolov4-tiny

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 91.87 (SE +/- 0.06, N = 3) MIN: 91.54 / MAX: 92.98
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: resnet50

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 122.76 (SE +/- 0.08, N = 3) MIN: 122.41 / MAX: 134.28
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: alexnet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 37.16 (SE +/- 0.02, N = 3) MIN: 37.05 / MAX: 37.78
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: resnet18

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 45.11 (SE +/- 0.02, N = 3) MIN: 44.98 / MAX: 45.77
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: vgg16

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 265.05 (SE +/- 0.38, N = 3) MIN: 263.9 / MAX: 275.49
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: googlenet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 56.04 (SE +/- 2.12, N = 3) MIN: 53.72 / MAX: 1402.43
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: blazeface

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 2.5 (SE +/- 0.07, N = 3) MIN: 2.33 / MAX: 48.58
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: efficientnet-b0

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 28.55 (SE +/- 0.03, N = 3) MIN: 28.42 / MAX: 37.71
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: mnasnet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 27.55 (SE +/- 10.53, N = 3) MIN: 16.93 / MAX: 1277.45
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: shufflenet-v2

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 8.40 (SE +/- 0.00, N = 3) MIN: 8.36 / MAX: 9.01
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 14.79 (SE +/- 0.02, N = 3) MIN: 14.67 / MAX: 26.04
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 20.35 (SE +/- 0.02, N = 3) MIN: 20.21 / MAX: 22.68
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: Vulkan GPU - Model: mobilenet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 74.28 (SE +/- 0.06, N = 3) MIN: 74.07 / MAX: 81.77
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: FastestDet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 10.07 (SE +/- 0.02, N = 3) MIN: 10 / MAX: 10.26
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: vision_transformer

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 893.07 (SE +/- 0.47, N = 3) MIN: 890.73 / MAX: 966.79
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: regnety_400m

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 22.51 (SE +/- 0.01, N = 3) MIN: 22.44 / MAX: 23.67
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 37.95 (SE +/- 0.02, N = 3) MIN: 37.76 / MAX: 45.13
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 92.22 (SE +/- 0.03, N = 3) MIN: 91.94 / MAX: 92.84
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: resnet50

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 122.56 (SE +/- 0.06, N = 3) MIN: 122.27 / MAX: 129.92
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: alexnet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 37.21 (SE +/- 0.00, N = 3) MIN: 37.03 / MAX: 48.06
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: resnet18

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 45.14 (SE +/- 0.03, N = 3) MIN: 44.95 / MAX: 47.17
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: vgg16

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 264.68 (SE +/- 0.19, N = 3) MIN: 263.84 / MAX: 276.96
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: googlenet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 53.87 (SE +/- 0.01, N = 3) MIN: 53.75 / MAX: 54.51
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: blazeface

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 2.4 (SE +/- 0.00, N = 3) MIN: 2.33 / MAX: 2.48
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 28.50 (SE +/- 0.02, N = 3) MIN: 28.38 / MAX: 28.88
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: mnasnet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 17.06 (SE +/- 0.02, N = 3) MIN: 16.92 / MAX: 28.58
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 8.38 (SE +/- 0.01, N = 3) MIN: 8.33 / MAX: 10.87
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 14.78 (SE +/- 0.02, N = 3) MIN: 14.69 / MAX: 16.92
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 20.36 (SE +/- 0.03, N = 3) MIN: 20.2 / MAX: 30.3
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: mobilenet

NCNN 20230517 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 74.20 (SE +/- 0.01, N = 3) MIN: 73.99 / MAX: 86.15
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Numenta Anomaly Benchmark

Detector: Earthgecko Skyline

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 528.35 (SE +/- 2.87, N = 3)

LeelaChessZero

Backend: BLAS

LeelaChessZero 0.28 - Nodes Per Second, More Is Better
Intel UHD 610 CML GT1: 141 (SE +/- 1.55, N = 4)
1. (CXX) g++ options: -flto -pthread

Scikit-Learn

Benchmark: Plot Singular Value Decomposition

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 352.77 (SE +/- 0.33, N = 3)
1. (F9X) gfortran options: -O0

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 953.42 (SE +/- 32.34, N = 15)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 1.062983 (SE +/- 0.029803, N = 15)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 28.35 (SE +/- 1.75, N = 15)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 36.58 (SE +/- 1.51, N = 15)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 200

Caffe 2020-02-13 - Milli-Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 417906 (SE +/- 156.10, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Scikit-Learn

Benchmark: Plot Polynomial Kernel Approximation

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 295.98 (SE +/- 0.33, N = 3)
1. (F9X) gfortran options: -O0

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 835.68 (SE +/- 18.17, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 1.20176 (SE +/- 0.02147, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 1022.70 (SE +/- 38.86, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 0.993387 (SE +/- 0.037356, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

TNN

Target: CPU - Model: DenseNet

TNN 0.3 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 5253.38 (SE +/- 3.59, N = 3) MIN: 5205.46 / MAX: 5295.82
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 430.82 (SE +/- 30.78, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 2.39975 (SE +/- 0.10001, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

Scikit-Learn

Benchmark: Plot Hierarchical

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 273.30 (SE +/- 0.38, N = 3)
1. (F9X) gfortran options: -O0

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 31.35 (SE +/- 0.56, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 31.99 (SE +/- 0.49, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

Scikit-Learn

Benchmark: Plot Neighbors

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 262.92 (SE +/- 0.89, N = 3)
1. (F9X) gfortran options: -O0

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 3927.81 (SE +/- 41.90, N = 15) MIN: 3722.59 / MAX: 4255.2
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 0.51 (SE +/- 0.01, N = 15)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Scikit-Learn

Benchmark: Hist Gradient Boosting Higgs Boson

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 185.74 (SE +/- 0.49, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot OMP vs. LARS

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 249.52 (SE +/- 0.07, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SGD Regression

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 219.60 (SE +/- 0.43, N = 3)
1. (F9X) gfortran options: -O0

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 41046.8 (SE +/- 13.11, N = 3) MIN: 41006.5
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 41031.8 (SE +/- 13.51, N = 3) MIN: 40996.4
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 41032.5 (SE +/- 5.07, N = 3) MIN: 41006.6
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Scikit-Learn

Benchmark: Feature Expansions

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 212.15 (SE +/- 0.25, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 194.73 (SE +/- 0.61, N = 3)
1. (F9X) gfortran options: -O0

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 100

Caffe 2020-02-13 - Milli-Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 208988 (SE +/- 123.85, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Numpy Benchmark

Numpy Benchmark - Score, More Is Better
Intel UHD 610 CML GT1: 294.94 (SE +/- 0.23, N = 3)

Scikit-Learn

Benchmark: Sample Without Replacement

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 147.24 (SE +/- 1.32, N = 3)
1. (F9X) gfortran options: -O0

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 200

Caffe 2020-02-13 - Milli-Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 193587 (SE +/- 86.43, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 20613.0 (SE +/- 7.60, N = 3) MIN: 20591.8
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 20604.1 (SE +/- 8.89, N = 3) MIN: 20576.6
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
Intel UHD 610 CML GT1: 20608.4 (SE +/- 9.98, N = 3) MIN: 20581.9
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Numenta Anomaly Benchmark

Detector: Bayesian Changepoint

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 164.50 (SE +/- 0.63, N = 3)

Scikit-Learn

Benchmark: Sparsify

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 104.00 (SE +/- 0.01, N = 3)
1. (F9X) gfortran options: -O0

Numenta Anomaly Benchmark

Detector: Contextual Anomaly Detector OSE

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 137.72 (SE +/- 0.88, N = 3)

Scikit-Learn

Benchmark: Plot Ward

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 93.20 (SE +/- 0.03, N = 3)
1. (F9X) gfortran options: -O0

Mlpack Benchmark

Benchmark: scikit_ica

Mlpack Benchmark - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 114.79 (SE +/- 0.48, N = 3)

Scikit-Learn

Benchmark: MNIST Dataset

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 86.74 (SE +/- 0.15, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Incremental PCA

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 82.86 (SE +/- 0.09, N = 3)
1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Text Vectorizers

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 79.73 (SE +/- 0.04, N = 3)
1. (F9X) gfortran options: -O0

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 18223.2 (SE +/- 45.50, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 0.0548757 (SE +/- 0.0001368, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 9192.22 (SE +/- 2.41, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 0.108787 (SE +/- 0.000029, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread
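
Each ONNX Runtime test reports the same measurement both ways: an inference time cost in milliseconds and a rate in inferences per second. The rate is roughly 1000 divided by the time cost; the two are averaged separately across runs, so they do not always agree exactly. A quick Python check against the Standard-executor fcn-resnet101-11 numbers above:

    # Consistency check (sketch): throughput is close to the reciprocal of
    # the mean per-inference time cost for this result pair.
    latency_ms = 9192.22                  # fcn-resnet101-11 - CPU - Standard, above
    print(round(1000.0 / latency_ms, 6))  # -> 0.108787, the reported inferences/sec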

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 100

Caffe 2020-02-13 - Milli-Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 96808 (SE +/- 154.03, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 733.30 (SE +/- 3.22, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 1.36375 (SE +/- 0.00596, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 448.15 (SE +/- 0.26, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 2.23139 (SE +/- 0.00128, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 34.77 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 28.76 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 33.32 (SE +/- 0.21, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 30.01 (SE +/- 0.19, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 110.71 (SE +/- 1.19, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 9.03460 (SE +/- 0.09629, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 93.34 (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 10.71 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 120.56 (SE +/- 0.20, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 8.29467 (SE +/- 0.01346, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inference Time Cost (ms), Fewer Is Better
Intel UHD 610 CML GT1: 107.13 (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

ONNX Runtime 1.14 - Inferences Per Second, More Is Better
Intel UHD 610 CML GT1: 9.33438 (SE +/- 0.00561, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 11527.95 (SE +/- 1.87, N = 3) MIN: 11510.06 / MAX: 11568.08
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 0.17 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Scikit-Learn

Benchmark: 20 Newsgroups / Logistic Regression

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 60.84 (SE +/- 0.06, N = 3)
1. (F9X) gfortran options: -O0

Numenta Anomaly Benchmark

Detector: Relative Entropy

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 66.19 (SE +/- 0.23, N = 3)

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 804.02 (SE +/- 10.67, N = 3) MIN: 772.04 / MAX: 830.67
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 2.49 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Scikit-Learn

Benchmark: Tree

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
Intel UHD 610 CML GT1: 47.10 (SE +/- 0.17, N = 3)
1. (F9X) gfortran options: -O0

TensorFlow Lite

Model: Mobilenet Quant

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Intel UHD 610 CML GT1: 560906 (SE +/- 20.11, N = 3)

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 932.30 (SE +/- 9.39, N = 3) MIN: 905.44 / MAX: 968.53
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 2.15 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

TensorFlow Lite

Model: Inception V4

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Intel UHD 610 CML GT1: 415157 (SE +/- 136.01, N = 3)

TensorFlow Lite

Model: Inception ResNet V2

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Intel UHD 610 CML GT1: 386539 (SE +/- 125.07, N = 3)

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 957.97 (SE +/- 1.01, N = 3) MIN: 908.75 / MAX: 969.76
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 2.09 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 119.36 (SE +/- 1.02, N = 3) MIN: 114.3 / MAX: 132.68
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 16.76 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 236.97 (SE +/- 0.30, N = 3) MIN: 121.52 / MAX: 369.29
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 8.44 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.2.dev - ms, Fewer Is Better
Intel UHD 610 CML GT1: 155.38 (SE +/- 0.31, N = 3) MIN: 131.53 / MAX: 168.03
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.2.dev - FPS, More Is Better
Intel UHD 610 CML GT1: 12.87 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

TensorFlow Lite

Model: NASNet Mobile

OpenBenchmarking.org, Microseconds, Fewer Is Better. TensorFlow Lite 2022-05-18, Intel UHD 610 CML GT1: 40066.8 (SE +/- 22.46, N = 3).

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 123.65 (SE +/- 0.90, N = 3; MIN: 114.82 / MAX: 175.12). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 16.17 (SE +/- 0.12, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 59.24 (SE +/- 0.05, N = 3; MIN: 56.57 / MAX: 73.79). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 33.75 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

TensorFlow Lite

Model: SqueezeNet

OpenBenchmarking.org, Microseconds, Fewer Is Better. TensorFlow Lite 2022-05-18, Intel UHD 610 CML GT1: 30349.5 (SE +/- 33.23, N = 3).

TensorFlow Lite

Model: Mobilenet Float

OpenBenchmarking.org, Microseconds, Fewer Is Better. TensorFlow Lite 2022-05-18, Intel UHD 610 CML GT1: 21944.0 (SE +/- 8.27, N = 3).

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 127.43 (SE +/- 0.05, N = 3; MIN: 122.64 / MAX: 143.55). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 15.69 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 117.47 (SE +/- 0.02, N = 3; MIN: 95.54 / MAX: 131.93). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 17.02 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 18.67 (SE +/- 0.01, N = 3; MIN: 10.24 / MAX: 40.58). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 107.05 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 37.49 (SE +/- 0.00, N = 3; MIN: 36.71 / MAX: 55.03). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 53.33 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 38.71 (SE +/- 0.42, N = 3; MIN: 19.5 / MAX: 55.07). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 51.65 (SE +/- 0.56, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 1.49 (SE +/- 0.01, N = 3; MIN: 0.93 / MAX: 15.96). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 1342.06 (SE +/- 5.84, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org, ms, Fewer Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 3.80 (SE +/- 0.01, N = 3; MIN: 3.04 / MAX: 19.93). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org, FPS, More Is Better. OpenVINO 2023.2.dev, Intel UHD 610 CML GT1: 525.03 (SE +/- 0.95, N = 3). 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Scikit-Learn

Benchmark: Hist Gradient Boosting Categorical Only

OpenBenchmarking.org, Seconds, Fewer Is Better. Scikit-Learn 1.2.2, Intel UHD 610 CML GT1: 27.02 (SE +/- 0.01, N = 3). 1. (F9X) gfortran options: -O0
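
This benchmark times scikit-learn's histogram gradient boosting on categorical-only input. A hedged sketch in the same spirit, fitting HistGradientBoostingClassifier on synthetic categorical data rather than the benchmark's own dataset:

    import time
    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier

    # Synthetic integer-coded categorical features; not the benchmark's data.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 20, size=(100_000, 8)).astype(float)
    y = (X[:, 0] + X[:, 1] > 20).astype(int)

    clf = HistGradientBoostingClassifier(categorical_features=[True] * 8)
    start = time.perf_counter()
    clf.fit(X, y)
    print(f"fit time: {time.perf_counter() - start:.2f} s")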

R Benchmark

OpenBenchmarking.org, Seconds, Fewer Is Better. R Benchmark, Intel UHD 610 CML GT1: 0.3482 (SE +/- 0.0003, N = 3). 1. R scripting front-end version 3.6.3 (2020-02-29)

Numenta Anomaly Benchmark

Detector: Windowed Gaussian

OpenBenchmarking.org, Seconds, Fewer Is Better. Numenta Anomaly Benchmark 1.1, Intel UHD 610 CML GT1: 33.81 (SE +/- 0.37, N = 3).
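
The windowed Gaussian detector is the simplest NAB baseline: each new value is scored by how improbable it is under a Gaussian fitted to a sliding window of recent values. A minimal sketch of that idea (not NAB's exact implementation):

    import math
    from collections import deque

    def windowed_gaussian_scores(values, window=64):
        # Score each point by the two-sided tail probability of its z-score
        # against a Gaussian fitted to the preceding window of values.
        buf, scores = deque(maxlen=window), []
        for x in values:
            if len(buf) >= 2:
                mean = sum(buf) / len(buf)
                var = sum((v - mean) ** 2 for v in buf) / len(buf)
                std = math.sqrt(var) or 1e-9
                z = abs(x - mean) / std
                scores.append(math.erf(z / math.sqrt(2.0)))  # in [0, 1)
            else:
                scores.append(0.0)  # too little history to score
            buf.append(x)
        return scores

    # The final value is a clear outlier, so its score approaches 1.
    print(windowed_gaussian_scores([1.0, 1.0, 1.1, 0.9, 1.0, 8.0], window=4)[-1])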

Mlpack Benchmark

Benchmark: scikit_svm

OpenBenchmarking.org, Seconds, Fewer Is Better. Mlpack Benchmark, Intel UHD 610 CML GT1: 27.15 (SE +/- 0.03, N = 3).

RNNoise

OpenBenchmarking.org, Seconds, Fewer Is Better. RNNoise 2020-06-28, Intel UHD 610 CML GT1: 26.35 (SE +/- 0.08, N = 3). 1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

TNN

Target: CPU - Model: MobileNet v2

OpenBenchmarking.org, ms, Fewer Is Better. TNN 0.3, Intel UHD 610 CML GT1: 365.27 (SE +/- 0.06, N = 3; MIN: 364.24 / MAX: 368.82). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN

Target: CPU - Model: SqueezeNet v1.1

OpenBenchmarking.org, ms, Fewer Is Better. TNN 0.3, Intel UHD 610 CML GT1: 327.08 (SE +/- 0.04, N = 3; MIN: 326.93 / MAX: 327.82). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 76.01 (SE +/- 0.42, N = 3; MIN: 73.69). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 19.16 (SE +/- 0.01, N = 3; MIN: 19.06). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 36.82 (SE +/- 0.06, N = 3; MIN: 36.41). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 11.93 (SE +/- 0.03, N = 3; MIN: 11.84). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 37.72 (SE +/- 0.19, N = 3; MIN: 36.95). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 5.82157 (SE +/- 0.00596, N = 3; MIN: 5.61). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 58.72 (SE +/- 0.09, N = 3; MIN: 58.45). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 49.27 (SE +/- 0.13, N = 3; MIN: 47.33). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

TNN

Target: CPU - Model: SqueezeNet v2

OpenBenchmarking.org, ms, Fewer Is Better. TNN 0.3, Intel UHD 610 CML GT1: 72.92 (SE +/- 0.01, N = 3; MIN: 72.83 / MAX: 73.21). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 90.10 (SE +/- 0.11, N = 3; MIN: 87.57). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org, ms, Fewer Is Better. oneDNN 3.3, Intel UHD 610 CML GT1: 26.17 (SE +/- 0.02, N = 3; MIN: 25.84). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
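
The oneDNN harness names above (IP Shapes 1D/3D, Convolution Batch Shapes Auto, Deconvolution Batch shapes_1d/3d) correspond to driver/batch-file pairs in oneDNN's bundled benchdnn tool. A hedged sketch of invoking one such run from Python; the working directory, batch-file name, and flag spellings are assumptions about the oneDNN source layout, not taken from this result file:

    import subprocess

    # Assumed invocation of benchdnn (built under tests/benchdnn in the
    # oneDNN tree) for an "IP Shapes 1D, f32, CPU"-style performance run.
    cmd = [
        "./benchdnn",
        "--ip",                          # inner-product driver ("IP Shapes")
        "--mode=P",                      # performance mode: report timings
        "--cfg=f32",                     # data type; u8s8f32 for the int8 runs
        "--batch=inputs/ip/shapes_1d",   # assumed batch file for "Shapes 1D"
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)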


Phoronix Test Suite v10.8.5