H610-i312100-1

Intel Core i3-12100 testing with an ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS) and Intel ADL-S GT1 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2408228-HERT-H610I3171
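For scripted or repeated comparisons, the same command can be driven from a small wrapper. A minimal sketch in Python, assuming phoronix-test-suite is installed and on the PATH; the result ID is the one quoted above:

    import subprocess

    # Result ID from this report; phoronix-test-suite fetches the result
    # file and runs the same test selection for a side-by-side comparison.
    RESULT_ID = "2408228-HERT-H610I3171"
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)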
Run Management

Result Identifier: Intel ADL-S GT1 - Intel Core i3-12100
Date: August 20
Run Test Duration: 2 Days, 17 Minutes



HTML result view exported from: https://openbenchmarking.org/result/2408228-HERT-H610I3171&grw.

System details for Intel ADL-S GT1 - Intel Core i3-12100:

Processor: Intel Core i3-12100 @ 4.30GHz (4 Cores / 8 Threads)
Motherboard: ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS)
Chipset: Intel Device 7aa7
Memory: 4096MB
Disk: 1000GB Western Digital WDS100T2B0A
Graphics: Intel ADL-S GT1 3GB (1400MHz)
Audio: Realtek ALC897
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04
Kernel: 5.15.0-89-generic (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
OpenGL: 4.6 Mesa 21.2.6
Vulkan: 1.2.182
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1366x768

Notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-9QDOt0/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x2e
- Python 3.8.10
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected
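A few of the fields above can be collected programmatically for cross-checking against your own machine. A minimal sketch using only the Python standard library; the /proc/cpuinfo path is Linux-specific, and the printed labels simply mirror the table:

    import platform

    def cpu_model():
        # First "model name" entry from /proc/cpuinfo (Linux only).
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("model name"):
                    return line.split(":", 1)[1].strip()
        return platform.processor()  # fallback on non-Linux systems

    print("Processor:", cpu_model())
    print("Kernel:", platform.release(), f"({platform.machine()})")
    print("OS:", platform.platform())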

[Condensed result listing for Intel ADL-S GT1 - Intel Core i3-12100 omitted; the same values appear in the detailed per-test results below.]

PlaidML

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU

FPS, More Is Better - PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU - Intel ADL-S GT1 - Intel Core i3-12100: 8.86 (SE +/- 0.09, N = 3)
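Each result in this report is a mean annotated with "SE +/- x, N = y". Assuming SE here is the standard error of the mean across the N recorded runs (sample standard deviation divided by sqrt(N)), the notation can be reproduced as follows; the per-run FPS values are hypothetical, chosen only so the output matches the 8.86 +/- 0.09 reported above:

    import math
    import statistics

    runs = [8.70, 8.86, 9.02]   # hypothetical per-run FPS values
    n = len(runs)
    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / math.sqrt(n)  # standard error of the mean
    print(f"{mean:.2f} (SE +/- {se:.2f}, N = {n})")  # -> 8.86 (SE +/- 0.09, N = 3)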

PlaidML

FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU

FPS, More Is Better - PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU - Intel ADL-S GT1 - Intel Core i3-12100: 4.62 (SE +/- 0.01, N = 3)

Numenta Anomaly Benchmark

Detector: KNN CAD

Seconds, Fewer Is Better - Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD - Intel ADL-S GT1 - Intel Core i3-12100: 245.09 (SE +/- 1.93, N = 3)

Numenta Anomaly Benchmark

Detector: Relative Entropy

Seconds, Fewer Is Better - Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy - Intel ADL-S GT1 - Intel Core i3-12100: 20.60 (SE +/- 0.18, N = 8)

Numenta Anomaly Benchmark

Detector: Windowed Gaussian

Seconds, Fewer Is Better - Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian - Intel ADL-S GT1 - Intel Core i3-12100: 11.70 (SE +/- 0.02, N = 3)

Numenta Anomaly Benchmark

Detector: Earthgecko Skyline

Seconds, Fewer Is Better - Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline - Intel ADL-S GT1 - Intel Core i3-12100: 209.16 (SE +/- 1.51, N = 3)

Numenta Anomaly Benchmark

Detector: Bayesian Changepoint

Seconds, Fewer Is Better - Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint - Intel ADL-S GT1 - Intel Core i3-12100: 47.48 (SE +/- 0.32, N = 13)

Numenta Anomaly Benchmark

Detector: Contextual Anomaly Detector OSE

Seconds, Fewer Is Better - Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE - Intel ADL-S GT1 - Intel Core i3-12100: 52.86 (SE +/- 0.27, N = 3)

Scikit-Learn

Benchmark: GLM

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: GLM - Intel ADL-S GT1 - Intel Core i3-12100: 611.36 (SE +/- 0.96, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SAGA

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: SAGA - Intel ADL-S GT1 - Intel Core i3-12100: 728.14 (SE +/- 0.42, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Tree

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Tree - Intel ADL-S GT1 - Intel Core i3-12100: 39.66 (SE +/- 0.30, N = 15) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Lasso

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Lasso - Intel ADL-S GT1 - Intel Core i3-12100: 492.67 (SE +/- 1.04, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sparsify

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Sparsify - Intel ADL-S GT1 - Intel Core i3-12100: 74.98 (SE +/- 0.27, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Ward

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot Ward - Intel ADL-S GT1 - Intel Core i3-12100: 52.22 (SE +/- 0.10, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: MNIST Dataset

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: MNIST Dataset - Intel ADL-S GT1 - Intel Core i3-12100: 64.56 (SE +/- 0.04, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Neighbors

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot Neighbors - Intel ADL-S GT1 - Intel Core i3-12100: 105.44 (SE +/- 0.15, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SGD Regression

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: SGD Regression - Intel ADL-S GT1 - Intel Core i3-12100: 119.97 (SE +/- 0.56, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Lasso Path

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot Lasso Path - Intel ADL-S GT1 - Intel Core i3-12100: 226.08 (SE +/- 1.40, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Text Vectorizers

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Text Vectorizers - Intel ADL-S GT1 - Intel Core i3-12100: 52.00 (SE +/- 0.05, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Hierarchical

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot Hierarchical - Intel ADL-S GT1 - Intel Core i3-12100: 186.55 (SE +/- 0.37, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot OMP vs. LARS

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot OMP vs. LARS - Intel ADL-S GT1 - Intel Core i3-12100: 134.66 (SE +/- 0.10, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Feature Expansions

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Feature Expansions - Intel ADL-S GT1 - Intel Core i3-12100: 241.48 (SE +/- 0.42, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: LocalOutlierFactor

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: LocalOutlierFactor - Intel ADL-S GT1 - Intel Core i3-12100: 83.87 (SE +/- 0.08, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: TSNE MNIST Dataset

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: TSNE MNIST Dataset - Intel ADL-S GT1 - Intel Core i3-12100: 324.55 (SE +/- 0.43, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Incremental PCA

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot Incremental PCA - Intel ADL-S GT1 - Intel Core i3-12100: 45.04 (SE +/- 0.33, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting - Intel ADL-S GT1 - Intel Core i3-12100: 96.58 (SE +/- 0.18, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sample Without Replacement

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Sample Without Replacement - Intel ADL-S GT1 - Intel Core i3-12100: 95.57 (SE +/- 0.11, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Covertype Dataset Benchmark

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Covertype Dataset Benchmark - Intel ADL-S GT1 - Intel Core i3-12100: 446.72 (SE +/- 0.14, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Adult

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Adult - Intel ADL-S GT1 - Intel Core i3-12100: 60.21 (SE +/- 0.09, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Threading

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Threading - Intel ADL-S GT1 - Intel Core i3-12100: 336.10 (SE +/- 3.75, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Singular Value Decomposition

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot Singular Value Decomposition - Intel ADL-S GT1 - Intel Core i3-12100: 199.82 (SE +/- 0.02, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Higgs Boson

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Higgs Boson - Intel ADL-S GT1 - Intel Core i3-12100: 138.51 (SE +/- 1.02, N = 12) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: 20 Newsgroups / Logistic Regression

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: 20 Newsgroups / Logistic Regression - Intel ADL-S GT1 - Intel Core i3-12100: 49.52 (SE +/- 0.02, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Polynomial Kernel Approximation

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Plot Polynomial Kernel Approximation - Intel ADL-S GT1 - Intel Core i3-12100: 233.56 (SE +/- 1.21, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Categorical Only

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Categorical Only - Intel ADL-S GT1 - Intel Core i3-12100: 14.59 (SE +/- 0.10, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Samples

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Kernel PCA Solvers / Time vs. N Samples - Intel ADL-S GT1 - Intel Core i3-12100: 276.88 (SE +/- 0.57, N = 3) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Components

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Kernel PCA Solvers / Time vs. N Components - Intel ADL-S GT1 - Intel Core i3-12100: 319.47 (SE +/- 14.09, N = 9) - 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sparse Random Projections / 100 Iterations

Seconds, Fewer Is Better - Scikit-Learn 1.2.2 - Benchmark: Sparse Random Projections / 100 Iterations - Intel ADL-S GT1 - Intel Core i3-12100: 905.08 (SE +/- 0.87, N = 3) - 1. (F9X) gfortran options: -O0

R Benchmark

Seconds, Fewer Is Better - R Benchmark - Intel ADL-S GT1 - Intel Core i3-12100: 0.1847 (SE +/- 0.0015, N = 3) - 1. R scripting front-end version 3.6.3 (2020-02-29)

Numpy Benchmark

Score, More Is Better - Numpy Benchmark - Intel ADL-S GT1 - Intel Core i3-12100: 463.17 (SE +/- 0.39, N = 3)

DeepSpeech

Acceleration: CPU

Seconds, Fewer Is Better - DeepSpeech 0.6 - Acceleration: CPU - Intel ADL-S GT1 - Intel Core i3-12100: 87.52 (SE +/- 0.60, N = 3)

Mobile Neural Network

Model: nasnet

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: nasnet - Intel ADL-S GT1 - Intel Core i3-12100: 9.072 (SE +/- 0.022, N = 3; MIN: 8.93 / MAX: 29.56) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 - Intel ADL-S GT1 - Intel Core i3-12100: 1.188 (SE +/- 0.047, N = 3; MIN: 1.12 / MAX: 49.7) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network

Model: squeezenetv1.1

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 - Intel ADL-S GT1 - Intel Core i3-12100: 2.900 (SE +/- 0.015, N = 3; MIN: 2.83 / MAX: 42.33) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network

Model: resnet-v2-50

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 - Intel ADL-S GT1 - Intel Core i3-12100: 26.31 (SE +/- 0.04, N = 3; MIN: 25.83 / MAX: 99.47) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network

Model: SqueezeNetV1.0

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 - Intel ADL-S GT1 - Intel Core i3-12100: 4.744 (SE +/- 0.009, N = 3; MIN: 4.68 / MAX: 20.9) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network

Model: MobileNetV2_224

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 - Intel ADL-S GT1 - Intel Core i3-12100: 2.705 (SE +/- 0.007, N = 3; MIN: 2.65 / MAX: 4.63) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 - Intel ADL-S GT1 - Intel Core i3-12100: 3.373 (SE +/- 0.048, N = 3; MIN: 3.24 / MAX: 29.19) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network

Model: inception-v3

ms, Fewer Is Better - Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 - Intel ADL-S GT1 - Intel Core i3-12100: 33.51 (SE +/- 0.07, N = 3; MIN: 32.99 / MAX: 109) - 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 4.0749 (SE +/- 0.0124, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 490.80 (SE +/- 1.50, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 4.3138 (SE +/- 0.0039, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 231.81 (SE +/- 0.21, N = 3)
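The items/sec and ms/batch figures for each DeepSparse scenario are two views of the same runs: throughput times per-request latency is roughly 1000 times the number of concurrent streams. A small sketch checking this against the oBERT numbers above; the stream counts are assumptions (1 for synchronous single-stream, 2 for asynchronous multi-stream on this 4-core CPU), not values reported in the result file:

    # (scenario, items/sec, ms/batch, assumed concurrent streams), from above
    pairs = [
        ("sync single-stream", 4.3138, 231.81, 1),
        ("async multi-stream", 4.0749, 490.80, 2),
    ]
    for name, throughput, latency_ms, streams in pairs:
        implied = throughput * latency_ms / 1000  # should approximate streams
        print(f"{name}: {implied:.2f} ~ {streams} streams")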

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 164.65 (SE +/- 0.07, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 12.13 (SE +/- 0.00, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 145.27 (SE +/- 0.34, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 6.8777 (SE +/- 0.0159, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 51.69 (SE +/- 0.19, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 38.68 (SE +/- 0.14, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 47.55 (SE +/- 0.05, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 21.02 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 369.36 (SE +/- 1.14, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 5.3924 (SE +/- 0.0167, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 300.11 (SE +/- 1.17, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 3.3220 (SE +/- 0.0131, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 0.0070 (SE +/- 0.0001, N = 9)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 211288.58 (SE +/- 2251.32, N = 9)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 0.0092 (SE +/- 0.0001, N = 9)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 108889.85 (SE +/- 1162.79, N = 9)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 51.57 (SE +/- 0.15, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 38.76 (SE +/- 0.11, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 47.47 (SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 21.06 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 23.88 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 83.74 (SE +/- 0.10, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 23.45 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 42.63 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 34.41 (SE +/- 0.12, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 58.10 (SE +/- 0.19, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 32.24 (SE +/- 0.05, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 31.01 (SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 5.0039 (SE +/- 0.0404, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 399.50 (SE +/- 3.23, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 4.8149 (SE +/- 0.0104, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 207.67 (SE +/- 0.45, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 78.19 (SE +/- 0.07, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 25.56 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 64.85 (SE +/- 0.06, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 15.41 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 4.1179 (SE +/- 0.0219, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 485.38 (SE +/- 2.32, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 4.3321 (SE +/- 0.0059, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - Intel ADL-S GT1 - Intel Core i3-12100: 230.83 (SE +/- 0.31, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 50.31 (SE +/- 0.08, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 51.12 (SE +/- 0.51, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: T5 Encoder - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: T5 Encoder - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 56.03 (SE +/- 0.04, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: T5 Encoder - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: T5 Encoder - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 57.59 (SE +/- 0.02, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 5.34494 (SE +/- 0.07908, N = 15) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 7.91502 (SE +/- 0.33732, N = 14) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 209.45 (SE +/- 2.29, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 233.25 (SE +/- 2.63, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 0.512476 (SE +/- 0.015328, N = 15) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 0.742532 (SE +/- 0.012881, N = 15) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 11.54 (SE +/- 0.12, N = 6) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 17.31 (SE +/- 0.19, N = 5) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 122.01 (SE +/- 1.34, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 142.74 (SE +/- 0.84, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 38.04 (SE +/- 0.42, N = 4) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 53.26 (SE +/- 1.87, N = 15) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel - Intel ADL-S GT1 - Intel Core i3-12100: 4.43575 (SE +/- 0.03335, N = 3) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard - Intel ADL-S GT1 - Intel Core i3-12100: 5.10110 (SE +/- 0.06535, N = 12) - 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread
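Across the ONNX Runtime results above, the Standard executor outpaces the Parallel executor on this 4-core CPU for every model tested. A small sketch tabulating the ratios straight from the reported inferences-per-second values:

    # (model, Parallel, Standard) inferences/sec, from the graphs above
    results = [
        ("GPT-2", 50.31, 51.12),
        ("bertsquad-12", 5.34494, 7.91502),
        ("ArcFace ResNet-100", 11.54, 17.31),
        ("super-resolution-10", 38.04, 53.26),
    ]
    for model, parallel, standard in results:
        print(f"{model}: Standard is {standard / parallel:.2f}x Parallel")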

OpenCV

Test: DNN - Deep Neural Network

ms, Fewer Is Better - OpenCV 4.7 - Test: DNN - Deep Neural Network - Intel ADL-S GT1 - Intel Core i3-12100: 27185 (SE +/- 832.87, N = 12) - 1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 - Intel ADL-S GT1 - Intel Core i3-12100: 24.89 (SE +/- 0.26, N = 4; MIN: 11.71 / MAX: 25.9)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 - Intel ADL-S GT1 - Intel Core i3-12100: 11.34 (SE +/- 0.03, N = 3; MIN: 7.58 / MAX: 11.54)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 - Intel ADL-S GT1 - Intel Core i3-12100: 12.57 (SE +/- 0.11, N = 3; MIN: 8.58 / MAX: 12.97)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 - Intel ADL-S GT1 - Intel Core i3-12100: 12.56 (SE +/- 0.11, N = 8; MIN: 7.8 / MAX: 13.33)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 - Intel ADL-S GT1 - Intel Core i3-12100: 12.69 (SE +/- 0.12, N = 3; MIN: 8.56 / MAX: 13.22)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 - Intel ADL-S GT1 - Intel Core i3-12100: 6.33 (SE +/- 0.04, N = 3; MIN: 4.35 / MAX: 6.45)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 - Intel ADL-S GT1 - Intel Core i3-12100: 12.91 (SE +/- 0.09, N = 3; MIN: 8.58 / MAX: 13.25)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 - Intel ADL-S GT1 - Intel Core i3-12100: 6.24 (SE +/- 0.03, N = 3; MIN: 4.36 / MAX: 6.39)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 - Intel ADL-S GT1 - Intel Core i3-12100: 12.76 (SE +/- 0.18, N = 3; MIN: 7.94 / MAX: 13.36)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 - Intel ADL-S GT1 - Intel Core i3-12100: 6.23 (SE +/- 0.03, N = 3; MIN: 3.95 / MAX: 6.36)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 - Intel ADL-S GT1 - Intel Core i3-12100: 6.22 (SE +/- 0.02, N = 3; MIN: 4.3 / MAX: 6.36)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-152 - Intel ADL-S GT1 - Intel Core i3-12100: 6.17 (SE +/- 0.03, N = 3; MIN: 4.17 / MAX: 6.34)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l - Intel ADL-S GT1 - Intel Core i3-12100: 8.10 (SE +/- 0.08, N = 5; MIN: 5.47 / MAX: 8.32)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l - Intel ADL-S GT1 - Intel Core i3-12100: 4.51 (SE +/- 0.04, N = 3; MIN: 3.16 / MAX: 4.61)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l - Intel ADL-S GT1 - Intel Core i3-12100: 4.55 (SE +/- 0.01, N = 3; MIN: 3.32 / MAX: 4.61)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l - Intel ADL-S GT1 - Intel Core i3-12100: 4.52 (SE +/- 0.01, N = 3; MIN: 3.17 / MAX: 4.6)

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

batches/sec, More Is Better - PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l - Intel ADL-S GT1 - Intel Core i3-12100: 4.50 (SE +/- 0.02, N = 3; MIN: 3.35 / MAX: 4.57)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.2.1Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_lIntel ADL-S GT1 - Intel Core i3-121001.0082.0163.0244.0325.04SE +/- 0.04, N = 34.48MIN: 3.37 / MAX: 4.59
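
Throughput is essentially flat across batch sizes here; ResNet-50 holds at roughly 12.6-12.9 batches/sec from batch 32 through 512, as expected for a CPU-bound 4-core/8-thread part. For a rough sanity check of numbers like these outside the Phoronix harness, a minimal timing loop along the following lines can be used (an illustrative sketch only, assuming torch and torchvision are installed; this is not the pts/pytorch test profile):

import time
import torch
import torchvision.models as models

model = models.resnet50().eval()        # FP32 ResNet-50 on CPU
batch = torch.randn(32, 3, 224, 224)    # batch size 32, ImageNet-shaped input

with torch.no_grad():
    for _ in range(3):                  # warm-up passes (allocator, thread-pool start-up)
        model(batch)
    n = 20
    t0 = time.perf_counter()
    for _ in range(n):
        model(batch)
    elapsed = time.perf_counter() - t0

print(f"{n / elapsed:.2f} batches/sec")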

TensorFlow Lite 2022-05-18 (microseconds; fewer is better)

Model: SqueezeNet: 4322.71 (SE +/- 12.96, N = 3)
Model: Inception V4: 64931.8 (SE +/- 110.99, N = 3)
Model: NASNet Mobile: 12005.2 (SE +/- 3.59, N = 3)
Model: Mobilenet Float: 3574.97 (SE +/- 12.76, N = 3)
Model: Mobilenet Quant: 5198.07 (SE +/- 9.03, N = 3)
Model: Inception ResNet V2: 60949.0 (SE +/- 49.41, N = 3)
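
The TensorFlow Lite figures are mean microseconds per inference, so SqueezeNet completes a pass in about 4.3 ms on this CPU. A comparable measurement with the TensorFlow Lite Python interpreter looks roughly like the following (an illustrative sketch, not the Phoronix harness; "squeezenet.tflite" is a placeholder model path):

import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
data = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])

for _ in range(3):                      # warm-up
    interpreter.set_tensor(inp["index"], data)
    interpreter.invoke()

n = 50
t0 = time.perf_counter()
for _ in range(n):
    interpreter.set_tensor(inp["index"], data)
    interpreter.invoke()
print(f"{(time.perf_counter() - t0) / n * 1e6:.1f} us per inference")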

TNN 0.3 (ms; fewer is better)

Target: CPU - Model: DenseNet: 2446.58 (SE +/- 1.75, N = 3; MIN: 2392.91 / MAX: 2547.47)
Target: CPU - Model: MobileNet v2: 201.11 (SE +/- 0.11, N = 3; MIN: 198.48 / MAX: 216.2)
Target: CPU - Model: SqueezeNet v2: 46.21 (SE +/- 0.07, N = 3; MIN: 45.29 / MAX: 48.72)
Target: CPU - Model: SqueezeNet v1.1: 166.13 (SE +/- 0.09, N = 3; MIN: 162.86 / MAX: 170.17)

1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl (common to all TNN results)
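
For scale, DenseNet is by far the heaviest TNN workload here, roughly 53x the cost of SqueezeNet v2 (2446.58 vs 46.21 ms).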

Whisper.cpp 1.6.2 - Input: 2016 State of the Union (seconds; fewer is better)

Model: ggml-base.en: 246.29 (SE +/- 0.15, N = 3)
Model: ggml-small.en: 753.36 (SE +/- 0.45, N = 3)
Model: ggml-medium.en: 2280.50 (SE +/- 3.55, N = 3)

1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 (common to all Whisper.cpp results)
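
These are end-to-end transcription times for the same 2016 State of the Union recording, so model size translates directly into wall-clock cost: small takes about 3.1x as long as base (753.36 / 246.29) and medium about 9.3x (2280.50 / 246.29) on this 4-core CPU.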

XNNPACK 2cd86b (us; fewer is better)

Model: FP32MobileNetV2: 5545 (SE +/- 15.31, N = 3)
Model: FP32MobileNetV3Large: 5484 (SE +/- 28.00, N = 3)
Model: FP32MobileNetV3Small: 1302 (SE +/- 14.19, N = 3)
Model: FP16MobileNetV2: 3444 (SE not reported)
Model: FP16MobileNetV3Large: 3396 (SE +/- 40.00, N = 3)
Model: FP16MobileNetV3Small: 965 (SE +/- 5.84, N = 3)
Model: QU8MobileNetV2: 2163 (SE +/- 21.07, N = 3)
Model: QU8MobileNetV3Large: 1974 (SE +/- 20.30, N = 3)
Model: QU8MobileNetV3Small: 728 (SE +/- 0.67, N = 3)

1. (CXX) g++ options: -O3 -lrt -pthread -lpthread -lm (common to all XNNPACK results)
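
The precision sweep is easiest to read on MobileNetV2: relative to the FP32 build (5545 us), the FP16 build is about 1.6x faster (3444 us) and the quantized QU8 build about 2.6x faster (2163 us).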

Caffe 2020-02-13 (milliseconds; fewer is better)

Model: AlexNet - Acceleration: CPU - Iterations: 100: 44622 (SE +/- 405.55, N = 3)
Model: AlexNet - Acceleration: CPU - Iterations: 200: 90229 (SE +/- 881.92, N = 3)
Model: AlexNet - Acceleration: CPU - Iterations: 1000: 445933 (SE +/- 4664.71, N = 3)
Model: GoogleNet - Acceleration: CPU - Iterations: 100: 103411 (SE +/- 148.50, N = 3)
Model: GoogleNet - Acceleration: CPU - Iterations: 200: 206462 (SE +/- 44.14, N = 3)
Model: GoogleNet - Acceleration: CPU - Iterations: 1000: 1038320 (SE +/- 3248.77, N = 3)

1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas (common to all Caffe results)
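
The Caffe totals scale almost linearly with iteration count, which is a useful consistency check on the runs: AlexNet works out to roughly 446-451 ms per iteration at every count (44622/100, 90229/200, 445933/1000), and GoogleNet to roughly 1032-1038 ms (103411/100, 206462/200, 1038320/1000).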

NCNN 20230517 (ms; fewer is better)

Target: CPU - Model: mobilenet: 21.50 (SE +/- 0.26, N = 4; MIN: 20.87 / MAX: 110.78)
Target: CPU-v2-v2 - Model: mobilenet-v2: 5.35 (SE +/- 0.04, N = 4; MIN: 5.1 / MAX: 16.42)
Target: CPU-v3-v3 - Model: mobilenet-v3: 3.18 (SE +/- 0.02, N = 4; MIN: 3.1 / MAX: 5.1)
Target: CPU - Model: shufflenet-v2: 2.25 (SE +/- 0.03, N = 4; MIN: 2.16 / MAX: 18.02)
Target: CPU - Model: mnasnet: 3.53 (SE +/- 0.25, N = 4; MIN: 3.13 / MAX: 39.25)
Target: CPU - Model: efficientnet-b0: 8.15 (SE +/- 0.06, N = 4; MIN: 7.89 / MAX: 41.7)
Target: CPU - Model: blazeface: 0.62 (SE +/- 0.00, N = 4; MIN: 0.59 / MAX: 0.68)
Target: CPU - Model: googlenet: 16.13 (SE +/- 0.28, N = 4; MIN: 15.65 / MAX: 75.15)
Target: CPU - Model: vgg16: 109.92 (SE +/- 0.09, N = 4; MIN: 108.76 / MAX: 150.98)
Target: CPU - Model: resnet18: 12.34 (SE +/- 0.02, N = 4; MIN: 12.22 / MAX: 14.13)
Target: CPU - Model: alexnet: 12.63 (SE +/- 0.02, N = 4; MIN: 12.45 / MAX: 50.58)
Target: CPU - Model: resnet50: 26.71 (SE +/- 0.11, N = 4; MIN: 26.31 / MAX: 69.35)
Target: CPU - Model: yolov4-tiny: 40.12 (SE +/- 0.06, N = 4; MIN: 39.72 / MAX: 83.43)
Target: CPU - Model: squeezenet_ssd: 17.42 (SE +/- 0.08, N = 4; MIN: 17.17 / MAX: 33.61)
Target: CPU - Model: regnety_400m: 6.57 (SE +/- 0.03, N = 4; MIN: 6.47 / MAX: 15.95)
Target: CPU - Model: vision_transformer: 135.49 (SE +/- 0.38, N = 4; MIN: 134.44 / MAX: 253.67)
Target: CPU - Model: FastestDet: 3.31 (SE +/- 0.05, N = 4; MIN: 3.17 / MAX: 7.41)
Target: Vulkan GPU - Model: mobilenet: 21.05 (SE +/- 0.03, N = 3; MIN: 20.81 / MAX: 24.47)
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2: 5.29 (SE +/- 0.02, N = 3; MIN: 5.08 / MAX: 7.09)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: 3.19 (SE +/- 0.00, N = 3; MIN: 3.11 / MAX: 4.76)
Target: Vulkan GPU - Model: shufflenet-v2: 2.21 (SE +/- 0.01, N = 3; MIN: 2.16 / MAX: 3.84)
Target: Vulkan GPU - Model: mnasnet: 3.19 (SE +/- 0.02, N = 2; MIN: 3.12 / MAX: 4.45)
Target: Vulkan GPU - Model: efficientnet-b0: 7.99 (SE +/- 0.02, N = 3; MIN: 7.85 / MAX: 10.37)
Target: Vulkan GPU - Model: blazeface: 0.61 (SE +/- 0.00, N = 3; MIN: 0.59 / MAX: 0.65)
Target: Vulkan GPU - Model: googlenet: 15.83 (SE +/- 0.04, N = 3; MIN: 15.64 / MAX: 54.42)
Target: Vulkan GPU - Model: vgg16: 111.12 (SE +/- 0.29, N = 3; MIN: 109.05 / MAX: 253.29)
Target: Vulkan GPU - Model: resnet18: 12.25 (SE +/- 0.02, N = 3; MIN: 12.15 / MAX: 12.69)
Target: Vulkan GPU - Model: alexnet: 12.54 (SE +/- 0.02, N = 3; MIN: 12.43 / MAX: 14.78)
Target: Vulkan GPU - Model: resnet50: 26.97 (SE +/- 0.29, N = 3; MIN: 26.39 / MAX: 123.4)
Target: Vulkan GPU - Model: yolov4-tiny: 39.80 (SE +/- 0.14, N = 3; MIN: 39.37 / MAX: 78.33)
Target: Vulkan GPU - Model: squeezenet_ssd: 17.49 (SE +/- 0.03, N = 3; MIN: 17.21 / MAX: 58.42)
Target: Vulkan GPU - Model: regnety_400m: 6.53 (SE +/- 0.00, N = 3; MIN: 6.48 / MAX: 6.72)
Target: Vulkan GPU - Model: vision_transformer: 136.23 (SE +/- 0.09, N = 3; MIN: 134.17 / MAX: 308.71)
Target: Vulkan GPU - Model: FastestDet: 3.30 (SE +/- 0.02, N = 3; MIN: 3.23 / MAX: 3.39)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread (common to all NCNN results)
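
Note that the Vulkan GPU figures track the CPU figures almost exactly (for example, resnet50 at 26.97 vs 26.71 ms and vgg16 at 111.12 vs 109.92 ms), so the Vulkan path does not appear to deliver any speedup over the CPU path on this GT1 iGPU setup.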

Mlpack Benchmark (seconds; fewer is better)

Benchmark: scikit_ica: 64.74 (SE +/- 0.72, N = 4)
Benchmark: scikit_svm: 14.51 (SE +/- 0.03, N = 3)

oneDNN 3.4 (ms; fewer is better)

Harness: IP Shapes 1D - Engine: CPU: 6.00601 (SE +/- 0.01732, N = 3; MIN: 5.51)
Harness: IP Shapes 3D - Engine: CPU: 26.70 (SE +/- 0.03, N = 3; MIN: 26.52)
Harness: Convolution Batch Shapes Auto - Engine: CPU: 44.14 (SE +/- 0.43, N = 3; MIN: 43.16)
Harness: Deconvolution Batch shapes_1d - Engine: CPU: 10.75 (SE +/- 0.10, N = 15; MIN: 8.01)
Harness: Deconvolution Batch shapes_3d - Engine: CPU: 10.37 (SE +/- 0.09, N = 3; MIN: 10.1)
Harness: Recurrent Neural Network Training - Engine: CPU: 5708.97 (SE +/- 4.76, N = 3; MIN: 5677.08)
Harness: Recurrent Neural Network Inference - Engine: CPU: 3071.31 (SE +/- 12.33, N = 3; MIN: 3037.61)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl (common to all oneDNN results)
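
Within the recurrent-network harnesses, the inference run completes in roughly 1.86x less time than the training run (3071.31 vs 5708.97 ms).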

OpenVINO 2024.0 - Device: CPU (throughput in FPS, more is better; latency in ms, fewer is better)

Model: Face Detection FP16: 1.24 FPS (SE +/- 0.00, N = 3); 3213.28 ms (SE +/- 1.80, N = 3; MIN: 2964.37 / MAX: 3356.75)
Model: Person Detection FP16: 10.06 FPS (SE +/- 0.03, N = 3); 397.47 ms (SE +/- 1.13, N = 3; MIN: 223.86 / MAX: 566.44)
Model: Person Detection FP32: 10.04 FPS (SE +/- 0.02, N = 3); 398.50 ms (SE +/- 0.71, N = 3; MIN: 293.03 / MAX: 564.35)
Model: Vehicle Detection FP16: 62.61 FPS (SE +/- 0.06, N = 3); 63.85 ms (SE +/- 0.06, N = 3; MIN: 30.33 / MAX: 201.51)
Model: Face Detection FP16-INT8: 5.26 FPS (SE +/- 0.01, N = 3); 759.24 ms (SE +/- 1.39, N = 3; MIN: 716.66 / MAX: 920.51)
Model: Face Detection Retail FP16: 330.42 FPS (SE +/- 2.36, N = 3); 12.09 ms (SE +/- 0.09, N = 3; MIN: 5.92 / MAX: 79.84)
Model: Road Segmentation ADAS FP16: 18.24 FPS (SE +/- 0.04, N = 3); 219.10 ms (SE +/- 0.53, N = 3; MIN: 120.84 / MAX: 389.85)
Model: Vehicle Detection FP16-INT8: 214.21 FPS (SE +/- 0.79, N = 3); 18.66 ms (SE +/- 0.07, N = 3; MIN: 9.8 / MAX: 90.77)
Model: Weld Porosity Detection FP16: 136.74 FPS (SE +/- 0.25, N = 3); 29.24 ms (SE +/- 0.06, N = 3; MIN: 19.89 / MAX: 66.26)
Model: Face Detection Retail FP16-INT8: 833.34 FPS (SE +/- 3.58, N = 3); 4.80 ms (SE +/- 0.02, N = 3; MIN: 3.51 / MAX: 52.64)
Model: Road Segmentation ADAS FP16-INT8: 63.19 FPS (SE +/- 0.23, N = 3); 63.27 ms (SE +/- 0.22, N = 3; MIN: 31.99 / MAX: 132.5)
Model: Machine Translation EN To DE FP16: 14.95 FPS (SE +/- 0.02, N = 3); 267.44 ms (SE +/- 0.35, N = 3; MIN: 221.5 / MAX: 472.42)
Model: Weld Porosity Detection FP16-INT8: 513.04 FPS (SE +/- 0.62, N = 3); 7.79 ms (SE +/- 0.01, N = 3; MIN: 4.62 / MAX: 55.08)
Model: Person Vehicle Bike Detection FP16: 166.71 FPS (SE +/- 2.19, N = 3); 23.99 ms (SE +/- 0.32, N = 3; MIN: 13.37 / MAX: 116.89)
Model: Noise Suppression Poconet-Like FP16: 208.84 FPS (SE +/- 0.74, N = 3); 19.10 ms (SE +/- 0.06, N = 3; MIN: 10.05 / MAX: 69.6)
Model: Handwritten English Recognition FP16: 66.40 FPS (SE +/- 0.24, N = 3); 60.21 ms (SE +/- 0.22, N = 3; MIN: 40.96 / MAX: 136.76)
Model: Person Re-Identification Retail FP16: 201.90 FPS (SE +/- 1.60, N = 3); 19.80 ms (SE +/- 0.16, N = 3; MIN: 9.77 / MAX: 70.87)
Model: Age Gender Recognition Retail 0013 FP16: 3481.92 FPS (SE +/- 28.02, N = 3); 1.14 ms (SE +/- 0.01, N = 3; MIN: 0.58 / MAX: 29.63)
Model: Handwritten English Recognition FP16-INT8: 88.41 FPS (SE +/- 0.38, N = 3); 45.22 ms (SE +/- 0.20, N = 3; MIN: 31.25 / MAX: 119.06)
Model: Age Gender Recognition Retail 0013 FP16-INT8: 10592.67 FPS (SE +/- 11.73, N = 3); 0.37 ms (SE +/- 0.00, N = 3; MIN: 0.24 / MAX: 45.25)

1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl (common to all OpenVINO results)
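
Each model is reported both as throughput (FPS, more is better) and as mean latency (ms, fewer is better). A comparable CPU measurement with the OpenVINO Python API looks roughly like this (an illustrative sketch, not the Phoronix harness; "model.xml" is a placeholder IR path, and a static input shape is assumed):

import time
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")  # placeholder IR
request = compiled.create_infer_request()
port = compiled.input(0)
data = np.random.rand(*port.shape).astype(np.float32)  # assumes a static input shape

for _ in range(3):                      # warm-up
    request.infer({port: data})

n = 100
t0 = time.perf_counter()
for _ in range(n):
    request.infer({port: data})
elapsed = time.perf_counter() - t0
print(f"{n / elapsed:.2f} FPS, {elapsed / n * 1e3:.2f} ms mean latency")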

ONNX Runtime 1.17 - Device: CPU (inference time cost in ms; fewer is better)

Model: GPT-2 - Executor: Parallel: 19.87 (SE +/- 0.03, N = 3)
Model: GPT-2 - Executor: Standard: 19.56 (SE +/- 0.19, N = 3)
Model: T5 Encoder - Executor: Parallel: 17.85 (SE +/- 0.01, N = 3)
Model: T5 Encoder - Executor: Standard: 17.36 (SE +/- 0.01, N = 3)
Model: bertsquad-12 - Executor: Parallel: 187.69 (SE +/- 2.89, N = 15)
Model: bertsquad-12 - Executor: Standard: 130.81 (SE +/- 8.08, N = 14)
Model: CaffeNet 12-int8 - Executor: Parallel: 4.77472 (SE +/- 0.05286, N = 3)
Model: CaffeNet 12-int8 - Executor: Standard: 4.28771 (SE +/- 0.04898, N = 3)
Model: fcn-resnet101-11 - Executor: Parallel: 1975.36 (SE +/- 57.61, N = 15)
Model: fcn-resnet101-11 - Executor: Standard: 1353.24 (SE +/- 26.80, N = 15)
Model: ArcFace ResNet-100 - Executor: Parallel: 86.70 (SE +/- 0.89, N = 6)
Model: ArcFace ResNet-100 - Executor: Standard: 57.79 (SE +/- 0.65, N = 5)
Model: ResNet50 v1-12-int8 - Executor: Parallel: 8.19737 (SE +/- 0.09036, N = 3)
Model: ResNet50 v1-12-int8 - Executor: Standard: 7.00536 (SE +/- 0.04100, N = 3)
Model: super-resolution-10 - Executor: Parallel: 26.29 (SE +/- 0.28, N = 4)
Model: super-resolution-10 - Executor: Standard: 19.21 (SE +/- 0.90, N = 15)
Model: Faster R-CNN R-50-FPN-int8 - Executor: Parallel: 225.47 (SE +/- 1.70, N = 3)
Model: Faster R-CNN R-50-FPN-int8 - Executor: Standard: 196.43 (SE +/- 2.83, N = 12)

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt -lpthread -pthread (common to all ONNX Runtime results)
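
The Parallel/Standard split appears to correspond to ONNX Runtime's parallel vs. sequential execution modes, set through SessionOptions. A hedged sketch of measuring per-inference cost on CPU (not the Phoronix harness; "model.onnx" is a placeholder, and filling dynamic dimensions with 1 is a simplification):

import time
import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL  # ORT_SEQUENTIAL for "Standard"
sess = ort.InferenceSession("model.onnx", opts, providers=["CPUExecutionProvider"])

inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # fill dynamic dims with 1
data = np.random.rand(*shape).astype(np.float32)

for _ in range(3):                      # warm-up
    sess.run(None, {inp.name: data})

n = 50
t0 = time.perf_counter()
for _ in range(n):
    sess.run(None, {inp.name: data})
print(f"{(time.perf_counter() - t0) / n * 1e3:.2f} ms per inference")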


Phoronix Test Suite v10.8.5