H610-i312100-1

Intel Core i3-12100 testing with an ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS) motherboard and Intel ADL-S GT1 3GB graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2408228-HERT-H610I3171
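
If the Phoronix Test Suite is not yet installed, a minimal sketch of a local comparison run might look like the following (the Debian/Ubuntu package name is an assumption; the result ID is the one quoted above):

  # Install the Phoronix Test Suite (package name assumed for Debian/Ubuntu repositories)
  sudo apt-get install phoronix-test-suite
  # Fetch this result file and run the same tests locally for a side-by-side comparison
  phoronix-test-suite benchmark 2408228-HERT-H610I3171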
Run Management

Result Identifier: Intel ADL-S GT1 - Intel Core i3-12100
Date Run: August 20
Test Duration: 2 Days, 17 Minutes


H610-i312100-1 Suite 1.0.0 (System): Test suite extracted from H610-i312100-1.

pts/scikit-learn-2.0.0 online_ocsvm.py Benchmark: SGDOneClassSVM
pts/scikit-learn-2.0.0 isolation_forest.py Benchmark: Isolation Forest
pts/whisper-cpp-1.1.0 -m models/ggml-medium.en.bin -f ../2016-state-of-the-union.wav Model: ggml-medium.en - Input: 2016 State of the Union
pts/scikit-learn-2.0.0 plot_fastkmeans.py Benchmark: Plot Fast KMeans
pts/deepsparse-1.7.0 zoo:llama2-7b-llama2_chat_llama2_pretrain-base_quantized --scenario async Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
pts/scikit-learn-2.0.0 random_projections.py --n-times 100 Benchmark: Sparse Random Projections / 100 Iterations
pts/scikit-learn-2.0.0 kernel_pca_solvers_time_vs_n_components.py Benchmark: Kernel PCA Solvers / Time vs. N Components
pts/caffe-1.5.0 --model=../models/bvlc_googlenet/deploy.prototxt -iterations 1000 Model: GoogleNet - Acceleration: CPU - Iterations: 1000
pts/scikit-learn-2.0.0 saga.py Benchmark: SAGA
pts/deepsparse-1.7.0 zoo:llama2-7b-llama2_chat_llama2_pretrain-base_quantized --scenario sync Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
pts/scikit-learn-2.0.0 glm.py Benchmark: GLM
pts/scikit-learn-2.0.0 hist_gradient_boosting_higgsboson.py Benchmark: Hist Gradient Boosting Higgs Boson
pts/whisper-cpp-1.1.0 -m models/ggml-small.en.bin -f ../2016-state-of-the-union.wav Model: ggml-small.en - Input: 2016 State of the Union
pts/scikit-learn-2.0.0 lasso.py Benchmark: Lasso
pts/scikit-learn-2.0.0 covertype.py Benchmark: Covertype Dataset Benchmark
pts/plaidml-1.0.4 --no-fp16 --no-train resnet50 CPU FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
pts/mlpack-1.0.2 SCIKIT_QDA Benchmark: scikit_qda
pts/mlpack-1.0.2 SCIKIT_LINEARRIDGEREGRESSION Benchmark: scikit_linearridgeregression
pts/scikit-learn-2.0.0 hist_gradient_boosting_threading.py Benchmark: Hist Gradient Boosting Threading
pts/caffe-1.5.0 --model=../models/bvlc_alexnet/deploy.prototxt -iterations 1000 Model: AlexNet - Acceleration: CPU - Iterations: 1000
pts/scikit-learn-2.0.0 tsne_mnist.py Benchmark: TSNE MNIST Dataset
pts/scikit-learn-2.0.0 isotonic.py --iterations 100 --log_min_problem_size 1 --log_max_problem_size 10 --dataset perturbed_logarithm Benchmark: Isotonic / Perturbed Logarithm
pts/scikit-learn-2.0.0 isotonic.py --iterations 100 --log_min_problem_size 1 --log_max_problem_size 10 --dataset pathological Benchmark: Isotonic / Pathological
pts/pytorch-1.1.0 cpu 512 efficientnet_v2_l Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
pts/pytorch-1.1.0 cpu 256 efficientnet_v2_l Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
pts/pytorch-1.1.0 cpu 64 efficientnet_v2_l Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
pts/pytorch-1.1.0 cpu 16 efficientnet_v2_l Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
pts/pytorch-1.1.0 cpu 32 efficientnet_v2_l Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
pts/scikit-learn-2.0.0 kernel_pca_solvers_time_vs_n_samples.py Benchmark: Kernel PCA Solvers / Time vs. N Samples
pts/pytorch-1.1.0 cpu 32 resnet50 Device: CPU - Batch Size: 32 - Model: ResNet-50
pts/scikit-learn-2.0.0 feature_expansions.py Benchmark: Feature Expansions
pts/onnx-1.17.0 fcn-resnet101-11/model.onnx -e cpu -P Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
pts/scikit-learn-2.0.0 plot_polynomial_kernel_approximation.py Benchmark: Plot Polynomial Kernel Approximation
pts/onnx-1.17.0 fcn-resnet101-11/model.onnx -e cpu Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
pts/xnnpack-1.0.0 Model: QU8MobileNetV3Small
pts/xnnpack-1.0.0 Model: QU8MobileNetV3Large
pts/xnnpack-1.0.0 Model: QU8MobileNetV2
pts/xnnpack-1.0.0 Model: FP16MobileNetV3Small
pts/xnnpack-1.0.0 Model: FP16MobileNetV3Large
pts/xnnpack-1.0.0 Model: FP16MobileNetV2
pts/xnnpack-1.0.0 Model: FP32MobileNetV3Small
pts/xnnpack-1.0.0 Model: FP32MobileNetV3Large
pts/xnnpack-1.0.0 Model: FP32MobileNetV2
pts/scikit-learn-2.0.0 isotonic.py --iterations 100 --log_min_problem_size 1 --log_max_problem_size 10 --dataset logistic Benchmark: Isotonic / Logistic
pts/onnx-1.17.0 bertsquad-12/bertsquad-12.onnx -e cpu -P Model: bertsquad-12 - Device: CPU - Executor: Parallel
pts/scikit-learn-2.0.0 plot_lasso_path.py Benchmark: Plot Lasso Path
pts/onnx-1.17.0 super_resolution/super_resolution.onnx -e cpu Model: super-resolution-10 - Device: CPU - Executor: Standard
pts/onnx-1.17.0 bertsquad-12/bertsquad-12.onnx -e cpu Model: bertsquad-12 - Device: CPU - Executor: Standard
pts/pytorch-1.1.0 cpu 512 resnet152 Device: CPU - Batch Size: 512 - Model: ResNet-152
pts/pytorch-1.1.0 cpu 64 resnet152 Device: CPU - Batch Size: 64 - Model: ResNet-152
pts/pytorch-1.1.0 cpu 256 resnet152 Device: CPU - Batch Size: 256 - Model: ResNet-152
pts/pytorch-1.1.0 cpu 32 resnet152 Device: CPU - Batch Size: 32 - Model: ResNet-152
pts/pytorch-1.1.0 cpu 16 resnet152 Device: CPU - Batch Size: 16 - Model: ResNet-152
pts/scikit-learn-2.0.0 plot_svd.py Benchmark: Plot Singular Value Decomposition
pts/scikit-learn-2.0.0 plot_hierarchical.py Benchmark: Plot Hierarchical
pts/plaidml-1.0.4 --no-fp16 --no-train vgg16 CPU FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
pts/onnx-1.17.0 FasterRCNN-12-int8/FasterRCNN-12-int8.onnx -e cpu Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
pts/whisper-cpp-1.1.0 -m models/ggml-base.en.bin -f ../2016-state-of-the-union.wav Model: ggml-base.en - Input: 2016 State of the Union
pts/numenta-nab-1.1.1 -d knncad Detector: KNN CAD
pts/pytorch-1.1.0 cpu 1 efficientnet_v2_l Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
pts/scikit-learn-2.0.0 tree.py Benchmark: Tree
pts/numenta-nab-1.1.1 -d earthgeckoSkyline Detector: Earthgecko Skyline
pts/caffe-1.5.0 --model=../models/bvlc_googlenet/deploy.prototxt -iterations 200 Model: GoogleNet - Acceleration: CPU - Iterations: 200
pts/numenta-nab-1.1.1 -d bayesChangePt Detector: Bayesian Changepoint
pts/scikit-learn-2.0.0 plot_omp_lars.py Benchmark: Plot OMP vs. LARS
pts/mnn-2.9.0 Model: inception-v3
pts/mnn-2.9.0 Model: mobilenet-v1-1.0
pts/mnn-2.9.0 Model: MobileNetV2_224
pts/mnn-2.9.0 Model: SqueezeNetV1.0
pts/mnn-2.9.0 Model: resnet-v2-50
pts/mnn-2.9.0 Model: squeezenetv1.1
pts/mnn-2.9.0 Model: mobilenetV3
pts/mnn-2.9.0 Model: nasnet
pts/ncnn-1.5.0 -1 Target: CPU - Model: FastestDet
pts/ncnn-1.5.0 -1 Target: CPU - Model: vision_transformer
pts/ncnn-1.5.0 -1 Target: CPU - Model: regnety_400m
pts/ncnn-1.5.0 -1 Target: CPU - Model: squeezenet_ssd
pts/ncnn-1.5.0 -1 Target: CPU - Model: yolov4-tiny
pts/ncnn-1.5.0 -1 Target: CPU - Model: resnet50
pts/ncnn-1.5.0 -1 Target: CPU - Model: alexnet
pts/ncnn-1.5.0 -1 Target: CPU - Model: resnet18
pts/ncnn-1.5.0 -1 Target: CPU - Model: vgg16
pts/ncnn-1.5.0 -1 Target: CPU - Model: googlenet
pts/ncnn-1.5.0 -1 Target: CPU - Model: blazeface
pts/ncnn-1.5.0 -1 Target: CPU - Model: efficientnet-b0
pts/ncnn-1.5.0 -1 Target: CPU - Model: mnasnet
pts/ncnn-1.5.0 -1 Target: CPU - Model: shufflenet-v2
pts/ncnn-1.5.0 -1 Target: CPU-v3-v3 - Model: mobilenet-v3
pts/ncnn-1.5.0 -1 Target: CPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.5.0 -1 Target: CPU - Model: mobilenet
pts/tnn-1.1.0 -dt NAIVE -mp ../benchmark/benchmark-model/densenet.tnnproto Target: CPU - Model: DenseNet
pts/scikit-learn-2.0.0 sgd_regression.py Benchmark: SGD Regression
pts/scikit-learn-2.0.0 plot_neighbors.py Benchmark: Plot Neighbors
pts/pytorch-1.1.0 cpu 16 resnet50 Device: CPU - Batch Size: 16 - Model: ResNet-50
pts/pytorch-1.1.0 cpu 64 resnet50 Device: CPU - Batch Size: 64 - Model: ResNet-50
pts/pytorch-1.1.0 cpu 512 resnet50 Device: CPU - Batch Size: 512 - Model: ResNet-50
pts/pytorch-1.1.0 cpu 256 resnet50 Device: CPU - Batch Size: 256 - Model: ResNet-50
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: FastestDet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: vision_transformer
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: regnety_400m
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: squeezenet_ssd
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: yolov4-tiny
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: resnet50
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: alexnet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: resnet18
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: vgg16
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: googlenet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: blazeface
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: efficientnet-b0
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: mnasnet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: shufflenet-v2
pts/ncnn-1.5.0 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
pts/ncnn-1.5.0 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: mobilenet
pts/numpy-1.2.1
pts/scikit-learn-2.0.0 hist_gradient_boosting.py Benchmark: Hist Gradient Boosting
pts/scikit-learn-2.0.0 sample_without_replacement.py Benchmark: Sample Without Replacement
pts/onnx-1.17.0 resnet100/resnet100.onnx -e cpu -P Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
pts/scikit-learn-2.0.0 lof.py Benchmark: LocalOutlierFactor
pts/opencv-1.3.0 dnn Test: DNN - Deep Neural Network
pts/onednn-3.4.0 --deconv --batch=inputs/deconv/shapes_1d --engine=cpu Harness: Deconvolution Batch shapes_1d - Engine: CPU
pts/caffe-1.5.0 --model=../models/bvlc_googlenet/deploy.prototxt -iterations 100 Model: GoogleNet - Acceleration: CPU - Iterations: 100
pts/onnx-1.17.0 resnet100/resnet100.onnx -e cpu Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
pts/pytorch-1.1.0 cpu 1 resnet152 Device: CPU - Batch Size: 1 - Model: ResNet-152
pts/scikit-learn-2.0.0 sparsify.py Benchmark: Sparsify
pts/onednn-3.4.0 --rnn --batch=inputs/rnn/perf_rnn_training --engine=cpu Harness: Recurrent Neural Network Training - Engine: CPU
pts/caffe-1.5.0 --model=../models/bvlc_alexnet/deploy.prototxt -iterations 200 Model: AlexNet - Acceleration: CPU - Iterations: 200
pts/mlpack-1.0.2 SCIKIT_ICA Benchmark: scikit_ica
pts/scikit-learn-2.0.0 mnist.py Benchmark: MNIST Dataset
pts/onednn-3.4.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --engine=cpu Harness: Recurrent Neural Network Inference - Engine: CPU
pts/scikit-learn-2.0.0 hist_gradient_boosting_adult.py Benchmark: Hist Gradient Boosting Adult
pts/onnx-1.17.0 super_resolution/super_resolution.onnx -e cpu -P Model: super-resolution-10 - Device: CPU - Executor: Parallel
pts/deepspeech-1.0.0 CPU Acceleration: CPU
pts/scikit-learn-2.0.0 plot_ward.py Benchmark: Plot Ward
pts/scikit-learn-2.0.0 text_vectorizers.py Benchmark: Text Vectorizers
pts/openvino-1.5.0 -m models/intel/face-detection-0206/FP16/face-detection-0206.xml -d CPU Model: Face Detection FP16 - Device: CPU
pts/scikit-learn-2.0.0 20newsgroups.py -e logistic_regression Benchmark: 20 Newsgroups / Logistic Regression
pts/pytorch-1.1.0 cpu 1 resnet50 Device: CPU - Batch Size: 1 - Model: ResNet-50
pts/openvino-1.5.0 -m models/intel/face-detection-0206/FP16-INT8/face-detection-0206.xml -d CPU Model: Face Detection FP16-INT8 - Device: CPU
pts/onnx-1.17.0 GPT2/model.onnx -e cpu -P Model: GPT-2 - Device: CPU - Executor: Parallel
pts/onnx-1.17.0 FasterRCNN-12-int8/FasterRCNN-12-int8.onnx -e cpu -P Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
pts/openvino-1.5.0 -m models/intel/person-detection-0303/FP16/person-detection-0303.xml -d CPU Model: Person Detection FP16 - Device: CPU
pts/openvino-1.5.0 -m models/intel/person-detection-0303/FP32/person-detection-0303.xml -d CPU Model: Person Detection FP32 - Device: CPU
pts/onnx-1.17.0 GPT2/model.onnx -e cpu Model: GPT-2 - Device: CPU - Executor: Standard
pts/openvino-1.5.0 -m models/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.xml -d CPU Model: Machine Translation EN To DE FP16 - Device: CPU
pts/onnx-1.17.0 t5-encoder/t5-encoder.onnx -e cpu -P Model: T5 Encoder - Device: CPU - Executor: Parallel
pts/tensorflow-lite-1.1.0 --graph=inception_v4.tflite Model: Inception V4
pts/tensorflow-lite-1.1.0 --graph=inception_resnet_v2.tflite Model: Inception ResNet V2
pts/onnx-1.17.0 t5-encoder/t5-encoder.onnx -e cpu Model: T5 Encoder - Device: CPU - Executor: Standard
pts/openvino-1.5.0 -m models/intel/road-segmentation-adas-0001/FP16-INT8/road-segmentation-adas-0001.xml -d CPU Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
pts/openvino-1.5.0 -m models/intel/road-segmentation-adas-0001/FP16/road-segmentation-adas-0001.xml -d CPU Model: Road Segmentation ADAS FP16 - Device: CPU
pts/tensorflow-lite-1.1.0 --graph=nasnet_mobile.tflite Model: NASNet Mobile
pts/tensorflow-lite-1.1.0 --graph=mobilenet_v1_1.0_224.tflite Model: Mobilenet Float
pts/tensorflow-lite-1.1.0 --graph=squeezenet.tflite Model: SqueezeNet
pts/openvino-1.5.0 -m models/intel/noise-suppression-poconetlike-0001/FP16/noise-suppression-poconetlike-0001.xml -d CPU Model: Noise Suppression Poconet-Like FP16 - Device: CPU
pts/tensorflow-lite-1.1.0 --graph=mobilenet_v1_1.0_224_quant.tflite Model: Mobilenet Quant
pts/openvino-1.5.0 -m models/intel/person-vehicle-bike-detection-2004/FP16/person-vehicle-bike-detection-2004.xml -d CPU Model: Person Vehicle Bike Detection FP16 - Device: CPU
pts/scikit-learn-2.0.0 plot_incremental_pca.py Benchmark: Plot Incremental PCA
pts/openvino-1.5.0 -m models/intel/handwritten-english-recognition-0001/FP16-INT8/handwritten-english-recognition-0001.xml -d CPU Model: Handwritten English Recognition FP16-INT8 - Device: CPU
pts/openvino-1.5.0 -m models/intel/person-reidentification-retail-0277/FP16/person-reidentification-retail-0277.xml -d CPU Model: Person Re-Identification Retail FP16 - Device: CPU
pts/openvino-1.5.0 -m models/intel/handwritten-english-recognition-0001/FP16/handwritten-english-recognition-0001.xml -d CPU Model: Handwritten English Recognition FP16 - Device: CPU
pts/openvino-1.5.0 -m models/intel/vehicle-detection-0202/FP16-INT8/vehicle-detection-0202.xml -d CPU Model: Vehicle Detection FP16-INT8 - Device: CPU
pts/openvino-1.5.0 -m models/intel/face-detection-retail-0005/FP16-INT8/face-detection-retail-0005.xml -d CPU Model: Face Detection Retail FP16-INT8 - Device: CPU
pts/openvino-1.5.0 -m models/intel/vehicle-detection-0202/FP16/vehicle-detection-0202.xml -d CPU Model: Vehicle Detection FP16 - Device: CPU
pts/onnx-1.17.0 caffenet-12-int8/caffenet-12-int8.onnx -e cpu -P Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
pts/openvino-1.5.0 -m models/intel/weld-porosity-detection-0001/FP16/weld-porosity-detection-0001.xml -d CPU Model: Weld Porosity Detection FP16 - Device: CPU
pts/openvino-1.5.0 -m models/intel/weld-porosity-detection-0001/FP16-INT8/weld-porosity-detection-0001.xml -d CPU Model: Weld Porosity Detection FP16-INT8 - Device: CPU
pts/openvino-1.5.0 -m models/intel/face-detection-retail-0005/FP16/face-detection-retail-0005.xml -d CPU Model: Face Detection Retail FP16 - Device: CPU
pts/onnx-1.17.0 caffenet-12-int8/caffenet-12-int8.onnx -e cpu Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
pts/openvino-1.5.0 -m models/intel/age-gender-recognition-retail-0013/FP16-INT8/age-gender-recognition-retail-0013.xml -d CPU Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
pts/onnx-1.17.0 resnet50-v1-12-int8/resnet50-v1-12-int8.onnx -e cpu -P Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
pts/openvino-1.5.0 -m models/intel/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d CPU Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
pts/onnx-1.17.0 resnet50-v1-12-int8/resnet50-v1-12-int8.onnx -e cpu Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
pts/numenta-nab-1.1.1 -d relativeEntropy Detector: Relative Entropy
pts/numenta-nab-1.1.1 -d contextOSE Detector: Contextual Anomaly Detector OSE
pts/deepsparse-1.7.0 zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario async Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/scikit-learn-2.0.0 plot_nmf.py Benchmark: Plot Non-Negative Matrix Factorization
pts/deepsparse-1.7.0 zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario sync Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/caffe-1.5.0 --model=../models/bvlc_alexnet/deploy.prototxt -iterations 100 Model: AlexNet - Acceleration: CPU - Iterations: 100
pts/deepsparse-1.7.0 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario async Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario sync Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario async Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario sync Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario async Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario sync Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
pts/rbenchmark-1.0.3
pts/scikit-learn-2.0.0 hist_gradient_boosting_categorical_only.py Benchmark: Hist Gradient Boosting Categorical Only
pts/mlpack-1.0.2 SCIKIT_SVM Benchmark: scikit_svm
pts/onednn-3.4.0 --ip --batch=inputs/ip/shapes_1d --engine=cpu Harness: IP Shapes 1D - Engine: CPU
pts/tnn-1.1.0 -dt NAIVE -mp ../benchmark/benchmark-model/mobilenet_v2.tnnproto Target: CPU - Model: MobileNet v2
pts/scikit-learn-2.0.0 rcv1_logreg_convergence.py Benchmark: RCV1 Logreg Convergencet
pts/numenta-nab-1.1.1 -d windowedGaussian Detector: Windowed Gaussian
pts/tnn-1.1.0 -dt NAIVE -mp ../benchmark/benchmark-model/squeezenet_v1.1.tnnproto Target: CPU - Model: SqueezeNet v1.1
pts/onednn-3.4.0 --ip --batch=inputs/ip/shapes_3d --engine=cpu Harness: IP Shapes 3D - Engine: CPU
pts/onednn-3.4.0 --conv --batch=inputs/conv/shapes_auto --engine=cpu Harness: Convolution Batch Shapes Auto - Engine: CPU
pts/tnn-1.1.0 -dt NAIVE -mp ../benchmark/benchmark-model/shufflenet_v2.tnnproto Target: CPU - Model: SqueezeNet v2
pts/onednn-3.4.0 --deconv --batch=inputs/deconv/shapes_3d --engine=cpu Harness: Deconvolution Batch shapes_3d - Engine: CPU
pts/ai-benchmark-1.0.2
pts/scikit-learn-2.0.0 plot_parallel_pairwise.py Benchmark: Plot Parallel Pairwise
pts/ecp-candle-1.1.0 P1B2 Benchmark: P1B2
pts/spacy-1.0.0
pts/scikit-learn-2.0.0 glmnet.py Benchmark: Glmnet
pts/ecp-candle-1.1.0 P3B1 Benchmark: P3B1
pts/ecp-candle-1.1.0 P3B2 Benchmark: P3B2
pts/tensorflow-2.2.0 --device cpu --batch_size=1 --model=vgg16 Device: CPU - Batch Size: 1 - Model: VGG-16
pts/tensorflow-2.2.0 --device gpu --batch_size=16 --model=googlenet Device: GPU - Batch Size: 16 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=1 --model=googlenet Device: GPU - Batch Size: 1 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device cpu --batch_size=256 --model=alexnet Device: CPU - Batch Size: 256 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=256 --model=resnet50 Device: CPU - Batch Size: 256 - Model: ResNet-50
pts/tensorflow-2.2.0 --device gpu --batch_size=64 --model=resnet50 Device: GPU - Batch Size: 64 - Model: ResNet-50
pts/tensorflow-2.2.0 --device cpu --batch_size=32 --model=googlenet Device: CPU - Batch Size: 32 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device cpu --batch_size=16 --model=resnet50 Device: CPU - Batch Size: 16 - Model: ResNet-50
pts/tensorflow-2.2.0 --device cpu --batch_size=32 --model=vgg16 Device: CPU - Batch Size: 32 - Model: VGG-16
pts/tensorflow-2.2.0 --device cpu --batch_size=512 --model=resnet50 Device: CPU - Batch Size: 512 - Model: ResNet-50
pts/tensorflow-2.2.0 --device gpu --batch_size=32 --model=resnet50 Device: GPU - Batch Size: 32 - Model: ResNet-50
pts/tensorflow-2.2.0 --device cpu --batch_size=64 --model=resnet50 Device: CPU - Batch Size: 64 - Model: ResNet-50
pts/tensorflow-2.2.0 --device cpu --batch_size=64 --model=googlenet Device: CPU - Batch Size: 64 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device cpu --batch_size=16 --model=googlenet Device: CPU - Batch Size: 16 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=1 --model=resnet50 Device: GPU - Batch Size: 1 - Model: ResNet-50
pts/tensorflow-2.2.0 --device gpu --batch_size=512 --model=vgg16 Device: GPU - Batch Size: 512 - Model: VGG-16
pts/tensorflow-2.2.0 --device gpu --batch_size=256 --model=vgg16 Device: GPU - Batch Size: 256 - Model: VGG-16
pts/tensorflow-2.2.0 --device cpu --batch_size=16 --model=alexnet Device: CPU - Batch Size: 16 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=16 --model=vgg16 Device: CPU - Batch Size: 16 - Model: VGG-16
pts/tensorflow-2.2.0 --device cpu --batch_size=1 --model=alexnet Device: CPU - Batch Size: 1 - Model: AlexNet
pts/tensorflow-2.2.0 --device gpu --batch_size=512 --model=googlenet Device: GPU - Batch Size: 512 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=64 --model=googlenet Device: GPU - Batch Size: 64 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=16 --model=resnet50 Device: GPU - Batch Size: 16 - Model: ResNet-50
pts/tensorflow-2.2.0 --device cpu --batch_size=32 --model=resnet50 Device: CPU - Batch Size: 32 - Model: ResNet-50
pts/tensorflow-2.2.0 --device gpu --batch_size=512 --model=alexnet Device: GPU - Batch Size: 512 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=512 --model=alexnet Device: CPU - Batch Size: 512 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=1 --model=resnet50 Device: CPU - Batch Size: 1 - Model: ResNet-50
pts/tensorflow-2.2.0 --device gpu --batch_size=64 --model=alexnet Device: GPU - Batch Size: 64 - Model: AlexNet
pts/tensorflow-2.2.0 --device gpu --batch_size=512 --model=resnet50 Device: GPU - Batch Size: 512 - Model: ResNet-50
pts/tensorflow-2.2.0 --device gpu --batch_size=256 --model=resnet50 Device: GPU - Batch Size: 256 - Model: ResNet-50
pts/tensorflow-2.2.0 --device cpu --batch_size=512 --model=googlenet Device: CPU - Batch Size: 512 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=32 --model=googlenet Device: GPU - Batch Size: 32 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=256 --model=alexnet Device: GPU - Batch Size: 256 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=1 --model=googlenet Device: CPU - Batch Size: 1 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=16 --model=alexnet Device: GPU - Batch Size: 16 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=64 --model=alexnet Device: CPU - Batch Size: 64 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=512 --model=vgg16 Device: CPU - Batch Size: 512 - Model: VGG-16
pts/tensorflow-2.2.0 --device cpu --batch_size=32 --model=alexnet Device: CPU - Batch Size: 32 - Model: AlexNet
pts/tensorflow-2.2.0 --device cpu --batch_size=256 --model=vgg16 Device: CPU - Batch Size: 256 - Model: VGG-16
pts/tensorflow-2.2.0 --device gpu --batch_size=64 --model=vgg16 Device: GPU - Batch Size: 64 - Model: VGG-16
pts/tensorflow-2.2.0 --device gpu --batch_size=32 --model=vgg16 Device: GPU - Batch Size: 32 - Model: VGG-16
pts/tensorflow-2.2.0 --device gpu --batch_size=16 --model=vgg16 Device: GPU - Batch Size: 16 - Model: VGG-16
pts/tensorflow-2.2.0 --device cpu --batch_size=64 --model=vgg16 Device: CPU - Batch Size: 64 - Model: VGG-16
pts/tensorflow-2.2.0 --device gpu --batch_size=1 --model=vgg16 Device: GPU - Batch Size: 1 - Model: VGG-16
pts/tensorflow-2.2.0 --device gpu --batch_size=256 --model=googlenet Device: GPU - Batch Size: 256 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device cpu --batch_size=256 --model=googlenet Device: CPU - Batch Size: 256 - Model: GoogLeNet
pts/tensorflow-2.2.0 --device gpu --batch_size=32 --model=alexnet Device: GPU - Batch Size: 32 - Model: AlexNet
pts/tensorflow-2.2.0 --device gpu --batch_size=1 --model=alexnet Device: GPU - Batch Size: 1 - Model: AlexNet
pts/onnx-1.17.0 yolov4/yolov4.onnx -e cpu -P Model: yolov4 - Device: CPU - Executor: Parallel
pts/onnx-1.17.0 yolov4/yolov4.onnx -e cpu Model: yolov4 - Device: CPU - Executor: Standard
pts/llama-cpp-1.1.0 -m ../llama-2-7b.Q4_0.gguf Model: llama-2-7b.Q4_0.gguf
pts/llamafile-1.2.0 run-wizardcoder --gpu DISABLE Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
pts/llamafile-1.2.0 run-mistral --gpu DISABLE Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
pts/llamafile-1.2.0 run-llava --gpu DISABLE Test: llava-v1.5-7b-q4 - Acceleration: CPU
pts/llama-cpp-1.1.0 -m ../llama-2-70b-chat.Q5_0.gguf Model: llama-2-70b-chat.Q5_0.gguf
pts/llama-cpp-1.1.0 -m ../llama-2-13b.Q4_0.gguf Model: llama-2-13b.Q4_0.gguf
pts/rnnoise-1.1.0 sample-audio-long.raw Input: 26 Minute Long Talking Sample
pts/lczero-1.8.0 -b blas Backend: BLAS
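
Each entry above pairs a test profile (pts/...) with the arguments and option description used for this run. As a hedged sketch, an individual profile from the list could also be installed and run on its own rather than replaying the full result file; the profile name below is taken from the list, and the exact prompts for sub-test options depend on the Phoronix Test Suite version in use:

  # Install and run a single test profile from this suite (sketch; sub-test options are selected at run time)
  phoronix-test-suite benchmark pts/scikit-learn-2.0.0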