24.03.13.Pop.2204.ML.test1

AMD Ryzen 9 7950X 16-Core testing with an ASUS ProArt X670E-CREATOR WIFI (1710 BIOS) and a Zotac NVIDIA GeForce RTX 4070 Ti 12GB on Pop 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403157-NE-240313POP28
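In a typical setup, that single command has the Phoronix Test Suite fetch this result file, install the component tests, and run them locally for a side-by-side comparison; for example (assuming the Phoronix Test Suite is already installed):

  phoronix-test-suite benchmark 2403157-NE-240313POP28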
Result Identifier: Initial test 1 No water cool
Date: March 13
Test Run Duration: 2 Days, 55 Minutes


24.03.13.Pop.2204.ML.test1 Suite 1.0.0 - System Test suite extracted from 24.03.13.Pop.2204.ML.test1. The suite covers the following test profiles and configurations:

pts/opencv-1.3.0
  Test: DNN - Deep Neural Network

pts/mlpack-1.0.2
  Benchmark: scikit_linearridgeregression
  Benchmark: scikit_svm
  Benchmark: scikit_qda
  Benchmark: scikit_ica

pts/ai-benchmark-1.0.2
  Device AI Score
  Device Training Score
  Device Inference Score

pts/numenta-nab-1.1.1
  Detector: Contextual Anomaly Detector OSE
  Detector: Bayesian Changepoint
  Detector: Earthgecko Skyline
  Detector: Windowed Gaussian
  Detector: Relative Entropy
  Detector: KNN CAD

pts/openvino-1.5.0 (Device: CPU for all models)
  Model: Age Gender Recognition Retail 0013 FP16 and FP16-INT8
  Model: Handwritten English Recognition FP16 and FP16-INT8
  Model: Person Re-Identification Retail FP16
  Model: Noise Suppression Poconet-Like FP16
  Model: Person Vehicle Bike Detection FP16
  Model: Weld Porosity Detection FP16 and FP16-INT8
  Model: Machine Translation EN To DE FP16
  Model: Road Segmentation ADAS FP16 and FP16-INT8
  Model: Face Detection Retail FP16 and FP16-INT8
  Model: Vehicle Detection FP16 and FP16-INT8
  Model: Person Detection FP16 and FP32
  Model: Face Detection FP16 and FP16-INT8

pts/tnn-1.1.0 (Target: CPU)
  Model: SqueezeNet v1.1
  Model: SqueezeNet v2
  Model: MobileNet v2
  Model: DenseNet

pts/ncnn-1.5.0 (Target: Vulkan GPU and Target: CPU, each with the models below)
  Model: FastestDet, vision_transformer, regnety_400m, squeezenet_ssd, yolov4-tiny, mobilenetv2-yolov3, resnet50, alexnet, resnet18, vgg16, googlenet, blazeface, efficientnet-b0, mnasnet, shufflenet-v2, mobilenet-v3, mobilenet-v2, mobilenet

pts/mnn-2.1.0
  Model: inception-v3, mobilenet-v1-1.0, MobileNetV2_224, SqueezeNetV1.0, resnet-v2-50, squeezenetv1.1, mobilenetV3, nasnet

pts/spacy-1.0.0
  Model: en_core_web_trf
  Model: en_core_web_lg

pts/deepsparse-1.6.0 (each model in both Synchronous Single-Stream and Asynchronous Multi-Stream scenarios)
  Model: NLP Token Classification, BERT base uncased conll2003
  Model: BERT-Large, NLP Question Answering, Sparse INT8
  Model: CV Segmentation, 90% Pruned YOLACT Pruned
  Model: NLP Text Classification, DistilBERT mnli
  Model: CV Detection, YOLOv5s COCO, Sparse INT8
  Model: CV Classification, ResNet-50 ImageNet
  Model: BERT-Large, NLP Question Answering
  Model: CV Detection, YOLOv5s COCO
  Model: ResNet-50, Sparse INT8
  Model: ResNet-50, Baseline
  Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8
  Model: NLP Document Classification, oBERT base uncased on IMDB

pts/tensorflow-2.1.1 (all combinations of the following)
  Device: CPU, GPU - Batch Size: 1, 16, 32, 64, 256, 512 - Model: AlexNet, GoogLeNet, ResNet-50, VGG-16

pts/pytorch-1.0.1 (all combinations of the following)
  Device: CPU, NVIDIA CUDA GPU - Batch Size: 1, 16, 32, 64, 256, 512 - Model: ResNet-50, ResNet-152, Efficientnet_v2_l

pts/tensorflow-lite-1.1.0
  Model: Inception ResNet V2, Mobilenet Quant, Mobilenet Float, NASNet Mobile, Inception V4, SqueezeNet

pts/rnnoise-1.0.2

pts/deepspeech-1.0.0
  Acceleration: CPU

pts/numpy-1.2.1

pts/onednn-3.4.0 (Engine: CPU)
  Harness: Recurrent Neural Network Inference
  Harness: Recurrent Neural Network Training
  Harness: Deconvolution Batch shapes_3d
  Harness: Deconvolution Batch shapes_1d
  Harness: Convolution Batch Shapes Auto
  Harness: IP Shapes 3D
  Harness: IP Shapes 1D

pts/shoc-1.2.0 (Target: OpenCL)
  Benchmark: Texture Read Bandwidth, Bus Speed Readback, Bus Speed Download, Max SP Flops, GEMM SGEMM_N, Reduction, MD5 Hash, FFT SP, Triad, S3D

pts/llamafile-1.0.0 (Acceleration: CPU)
  Test: wizardcoder-python-34b-v1.0.Q6_K
  Test: mistral-7b-instruct-v0.2.Q8_0
  Test: llava-v1.5-7b-q4

pts/llama-cpp-1.0.0
  Model: llama-2-70b-chat.Q5_0.gguf
  Model: llama-2-13b.Q4_0.gguf
  Model: llama-2-7b.Q4_0.gguf

pts/whisper-cpp-1.0.0 (Input: 2016 State of the Union)
  Model: ggml-medium.en
  Model: ggml-small.en
  Model: ggml-base.en

pts/scikit-learn-2.0.0
  Benchmark: Sparse Random Projections / 100 Iterations, Kernel PCA Solvers / Time vs. N Components, Kernel PCA Solvers / Time vs. N Samples, Hist Gradient Boosting Categorical Only, Plot Non-Negative Matrix Factorization, Plot Polynomial Kernel Approximation, 20 Newsgroups / Logistic Regression, Hist Gradient Boosting Higgs Boson, Plot Singular Value Decomposition, Hist Gradient Boosting Threading, Isotonic / Perturbed Logarithm, Hist Gradient Boosting Adult, Covertype Dataset Benchmark, Sample Without Replacement, RCV1 Logreg Convergence, Isotonic / Pathological, Plot Parallel Pairwise, Hist Gradient Boosting, Plot Incremental PCA, Isotonic / Logistic, TSNE MNIST Dataset, LocalOutlierFactor, Feature Expansions, Plot OMP vs. LARS, Plot Hierarchical, Text Vectorizers, Plot Fast KMeans, Isolation Forest, Plot Lasso Path, SGDOneClassSVM, SGD Regression, Plot Neighbors, MNIST Dataset, Plot Ward, Sparsify, Glmnet, Lasso, Tree, SAGA, GLM

pts/onnx-1.17.0 (Device: CPU; each model with both Standard and Parallel executors)
  Model: Faster R-CNN R-50-FPN-int8
  Model: super-resolution-10
  Model: ResNet50 v1-12-int8
  Model: ArcFace ResNet-100
  Model: fcn-resnet101-11
  Model: CaffeNet 12-int8
  Model: bertsquad-12
  Model: T5 Encoder
  Model: yolov4
  Model: GPT-2

pts/plaidml-1.0.4 (FP16: No - Mode: Inference - Device: CPU)
  Network: ResNet 50
  Network: VGG16

pts/caffe-1.5.0 (Acceleration: CPU; each model at 100, 200, and 1000 iterations)
  Model: GoogleNet
  Model: AlexNet

pts/rbenchmark-1.0.3

pts/lczero-1.7.0
  Backend: BLAS
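
Individual profiles from this suite can also be installed and benchmarked on their own; a minimal sketch using one of the profile names listed above (the usual interactive prompts for test options still apply):

  # Install and run a single profile from the suite, e.g. the OpenVINO CPU tests
  phoronix-test-suite install pts/openvino-1.5.0
  phoronix-test-suite benchmark pts/openvino-1.5.0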