24.03.13.Pop.2204.ML.test1

AMD Ryzen 9 7950X 16-Core testing with an ASUS ProArt X670E-CREATOR WIFI (1710 BIOS) and Zotac NVIDIA GeForce RTX 4070 Ti 12GB on Pop 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403157-NE-240313POP28
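
For scripted or repeated comparisons, the same command can be driven from Python. This is a minimal sketch, assuming the Phoronix Test Suite is already installed and on the PATH; the result ID is the one published above, and the script name is hypothetical.

    # run_comparison.py - hypothetical helper, not part of the published result
    import subprocess

    # Ask the Phoronix Test Suite to benchmark the local system against the
    # public result file 2403157-NE-240313POP28 (the ID given above).
    subprocess.run(
        ["phoronix-test-suite", "benchmark", "2403157-NE-240313POP28"],
        check=True,  # raise if the suite exits with an error
    )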
Run Management

Result Identifier: Initial test 1 No water cool
Date Run: March 13
Test Duration: 2 Days, 55 Minutes


System Information (Initial test 1 No water cool)

Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard: ASUS ProArt X670E-CREATOR WIFI (1710 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 16 GB DDR5-4800MT/s G Skill F5-6000J3636F16G
Disk: 1000GB PNY CS2130 1TB SSD
Graphics: Zotac NVIDIA GeForce RTX 4070 Ti 12GB
Audio: NVIDIA Device 22bc
Monitor: 2 x DELL 2001FP
Network: Intel I225-V + Aquantia AQtion AQC113CS NBase-T/IEEE + MEDIATEK MT7922 802.11ax PCI
OS: Pop 22.04
Kernel: 6.6.10-76060610-generic (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA 550.54.14
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.4.89
Vulkan: 1.3.277
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3200x1200

Benchmark Results (Initial test 1 No water cool)

Each row keeps the exported CSV layout: quoted test name, result direction (LIB = lower is better, HIB = higher is better), and the measured value.

"OpenCV - Test: DNN - Deep Neural Network (ms)",LIB,30277
"Mlpack Benchmark - Benchmark: scikit_linearridgeregression (sec)",LIB,1.03
"Mlpack Benchmark - Benchmark: scikit_svm (sec)",LIB,15.12
"Mlpack Benchmark - Benchmark: scikit_qda (sec)",LIB,34.07
"Mlpack Benchmark - Benchmark: scikit_ica (sec)",LIB,30.12
"AI Benchmark Alpha - Device AI Score (Score)",HIB,6473
"AI Benchmark Alpha - Device Training Score (Score)",HIB,3573
"AI Benchmark Alpha - Device Inference Score (Score)",HIB,2900
"Numenta Anomaly Benchmark - Detector: Contextual Anomaly Detector OSE (sec)",LIB,25.400
"Numenta Anomaly Benchmark - Detector: Bayesian Changepoint (sec)",LIB,13.213
"Numenta Anomaly Benchmark - Detector: Earthgecko Skyline (sec)",LIB,55.566
"Numenta Anomaly Benchmark - Detector: Windowed Gaussian (sec)",LIB,4.984
"Numenta Anomaly Benchmark - Detector: Relative Entropy (sec)",LIB,8.281
"Numenta Anomaly Benchmark - Detector: KNN CAD (sec)",LIB,105.001
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms)",LIB,0.31
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS)",HIB,46025.94
"OpenVINO - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms)",LIB,21.88
"OpenVINO - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS)",HIB,729.99
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms)",LIB,0.45
"OpenVINO - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS)",HIB,32402.42
"OpenVINO - Model: Person Re-Identification Retail FP16 - Device: CPU (ms)",LIB,4.46
"OpenVINO - Model: Person Re-Identification Retail FP16 - Device: CPU (FPS)",HIB,1785.48
"OpenVINO - Model: Handwritten English Recognition FP16 - Device: CPU (ms)",LIB,23.94
"OpenVINO - Model: Handwritten English Recognition FP16 - Device: CPU (FPS)",HIB,667.29
"OpenVINO - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (ms)",LIB,11.33
"OpenVINO - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (FPS)",HIB,1386.93
"OpenVINO - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms)",LIB,5.53
"OpenVINO - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS)",HIB,1442.07
"OpenVINO - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms)",LIB,6.44
"OpenVINO - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS)",HIB,2470.86
"OpenVINO - Model: Machine Translation EN To DE FP16 - Device: CPU (ms)",LIB,65.52
"OpenVINO - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS)",HIB,121.96
"OpenVINO - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms)",LIB,17.81
"OpenVINO - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS)",HIB,448.28
"OpenVINO - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms)",LIB,3.61
"OpenVINO - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS)",HIB,4335.66
"OpenVINO - Model: Weld Porosity Detection FP16 - Device: CPU (ms)",LIB,12.61
"OpenVINO - Model: Weld Porosity Detection FP16 - Device: CPU (FPS)",HIB,1266.85
"OpenVINO - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms)",LIB,5.18
"OpenVINO - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS)",HIB,1538.27
"OpenVINO - Model: Road Segmentation ADAS FP16 - Device: CPU (ms)",LIB,29.32
"OpenVINO - Model: Road Segmentation ADAS FP16 - Device: CPU (FPS)",HIB,272.36
"OpenVINO - Model: Face Detection Retail FP16 - Device: CPU (ms)",LIB,2.53
"OpenVINO - Model: Face Detection Retail FP16 - Device: CPU (FPS)",HIB,3062.63
"OpenVINO - Model: Face Detection FP16-INT8 - Device: CPU (ms)",LIB,323.12
"OpenVINO - Model: Face Detection FP16-INT8 - Device: CPU (FPS)",HIB,24.71
"OpenVINO - Model: Vehicle Detection FP16 - Device: CPU (ms)",LIB,12.91
"OpenVINO - Model: Vehicle Detection FP16 - Device: CPU (FPS)",HIB,618.41
"OpenVINO - Model: Person Detection FP32 - Device: CPU (ms)",LIB,103.09
"OpenVINO - Model: Person Detection FP32 - Device: CPU (FPS)",HIB,77.54
"OpenVINO - Model: Person Detection FP16 - Device: CPU (ms)",LIB,104.27
"OpenVINO - Model: Person Detection FP16 - Device: CPU (FPS)",HIB,76.65
"OpenVINO - Model: Face Detection FP16 - Device: CPU (ms)",LIB,625.66
"OpenVINO - Model: Face Detection FP16 - Device: CPU (FPS)",HIB,12.75
"TNN - Target: CPU - Model: SqueezeNet v1.1 (ms)",LIB,179.679
"TNN - Target: CPU - Model: SqueezeNet v2 (ms)",LIB,42.215
"TNN - Target: CPU - Model: MobileNet v2 (ms)",LIB,183.277
"TNN - Target: CPU - Model: DenseNet (ms)",LIB,2005.606
"NCNN - Target: Vulkan GPU - Model: FastestDet (ms)",LIB,4.30
"NCNN - Target: Vulkan GPU - Model: vision_transformer (ms)",LIB,38.12
"NCNN - Target: Vulkan GPU - Model: regnety_400m (ms)",LIB,9.83
"NCNN - Target: Vulkan GPU - Model: squeezenet_ssd (ms)",LIB,8.38
"NCNN - Target: Vulkan GPU - Model: yolov4-tiny (ms)",LIB,15.84
"NCNN - Target: Vulkan GPU - Model: mobilenetv2-yolov3 (ms)",LIB,9.36
"NCNN - Target: Vulkan GPU - Model: resnet50 (ms)",LIB,13.16
"NCNN - Target: Vulkan GPU - Model: alexnet (ms)",LIB,5.67
"NCNN - Target: Vulkan GPU - Model: resnet18 (ms)",LIB,6.65
"NCNN - Target: Vulkan GPU - Model: vgg16 (ms)",LIB,32.31
"NCNN - Target: Vulkan GPU - Model: googlenet (ms)",LIB,9.69
"NCNN - Target: Vulkan GPU - Model: blazeface (ms)",LIB,1.62
"NCNN - Target: Vulkan GPU - Model: efficientnet-b0 (ms)",LIB,4.57
"NCNN - Target: Vulkan GPU - Model: mnasnet (ms)",LIB,3.45
"NCNN - Target: Vulkan GPU - Model: shufflenet-v2 (ms)",LIB,3.90
"NCNN - Target: Vulkan GPU - Model: mobilenet-v3 (ms)",LIB,3.72
"NCNN - Target: Vulkan GPU - Model: mobilenet-v2 (ms)",LIB,3.69
"NCNN - Target: Vulkan GPU - Model: mobilenet (ms)",LIB,9.36
"NCNN - Target: CPU - Model: FastestDet (ms)",LIB,4.69
"NCNN - Target: CPU - Model: vision_transformer (ms)",LIB,37.92
"NCNN - Target: CPU - Model: regnety_400m (ms)",LIB,9.87
"NCNN - Target: CPU - Model: squeezenet_ssd (ms)",LIB,8.49
"NCNN - Target: CPU - Model: yolov4-tiny (ms)",LIB,16.28
"NCNN - Target: CPU - Model: mobilenetv2-yolov3 (ms)",LIB,9.80
"NCNN - Target: CPU - Model: resnet50 (ms)",LIB,13.48
"NCNN - Target: CPU - Model: alexnet (ms)",LIB,5.52
"NCNN - Target: CPU - Model: resnet18 (ms)",LIB,6.62
"NCNN - Target: CPU - Model: vgg16 (ms)",LIB,32.60
"NCNN - Target: CPU - Model: googlenet (ms)",LIB,9.56
"NCNN - Target: CPU - Model: blazeface (ms)",LIB,1.60
"NCNN - Target: CPU - Model: efficientnet-b0 (ms)",LIB,4.50
"NCNN - Target: CPU - Model: mnasnet (ms)",LIB,3.46
"NCNN - Target: CPU - Model: shufflenet-v2 (ms)",LIB,3.93
"NCNN - Target: CPU - Model: mobilenet-v3 (ms)",LIB,3.69
"NCNN - Target: CPU - Model: mobilenet-v2 (ms)",LIB,3.65
"NCNN - Target: CPU - Model: mobilenet (ms)",LIB,9.80
"Mobile Neural Network - Model: inception-v3 (ms)",LIB,23.421
"Mobile Neural Network - Model: mobilenet-v1-1.0 (ms)",LIB,2.456
"Mobile Neural Network - Model: MobileNetV2_224 (ms)",LIB,3.410
"Mobile Neural Network - Model: SqueezeNetV1.0 (ms)",LIB,4.141
"Mobile Neural Network - Model: resnet-v2-50 (ms)",LIB,12.123
"Mobile Neural Network - Model: squeezenetv1.1 (ms)",LIB,2.542
"Mobile Neural Network - Model: mobilenetV3 (ms)",LIB,1.638
"Mobile Neural Network - Model: nasnet (ms)",LIB,11.300
"spaCy - Model: en_core_web_trf (tokens/sec)",HIB,2415
"spaCy - Model: en_core_web_lg (tokens/sec)",HIB,18557
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,57.8304
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec)",HIB,17.2893
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,400.3931
"Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,19.9101
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,10.1862
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,98.0758
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,19.1612
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,417.1723
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch)",LIB,36.3089
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec)",HIB,27.5303
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,236.3144
"Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,33.8095
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch)",LIB,10.3590
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec)",HIB,96.4692
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,43.7801
"Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,182.6225
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,10.8023
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,92.5135
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,69.4409
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,115.1124
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch)",LIB,5.7436
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec)",HIB,173.8986
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,30.1491
"Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,265.2018
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch)",LIB,54.0028
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec)",HIB,18.5143
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,303.3330
"Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,26.3345
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch)",LIB,11.0556
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec)",HIB,90.3558
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,72.0412
"Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,110.9914
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,0.8214
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,1214.2399
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,3.9240
"Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,2031.8775
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch)",LIB,5.7601
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec)",HIB,173.3786
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,30.2415
"Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,264.3773
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch)",LIB,3.5898
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec)",HIB,278.3122
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,8.9714
"Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,890.1894
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch)",LIB,57.6038
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec)",HIB,17.3572
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch)",LIB,397.5797
"Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec)",HIB,20.0852
"TensorFlow - Device: GPU - Batch Size: 512 - Model: GoogLeNet (images/sec)",HIB,15.90
"TensorFlow - Device: GPU - Batch Size: 256 - Model: ResNet-50 (images/sec)",HIB,5.56
"TensorFlow - Device: GPU - Batch Size: 256 - Model: GoogLeNet (images/sec)",HIB,15.76
"TensorFlow - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec)",HIB,115.70
"TensorFlow - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec)",HIB,36.15
"TensorFlow - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec)",HIB,116.33
"TensorFlow - Device: GPU - Batch Size: 64 - Model: ResNet-50 (images/sec)",HIB,5.51
"TensorFlow - Device: GPU - Batch Size: 64 - Model: GoogLeNet (images/sec)",HIB,15.61
"TensorFlow - Device: GPU - Batch Size: 32 - Model: ResNet-50 (images/sec)",HIB,5.49
"TensorFlow - Device: GPU - Batch Size: 32 - Model: GoogLeNet (images/sec)",HIB,15.45
"TensorFlow - Device: GPU - Batch Size: 16 - Model: ResNet-50 (images/sec)",HIB,5.42
"TensorFlow - Device: GPU - Batch Size: 16 - Model: GoogLeNet (images/sec)",HIB,15.10
"TensorFlow - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec)",HIB,36.36
"TensorFlow - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec)",HIB,119.04
"TensorFlow - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec)",HIB,36.74
"TensorFlow - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec)",HIB,122.39
"TensorFlow - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec)",HIB,36.43
"TensorFlow - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec)",HIB,125.83
"TensorFlow - Device: GPU - Batch Size: 512 - Model: AlexNet (images/sec)",HIB,35.93
"TensorFlow - Device: GPU - Batch Size: 256 - Model: AlexNet (images/sec)",HIB,35.82
"TensorFlow - Device: GPU - Batch Size: 1 - Model: ResNet-50 (images/sec)",HIB,4.25
"TensorFlow - Device: GPU - Batch Size: 1 - Model: GoogLeNet (images/sec)",HIB,12.36
"TensorFlow - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec)",HIB,392.16
"TensorFlow - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec)",HIB,388.40
"TensorFlow - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec)",HIB,12.70
"TensorFlow - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec)",HIB,47.21
"TensorFlow - Device: GPU - Batch Size: 64 - Model: AlexNet (images/sec)",HIB,34.84
"TensorFlow - Device: GPU - Batch Size: 32 - Model: AlexNet (images/sec)",HIB,33.39
"TensorFlow - Device: GPU - Batch Size: 256 - Model: VGG-16 (images/sec)",HIB,1.77
"TensorFlow - Device: GPU - Batch Size: 16 - Model: AlexNet (images/sec)",HIB,30.67
"TensorFlow - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec)",HIB,305.81
"TensorFlow - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec)",HIB,224.56
"TensorFlow - Device: CPU - Batch Size: 256 - Model: VGG-16 (images/sec)",HIB,18.12
"TensorFlow - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec)",HIB,148.72
"TensorFlow - Device: GPU - Batch Size: 64 - Model: VGG-16 (images/sec)",HIB,1.73
"TensorFlow - Device: GPU - Batch Size: 32 - Model: VGG-16 (images/sec)",HIB,1.72
"TensorFlow - Device: GPU - Batch Size: 16 - Model: VGG-16 (images/sec)",HIB,1.70
"TensorFlow - Device: GPU - Batch Size: 1 - Model: AlexNet (images/sec)",HIB,12.58
"TensorFlow - Device: CPU - Batch Size: 64 - Model: VGG-16 (images/sec)",HIB,17.44
"TensorFlow - Device: CPU - Batch Size: 32 - Model: VGG-16 (images/sec)",HIB,16.89
"TensorFlow - Device: CPU - Batch Size: 16 - Model: VGG-16 (images/sec)",HIB,16.09
"TensorFlow - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec)",HIB,13.00
"TensorFlow - Device: GPU - Batch Size: 1 - Model: VGG-16 (images/sec)",HIB,1.46
"TensorFlow - Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec)",HIB,4.74
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l (batches/sec)",HIB,69.97
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec)",HIB,70.81
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec)",HIB,69.80
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec)",HIB,70.52
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec)",HIB,70.63
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec)",HIB,71.98
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152 (batches/sec)",HIB,140.41
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152 (batches/sec)",HIB,138.62
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152 (batches/sec)",HIB,138.72
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50 (batches/sec)",HIB,383.56
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152 (batches/sec)",HIB,139.41
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50 (batches/sec)",HIB,380.34
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152 (batches/sec)",HIB,138.78
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50 (batches/sec)",HIB,379.98
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50 (batches/sec)",HIB,380.74
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50 (batches/sec)",HIB,380.67
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152 (batches/sec)",HIB,137.39
"PyTorch - Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50 (batches/sec)",HIB,387.06
"PyTorch - Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l (batches/sec)",HIB,10.44
"PyTorch - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec)",HIB,10.63
"PyTorch - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec)",HIB,10.58
"PyTorch - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec)",HIB,10.59
"PyTorch - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec)",HIB,10.46
"PyTorch - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec)",HIB,14.14
"PyTorch - Device: CPU - Batch Size: 512 - Model: ResNet-152 (batches/sec)",HIB,17.59
"PyTorch - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec)",HIB,17.69
"PyTorch - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec)",HIB,17.66
"PyTorch - Device: CPU - Batch Size: 512 - Model: ResNet-50 (batches/sec)",HIB,42.91
"PyTorch - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec)",HIB,17.64
"PyTorch - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec)",HIB,43.49
"PyTorch - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec)",HIB,17.66
"PyTorch - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec)",HIB,43.35
"PyTorch - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec)",HIB,44.08
"PyTorch - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec)",HIB,44.08
"PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec)",HIB,25.64
"PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec)",HIB,64.81
"TensorFlow Lite - Model: Inception ResNet V2 (us)",LIB,21857.0
"TensorFlow Lite - Model: Mobilenet Quant (us)",LIB,1861.53
"TensorFlow Lite - Model: Mobilenet Float (us)",LIB,1214.11
"TensorFlow Lite - Model: NASNet Mobile (us)",LIB,10099.3
"TensorFlow Lite - Model: Inception V4 (us)",LIB,21139.4
"TensorFlow Lite - Model: SqueezeNet (us)",LIB,1716.04
"RNNoise - (sec)",LIB,13.707
"DeepSpeech - Acceleration: CPU (sec)",LIB,47.03514
"Numpy Benchmark - (Score)",HIB,704.52
"oneDNN - Harness: Recurrent Neural Network Inference - Engine: CPU (ms)",LIB,747.499
"oneDNN - Harness: Recurrent Neural Network Training - Engine: CPU (ms)",LIB,1452.96
"oneDNN - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms)",LIB,2.56519
"oneDNN - Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms)",LIB,3.06179
"oneDNN - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms)",LIB,7.16631
"oneDNN - Harness: IP Shapes 3D - Engine: CPU (ms)",LIB,4.42170
"oneDNN - Harness: IP Shapes 1D - Engine: CPU (ms)",LIB,1.17351
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Texture Read Bandwidth (GB/s)",HIB,2985.70
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Bus Speed Readback (GB/s)",HIB,27.0723
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Bus Speed Download (GB/s)",HIB,26.8275
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Max SP Flops (GFLOPS)",HIB,43074.9
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: GEMM SGEMM_N (GFLOPS)",HIB,13212.0
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Reduction (GB/s)",HIB,388.934
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: MD5 Hash (GHash/s)",HIB,47.8988
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: FFT SP (GFLOPS)",HIB,1292.53
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: Triad (GB/s)",HIB,25.4632
"SHOC Scalable HeterOgeneous Computing - Target: OpenCL - Benchmark: S3D (GFLOPS)",HIB,299.468
"Whisper.cpp - Model: ggml-medium.en - Input: 2016 State of the Union (sec)",LIB,0.86615
"Whisper.cpp - Model: ggml-small.en - Input: 2016 State of the Union (sec)",LIB,0.34881
"Whisper.cpp - Model: ggml-base.en - Input: 2016 State of the Union (sec)",LIB,0.15350

Tests With No Recorded Result

The following tests were part of the run but did not record a result: Llamafile on CPU (wizardcoder-python-34b-v1.0.Q6_K, mistral-7b-instruct-v0.2.Q8_0, llava-v1.5-7b-q4), Llama.cpp (llama-2-70b-chat.Q5_0.gguf, llama-2-13b.Q4_0.gguf, llama-2-7b.Q4_0.gguf), all Scikit-Learn benchmarks, all ONNX Runtime model/executor combinations, PlaidML CPU inference (ResNet 50 and VGG16), Caffe on CPU (GoogleNet and AlexNet at 100, 200, and 1000 iterations), TensorFlow at batch size 512 for ResNet-50 and VGG-16 (CPU and GPU), R Benchmark, and LeelaChessZero (BLAS backend).
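
The benchmark rows above keep the exported CSV layout of quoted test name, result direction (LIB or HIB), and value, so they are straightforward to post-process. Below is a minimal sketch, assuming the result rows have been saved to a local file named results.csv (a hypothetical file name, not part of this export); it loads the rows and prints the ten largest higher-is-better entries.

    # parse_results.py - hypothetical post-processing sketch, not part of the result file
    import csv

    rows = []
    with open("results.csv", newline="") as f:
        for row in csv.reader(f):
            # Expect: "Test name", LIB or HIB, numeric value
            if len(row) == 3 and row[2].strip():
                name, direction, value = row
                rows.append((name, direction, float(value)))

    # Show the ten largest higher-is-better (HIB) results.
    hib = [r for r in rows if r[1] == "HIB"]
    for name, _, value in sorted(hib, key=lambda r: r[2], reverse=True)[:10]:
        print(f"{value:>12.2f}  {name}")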