24.03.13.Pop.2204.ML.test1

AMD Ryzen 9 7950X 16-Core testing with an ASUS ProArt X670E-CREATOR WIFI (1710 BIOS) and Zotac NVIDIA GeForce RTX 4070 Ti 12GB on Pop 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403157-NE-240313POP28
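For scripted or repeated comparison runs, the same command can be launched from Python. The sketch below simply shells out to the documented phoronix-test-suite command; it assumes the Phoronix Test Suite is already installed and on PATH, and uses the result identifier given above.

    # Minimal sketch of scripting the comparison run; assumes the Phoronix Test
    # Suite is installed and "phoronix-test-suite" is available on PATH.
    import shutil
    import subprocess
    import sys

    # Result identifier published with this file.
    RESULT_ID = "2403157-NE-240313POP28"

    def run_comparison(result_id: str) -> int:
        """Launch `phoronix-test-suite benchmark <result_id>` and return its exit code."""
        if shutil.which("phoronix-test-suite") is None:
            print("phoronix-test-suite not found on PATH", file=sys.stderr)
            return 1
        # The suite runs interactively by default, prompting for which tests to
        # install/run and for a name to save the new result under.
        return subprocess.run(["phoronix-test-suite", "benchmark", result_id]).returncode

    if __name__ == "__main__":
        raise SystemExit(run_comparison(RESULT_ID))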
Run Management

Result Identifier: Initial test 1 No water cool
Date: March 13
Run Test Duration: 2 Days, 55 Minutes


Detailed System Information - Initial test 1 No water cool

Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard: ASUS ProArt X670E-CREATOR WIFI (1710 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 16 GB DDR5-4800MT/s G Skill F5-6000J3636F16G
Disk: 1000GB PNY CS2130 1TB SSD
Graphics: Zotac NVIDIA GeForce RTX 4070 Ti 12GB
Audio: NVIDIA Device 22bc
Monitor: 2 x DELL 2001FP
Network: Intel I225-V + Aquantia AQtion AQC113CS NBase-T/IEEE + MEDIATEK MT7922 802.11ax PCI

OS: Pop 22.04
Kernel: 6.6.10-76060610-generic (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA 550.54.14
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.4.89
Vulkan: 1.3.277
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3200x1200

Results

All values below are from the single run "Initial test 1 No water cool"; each line gives the test, its configuration, the measured value, its unit, and whether higher or lower is better.

TensorFlow 2.12, Device: GPU - Batch Size: 256 - Model: VGG-16: 1.77 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 256 - Model: ResNet-50: 5.56 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 64 - Model: VGG-16: 1.73 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 512 - Model: GoogLeNet: 15.90 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 32 - Model: VGG-16: 1.72 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 256 - Model: GoogLeNet: 15.76 images/sec (higher is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: Max SP Flops: 43074.9 GFLOPS (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 512 - Model: AlexNet: 35.93 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 256 - Model: VGG-16: 18.12 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 64 - Model: ResNet-50: 5.51 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 16 - Model: VGG-16: 1.70 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 256 - Model: AlexNet: 35.82 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 256 - Model: ResNet-50: 36.15 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 32 - Model: ResNet-50: 5.49 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 512 - Model: GoogLeNet: 115.70 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 64 - Model: GoogLeNet: 15.61 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 64 - Model: VGG-16: 17.44 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 16 - Model: ResNet-50: 5.42 images/sec (higher is better)
Numenta Anomaly Benchmark 1.1, Detector: KNN CAD: 105.00 seconds (lower is better)
AI Benchmark Alpha 0.1.2, Device AI Score: 6473 (higher is better)
AI Benchmark Alpha 0.1.2, Device Training Score: 3573 (higher is better)
AI Benchmark Alpha 0.1.2, Device Inference Score: 2900 (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 256 - Model: GoogLeNet: 116.33 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 32 - Model: GoogLeNet: 15.45 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 32 - Model: VGG-16: 16.89 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 64 - Model: AlexNet: 34.84 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 64 - Model: ResNet-50: 36.36 images/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l: 10.44 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l: 10.46 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l: 10.59 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l: 10.63 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l: 10.58 batches/sec (higher is better)
OpenCV 4.7, Test: DNN - Deep Neural Network: 30277 ms (lower is better)
OpenVINO 2024.0, Model: Face Detection FP16 - Device: CPU: 625.66 ms (lower is better)
OpenVINO 2024.0, Model: Face Detection FP16 - Device: CPU: 12.75 FPS (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 512 - Model: AlexNet: 392.16 images/sec (higher is better)
TNN 0.3, Target: CPU - Model: DenseNet: 2005.61 ms (lower is better)
Mobile Neural Network 2.1, Model: inception-v3: 23.42 ms (lower is better)
Mobile Neural Network 2.1, Model: mobilenet-v1-1.0: 2.456 ms (lower is better)
Mobile Neural Network 2.1, Model: MobileNetV2_224: 3.410 ms (lower is better)
Mobile Neural Network 2.1, Model: SqueezeNetV1.0: 4.141 ms (lower is better)
Mobile Neural Network 2.1, Model: resnet-v2-50: 12.12 ms (lower is better)
Mobile Neural Network 2.1, Model: squeezenetv1.1: 2.542 ms (lower is better)
Mobile Neural Network 2.1, Model: mobilenetV3: 1.638 ms (lower is better)
Mobile Neural Network 2.1, Model: nasnet: 11.30 ms (lower is better)
TensorFlow 2.12, Device: GPU - Batch Size: 16 - Model: GoogLeNet: 15.10 images/sec (higher is better)
Numpy Benchmark: 704.52 (score; higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 16 - Model: VGG-16: 16.09 images/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 256 - Model: ResNet-152: 17.69 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 64 - Model: ResNet-152: 17.66 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 512 - Model: ResNet-152: 17.59 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 32 - Model: ResNet-152: 17.64 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-152: 17.66 batches/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 32 - Model: AlexNet: 33.39 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 32 - Model: ResNet-50: 36.74 images/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l: 14.14 batches/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 1 - Model: VGG-16: 1.46 images/sec (higher is better)
oneDNN 3.4, Harness: Recurrent Neural Network Training - Engine: CPU: 1452.96 ms (lower is better)
oneDNN 3.4, Harness: Recurrent Neural Network Inference - Engine: CPU: 747.50 ms (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 256 - Model: AlexNet: 388.40 images/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream: 303.33 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream: 26.33 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream: 54.00 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream: 18.51 items/sec (higher is better)
Mlpack Benchmark, Benchmark: scikit_qda: 34.07 seconds (lower is better)
OpenVINO 2024.0, Model: Face Detection FP16-INT8 - Device: CPU: 323.12 ms (lower is better)
OpenVINO 2024.0, Model: Face Detection FP16-INT8 - Device: CPU: 24.71 FPS (higher is better)
NCNN 20230517, Target: CPU - Model: FastestDet: 4.69 ms (lower is better)
NCNN 20230517, Target: CPU - Model: vision_transformer: 37.92 ms (lower is better)
NCNN 20230517, Target: CPU - Model: regnety_400m: 9.87 ms (lower is better)
NCNN 20230517, Target: CPU - Model: squeezenet_ssd: 8.49 ms (lower is better)
NCNN 20230517, Target: CPU - Model: yolov4-tiny: 16.28 ms (lower is better)
NCNN 20230517, Target: CPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3: 9.80 ms (lower is better)
NCNN 20230517, Target: CPU - Model: resnet50: 13.48 ms (lower is better)
NCNN 20230517, Target: CPU - Model: alexnet: 5.52 ms (lower is better)
NCNN 20230517, Target: CPU - Model: resnet18: 6.62 ms (lower is better)
NCNN 20230517, Target: CPU - Model: vgg16: 32.60 ms (lower is better)
NCNN 20230517, Target: CPU - Model: googlenet: 9.56 ms (lower is better)
NCNN 20230517, Target: CPU - Model: blazeface: 1.60 ms (lower is better)
NCNN 20230517, Target: CPU - Model: efficientnet-b0: 4.50 ms (lower is better)
NCNN 20230517, Target: CPU - Model: mnasnet: 3.46 ms (lower is better)
NCNN 20230517, Target: CPU - Model: shufflenet-v2: 3.93 ms (lower is better)
NCNN 20230517, Target: CPU-v3-v3 - Model: mobilenet-v3: 3.69 ms (lower is better)
NCNN 20230517, Target: CPU-v2-v2 - Model: mobilenet-v2: 3.65 ms (lower is better)
NCNN 20230517, Target: CPU - Model: mobilenet: 9.80 ms (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 64 - Model: GoogLeNet: 119.04 images/sec (higher is better)
NCNN 20230517, Target: Vulkan GPU - Model: FastestDet: 4.30 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: vision_transformer: 38.12 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: regnety_400m: 9.83 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: squeezenet_ssd: 8.38 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: yolov4-tiny: 15.84 ms (lower is better)
NCNN 20230517, Target: Vulkan GPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3: 9.36 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: resnet50: 13.16 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: alexnet: 5.67 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: resnet18: 6.65 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: vgg16: 32.31 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: googlenet: 9.69 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: blazeface: 1.62 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: efficientnet-b0: 4.57 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: mnasnet: 3.45 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: shufflenet-v2: 3.90 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: 3.72 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2: 3.69 ms (lower is better)
NCNN 20230517, Target: Vulkan GPU - Model: mobilenet: 9.36 ms (lower is better)
OpenVINO 2024.0, Model: Machine Translation EN To DE FP16 - Device: CPU: 65.52 ms (lower is better)
OpenVINO 2024.0, Model: Machine Translation EN To DE FP16 - Device: CPU: 121.96 FPS (higher is better)
OpenVINO 2024.0, Model: Person Detection FP16 - Device: CPU: 104.27 ms (lower is better)
OpenVINO 2024.0, Model: Person Detection FP16 - Device: CPU: 76.65 FPS (higher is better)
OpenVINO 2024.0, Model: Person Detection FP32 - Device: CPU: 103.09 ms (lower is better)
OpenVINO 2024.0, Model: Person Detection FP32 - Device: CPU: 77.54 FPS (higher is better)
TensorFlow Lite 2022-05-18, Model: Inception V4: 21139.4 microseconds (lower is better)
OpenVINO 2024.0, Model: Noise Suppression Poconet-Like FP16 - Device: CPU: 11.33 ms (lower is better)
OpenVINO 2024.0, Model: Noise Suppression Poconet-Like FP16 - Device: CPU: 1386.93 FPS (higher is better)
TensorFlow Lite 2022-05-18, Model: Inception ResNet V2: 21857.0 microseconds (lower is better)
OpenVINO 2024.0, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU: 17.81 ms (lower is better)
OpenVINO 2024.0, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU: 448.28 FPS (higher is better)
TensorFlow Lite 2022-05-18, Model: NASNet Mobile: 10099.3 microseconds (lower is better)
TensorFlow Lite 2022-05-18, Model: Mobilenet Float: 1214.11 microseconds (lower is better)
TensorFlow Lite 2022-05-18, Model: SqueezeNet: 1716.04 microseconds (lower is better)
OpenVINO 2024.0, Model: Person Vehicle Bike Detection FP16 - Device: CPU: 5.53 ms (lower is better)
OpenVINO 2024.0, Model: Person Vehicle Bike Detection FP16 - Device: CPU: 1442.07 FPS (higher is better)
TensorFlow Lite 2022-05-18, Model: Mobilenet Quant: 1861.53 microseconds (lower is better)
OpenVINO 2024.0, Model: Road Segmentation ADAS FP16 - Device: CPU: 29.32 ms (lower is better)
OpenVINO 2024.0, Model: Road Segmentation ADAS FP16 - Device: CPU: 272.36 FPS (higher is better)
OpenVINO 2024.0, Model: Person Re-Identification Retail FP16 - Device: CPU: 4.46 ms (lower is better)
OpenVINO 2024.0, Model: Person Re-Identification Retail FP16 - Device: CPU: 1785.48 FPS (higher is better)
OpenVINO 2024.0, Model: Handwritten English Recognition FP16-INT8 - Device: CPU: 21.88 ms (lower is better)
OpenVINO 2024.0, Model: Handwritten English Recognition FP16-INT8 - Device: CPU: 729.99 FPS (higher is better)
OpenVINO 2024.0, Model: Vehicle Detection FP16-INT8 - Device: CPU: 5.18 ms (lower is better)
OpenVINO 2024.0, Model: Vehicle Detection FP16-INT8 - Device: CPU: 1538.27 FPS (higher is better)
OpenVINO 2024.0, Model: Face Detection Retail FP16-INT8 - Device: CPU: 3.61 ms (lower is better)
OpenVINO 2024.0, Model: Face Detection Retail FP16-INT8 - Device: CPU: 4335.66 FPS (higher is better)
OpenVINO 2024.0, Model: Handwritten English Recognition FP16 - Device: CPU: 23.94 ms (lower is better)
OpenVINO 2024.0, Model: Handwritten English Recognition FP16 - Device: CPU: 667.29 FPS (higher is better)
OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: 0.31 ms (lower is better)
OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: 46025.94 FPS (higher is better)
OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: 0.45 ms (lower is better)
OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: 32402.42 FPS (higher is better)
OpenVINO 2024.0, Model: Vehicle Detection FP16 - Device: CPU: 12.91 ms (lower is better)
OpenVINO 2024.0, Model: Vehicle Detection FP16 - Device: CPU: 618.41 FPS (higher is better)
OpenVINO 2024.0, Model: Weld Porosity Detection FP16 - Device: CPU: 12.61 ms (lower is better)
OpenVINO 2024.0, Model: Weld Porosity Detection FP16 - Device: CPU: 1266.85 FPS (higher is better)
OpenVINO 2024.0, Model: Face Detection Retail FP16 - Device: CPU: 2.53 ms (lower is better)
OpenVINO 2024.0, Model: Face Detection Retail FP16 - Device: CPU: 3062.63 FPS (higher is better)
OpenVINO 2024.0, Model: Weld Porosity Detection FP16-INT8 - Device: CPU: 6.44 ms (lower is better)
OpenVINO 2024.0, Model: Weld Porosity Detection FP16-INT8 - Device: CPU: 2470.86 FPS (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 16 - Model: AlexNet: 30.67 images/sec (higher is better)
Numenta Anomaly Benchmark 1.1, Detector: Earthgecko Skyline: 55.57 seconds (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 16 - Model: ResNet-50: 36.43 images/sec (higher is better)
oneDNN 3.4, Harness: IP Shapes 1D - Engine: CPU: 1.17351 ms (lower is better)
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-152: 25.64 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 512 - Model: ResNet-50: 42.91 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 256 - Model: ResNet-50: 43.49 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 64 - Model: ResNet-50: 43.35 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 32 - Model: ResNet-50: 44.08 batches/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-50: 44.08 batches/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 397.58 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 20.09 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: 3.5898 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: 278.31 items/sec (higher is better)
spaCy 3.4.1, Model: en_core_web_trf: 2415 tokens/sec (higher is better)
spaCy 3.4.1, Model: en_core_web_lg: 18557 tokens/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 8.9714 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 890.19 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 400.39 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 19.91 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 19.16 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 417.17 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream: 10.19 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream: 98.08 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 57.60 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 17.36 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: 57.83 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: 17.29 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: 236.31 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: 33.81 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 43.78 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 182.62 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: 36.31 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: 27.53 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream: 10.80 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream: 92.51 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: 10.36 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: 96.47 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 69.44 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 115.11 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream: 30.24 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream: 264.38 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 30.15 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 265.20 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream: 72.04 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream: 110.99 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream: 11.06 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream: 90.36 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream: 5.7601 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream: 173.38 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 3.9240 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 2031.88 items/sec (higher is better)
Neural Magic DeepSparse 1.6, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: 5.7436 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: 173.90 items/sec (higher is better)
DeepSpeech 0.6, Acceleration: CPU: 47.04 seconds (lower is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream: 0.8214 ms/batch (lower is better)
Neural Magic DeepSparse 1.6, Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream: 1214.24 items/sec (higher is better)
oneDNN 3.4, Harness: Deconvolution Batch shapes_1d - Engine: CPU: 3.06179 ms (lower is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l: 69.80 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l: 69.97 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l: 70.52 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_l: 70.63 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l: 70.81 batches/sec (higher is better)
Mlpack Benchmark, Benchmark: scikit_ica: 30.12 seconds (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 32 - Model: GoogLeNet: 122.39 images/sec (higher is better)
Mlpack Benchmark, Benchmark: scikit_linearridgeregression: 1.03 seconds (lower is better)
TensorFlow 2.12, Device: GPU - Batch Size: 1 - Model: ResNet-50: 4.25 images/sec (higher is better)
Numenta Anomaly Benchmark 1.1, Detector: Contextual Anomaly Detector OSE: 25.40 seconds (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: VGG-16: 4.74 images/sec (higher is better)
Numenta Anomaly Benchmark 1.1, Detector: Windowed Gaussian: 4.984 seconds (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 64 - Model: AlexNet: 305.81 images/sec (higher is better)
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-50: 64.81 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152: 138.62 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152: 138.78 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152: 139.41 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152: 138.72 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152: 140.41 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_l: 71.98 batches/sec (higher is better)
Mlpack Benchmark, Benchmark: scikit_svm: 15.12 seconds (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 32 - Model: AlexNet: 224.56 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 16 - Model: GoogLeNet: 125.83 images/sec (higher is better)
RNNoise 2020-06-28: 13.71 seconds (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 16 - Model: AlexNet: 148.72 images/sec (higher is better)
Numenta Anomaly Benchmark 1.1, Detector: Bayesian Changepoint: 13.21 seconds (lower is better)
TNN 0.3, Target: CPU - Model: MobileNet v2: 183.28 ms (lower is better)
TNN 0.3, Target: CPU - Model: SqueezeNet v1.1: 179.68 ms (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: ResNet-50: 12.70 images/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 1 - Model: GoogLeNet: 12.36 images/sec (higher is better)
Numenta Anomaly Benchmark 1.1, Detector: Relative Entropy: 8.281 seconds (lower is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152: 137.39 batches/sec (higher is better)
TensorFlow 2.12, Device: GPU - Batch Size: 1 - Model: AlexNet: 12.58 images/sec (higher is better)
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: AlexNet: 13.00 images/sec (higher is better)
oneDNN 3.4, Harness: IP Shapes 3D - Engine: CPU: 4.42170 ms (lower is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50: 380.34 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50: 379.98 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50: 380.67 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50: 380.74 batches/sec (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50: 383.56 batches/sec (higher is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: Texture Read Bandwidth: 2985.70 GB/s (higher is better)
oneDNN 3.4, Harness: Convolution Batch Shapes Auto - Engine: CPU: 7.16631 ms (lower is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: GEMM SGEMM_N: 13212.0 GFLOPS (higher is better)
PyTorch 2.1, Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50: 387.06 batches/sec (higher is better)
Whisper.cpp 1.4, Model: ggml-medium.en - Input: 2016 State of the Union: 0.86615 seconds (lower is better)
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: GoogLeNet: 47.21 images/sec (higher is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: Bus Speed Readback: 27.07 GB/s (higher is better)
oneDNN 3.4, Harness: Deconvolution Batch shapes_3d - Engine: CPU: 2.56519 ms (lower is better)
TNN 0.3, Target: CPU - Model: SqueezeNet v2: 42.22 ms (lower is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: S3D: 299.47 GFLOPS (higher is better)
Whisper.cpp 1.4, Model: ggml-small.en - Input: 2016 State of the Union: 0.34881 seconds (lower is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: FFT SP: 1292.53 GFLOPS (higher is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: Triad: 25.46 GB/s (higher is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: Reduction: 388.93 GB/s (higher is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: Bus Speed Download: 26.83 GB/s (higher is better)
Whisper.cpp 1.4, Model: ggml-base.en - Input: 2016 State of the Union: 0.15350 seconds (lower is better)
SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL - Benchmark: MD5 Hash: 47.90 GHash/s (higher is better)

No result was recorded for the following test configurations:

TensorFlow 2.12: Device: GPU - Batch Size: 512 - Model: VGG-16; Device: GPU - Batch Size: 512 - Model: ResNet-50; Device: CPU - Batch Size: 512 - Model: VGG-16; Device: CPU - Batch Size: 512 - Model: ResNet-50
PlaidML (FP16: No - Mode: Inference - Device: CPU): VGG16, ResNet 50
Scikit-Learn 1.2.2: Tree, Plot Non-Negative Matrix Factorization, Kernel PCA Solvers / Time vs. N Samples, Text Vectorizers, Kernel PCA Solvers / Time vs. N Components, Sample Without Replacement, RCV1 Logreg Convergencet, Plot Parallel Pairwise, Hist Gradient Boosting, SGD Regression, Plot Neighbors, Feature Expansions, Plot Incremental PCA, Isolation Forest, LocalOutlierFactor, Hist Gradient Boosting Adult, Hist Gradient Boosting Higgs Boson, GLM, Sparse Random Projections / 100 Iterations, Hist Gradient Boosting Categorical Only, Plot Polynomial Kernel Approximation, 20 Newsgroups / Logistic Regression, Hist Gradient Boosting Threading, Covertype Dataset Benchmark, Isotonic / Logistic, Plot OMP vs. LARS, Plot Hierarchical, Plot Fast KMeans, Plot Lasso Path, MNIST Dataset, Sparsify, Glmnet, Lasso, SAGA, Isotonic / Perturbed Logarithm, Isotonic / Pathological, TSNE MNIST Dataset, Plot Ward, Plot Singular Value Decomposition, SGDOneClassSVM
Llamafile 0.6 (Acceleration: CPU): mistral-7b-instruct-v0.2.Q8_0, wizardcoder-python-34b-v1.0.Q6_K, llava-v1.5-7b-q4
Llama.cpp b1808: llama-2-70b-chat.Q5_0.gguf, llama-2-13b.Q4_0.gguf, llama-2-7b.Q4_0.gguf
ONNX Runtime 1.17 (Device: CPU; Standard and Parallel executors): GPT-2, Faster R-CNN R-50-FPN-int8, super-resolution-10, ResNet50 v1-12-int8, ArcFace ResNet-100, fcn-resnet101-11, CaffeNet 12-int8, bertsquad-12, T5 Encoder, yolov4
Caffe 2020-02-13 (Acceleration: CPU): GoogleNet and AlexNet at 100, 200, and 1000 iterations
R Benchmark
LeelaChessZero 0.30 (Backend: BLAS)