ml-benchmark2
AMD Ryzen 9 7950X3D 16-Core testing with an ASUS PRIME X670E-PRO WIFI (1813 BIOS) and MSI NVIDIA GeForce RTX 3060 12GB on Ubuntu 22.04 via the Phoronix Test Suite.

ml-benchmark2-12-25-23:
  Processor: AMD Ryzen 9 7950X3D 16-Core @ 4.20GHz (16 Cores / 32 Threads)
  Motherboard: ASUS PRIME X670E-PRO WIFI (1813 BIOS)
  Chipset: AMD Device 14d8
  Memory: 128GB
  Disk: 4001GB CT4000P3PSSD8 + 1024GB SPCC M.2 PCIe SSD
  Graphics: MSI NVIDIA GeForce RTX 3060 12GB
  Audio: NVIDIA Device 228e
  Monitor: LC27T55
  Network: Realtek RTL8125 2.5GbE + MEDIATEK Device 0608
  OS: Ubuntu 22.04
  Kernel: 6.2.0-39-generic (x86_64)
  Desktop: GNOME Shell 42.9
  Display Server: X Server 1.21.1.4
  Display Driver: NVIDIA 535.129.03
  OpenGL: 4.6.0
  OpenCL: OpenCL 3.0 CUDA 12.2.147
  Vulkan: 1.3.242
  Compiler: GCC 11.4.0 + CUDA 12.3
  File-System: ext4
  Screen Resolution: 1920x1080

All results below are from the ml-benchmark2-12-25-23 run.

SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL
  No results were recorded for: S3D, Triad, FFT SP, MD5 Hash, Reduction, GEMM SGEMM_N, Max SP Flops, Bus Speed Download, Bus Speed Readback, Texture Read Bandwidth.

LeelaChessZero 0.30 - Backend: BLAS (Nodes Per Second; higher is better)
  175

oneDNN 3.3 - Engine: CPU (ms; lower is better)
  Harness                              f32        u8s8f32    bf16bf16bf16
  IP Shapes 1D                         2.13148    0.542833   0.772163
  IP Shapes 3D                         3.81408    0.325432   1.49285
  Convolution Batch Shapes Auto        5.27943    5.08690    1.49610
  Deconvolution Batch shapes_1d        3.43419    0.505580   2.60894
  Deconvolution Batch shapes_3d        2.88833    0.731894   1.61432
  Recurrent Neural Network Training    1358.07    1367.75    1343.53
  Recurrent Neural Network Inference   707.84     704.86     702.32

Numpy Benchmark (Score; higher is better)
  721.25

DeepSpeech 0.6 - Acceleration: CPU (Seconds; lower is better)
  52.49

R Benchmark (Seconds; lower is better)
  0.1133

RNNoise 2020-06-28 (Seconds; lower is better)
  14.60

TensorFlow Lite 2022-05-18 (Microseconds; lower is better)
  SqueezeNet           1798.07
  Inception V4         22388.9
  NASNet Mobile        10670.2
  Mobilenet Float      1293.76
  Mobilenet Quant      1985.59
  Inception ResNet V2  22832.8
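
For context on how a latency figure like the TensorFlow Lite numbers above is typically taken, here is a minimal sketch using the tf.lite Python interpreter. The model path is a hypothetical placeholder and the loop count is arbitrary; this is not the PTS test harness itself.

# Minimal TF Lite latency sketch (illustrative; not the PTS harness).
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")  # hypothetical model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
x = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()  # warm-up run

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
print(f"mean latency: {(time.perf_counter() - start) / runs * 1e6:.1f} microseconds")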

PyTorch 2.1 - Device: CPU (batches/sec; higher is better)
  Batch Size   ResNet-50   ResNet-152   Efficientnet_v2_l
  1            67.87       26.53        13.70
  16           45.48       17.88        10.29
  32           45.71       17.78        10.45
  64           45.17       17.98        10.43
  256          45.77       17.71        10.47
  512          44.93       17.89        10.42
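
The PyTorch rows measure inference throughput for stock torchvision models on the CPU. A minimal sketch of an equivalent batches-per-second measurement follows, assuming torchvision is installed; the warm-up and iteration counts are arbitrary choices, not those of the PTS profile.

# Minimal PyTorch CPU throughput sketch (illustrative; not the PTS harness).
import time
import torch
from torchvision.models import resnet50

model = resnet50().eval()
x = torch.randn(16, 3, 224, 224)  # batch size 16, matching one of the runs above

with torch.no_grad():
    model(x)  # warm-up
    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    print(f"{iters / (time.perf_counter() - start):.2f} batches/sec")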

TensorFlow 2.12 - Device: CPU (images/sec; higher is better)
  Batch Size   VGG-16   AlexNet   GoogLeNet   ResNet-50
  16           16.48    140.39    127.79      35.90
  32           17.71    215.96    127.29      36.10
  64           18.48    297.39    120.40      35.45
  256          19.00    386.97    112.12      34.24
  512          19.13    400.77    110.88      34.09
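
The TensorFlow table reports images per second; roughly the same quantity can be sketched with Keras as below. Randomly initialized weights (weights=None) are an assumption to keep the sketch self-contained; the real test runs its own benchmark scripts.

# Minimal Keras images/sec sketch (illustrative; not the PTS harness).
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None)  # random weights suffice for timing
x = np.random.rand(32, 224, 224, 3).astype(np.float32)  # batch size 32

model.predict(x, verbose=0)  # warm-up
iters = 10
start = time.perf_counter()
for _ in range(iters):
    model.predict(x, verbose=0)
print(f"{iters * x.shape[0] / (time.perf_counter() - start):.2f} images/sec")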

Neural Magic DeepSparse 1.6 (items/sec higher is better; ms/batch lower is better)
  NLP Document Classification, oBERT base uncased on IMDB
    Asynchronous Multi-Stream:  22.07 items/sec, 361.49 ms/batch
    Synchronous Single-Stream:  17.55 items/sec, 56.96 ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8
    Asynchronous Multi-Stream:  934.08 items/sec, 8.5518 ms/batch
    Synchronous Single-Stream:  270.67 items/sec, 3.6930 ms/batch
  ResNet-50, Baseline
    Asynchronous Multi-Stream:  270.15 items/sec, 29.60 ms/batch
    Synchronous Single-Stream:  192.04 items/sec, 5.2053 ms/batch
  ResNet-50, Sparse INT8
    Asynchronous Multi-Stream:  2614.24 items/sec, 3.0495 ms/batch
    Synchronous Single-Stream:  1172.15 items/sec, 0.8513 ms/batch
  CV Detection, YOLOv5s COCO
    Asynchronous Multi-Stream:  120.31 items/sec, 66.46 ms/batch
    Synchronous Single-Stream:  94.15 items/sec, 10.62 ms/batch
  BERT-Large, NLP Question Answering
    Asynchronous Multi-Stream:  27.63 items/sec, 289.50 ms/batch
    Synchronous Single-Stream:  17.60 items/sec, 56.81 ms/batch
  CV Classification, ResNet-50 ImageNet
    Asynchronous Multi-Stream:  270.26 items/sec, 29.59 ms/batch
    Synchronous Single-Stream:  192.13 items/sec, 5.2028 ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8
    Asynchronous Multi-Stream:  128.09 items/sec, 62.42 ms/batch
    Synchronous Single-Stream:  95.22 items/sec, 10.50 ms/batch
  NLP Text Classification, DistilBERT mnli
    Asynchronous Multi-Stream:  187.27 items/sec, 42.70 ms/batch
    Synchronous Single-Stream:  109.77 items/sec, 9.1055 ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned
    Asynchronous Multi-Stream:  34.93 items/sec, 228.46 ms/batch
    Synchronous Single-Stream:  26.46 items/sec, 37.78 ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8
    Asynchronous Multi-Stream:  428.23 items/sec, 18.66 ms/batch
    Synchronous Single-Stream:  100.11 items/sec, 9.9862 ms/batch
  NLP Token Classification, BERT base uncased conll2003
    Asynchronous Multi-Stream:  22.38 items/sec, 356.97 ms/batch
    Synchronous Single-Stream:  17.60 items/sec, 56.81 ms/batch
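
The DeepSparse scenarios above (a single synchronous stream versus multiple asynchronous streams) are orchestrated by DeepSparse's own benchmarking tooling. A bare-bones throughput sketch against the engine API might look as follows; the compile_model/run entry points are the assumed interface, the ONNX file is a placeholder, and scenario handling is omitted.

# Hedged DeepSparse throughput sketch (illustrative; not the PTS setup).
import time
import numpy as np
from deepsparse import compile_model

engine = compile_model("resnet50.onnx", batch_size=1)  # hypothetical model path
x = [np.random.rand(1, 3, 224, 224).astype(np.float32)]

engine.run(x)  # warm-up
iters = 100
start = time.perf_counter()
for _ in range(iters):
    engine.run(x)
print(f"{iters / (time.perf_counter() - start):.2f} items/sec")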

spaCy 3.4.1 (tokens/sec; higher is better)
  en_core_web_lg   19452
  en_core_web_trf  2354
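
The spaCy figures are tokens processed per second for each pipeline. A rough equivalent is sketched below, assuming the en_core_web_lg model has been downloaded and using an arbitrary toy text rather than the profile's corpus.

# Rough spaCy tokens/sec sketch (illustrative; not the PTS harness).
import time
import spacy

nlp = spacy.load("en_core_web_lg")  # requires the model to be downloaded first
text = "The quick brown fox jumps over the lazy dog. " * 200

start = time.perf_counter()
doc = nlp(text)
elapsed = time.perf_counter() - start
print(f"{len(doc) / elapsed:.0f} tokens/sec")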

Caffe 2020-02-13 - Acceleration: CPU (Milli-Seconds; lower is better)
  AlexNet, 100 iterations     24893
  AlexNet, 200 iterations     49227
  AlexNet, 1000 iterations    243445
  GoogleNet, 100 iterations   63606
  GoogleNet, 200 iterations   124421
  GoogleNet, 1000 iterations  626915

Mobile Neural Network 2.1 (ms; lower is better)
  nasnet            11.96
  mobilenetV3       1.818
  squeezenetv1.1    2.770
  resnet-v2-50      11.36
  SqueezeNetV1.0    4.402
  MobileNetV2_224   3.723
  mobilenet-v1-1.0  2.640
  inception-v3      22.38

NCNN 20230517 (ms; lower is better; mobilenet-v2/v3 use the v2-v2/v3-v3 targets)
  Model               CPU     Vulkan GPU
  mobilenet           10.64   10.67
  mobilenet-v2        4.52    4.29
  mobilenet-v3        4.25    4.40
  shufflenet-v2       4.42    4.36
  mnasnet             3.86    3.97
  efficientnet-b0     5.30    5.56
  blazeface           1.70    1.60
  googlenet           10.38   10.69
  vgg16               35.80   35.09
  resnet18            6.70    6.64
  alexnet             5.09    5.20
  resnet50            13.00   13.39
  yolov4-tiny         17.12   17.22
  squeezenet_ssd      8.75    8.79
  regnety_400m        11.06   11.29
  vision_transformer  42.19   42.15
  FastestDet          5.08    4.76

TNN 0.3 - Target: CPU (ms; lower is better)
  DenseNet         2134.43
  MobileNet v2     187.48
  SqueezeNet v2    42.14
  SqueezeNet v1.1  179.66

PlaidML - FP16: No - Mode: Inference - Device: CPU
  No results were recorded for the VGG16 and ResNet 50 networks.

OpenVINO 2023.2.dev - Device: CPU (FPS higher is better; ms lower is better)
  Face Detection FP16:                           13.70 FPS, 581.79 ms
  Person Detection FP16:                         87.80 FPS, 91.06 ms
  Person Detection FP32:                         87.33 FPS, 91.56 ms
  Vehicle Detection FP16:                        994.76 FPS, 8.04 ms
  Face Detection FP16-INT8:                      26.49 FPS, 301.34 ms
  Face Detection Retail FP16:                    3485.66 FPS, 2.29 ms
  Road Segmentation ADAS FP16:                   441.60 FPS, 18.10 ms
  Vehicle Detection FP16-INT8:                   1667.89 FPS, 4.79 ms
  Weld Porosity Detection FP16:                  1386.49 FPS, 11.53 ms
  Face Detection Retail FP16-INT8:               4876.38 FPS, 3.28 ms
  Road Segmentation ADAS FP16-INT8:              532.88 FPS, 15.00 ms
  Machine Translation EN To DE FP16:             121.71 FPS, 65.69 ms
  Weld Porosity Detection FP16-INT8:             2688.84 FPS, 5.95 ms
  Person Vehicle Bike Detection FP16:            1579.58 FPS, 5.06 ms
  Handwritten English Recognition FP16:          736.26 FPS, 21.72 ms
  Age Gender Recognition Retail 0013 FP16:       41501.98 FPS, 0.38 ms
  Handwritten English Recognition FP16-INT8:     602.34 FPS, 26.55 ms
  Age Gender Recognition Retail 0013 FP16-INT8:  58961.44 FPS, 0.26 ms
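
The OpenVINO results pair throughput (FPS) with per-request latency (ms) for Open Model Zoo networks. A minimal inference sketch against the 2023-era Python API follows; the IR file name is a placeholder and the input handling assumes a single static float input.

# Hedged OpenVINO inference sketch (illustrative; not the PTS harness).
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection.xml")  # hypothetical IR file
compiled = core.compile_model(model, "CPU")
shape = [int(d) for d in compiled.input(0).shape]  # assumes a static input shape
x = np.random.rand(*shape).astype(np.float32)
request = compiled.create_infer_request()

request.infer({0: x})  # warm-up
iters = 100
start = time.perf_counter()
for _ in range(iters):
    request.infer({0: x})
print(f"{iters / (time.perf_counter() - start):.2f} FPS")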

Numenta Anomaly Benchmark 1.1 (Seconds; lower is better)
  KNN CAD                          100.05
  Relative Entropy                 8.048
  Windowed Gaussian                4.713
  Earthgecko Skyline               51.84
  Bayesian Changepoint             12.75
  Contextual Anomaly Detector OSE  25.41

ONNX Runtime 1.14 - Device: CPU (Inferences Per Second; higher is better)
  No results were recorded for either executor (Parallel or Standard) across the GPT-2, yolov4, bertsquad-12, CaffeNet 12-int8, fcn-resnet101-11, ArcFace ResNet-100, ResNet50 v1-12-int8, super-resolution-10, and Faster R-CNN R-50-FPN-int8 models.
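
Although no ONNX Runtime numbers were recorded in this run, the inferences-per-second metric it reports is commonly measured along these lines with the onnxruntime Python API; the model file and input shape are placeholders assuming an image model.

# Minimal ONNX Runtime inferences/sec sketch (illustrative; not the PTS harness).
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])  # hypothetical file
name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumes an image model

sess.run(None, {name: x})  # warm-up
iters = 100
start = time.perf_counter()
for _ in range(iters):
    sess.run(None, {name: x})
print(f"{iters / (time.perf_counter() - start):.2f} inferences/sec")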

AI Benchmark Alpha 0.1.2 (Score; higher is better)
  Device Inference Score  2932
  Device Training Score   3592
  Device AI Score         6524

Mlpack Benchmark (Seconds; lower is better)
  scikit_ica                    29.93
  scikit_qda                    31.30
  scikit_svm                    14.12
  scikit_linearridgeregression  1.08

Scikit-Learn 1.2.2 (Seconds; lower is better)
  Tree                                        37.43
  Lasso                                       237.29
  Sparsify                                    71.65
  Plot Ward                                   40.58
  Plot Neighbors                              123.61
  Text Vectorizers                            41.82
  Plot Hierarchical                           139.00
  Feature Expansions                          90.53
  Isotonic / Logistic                         1123.36
  Plot Incremental PCA                        49.61
  Isotonic / Pathological                     3114.64
  Covertype Dataset Benchmark                 270.96
  Isotonic / Perturbed Logarithm              1384.03
  20 Newsgroups / Logistic Regression         28.22
  Sparse Random Projections / 100 Iterations  401.31
  No results were recorded for: GLM, SAGA, Glmnet, MNIST Dataset, SGD Regression, SGDOneClassSVM, Plot Lasso Path, Isolation Forest, Plot Fast KMeans, Plot OMP vs. LARS, LocalOutlierFactor, TSNE MNIST Dataset, Hist Gradient Boosting, Plot Parallel Pairwise, RCV1 Logreg Convergencet, Sample Without Replacement, Hist Gradient Boosting Adult, Hist Gradient Boosting Threading, Plot Singular Value Decomposition, Hist Gradient Boosting Higgs Boson, Plot Polynomial Kernel Approximation, Plot Non-Negative Matrix Factorization, Hist Gradient Boosting Categorical Only, Kernel PCA Solvers / Time vs. N Samples, Kernel PCA Solvers / Time vs. N Components.
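
The Scikit-Learn entries wall-clock the project's own benchmark scripts. As a stand-in illustration of that kind of seconds-based measurement, here is a timed fit of one estimator; the dataset size and estimator choice are arbitrary and do not mirror any specific benchmark above.

# Stand-in scikit-learn timing sketch (illustrative; not the PTS harness).
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
clf = HistGradientBoostingClassifier()

start = time.perf_counter()
clf.fit(X, y)
print(f"fit time: {time.perf_counter() - start:.2f} seconds")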

Whisper.cpp 1.4 - Input: 2016 State of the Union (Seconds; lower is better)
  ggml-base.en    96.98
  ggml-small.en   289.74
  ggml-medium.en  807.18

OpenCV 4.7 - Test: DNN - Deep Neural Network (ms; lower is better)
  31437
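
The OpenCV result times its DNN module. A minimal cv2.dnn forward-pass timing sketch is shown below; the ONNX file, input size, and preprocessing are placeholders rather than the test's actual workload.

# Minimal cv2.dnn forward-pass timing sketch (illustrative; not the PTS harness).
import time
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("model.onnx")  # hypothetical model file
blob = cv2.dnn.blobFromImage(np.zeros((224, 224, 3), dtype=np.uint8),
                             scalefactor=1.0 / 255, size=(224, 224))
net.setInput(blob)
net.forward()  # warm-up

iters = 10
start = time.perf_counter()
for _ in range(iters):
    net.forward()
print(f"mean forward time: {(time.perf_counter() - start) / iters * 1000:.1f} ms")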