m1test
AMD Ryzen 5 7600X 6-Core testing with a ASUS TUF GAMING B650-PLUS WIFI (0823 BIOS) and Gigabyte NVIDIA GeForce RTX 4060 Ti 16GB on Ubuntu 22.04 via the Phoronix Test Suite.

firstmachinetest:
  Processor: AMD Ryzen 5 7600X 6-Core @ 4.70GHz (6 Cores / 12 Threads), Motherboard: ASUS TUF GAMING B650-PLUS WIFI (0823 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: Western Digital WD_BLACK SN850X 2000GB, Graphics: Gigabyte NVIDIA GeForce RTX 4060 Ti 16GB, Audio: NVIDIA Device 22bd, Monitor: DELL P2720DC, Network: Realtek RTL8125 2.5GbE + Realtek Device b852

  OS: Ubuntu 22.04, Kernel: 5.19.0-50-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Display Driver: NVIDIA 535.86.05, OpenGL: 4.6.0, OpenCL: OpenCL 3.0 CUDA 12.2.128, Vulkan: 1.3.224, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 2560x1440

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: S3D
GFLOPS > Higher Is Better
firstmachinetest . 167.74 |====================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Triad
GB/s > Higher Is Better
firstmachinetest . 12.92 |=====================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: FFT SP
GFLOPS > Higher Is Better
firstmachinetest . 726.05 |====================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: MD5 Hash
GHash/s > Higher Is Better
firstmachinetest . 25.93 |=====================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Reduction
GB/s > Higher Is Better
firstmachinetest . 262.42 |====================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: GEMM SGEMM_N
GFLOPS > Higher Is Better
firstmachinetest . 6415.70 |===================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Max SP Flops
GFLOPS > Higher Is Better
firstmachinetest . 23395.7 |===================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Bus Speed Download
GB/s > Higher Is Better
firstmachinetest . 13.38 |=====================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Bus Speed Readback
GB/s > Higher Is Better
firstmachinetest . 13.19 |=====================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Texture Read Bandwidth
GB/s > Higher Is Better
firstmachinetest . 2697.61 |===================================================

LeelaChessZero 0.28
Backend: BLAS
Nodes Per Second > Higher Is Better
firstmachinetest . 1383 |======================================================

oneDNN 3.1
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 3.94237 |===================================================

oneDNN 3.1
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 4.97774 |===================================================

oneDNN 3.1
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 0.731267 |==================================================

oneDNN 3.1
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 1.11340 |===================================================

oneDNN 3.1
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 1.53228 |===================================================

oneDNN 3.1
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 2.99807 |===================================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 8.57521 |===================================================

oneDNN 3.1
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 5.27941 |===================================================

oneDNN 3.1
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 5.24738 |===================================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 8.33924 |===================================================

oneDNN 3.1
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 1.00945 |===================================================

oneDNN 3.1
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 1.30084 |===================================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 2835.96 |===================================================

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 1417.35 |===================================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 2834.98 |===================================================

oneDNN 3.1
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 3.73203 |===================================================

oneDNN 3.1
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 8.81665 |===================================================

oneDNN 3.1
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 3.12200 |===================================================

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 1414.69 |===================================================

oneDNN 3.1
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 2834.58 |===================================================

oneDNN 3.1
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
firstmachinetest . 1415.38 |===================================================

Numpy Benchmark
Score > Higher Is Better
firstmachinetest . 807.43 |====================================================

DeepSpeech 0.6
Acceleration: CPU
Seconds < Lower Is Better
firstmachinetest . 45.23 |=====================================================

R Benchmark
Seconds < Lower Is Better

RNNoise 2020-06-28
Seconds < Lower Is Better
firstmachinetest . 13.41 |=====================================================

TensorFlow Lite 2022-05-18
Model: SqueezeNet
Microseconds < Lower Is Better
firstmachinetest . 1920.44 |===================================================

TensorFlow Lite 2022-05-18
Model: Inception V4
Microseconds < Lower Is Better
firstmachinetest . 29141.4 |===================================================

TensorFlow Lite 2022-05-18
Model: NASNet Mobile
Microseconds < Lower Is Better
firstmachinetest . 5368.28 |===================================================

TensorFlow Lite 2022-05-18
Model: Mobilenet Float
Microseconds < Lower Is Better
firstmachinetest . 1348.48 |===================================================

TensorFlow Lite 2022-05-18
Model: Mobilenet Quant
Microseconds < Lower Is Better
firstmachinetest . 2942.86 |===================================================

TensorFlow Lite 2022-05-18
Model: Inception ResNet V2
Microseconds < Lower Is Better
firstmachinetest . 26415.7 |===================================================

TensorFlow 2.12
Device: CPU - Batch Size: 16 - Model: VGG-16
images/sec > Higher Is Better
firstmachinetest . 8.45 |======================================================

TensorFlow 2.12
Device: CPU - Batch Size: 32 - Model: VGG-16
images/sec > Higher Is Better
firstmachinetest . 8.87 |======================================================

TensorFlow 2.12
Device: CPU - Batch Size: 64 - Model: VGG-16
images/sec > Higher Is Better
firstmachinetest . 9.05 |======================================================

TensorFlow 2.12
Device: CPU - Batch Size: 16 - Model: AlexNet
images/sec > Higher Is Better
firstmachinetest . 107.99 |====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 256 - Model: VGG-16
images/sec > Higher Is Better
firstmachinetest . 9.09 |======================================================

TensorFlow 2.12
Device: CPU - Batch Size: 32 - Model: AlexNet
images/sec > Higher Is Better
firstmachinetest . 142.72 |====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 512 - Model: VGG-16
images/sec > Higher Is Better

TensorFlow 2.12
Device: CPU - Batch Size: 64 - Model: AlexNet
images/sec > Higher Is Better
firstmachinetest . 167.60 |====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 256 - Model: AlexNet
images/sec > Higher Is Better
firstmachinetest . 189.83 |====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 512 - Model: AlexNet
images/sec > Higher Is Better
firstmachinetest . 193.91 |====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 16 - Model: GoogLeNet
images/sec > Higher Is Better
firstmachinetest . 77.31 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 16 - Model: ResNet-50
images/sec > Higher Is Better
firstmachinetest . 25.69 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 32 - Model: GoogLeNet
images/sec > Higher Is Better
firstmachinetest . 75.55 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 32 - Model: ResNet-50
images/sec > Higher Is Better
firstmachinetest . 25.51 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 64 - Model: GoogLeNet
images/sec > Higher Is Better
firstmachinetest . 74.56 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 64 - Model: ResNet-50
images/sec > Higher Is Better
firstmachinetest . 25.28 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 256 - Model: GoogLeNet
images/sec > Higher Is Better
firstmachinetest . 74.32 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 256 - Model: ResNet-50
images/sec > Higher Is Better
firstmachinetest . 25.12 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 512 - Model: GoogLeNet
images/sec > Higher Is Better
firstmachinetest . 74.51 |=====================================================

TensorFlow 2.12
Device: CPU - Batch Size: 512 - Model: ResNet-50
images/sec > Higher Is Better

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 8.9503 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 335.17 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 8.9435 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 111.81 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 390.93 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 7.6609 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 280.56 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 3.5599 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 167.04 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 17.95 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 125.59 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 7.9561 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 47.42 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 63.25 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 45.05 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 22.19 |=====================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 111.80 |====================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 26.82 |=====================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 84.05 |=====================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 11.89 |=====================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 1259.31 |===================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 2.3753 |====================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 943.25 |====================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 1.0581 |====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 50.01 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 59.97 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 44.69 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 22.37 |=====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 11.25 |=====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 266.69 |====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 9.5099 |====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 105.15 |====================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 111.89 |====================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 26.80 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 84.02 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 11.90 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 50.34 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 59.57 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 45.35 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 22.04 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 76.62 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 39.14 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 68.50 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 14.59 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 16.32 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 183.83 |====================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 16.14 |=====================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 61.94 |=====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 184.60 |====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 16.24 |=====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 118.66 |====================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 8.4159 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 38.60 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 77.72 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 34.84 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 28.70 |=====================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
firstmachinetest . 8.9483 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
firstmachinetest . 335.25 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
firstmachinetest . 8.9540 |====================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
firstmachinetest . 111.68 |====================================================

spaCy 3.4.1
Model: en_core_web_lg
tokens/sec > Higher Is Better
firstmachinetest . 19413 |=====================================================

spaCy 3.4.1
Model: en_core_web_trf
tokens/sec > Higher Is Better
firstmachinetest . 1657 |======================================================

Caffe 2020-02-13
Model: AlexNet - Acceleration: CPU - Iterations: 100
Milli-Seconds < Lower Is Better

Caffe 2020-02-13
Model: AlexNet - Acceleration: CPU - Iterations: 200
Milli-Seconds < Lower Is Better

Caffe 2020-02-13
Model: AlexNet - Acceleration: CPU - Iterations: 1000
Milli-Seconds < Lower Is Better

Caffe 2020-02-13
Model: GoogleNet - Acceleration: CPU - Iterations: 100
Milli-Seconds < Lower Is Better

Caffe 2020-02-13
Model: GoogleNet - Acceleration: CPU - Iterations: 200
Milli-Seconds < Lower Is Better

Caffe 2020-02-13
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
Milli-Seconds < Lower Is Better

Mobile Neural Network 2.1
Model: nasnet
ms < Lower Is Better
firstmachinetest . 5.894 |=====================================================

Mobile Neural Network 2.1
Model: mobilenetV3
ms < Lower Is Better
firstmachinetest . 0.759 |=====================================================

Mobile Neural Network 2.1
Model: squeezenetv1.1
ms < Lower Is Better
firstmachinetest . 1.388 |=====================================================

Mobile Neural Network 2.1
Model: resnet-v2-50
ms < Lower Is Better
firstmachinetest . 11.92 |=====================================================

Mobile Neural Network 2.1
Model: SqueezeNetV1.0
ms < Lower Is Better
firstmachinetest . 2.477 |=====================================================

Mobile Neural Network 2.1
Model: MobileNetV2_224
ms < Lower Is Better
firstmachinetest . 1.689 |=====================================================

Mobile Neural Network 2.1
Model: mobilenet-v1-1.0
ms < Lower Is Better
firstmachinetest . 2.797 |=====================================================

Mobile Neural Network 2.1
Model: inception-v3
ms < Lower Is Better
firstmachinetest . 16.04 |=====================================================

NCNN 20220729
Target: CPU - Model: mobilenet
ms < Lower Is Better
firstmachinetest . 7.25 |======================================================

NCNN 20220729
Target: CPU-v2-v2 - Model: mobilenet-v2
ms < Lower Is Better
firstmachinetest . 1.93 |======================================================

NCNN 20220729
Target: CPU-v3-v3 - Model: mobilenet-v3
ms < Lower Is Better
firstmachinetest . 1.48 |======================================================

NCNN 20220729
Target: CPU - Model: shufflenet-v2
ms < Lower Is Better
firstmachinetest . 1.52 |======================================================

NCNN 20220729
Target: CPU - Model: mnasnet
ms < Lower Is Better
firstmachinetest . 1.56 |======================================================

NCNN 20220729
Target: CPU - Model: efficientnet-b0
ms < Lower Is Better
firstmachinetest . 2.69 |======================================================

NCNN 20220729
Target: CPU - Model: blazeface
ms < Lower Is Better
firstmachinetest . 0.5 |=======================================================

NCNN 20220729
Target: CPU - Model: googlenet
ms < Lower Is Better
firstmachinetest . 5.98 |======================================================

NCNN 20220729
Target: CPU - Model: vgg16
ms < Lower Is Better
firstmachinetest . 29.96 |=====================================================

NCNN 20220729
Target: CPU - Model: resnet18
ms < Lower Is Better
firstmachinetest . 6.28 |======================================================

NCNN 20220729
Target: CPU - Model: alexnet
ms < Lower Is Better
firstmachinetest . 4.69 |======================================================

NCNN 20220729
Target: CPU - Model: resnet50
ms < Lower Is Better
firstmachinetest . 11.87 |=====================================================

NCNN 20220729
Target: CPU - Model: yolov4-tiny
ms < Lower Is Better
firstmachinetest . 13.21 |=====================================================

NCNN 20220729
Target: CPU - Model: squeezenet_ssd
ms < Lower Is Better
firstmachinetest . 10.37 |=====================================================

NCNN 20220729
Target: CPU - Model: regnety_400m
ms < Lower Is Better
firstmachinetest . 4.58 |======================================================

NCNN 20220729
Target: CPU - Model: vision_transformer
ms < Lower Is Better
firstmachinetest . 113.91 |====================================================

NCNN 20220729
Target: CPU - Model: FastestDet
ms < Lower Is Better
firstmachinetest . 1.86 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: mobilenet
ms < Lower Is Better
firstmachinetest . 10.70 |=====================================================

NCNN 20220729
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
ms < Lower Is Better
firstmachinetest . 3.33 |======================================================

NCNN 20220729
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
ms < Lower Is Better
firstmachinetest . 3.83 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: shufflenet-v2
ms < Lower Is Better
firstmachinetest . 2.54 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: mnasnet
ms < Lower Is Better
firstmachinetest . 3.41 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: efficientnet-b0
ms < Lower Is Better
firstmachinetest . 7.92 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: blazeface
ms < Lower Is Better
firstmachinetest . 1.05 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: googlenet
ms < Lower Is Better
firstmachinetest . 9.07 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: vgg16
ms < Lower Is Better
firstmachinetest . 68.87 |=====================================================

NCNN 20220729
Target: Vulkan GPU - Model: resnet18
ms < Lower Is Better
firstmachinetest . 8.74 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: alexnet
ms < Lower Is Better
firstmachinetest . 18.29 |=====================================================

NCNN 20220729
Target: Vulkan GPU - Model: resnet50
ms < Lower Is Better
firstmachinetest . 20.12 |=====================================================

NCNN 20220729
Target: Vulkan GPU - Model: yolov4-tiny
ms < Lower Is Better
firstmachinetest . 19.79 |=====================================================

NCNN 20220729
Target: Vulkan GPU - Model: squeezenet_ssd
ms < Lower Is Better
firstmachinetest . 13.11 |=====================================================

NCNN 20220729
Target: Vulkan GPU - Model: regnety_400m
ms < Lower Is Better
firstmachinetest . 4.46 |======================================================

NCNN 20220729
Target: Vulkan GPU - Model: vision_transformer
ms < Lower Is Better
firstmachinetest . 719.18 |====================================================

NCNN 20220729
Target: Vulkan GPU - Model: FastestDet
ms < Lower Is Better
firstmachinetest . 3.06 |======================================================

TNN 0.3
Target: CPU - Model: DenseNet
ms < Lower Is Better
firstmachinetest . 2181.40 |===================================================

TNN 0.3
Target: CPU - Model: MobileNet v2
ms < Lower Is Better
firstmachinetest . 182.51 |====================================================

TNN 0.3
Target: CPU - Model: SqueezeNet v2
ms < Lower Is Better
firstmachinetest . 41.95 |=====================================================

TNN 0.3
Target: CPU - Model: SqueezeNet v1.1
ms < Lower Is Better
firstmachinetest . 177.53 |====================================================

PlaidML
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
Examples Per Second > Higher Is Better

PlaidML
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
Examples Per Second > Higher Is Better

ECP-CANDLE 0.4
Benchmark: P1B2
Seconds < Lower Is Better

ECP-CANDLE 0.4
Benchmark: P3B1
Seconds < Lower Is Better

ECP-CANDLE 0.4
Benchmark: P3B2
Seconds < Lower Is Better

Numenta Anomaly Benchmark 1.1
Detector: KNN CAD
Seconds < Lower Is Better
firstmachinetest . 169.65 |====================================================

Numenta Anomaly Benchmark 1.1
Detector: Relative Entropy
Seconds < Lower Is Better
firstmachinetest . 12.83 |=====================================================

Numenta Anomaly Benchmark 1.1
Detector: Windowed Gaussian
Seconds < Lower Is Better
firstmachinetest . 9.667 |=====================================================

Numenta Anomaly Benchmark 1.1
Detector: Earthgecko Skyline
Seconds < Lower Is Better
firstmachinetest . 92.49 |=====================================================

Numenta Anomaly Benchmark 1.1
Detector: Bayesian Changepoint
Seconds < Lower Is Better
firstmachinetest . 18.99 |=====================================================

Numenta Anomaly Benchmark 1.1
Detector: Contextual Anomaly Detector OSE
Seconds < Lower Is Better
firstmachinetest . 30.82 |=====================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor:
Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.14 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better AI Benchmark Alpha 0.1.2 Device Inference Score Score > Higher Is Better firstmachinetest . 1809 |====================================================== AI Benchmark Alpha 0.1.2 Device Training Score Score > Higher Is Better firstmachinetest . 2496 |====================================================== AI Benchmark Alpha 0.1.2 Device AI Score Score > Higher Is Better firstmachinetest . 4305 |====================================================== Mlpack Benchmark Benchmark: scikit_ica Seconds < Lower Is Better firstmachinetest . 27.50 |===================================================== Mlpack Benchmark Benchmark: scikit_qda Seconds < Lower Is Better firstmachinetest . 33.09 |===================================================== Mlpack Benchmark Benchmark: scikit_svm Seconds < Lower Is Better firstmachinetest . 14.60 |===================================================== Mlpack Benchmark Benchmark: scikit_linearridgeregression Seconds < Lower Is Better firstmachinetest . 
1.62 |====================================================== Scikit-Learn 1.2.2 Benchmark: GLM Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: SAGA Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Tree Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Lasso Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Glmnet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sparsify Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Ward Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: MNIST Dataset Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Neighbors Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: SGD Regression Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: SGDOneClassSVM Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Lasso Path Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isolation Forest Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Fast KMeans Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Text Vectorizers Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Hierarchical Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot OMP vs. 
LARS Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Feature Expansions Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: LocalOutlierFactor Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: TSNE MNIST Dataset Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Logistic Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Incremental PCA Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Parallel Pairwise Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Pathological Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: RCV1 Logreg Convergencet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sample Without Replacement Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Covertype Dataset Benchmark Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Adult Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Perturbed Logarithm Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Threading Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Singular Value Decomposition Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Higgs Boson Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: 20 Newsgroups / Logistic Regression Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Polynomial Kernel Approximation Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Non-Negative Matrix Factorization Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Categorical Only Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Samples Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. 
N Components Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sparse Random Projections / 100 Iterations Seconds < Lower Is Better OpenCV 4.7 Test: DNN - Deep Neural Network ms < Lower Is Better firstmachinetest . 12915 |=====================================================