sk3tchy-creator-tst
VMware testing on Debian 11 via the Phoronix Test Suite.

centos-vm:
  Processor: 2 x Intel Core i9-9900K (4 Cores)
  Motherboard: Intel 440BX (6.00 BIOS)
  Chipset: Intel 440BX/ZX/DX
  Memory: 1 x 4 GB DRAM
  Disk: 107GB Virtual disk
  Graphics: VMware SVGA II
  Network: VMware VMXNET3
  OS: Debian 11
  Kernel: 5.10.0-21-amd64 (x86_64)
  Vulkan: 1.0.2
  Compiler: GCC 10.2.1 20210110
  File-System: ext4
  Screen Resolution: 1176x885
  System Layer: VMware

All values below are for the single tested system, centos-vm.

LeelaChessZero 0.28 (Backend: BLAS; Nodes Per Second, higher is better)
  1268

oneDNN 3.0 (Engine: CPU; ms, lower is better)
  IP Shapes 1D - f32: 6.07470
  IP Shapes 3D - f32: 8.61128
  IP Shapes 1D - u8s8f32: 2.64353
  IP Shapes 3D - u8s8f32: 2.74872
  Convolution Batch Shapes Auto - f32: 16.47
  Deconvolution Batch shapes_1d - f32: 10.19
  Deconvolution Batch shapes_3d - f32: 10.59
  Convolution Batch Shapes Auto - u8s8f32: 16.79
  Deconvolution Batch shapes_1d - u8s8f32: 4.01290
  Deconvolution Batch shapes_3d - u8s8f32: 8.52308
  Recurrent Neural Network Training - f32: 5169.42
  Recurrent Neural Network Inference - f32: 3103.29
  Recurrent Neural Network Training - u8s8f32: 5203.70
  Recurrent Neural Network Inference - u8s8f32: 3114.25
  Recurrent Neural Network Training - bf16bf16bf16: 5220.57
  Recurrent Neural Network Inference - bf16bf16bf16: 3114.15
  Matrix Multiply Batch Shapes Transformer - f32: 4.20534
  Matrix Multiply Batch Shapes Transformer - u8s8f32: 2.27932
  No result reported for the remaining bf16bf16bf16 harnesses: IP Shapes 1D, IP Shapes 3D, Convolution Batch Shapes Auto, Deconvolution Batch shapes_1d, Deconvolution Batch shapes_3d, Matrix Multiply Batch Shapes Transformer.

Numpy Benchmark (Score, higher is better)
  371.44

DeepSpeech 0.6 (Acceleration: CPU; Seconds, lower is better)
  65.85

R Benchmark (Seconds, lower is better)
  0.1319

RNNoise 2020-06-28 (Seconds, lower is better)
  22.99

TensorFlow Lite 2022-05-18 (Microseconds, lower is better)
  SqueezeNet: 6449.66
  Inception V4: 95985.4
  NASNet Mobile: 14313.6
  Mobilenet Float: 4839.42
  Mobilenet Quant: 8367.17
  Inception ResNet V2: 87461.0

TensorFlow 2.10 (Device: CPU; images/sec, higher is better)
  No results reported for any tested combination of model (VGG-16, AlexNet, GoogLeNet, ResNet-50) and batch size (16, 32, 64, 256, 512).

Neural Magic DeepSparse 1.3.2 (items/sec, higher is better; ms/batch, lower is better)
  NLP Document Classification, oBERT base uncased on IMDB
    Asynchronous Multi-Stream: 3.9171 items/sec; 508.49 ms/batch
    Synchronous Single-Stream: 2.0387 items/sec; 490.51 ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased
    Asynchronous Multi-Stream: 45.71 items/sec; 43.72 ms/batch
    Synchronous Single-Stream: 25.49 items/sec; 39.23 ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90
    Asynchronous Multi-Stream: 14.61 items/sec; 136.77 ms/batch
    Synchronous Single-Stream: 8.1049 items/sec; 123.37 ms/batch
  CV Detection, YOLOv5s COCO
    Asynchronous Multi-Stream: 21.60 items/sec; 92.55 ms/batch
    Synchronous Single-Stream: 11.40 items/sec; 87.74 ms/batch
  CV Classification, ResNet-50 ImageNet
    Asynchronous Multi-Stream: 44.26 items/sec; 45.15 ms/batch
    Synchronous Single-Stream: 23.10 items/sec; 43.29 ms/batch
  NLP Text Classification, DistilBERT mnli
    Asynchronous Multi-Stream: 32.80 items/sec; 60.92 ms/batch
    Synchronous Single-Stream: 16.87 items/sec; 59.26 ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned
    Asynchronous Multi-Stream: 5.7598 items/sec; 345.84 ms/batch
    Synchronous Single-Stream: 3.0853 items/sec; 324.10 ms/batch
  NLP Text Classification, BERT base uncased SST2
    Asynchronous Multi-Stream: 16.41 items/sec; 121.81 ms/batch
    Synchronous Single-Stream: 8.4059 items/sec; 118.96 ms/batch
  NLP Token Classification, BERT base uncased conll2003
    Asynchronous Multi-Stream: 3.9141 items/sec; 509.88 ms/batch
    Synchronous Single-Stream: 2.0309 items/sec; 492.40 ms/batch

spaCy 3.4.1 (tokens/sec, higher is better)
  No result reported.

Caffe 2020-02-13 (Acceleration: CPU; Milli-Seconds, lower is better)
  AlexNet - 100 Iterations: 25273
  AlexNet - 200 Iterations: 50387
  AlexNet - 1000 Iterations: 253259
  GoogleNet - 100 Iterations: 63778
  GoogleNet - 200 Iterations: 127696
  GoogleNet - 1000 Iterations: 637873

Mobile Neural Network 2.1 (ms, lower is better)
  nasnet: 9.598
  mobilenetV3: 1.533
  squeezenetv1.1: 2.637
  resnet-v2-50: 22.81
  SqueezeNetV1.0: 4.380
  MobileNetV2_224: 2.966
  mobilenet-v1-1.0: 3.435
  inception-v3: 29.08

NCNN 20220729 (ms, lower is better)
  Target: CPU
    mobilenet: 16.62
    mobilenet-v2 (CPU-v2-v2): 4.63
    mobilenet-v3 (CPU-v3-v3): 3.72
    shufflenet-v2: 3.53
    mnasnet: 4.02
    efficientnet-b0: 8.88
    blazeface: 0.91
    googlenet: 14.52
    vgg16: 62.68
    resnet18: 12.49
    alexnet: 9.79
    resnet50: 25.44
    yolov4-tiny: 25.51
    squeezenet_ssd: 17.66
    regnety_400m: 9.64
    vision_transformer: 470.88
    FastestDet: 3.63
  Target: Vulkan GPU
    mobilenet: 654.98
    mobilenet-v2 (Vulkan GPU-v2-v2): 223.54
    mobilenet-v3 (Vulkan GPU-v3-v3): 205.42
    shufflenet-v2: 170.09
    mnasnet: 230.97
    efficientnet-b0: 343.00
    blazeface: 51.27
    googlenet: 578.67
    vgg16: 3020.88
    resnet18: 537.33
    alexnet: 667.05
    resnet50: 1298.50
    yolov4-tiny: 944.39
    squeezenet_ssd: 620.86
    regnety_400m: 281.72
    vision_transformer: 8709.22
    FastestDet: 162.71

TNN 0.3 (Target: CPU; ms, lower is better)
  DenseNet: 3875.53
  MobileNet v2: 289.27
  SqueezeNet v2: 66.23
  SqueezeNet v1.1: 285.86

PlaidML (FP16: No; Mode: Inference; Device: CPU; FPS, higher is better)
  VGG16: 8.93
  ResNet 50: 5.74

ECP-CANDLE 0.4 (Seconds, lower is better)
  No results reported for benchmarks P1B2, P3B1, or P3B2.

Numenta Anomaly Benchmark 1.1 (Seconds, lower is better)
  KNN CAD: 340.66
  Relative Entropy: 36.77
  Windowed Gaussian: 20.90
  Earthgecko Skyline: 226.20
  Bayesian Changepoint: 62.59
  Contextual Anomaly Detector OSE: 61.83

ONNX Runtime 1.14 (Device: CPU; Inferences Per Second, higher is better)
  No results reported for any tested combination of model (GPT-2, yolov4, bertsquad-12, CaffeNet 12-int8, fcn-resnet101-11, ArcFace ResNet-100, ResNet50 v1-12-int8, super-resolution-10, Faster R-CNN R-50-FPN-int8) and executor (Parallel, Standard).

AI Benchmark Alpha 0.1.2 (Score, higher is better)
  No result reported.

Mlpack Benchmark (Seconds, lower is better)
  No results reported for scikit_ica, scikit_qda, scikit_svm, or scikit_linearridgeregression.

Scikit-Learn 1.1.3 (Seconds, lower is better)
  MNIST Dataset: 109.97
  TSNE MNIST Dataset: 37.24
  Sparse Random Projections, 100 Iterations: 165.89

OpenCV 4.6 (Test: DNN - Deep Neural Network; ms, lower is better)
  13471
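As a quick consistency check on the DeepSparse numbers (not part of the original report): in a synchronous single-stream scenario one batch is in flight at a time, so the reported items/sec should be approximately the reciprocal of the reported ms/batch. The sketch below verifies this for a few models, using the values as reported; the dictionary and its labels are illustrative, not DeepSparse API names.

```python
# Sanity check: single-stream throughput (items/sec) should be roughly the
# reciprocal of single-stream latency (ms/batch), since batches run serially.
# Values are copied from the DeepSparse synchronous single-stream results.
single_stream = {
    # model label: (reported items/sec, reported ms/batch)
    "oBERT doc classification": (2.0387, 490.51),
    "BERT token classification": (2.0309, 492.40),
    "YOLOv5s COCO detection": (11.40, 87.74),
    "ResNet-50 classification": (23.10, 43.29),
}
for model, (ips, ms_per_batch) in single_stream.items():
    derived = 1000.0 / ms_per_batch  # ms per batch -> batches (items) per second
    assert abs(derived - ips) / ips < 0.02, (model, derived, ips)
    print(f"{model}: reported {ips} items/sec, derived {derived:.4f}")
```

All four pairs agree to within 2%, which suggests the two metrics were captured consistently from the same runs.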
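The Caffe totals scale almost linearly with iteration count, so per-iteration time is the more comparable figure than the raw totals. A small sketch over the reported values (the dictionary layout is illustrative):

```python
# Per-iteration Caffe cost derived from the reported totals (milliseconds).
caffe_totals_ms = {
    "AlexNet":   {100: 25273, 200: 50387, 1000: 253259},
    "GoogleNet": {100: 63778, 200: 127696, 1000: 637873},
}
for model, runs in caffe_totals_ms.items():
    per_iter = [total / iters for iters, total in sorted(runs.items())]
    spread = (max(per_iter) - min(per_iter)) / min(per_iter)
    # Within ~2% across 100/200/1000 iterations: startup overhead is negligible
    # and the runs are dominated by the iteration loop itself.
    assert spread < 0.02, (model, per_iter)
    print(f"{model}: ~{sum(per_iter) / len(per_iter):.1f} ms per iteration")
```

This works out to roughly 253 ms per AlexNet iteration and 638 ms per GoogleNet iteration on this VM.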
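One pattern worth highlighting in the NCNN results: the Vulkan GPU target is dramatically slower than the CPU target for every model. This is consistent with the guest's Vulkan 1.0.2 on VMware SVGA II being a non-accelerated (software) path, though that is an inference from the numbers, not something the report states. A sketch of the slowdown factors, using a sample of the reported values:

```python
# NCNN latency (ms) from the report: CPU target vs. Vulkan GPU target.
ncnn_ms = {
    # model: (cpu_ms, vulkan_ms)
    "mobilenet": (16.62, 654.98),
    "vgg16": (62.68, 3020.88),
    "resnet50": (25.44, 1298.50),
    "vision_transformer": (470.88, 8709.22),
    "FastestDet": (3.63, 162.71),
}
for model, (cpu, vulkan) in ncnn_ms.items():
    slowdown = vulkan / cpu
    # Every sampled model is at least 15x slower on the VM's Vulkan target.
    assert slowdown > 15, (model, slowdown)
    print(f"{model}: Vulkan target {slowdown:.1f}x slower than CPU")
```

In a VM like this, the CPU numbers are the meaningful ones; the Vulkan figures mostly measure the cost of the virtualized graphics stack.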