machine_learning_test
Nano_machine_learning_test_result

machine_learning_test_result:

  Processor: ARMv8 Cortex-A78E @ 1.51GHz (6 Cores), Motherboard: EDK II 3.1-32827747, Memory: 8GB, Disk: 256GB TEAM Ind N745-M80W, Graphics: NVIDIA Tegra Orin, Monitor: BenQ GW2381, Network: Realtek RTL8111/8168/8411 + Realtek RTL8822CE 802.11ac PCIe

  OS: Ubuntu 20.04, Kernel: 5.10.104-tegra (aarch64), Desktop: GNOME Shell 3.36.9, Display Server: X Server 1.20.13, Display Driver: NVIDIA 35.3.1, OpenGL: 4.6.0, Vulkan: 1.3.212, Compiler: GCC 9.4.0 + CUDA 11.4, File-System: ext4, Screen Resolution: 1920x1200

Scikit-Learn 1.2.2 (Seconds < Lower Is Better)

  LocalOutlierFactor                  272.75
  Feature Expansions                  347.10
  Plot OMP vs. LARS                   159.49
  Plot Hierarchical                   473.07
  Text Vectorizers                    206.41
  Plot Lasso Path                     589.35
  SGD Regression                      181.39
  Plot Neighbors                      687.96
  MNIST Dataset                       205.61
  TSNE MNIST Dataset                 1313.49
  Plot Ward                           156.97
  Sparsify                            183.64
  Lasso                               793.01
  Tree                                181.66
  SAGA                               2449.38
  GLM                                1051.97

Numenta Anomaly Benchmark 1.1 (Seconds < Lower Is Better)

  Contextual Anomaly Detector OSE     260.82
  Bayesian Changepoint                210.46
  Earthgecko Skyline                  550.59
  Windowed Gaussian                    38.47
  Relative Entropy                     92.50
  KNN CAD                             739.30

TNN 0.3 (ms < Lower Is Better)

  CPU - SqueezeNet v1.1               483.10
  CPU - SqueezeNet v2                 132.24
  CPU - MobileNet v2                  544.13
  CPU - DenseNet                     7490.18

NCNN 20220729 (ms < Lower Is Better)

  Vulkan GPU - FastestDet               4.86
  Vulkan GPU - vision_transformer     935.56
  Vulkan GPU - regnety_400m             6.29
  Vulkan GPU - squeezenet_ssd          13.41
  Vulkan GPU - yolov4-tiny             23.21
  Vulkan GPU - resnet50                 9.67
  Vulkan GPU - resnet18                 5.58
  Vulkan GPU - alexnet                  9.03
  Vulkan GPU - vgg16                   19.34
  Vulkan GPU - googlenet                8.21
  Vulkan GPU - blazeface                2.93
  Vulkan GPU - efficientnet-b0          8.61
  Vulkan GPU - mnasnet                  4.44
  Vulkan GPU - shufflenet-v2            5.10
  Vulkan GPU-v3-v3 - mobilenet-v3       5.27
  Vulkan GPU-v2-v2 - mobilenet-v2       4.08
  Vulkan GPU - mobilenet               13.39
  CPU - FastestDet                      5.57
  CPU - vision_transformer            805.53
  CPU - regnety_400m                   12.83
  CPU - squeezenet_ssd                 17.44
  CPU - yolov4-tiny                    29.49
  CPU - resnet50                       31.60
  CPU - resnet18                       13.47
  CPU - alexnet                        15.40
  CPU - vgg16                          61.65
  CPU - googlenet                      17.86
  CPU - blazeface                       2.36
  CPU - efficientnet-b0                 9.71
  CPU - mnasnet                         5.50
  CPU - shufflenet-v2                   4.96
  CPU-v3-v3 - mobilenet-v3              5.63
  CPU-v2-v2 - mobilenet-v2              6.10
  CPU - mobilenet                      20.10

Mobile Neural Network 2.1 (ms < Lower Is Better)

  inception-v3                         84.63
  mobilenet-v1-1.0                     11.67
  MobileNetV2_224                       7.837
  SqueezeNetV1.0                       12.86
  resnet-v2-50                         66.27
  squeezenetv1.1                        7.311
  mobilenetV3                           3.101
  nasnet                               26.44

TensorFlow Lite 2022-05-18 (Microseconds < Lower Is Better)

  Inception ResNet V2              187430
  Mobilenet Quant                    5186.43
  Mobilenet Float                   10162.5
  NASNet Mobile                     30199.9
  Inception V4                     205030
  SqueezeNet                        14679.2

RNNoise 2020-06-28 (Seconds < Lower Is Better)

  36.75

R Benchmark (Seconds < Lower Is Better)

  0.3711

Numpy Benchmark (Score > Higher Is Better)

  165.05

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second > Higher Is Better)

  253

OpenCV 4.7 - Test: DNN - Deep Neural Network (ms < Lower Is Better)

  284715

The following tests were configured but did not report a result:

  Scikit-Learn 1.2.2 (Seconds): Sparse Random Projections / 100 Iterations; Kernel PCA Solvers / Time vs. N Components; Kernel PCA Solvers / Time vs. N Samples; Hist Gradient Boosting Categorical Only; Plot Non-Negative Matrix Factorization; Plot Polynomial Kernel Approximation; 20 Newsgroups / Logistic Regression; Hist Gradient Boosting Higgs Boson; Plot Singular Value Decomposition; Hist Gradient Boosting Threading; Isotonic / Perturbed Logarithm; Hist Gradient Boosting Adult; Covertype Dataset Benchmark; Sample Without Replacement; RCV1 Logreg Convergence; Isotonic / Pathological; Plot Parallel Pairwise; Hist Gradient Boosting; Plot Incremental PCA; Isotonic / Logistic; Plot Fast KMeans; Isolation Forest; SGDOneClassSVM; Glmnet

  Mlpack Benchmark (Seconds): scikit_linearridgeregression; scikit_svm; scikit_qda; scikit_ica

  AI Benchmark Alpha 0.1.2 (Score)

  ONNX Runtime 1.14 (Inferences Per Second), Device: CPU, Executors: Standard and Parallel: Faster R-CNN R-50-FPN-int8; super-resolution-10; ResNet50 v1-12-int8; ArcFace ResNet-100; fcn-resnet101-11; CaffeNet 12-int8; bertsquad-12; yolov4; GPT-2

  ECP-CANDLE 0.4 (Seconds): P3B2; P3B1; P1B2

  PlaidML (Examples Per Second), FP16: No, Mode: Inference, Device: CPU: ResNet 50; VGG16

  Caffe 2020-02-13 (Milli-Seconds), Acceleration: CPU: GoogleNet at 1000/200/100 iterations; AlexNet at 1000/200/100 iterations

  spaCy 3.4.1 (tokens/sec)

  Neural Magic DeepSparse 1.5 (items/sec), Scenarios: Synchronous Single-Stream and Asynchronous Multi-Stream: NLP Token Classification, BERT base uncased conll2003; NLP Text Classification, BERT base uncased SST2; CV Segmentation, 90% Pruned YOLACT Pruned; NLP Text Classification, DistilBERT mnli; CV Classification, ResNet-50 ImageNet; CV Detection, YOLOv5s COCO; NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90; NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased; NLP Document Classification, oBERT base uncased on IMDB

  TensorFlow 2.12 (images/sec), Device: CPU: ResNet-50, GoogLeNet, AlexNet, and VGG-16 at batch sizes 512, 256, 64, 32, and 16

  oneDNN 3.1 (ms), Engine: CPU: Recurrent Neural Network Inference and Training; Deconvolution Batch shapes_1d and shapes_3d; Convolution Batch Shapes Auto; IP Shapes 1D and 3D - each with data types f32, u8s8f32, and bf16bf16bf16
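The TensorFlow Lite results above are reported in microseconds, while the NCNN, TNN, and Mobile Neural Network results are in milliseconds. A minimal sketch (values copied from this result file) converts them for side-by-side reading:

```python
# TensorFlow Lite 2022-05-18 per-inference times from the run above, in
# microseconds, converted to milliseconds for comparison with the other suites.
tflite_us = {
    "Inception ResNet V2": 187430,
    "Mobilenet Quant": 5186.43,
    "Mobilenet Float": 10162.5,
    "NASNet Mobile": 30199.9,
    "Inception V4": 205030,
    "SqueezeNet": 14679.2,
}

# 1 ms = 1000 microseconds
tflite_ms = {model: us / 1000.0 for model, us in tflite_us.items()}

# Print fastest model first.
for model, ms in sorted(tflite_ms.items(), key=lambda kv: kv[1]):
    print(f"{model:20s} {ms:10.2f} ms")
```

This puts Mobilenet Quant at roughly 5.2 ms per inference on this board, in the same range as the NCNN and MNN mobile-class models.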
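NCNN measured the same models on both the CPU and Vulkan GPU targets, so a per-model speedup can be read directly off the figures above. A small sketch (times in ms, copied from this result file; the pairing of CPU and GPU rows is the only assumption):

```python
# NCNN 20220729 inference times (ms) from the run above, for models measured
# on both targets: (CPU time, Vulkan GPU time).
ncnn_ms = {
    "FastestDet":         (5.57,   4.86),
    "vision_transformer": (805.53, 935.56),
    "regnety_400m":       (12.83,  6.29),
    "squeezenet_ssd":     (17.44,  13.41),
    "yolov4-tiny":        (29.49,  23.21),
    "resnet50":           (31.60,  9.67),
    "resnet18":           (13.47,  5.58),
    "alexnet":            (15.40,  9.03),
    "vgg16":              (61.65,  19.34),
    "googlenet":          (17.86,  8.21),
    "blazeface":          (2.36,   2.93),
    "efficientnet-b0":    (9.71,   8.61),
    "mnasnet":            (5.50,   4.44),
    "shufflenet-v2":      (4.96,   5.10),
    "mobilenet-v3":       (5.63,   5.27),
    "mobilenet-v2":       (6.10,   4.08),
    "mobilenet":          (20.10,  13.39),
}

# Ratio > 1.0 means the Vulkan GPU target was faster than the CPU target.
speedup = {m: cpu / gpu for m, (cpu, gpu) in ncnn_ms.items()}

for model, s in sorted(speedup.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model:20s} {s:5.2f}x")
```

The pattern in this run: heavier convolutional models (resnet50, vgg16) gain roughly 3x from the Vulkan GPU target, while the smallest models (blazeface, shufflenet-v2) and vision_transformer run slightly faster on the CPU.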