mltestresults2 wsl
Benchmark results collected on Ubuntu 20.04 via the Phoronix Test Suite.

System under test: rtx3080_1290k_2
  Processor: Intel Core i9-12900K (12 Cores / 24 Threads)
  Memory: 16GB
  Disk: 4 x 275GB Virtual Disk
  Graphics: NVIDIA GeForce RTX 3080 10GB
  OS: Ubuntu 20.04
  Kernel: 5.10.16.3-microsoft-standard-WSL2 (x86_64)
  Display Server: Wayland
  OpenGL: 3.3 Mesa 21.2.6
  Vulkan: 1.1.182
  Compiler: GCC 9.4.0 + CUDA 11.7
  File-System: ext4
  Screen Resolution: 1920x1080
  System Layer: wsl

LeelaChessZero 0.28 - Nodes Per Second (higher is better)
  Backend: BLAS: 905

oneDNN 2.6 - Engine: CPU - ms (lower is better)
  IP Shapes 1D - f32: 3.37536
  IP Shapes 3D - f32: 4.69622
  IP Shapes 1D - u8s8f32: 1.25033
  IP Shapes 3D - u8s8f32: 1.21899
  Convolution Batch Shapes Auto - f32: 7.83715
  Deconvolution Batch shapes_1d - f32: 10.57
  Deconvolution Batch shapes_3d - f32: 6.04959
  Convolution Batch Shapes Auto - u8s8f32: 7.93788
  Deconvolution Batch shapes_1d - u8s8f32: 1.78561
  Deconvolution Batch shapes_3d - u8s8f32: 2.46833
  Recurrent Neural Network Training - f32: 3360.37
  Recurrent Neural Network Inference - f32: 1953.39
  Recurrent Neural Network Training - u8s8f32: 3374.31
  Recurrent Neural Network Inference - u8s8f32: 1940.49
  Recurrent Neural Network Training - bf16bf16bf16: 3344.94
  Recurrent Neural Network Inference - bf16bf16bf16: 1925.58
  Matrix Multiply Batch Shapes Transformer - f32: 1.77054
  Matrix Multiply Batch Shapes Transformer - u8s8f32: 1.14390
  No results were produced for the remaining bf16bf16bf16 harnesses (IP Shapes 1D/3D, Convolution Batch Shapes Auto, Deconvolution Batch shapes_1d/3d, Matrix Multiply Batch Shapes Transformer).

Numpy Benchmark - Score (higher is better): 628.57

DeepSpeech 0.6 - Acceleration: CPU - Seconds (lower is better): 53.52

R Benchmark - Seconds (lower is better): 0.1701

RNNoise 2020-06-28 - Seconds (lower is better): 13.74

TensorFlow Lite 2022-05-18 - Microseconds (lower is better)
  SqueezeNet: 2188.99
  Inception V4: 28429.3
  NASNet Mobile: 10891.6
  Mobilenet Float: 1508.67
  Mobilenet Quant: 3584.51
  Inception ResNet V2: 28494.0

TensorFlow - Build: Cifar10 - Seconds (lower is better): no result

Caffe 2020-02-13 - Acceleration: CPU - Milli-Seconds (lower is better)
  No results were produced for AlexNet or GoogleNet at 100, 200, or 1000 iterations.

Mobile Neural Network 1.2 - ms (lower is better)
  mobilenetV3: 1.658
  squeezenetv1.1: 3.288
  resnet-v2-50: 27.30
  SqueezeNetV1.0: 4.999
  MobileNetV2_224: 3.022
  mobilenet-v1-1.0: 3.935
  inception-v3: 28.66

NCNN 20210720 - Target: CPU - ms (lower is better)
  mobilenet: 13.46
  mobilenet-v2 (CPU-v2-v2): 4.50
  mobilenet-v3 (CPU-v3-v3): 3.73
  shufflenet-v2: 4.05
  mnasnet: 4.09
  efficientnet-b0: 6.30
  blazeface: 1.84
  googlenet: 13.75
  vgg16: 37.94
  resnet18: 13.75
  alexnet: 10.36
  resnet50: 23.34
  yolov4-tiny: 21.46
  squeezenet_ssd: 19.26
  regnety_400m: 11.39

NCNN 20210720 - Target: Vulkan GPU - ms (lower is better)
  mobilenet: 431.72
  mobilenet-v2 (Vulkan GPU-v2-v2): 137.72
  mobilenet-v3 (Vulkan GPU-v3-v3): 136.16
  shufflenet-v2: 115.07
  mnasnet: 142.91
  efficientnet-b0: 218.24
  blazeface: 55.99
  googlenet: 490.75
  vgg16: 2450.49
  resnet18: 522.91
  alexnet: 406.35
  resnet50: 1312.42
  yolov4-tiny: 887.28
  squeezenet_ssd: 693.06
  regnety_400m: 213.29

TNN 0.3 - Target: CPU - ms (lower is better)
  DenseNet: 2000.04
  MobileNet v2: 179.06
  SqueezeNet v2: 39.02
  SqueezeNet v1.1: 140.97

PlaidML - FP16: No - Mode: Inference - Device: CPU - FPS (higher is better)
  VGG16: 25.96
  ResNet 50: 8.11

OpenVINO 2021.1 - Device: CPU - FPS (higher is better) / ms (lower is better)
  Face Detection 0106 FP16: 3.73 FPS / 1587.80 ms
  Face Detection 0106 FP32: 3.72 FPS / 1599.46 ms
  Person Detection 0106 FP16: 2.58 FPS / 2314.36 ms
  Person Detection 0106 FP32: 2.60 FPS / 2304.08 ms
  Age Gender Recognition Retail 0013 FP16: 8475.43 FPS / 0.72 ms
  Age Gender Recognition Retail 0013 FP32: 8436.69 FPS / 0.72 ms
  No results were produced for any of these models on the Intel GPU device.

ECP-CANDLE 0.4 - Seconds (lower is better)
  P1B2: 20.74
  P3B1: 399.28
  P3B2: 471.09

Numenta Anomaly Benchmark 1.1 - Seconds (lower is better)
  EXPoSE: 213.98
  Relative Entropy: 9.476
  Windowed Gaussian: 5.054
  Earthgecko Skyline: 74.50
  Bayesian Changepoint: 17.20

ONNX Runtime 1.11 - Device: CPU - Inferences Per Minute (higher is better)
  GPT-2 - Executor: Parallel: 6331
  GPT-2 - Executor: Standard: 8801
  bertsquad-12 - Executor: Parallel: 638
  bertsquad-12 - Executor: Standard: 803
  fcn-resnet101-11 - Executor: Parallel: 79
  fcn-resnet101-11 - Executor: Standard: 87
  ArcFace ResNet-100 - Executor: Parallel: 953
  ArcFace ResNet-100 - Executor: Standard: 1311
  super-resolution-10 - Executor: Parallel: 4644
  super-resolution-10 - Executor: Standard: 5981
  No results were produced for yolov4 (Parallel or Standard).

AI Benchmark Alpha 0.1.2 - Score (higher is better)
  Device Inference Score: 1820
  Device Training Score: 2586
  Device AI Score: 4406

Mlpack Benchmark - Seconds (lower is better)
  No results were produced for scikit_ica, scikit_qda, scikit_svm, or scikit_linearridgeregression.

Scikit-Learn 0.22.1 - Seconds (lower is better): 6.153
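
For a quick view of how the NCNN CPU and Vulkan GPU targets compare in this run, the following minimal Python sketch divides the Vulkan GPU latencies by the CPU latencies for a few models. It is illustrative only and not part of the Phoronix Test Suite; the values are copied from the NCNN results above (ms, lower is better).

  # Illustrative only: ratio of NCNN Vulkan GPU to CPU latency for selected models,
  # using the millisecond values reported in this result file.
  cpu_ms = {"mobilenet": 13.46, "vgg16": 37.94, "resnet50": 23.34}
  vulkan_ms = {"mobilenet": 431.72, "vgg16": 2450.49, "resnet50": 1312.42}

  for model, cpu in cpu_ms.items():
      ratio = vulkan_ms[model] / cpu
      print(f"{model}: Vulkan GPU target was {ratio:.1f}x slower than CPU in this run")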