m600_7940hs-96gb-5600mhz-16gb-igpu-performance-tpd-2tb-sn850x-2023-09-01-2

AMD Ryzen 9 7940HS testing with a Win element M600 (SR500P03_P5C2V07 BIOS) and AMD Phoenix1 16GB on EndeavourOS rolling via the Phoronix Test Suite.

Hardware:
    Processor: AMD Ryzen 9 7940HS @ 4.00GHz (8 Cores / 16 Threads)
    Motherboard: Win element M600 (SR500P03_P5C2V07 BIOS)
    Chipset: AMD Device 14e8
    Memory: 80GB
    Disk: Western Digital WD_BLACK SN850X 2000GB
    Graphics: AMD Phoenix1 16GB
    Audio: AMD Rembrandt Radeon HD Audio
    Monitor: DELL S3422DW
    Network: 2 x Realtek RTL8125 2.5GbE + Intel Wi-Fi 6 AX200

Software:
    OS: EndeavourOS rolling
    Kernel: 6.4.12-arch1-1 (x86_64)
    Desktop: Xfce 4.18
    Display Server: X Server 1.21.1.8
    OpenGL: 4.6 Mesa 23.1.6-arch1.4 (LLVM 16.0.6 DRM 3.52)
    Compiler: GCC 13.2.1 20230801
    File-System: ext4
    Screen Resolution: 3440x1440

Tests run (result values not present in this export):

SHOC Scalable HeterOgeneous Computing 2020-04-17 (Target: OpenCL):
    S3D, Triad, FFT SP, MD5 Hash, Reduction, GEMM SGEMM_N, Max SP Flops,
    Bus Speed Download, Bus Speed Readback, Texture Read Bandwidth

PlaidML (FP16: No - Mode: Inference - Device: CPU; Examples Per Second > Higher Is Better):
    VGG16, ResNet 50

TensorFlow 2.12 (Device: CPU; images/sec > Higher Is Better):
    VGG-16 at batch sizes 16, 32, 64, 256, 512
    AlexNet at batch sizes 16, 32, 64, 256, 512
    GoogLeNet at batch sizes 16, 32, 64, 256, 512
    ResNet-50 at batch sizes 16, 32, 64, 256, 512

ONNX Runtime 1.14 (Device: CPU; Executors: Parallel and Standard; Inferences Per Second > Higher Is Better):
    GPT-2, yolov4, bertsquad-12, CaffeNet 12-int8, fcn-resnet101-11,
    ArcFace ResNet-100, ResNet50 v1-12-int8, super-resolution-10,
    Faster R-CNN R-50-FPN-int8

Neural Magic DeepSparse 1.5 (Scenarios: Asynchronous Multi-Stream and Synchronous Single-Stream; items/sec > Higher Is Better):
    NLP Document Classification, oBERT base uncased on IMDB
    NLP Text Classification, BERT base uncased SST2 (baseline and Sparse INT8)
    NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased
    NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90
    NLP Text Classification, DistilBERT mnli
    NLP Token Classification, BERT base uncased conll2003
    ResNet-50 (Baseline and Sparse INT8)
    CV Detection, YOLOv5s COCO (baseline and Sparse INT8)
    CV Classification, ResNet-50 ImageNet
    CV Segmentation, 90% Pruned YOLACT Pruned
    BERT-Large, NLP Question Answering (baseline and Sparse INT8)

LeelaChessZero 0.28 (Backend: BLAS; Nodes Per Second > Higher Is Better)
AI Benchmark Alpha 0.1.2 (Score > Higher Is Better)
spaCy 3.4.1 (tokens/sec > Higher Is Better)
Caffe 2020-02-13 (Acceleration: CPU; Milli-Seconds < Lower Is Better):
    AlexNet and GoogleNet at 100, 200, and 1000 iterations
Mobile Neural Network 2.1 (ms < Lower Is Better)
TNN 0.3 (Target: CPU; ms < Lower Is Better): DenseNet, MobileNet v2, SqueezeNet v2, SqueezeNet v1.1
R Benchmark (Seconds < Lower Is Better)
ECP-CANDLE 0.4 (Seconds < Lower Is Better): P1B2, P3B1, P3B2
Mlpack Benchmark (Seconds < Lower Is Better): scikit_ica, scikit_qda, scikit_svm, scikit_linearridgeregression
Scikit-Learn 1.2.2 (Seconds < Lower Is Better): SAGA, Glmnet, Plot Neighbors, Plot Lasso Path, Isolation Forest, Text Vectorizers, LocalOutlierFactor, Plot Incremental PCA

Recorded results (all for m600_7940hs-96gb-5600mhz-16gb-igpu-performance-tpd-2tb-sn850x-2023-09-01-2):

Numpy Benchmark (Score > Higher Is Better): 692.94

TensorFlow Lite 2022-05-18 (Microseconds < Lower Is Better):
    SqueezeNet ............ 1921.51
    Inception V4 .......... 28464.8
    NASNet Mobile ......... 7083.46
    Mobilenet Float ....... 1515.79
    Mobilenet Quant ....... 3105.79
    Inception ResNet V2 ... 28085.8

oneDNN 3.1 (Engine: CPU; ms < Lower Is Better):
    IP Shapes 1D - f32 .................................. 4.54620
    IP Shapes 3D - f32 .................................. 4.61966
    IP Shapes 1D - u8s8f32 .............................. 0.720656
    IP Shapes 3D - u8s8f32 .............................. 1.58351
    IP Shapes 1D - bf16bf16bf16 ......................... 1.58579
    IP Shapes 3D - bf16bf16bf16 ......................... 2.65992
    Convolution Batch Shapes Auto - f32 ................. 10.53
    Convolution Batch Shapes Auto - u8s8f32 ............. 9.06392
    Convolution Batch Shapes Auto - bf16bf16bf16 ........ 4.21333
    Deconvolution Batch shapes_1d - f32 ................. 6.74907
    Deconvolution Batch shapes_1d - u8s8f32 ............. 0.912151
    Deconvolution Batch shapes_1d - bf16bf16bf16 ........ 8.55150
    Deconvolution Batch shapes_3d - f32 ................. 4.50492
    Deconvolution Batch shapes_3d - u8s8f32 ............. 1.07871
    Deconvolution Batch shapes_3d - bf16bf16bf16 ........ 2.75936
    Recurrent Neural Network Training - f32 ............. 2672.08
    Recurrent Neural Network Training - u8s8f32 ......... 2718.50
    Recurrent Neural Network Training - bf16bf16bf16 .... 2714.85
    Recurrent Neural Network Inference - f32 ............ 1436.72
    Recurrent Neural Network Inference - u8s8f32 ........ 1409.84
    Recurrent Neural Network Inference - bf16bf16bf16 ... 1402.63

NCNN 20230517 (ms < Lower Is Better):
    Model                     CPU      Vulkan GPU
    mobilenet                 8.07     8.24
    mobilenet-v2 (v2-v2)      2.43     2.47
    mobilenet-v3 (v3-v3)      2.23     2.28
    shufflenet-v2             1.97     2.01
    mnasnet                   2.18     2.21
    efficientnet-b0           3.30     3.38
    blazeface                 0.73     0.75
    googlenet                 6.33     6.66
    vgg16                     31.14    31.20
    resnet18                  4.64     4.90
    alexnet                   4.52     4.91
    resnet50                  10.45    10.37
    yolov4-tiny               13.42    13.60
    squeezenet_ssd            6.30     6.10
    regnety_400m              5.14     5.22
    vision_transformer        53.79    54.04
    FastestDet                2.49     2.48

DeepSpeech 0.6 (Acceleration: CPU; Seconds < Lower Is Better): 46.96
RNNoise 2020-06-28 (Seconds < Lower Is Better): 14.30

Numenta Anomaly Benchmark 1.1 (Seconds < Lower Is Better):
    KNN CAD ........................... 107.12
    Relative Entropy .................. 9.716
    Windowed Gaussian ................. 6.466
    Earthgecko Skyline ................ 86.57
    Bayesian Changepoint .............. 23.27
    Contextual Anomaly Detector OSE ... 30.66

Scikit-Learn 1.2.2 (Seconds < Lower Is Better):
    GLM ......................... 1649.84
    Tree ........................ 44.32
    Lasso ....................... 3207.15
    Sparsify .................... 104.15
    Plot Ward ................... 39.65
    MNIST Dataset ............... 55.46
    SGD Regression .............. 655.59
    SGDOneClassSVM .............. 209.02
    Plot Fast KMeans ............ 512.43
    Plot Hierarchical ........... 128.61
    Plot OMP vs. LARS ........... 627.89
    Feature Expansions .......... 110.20
    TSNE MNIST Dataset .......... 233.66
    Isotonic / Logistic ......... 1118.01
    Hist Gradient Boosting ...... 62.78
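
The NCNN results include both a CPU and a Vulkan GPU target for the same models, which invites a direct comparison. A minimal sketch (not part of the Phoronix Test Suite; values transcribed from this result file, with `relative_delta` an illustrative helper):

```python
# NCNN 20230517 timings (ms, lower is better) transcribed from this result file.
ncnn_cpu = {
    "mobilenet": 8.07, "vgg16": 31.14, "resnet50": 10.45,
    "yolov4-tiny": 13.42, "vision_transformer": 53.79,
}
ncnn_vulkan = {
    "mobilenet": 8.24, "vgg16": 31.20, "resnet50": 10.37,
    "yolov4-tiny": 13.60, "vision_transformer": 54.04,
}

def relative_delta(cpu_ms: float, gpu_ms: float) -> float:
    """Percent change of the Vulkan GPU time vs. the CPU time (negative = GPU faster)."""
    return (gpu_ms - cpu_ms) / cpu_ms * 100.0

for model in ncnn_cpu:
    print(f"{model}: {relative_delta(ncnn_cpu[model], ncnn_vulkan[model]):+.1f}%")
```

The two targets track each other within a few percent on every model, which may indicate the Vulkan runs effectively exercised the same CPU path rather than the Radeon 780M iGPU; the raw data alone cannot confirm that.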
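
The oneDNN recurrent-neural-network harnesses were run with three data types, so the reported values also let one check what reduced precision buys on this CPU. A minimal sketch (values transcribed from this result file; `speedup_vs_f32` is an illustrative helper, not a oneDNN API):

```python
# oneDNN 3.1 RNN harness timings (ms, lower is better) from this result file.
rnn_ms = {
    ("Training", "f32"): 2672.08,
    ("Training", "u8s8f32"): 2718.50,
    ("Training", "bf16bf16bf16"): 2714.85,
    ("Inference", "f32"): 1436.72,
    ("Inference", "u8s8f32"): 1409.84,
    ("Inference", "bf16bf16bf16"): 1402.63,
}

def speedup_vs_f32(phase: str, dtype: str) -> float:
    """f32 time divided by the given data type's time (>1.0 means faster than f32)."""
    return rnn_ms[(phase, "f32")] / rnn_ms[(phase, dtype)]

for phase in ("Training", "Inference"):
    for dtype in ("u8s8f32", "bf16bf16bf16"):
        print(f"{phase} {dtype}: {speedup_vs_f32(phase, dtype):.3f}x")
```

All four ratios land within about 2% of 1.0, so on this run the u8s8f32 and bf16 RNN harnesses perform essentially the same as f32 rather than delivering a reduced-precision speedup.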