hurricane-server: AMD Eng Sample 100-000000897-03 testing with a Supermicro Super Server H13SSL-N v2.00 (3.0 BIOS) and llvmpipe on Ubuntu 24.04 via the Phoronix Test Suite.

hurricane-server:
  Processor: AMD Eng Sample 100-000000897-03 @ 2.55GHz (32 Cores / 64 Threads)
  Motherboard: Supermicro Super Server H13SSL-N v2.00 (3.0 BIOS)
  Chipset: AMD Device 14a4
  Memory: 32 GB + 32 GB + 32 GB + 16 GB + 16 GB + 16 GB + 32 GB + 32 GB + 32 GB + 16 GB + 16 GB + 16 GB DDR5-4800MT/s
  Disk: 512GB INTEL SSDPEKKF512G8L
  Graphics: llvmpipe (405/715MHz)
  Network: 2 x Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 24.04
  Kernel: 6.8.0-50-generic (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server 1.21.1.11
  Display Driver: NVIDIA 535.183.01
  OpenGL: 4.5 Mesa 24.0.9-0ubuntu0.3 (LLVM 17.0.6 256 bits)
  OpenCL: OpenCL 3.0 CUDA 12.2.148
  Compiler: GCC 13.3.0
  File-System: ext4
  Screen Resolution: 1024x768

OpenVINO GenAI 2024.5
Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU
tokens/s > Higher Is Better
hurricane-server . 47.17 |===================================================

OpenVINO GenAI 2024.5
Model: Falcon-7b-instruct-int4-ov - Device: CPU
tokens/s > Higher Is Better
hurricane-server . 39.37 |===================================================

OpenVINO GenAI 2024.5
Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU
tokens/s > Higher Is Better
hurricane-server . 65.76 |===================================================

OpenVINO GenAI 2024.5
Model: Gemma-7b-int4-ov - Device: CPU
tokens/s > Higher Is Better
hurricane-server . 30.11 |===================================================

Whisperfile 20Aug24
Model Size: Medium
Seconds < Lower Is Better
hurricane-server . 312.22 |===================================================

Whisperfile 20Aug24
Model Size: Small
Seconds < Lower Is Better
hurricane-server . 137.65 |===================================================

Whisperfile 20Aug24
Model Size: Tiny
Seconds < Lower Is Better
hurricane-server . 48.32 |===================================================

Whisper.cpp 1.6.2
Model: ggml-medium.en - Input: 2016 State of the Union
Seconds < Lower Is Better
hurricane-server . 605.22 |===================================================

Whisper.cpp 1.6.2
Model: ggml-small.en - Input: 2016 State of the Union
Seconds < Lower Is Better
hurricane-server . 243.06 |===================================================

Whisper.cpp 1.6.2
Model: ggml-base.en - Input: 2016 State of the Union
Seconds < Lower Is Better
hurricane-server . 116.86 |===================================================

Scikit-Learn 1.2.2
Benchmark: Sparse Random Projections / 100 Iterations
Seconds < Lower Is Better
hurricane-server . 659.73 |===================================================

Scikit-Learn 1.2.2
Benchmark: Kernel PCA Solvers / Time vs. N Components
Seconds < Lower Is Better
hurricane-server . 40.06 |===================================================

Scikit-Learn 1.2.2
Benchmark: Kernel PCA Solvers / Time vs. N Samples
Seconds < Lower Is Better
hurricane-server . 68.49 |===================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Categorical Only
Seconds < Lower Is Better
hurricane-server . 44.82 |===================================================

Scikit-Learn 1.2.2
Benchmark: Plot Polynomial Kernel Approximation
Seconds < Lower Is Better
hurricane-server . 129.02 |===================================================

Scikit-Learn 1.2.2
Benchmark: 20 Newsgroups / Logistic Regression
Seconds < Lower Is Better
hurricane-server . 12.94 |===================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Higgs Boson
Seconds < Lower Is Better
hurricane-server . 77.71 |===================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Threading
Seconds < Lower Is Better
hurricane-server . 68.02 |===================================================

Scikit-Learn 1.2.2
Benchmark: Isotonic / Perturbed Logarithm
Seconds < Lower Is Better
hurricane-server . 2180.92 |===================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Adult
Seconds < Lower Is Better
hurricane-server . 245.15 |===================================================

Scikit-Learn 1.2.2
Benchmark: Covertype Dataset Benchmark
Seconds < Lower Is Better
hurricane-server . 434.51 |===================================================

Scikit-Learn 1.2.2
Benchmark: Sample Without Replacement
Seconds < Lower Is Better
hurricane-server . 135.81 |===================================================

Scikit-Learn 1.2.2
Benchmark: Isotonic / Pathological
Seconds < Lower Is Better
hurricane-server . 4978.99 |===================================================

Scikit-Learn 1.2.2
Benchmark: Plot Parallel Pairwise
Seconds < Lower Is Better
hurricane-server . 123.03 |===================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting
Seconds < Lower Is Better
hurricane-server . 247.45 |===================================================

Scikit-Learn 1.2.2
Benchmark: Plot Incremental PCA
Seconds < Lower Is Better
hurricane-server . 36.50 |===================================================

Scikit-Learn 1.2.2
Benchmark: Isotonic / Logistic
Seconds < Lower Is Better
hurricane-server . 1974.80 |===================================================

Scikit-Learn 1.2.2
Benchmark: TSNE MNIST Dataset
Seconds < Lower Is Better
hurricane-server . 268.06 |===================================================

Scikit-Learn 1.2.2
Benchmark: LocalOutlierFactor
Seconds < Lower Is Better
hurricane-server . 25.66 |===================================================

Scikit-Learn 1.2.2
Benchmark: Feature Expansions
Seconds < Lower Is Better
hurricane-server . 126.78 |===================================================

Scikit-Learn 1.2.2
Benchmark: Plot OMP vs. LARS
Seconds < Lower Is Better
hurricane-server . 46.72 |===================================================

Scikit-Learn 1.2.2
Benchmark: Plot Hierarchical
Seconds < Lower Is Better
hurricane-server . 197.95 |===================================================

Scikit-Learn 1.2.2
Benchmark: Text Vectorizers
Seconds < Lower Is Better
hurricane-server . 65.35 |===================================================

Scikit-Learn 1.2.2
Benchmark: Isolation Forest
Seconds < Lower Is Better
hurricane-server . 236.89 |===================================================

Scikit-Learn 1.2.2
Benchmark: SGDOneClassSVM
Seconds < Lower Is Better
hurricane-server . 330.21 |===================================================

Scikit-Learn 1.2.2
Benchmark: SGD Regression
Seconds < Lower Is Better
hurricane-server . 87.93 |===================================================

Scikit-Learn 1.2.2
Benchmark: Plot Neighbors
Seconds < Lower Is Better
hurricane-server . 174.50 |===================================================

Scikit-Learn 1.2.2
Benchmark: MNIST Dataset
Seconds < Lower Is Better
hurricane-server . 78.40 |===================================================

Scikit-Learn 1.2.2
Benchmark: Plot Ward
Seconds < Lower Is Better
hurricane-server . 57.03 |===================================================

Scikit-Learn 1.2.2
Benchmark: Sparsify
Seconds < Lower Is Better
hurricane-server . 156.75 |===================================================

Scikit-Learn 1.2.2
Benchmark: Lasso
Seconds < Lower Is Better
hurricane-server . 536.89 |===================================================

Scikit-Learn 1.2.2
Benchmark: Tree
Seconds < Lower Is Better
hurricane-server . 68.79 |===================================================

Scikit-Learn 1.2.2
Benchmark: SAGA
Seconds < Lower Is Better
hurricane-server . 1027.77 |===================================================

Scikit-Learn 1.2.2
Benchmark: GLM
Seconds < Lower Is Better
hurricane-server . 200.42 |===================================================

OpenVINO 2024.5
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
ms < Lower Is Better
hurricane-server . 0.41 |===================================================

OpenVINO 2024.5
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
FPS > Higher Is Better
hurricane-server . 64559.58 |===================================================

OpenVINO 2024.5
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
ms < Lower Is Better
hurricane-server . 27.52 |===================================================

OpenVINO 2024.5
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
FPS > Higher Is Better
hurricane-server . 1160.60 |===================================================

OpenVINO 2024.5
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 0.58 |===================================================

OpenVINO 2024.5
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 47505.84 |===================================================

OpenVINO 2024.5
Model: Person Re-Identification Retail FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 5.82 |===================================================

OpenVINO 2024.5
Model: Person Re-Identification Retail FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 2734.75 |===================================================

OpenVINO 2024.5
Model: Handwritten English Recognition FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 29.25 |===================================================

OpenVINO 2024.5
Model: Handwritten English Recognition FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 1092.39 |===================================================

OpenVINO 2024.5
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 11.90 |===================================================

OpenVINO 2024.5
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 2638.33 |===================================================

OpenVINO 2024.5
Model: Person Vehicle Bike Detection FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 7.31 |===================================================

OpenVINO 2024.5
Model: Person Vehicle Bike Detection FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 2176.46 |===================================================

OpenVINO 2024.5
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
hurricane-server . 8.29 |===================================================

OpenVINO 2024.5
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
hurricane-server . 3835.55 |===================================================

OpenVINO 2024.5
Model: Machine Translation EN To DE FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 68.60 |===================================================

OpenVINO 2024.5
Model: Machine Translation EN To DE FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 233.04 |===================================================

OpenVINO 2024.5
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
ms < Lower Is Better
hurricane-server . 19.12 |===================================================

OpenVINO 2024.5
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
FPS > Higher Is Better
hurricane-server . 834.81 |===================================================

OpenVINO 2024.5
Model: Face Detection Retail FP16-INT8 - Device: CPU
ms < Lower Is Better
hurricane-server . 4.59 |===================================================

OpenVINO 2024.5
Model: Face Detection Retail FP16-INT8 - Device: CPU
FPS > Higher Is Better
hurricane-server . 6872.55 |===================================================

OpenVINO 2024.5
Model: Weld Porosity Detection FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 16.08 |===================================================

OpenVINO 2024.5
Model: Weld Porosity Detection FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 1984.68 |===================================================

OpenVINO 2024.5
Model: Vehicle Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
hurricane-server . 6.72 |===================================================

OpenVINO 2024.5
Model: Vehicle Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
hurricane-server . 2367.68 |===================================================

OpenVINO 2024.5
Model: Road Segmentation ADAS FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 20.28 |===================================================

OpenVINO 2024.5
Model: Road Segmentation ADAS FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 786.83 |===================================================

OpenVINO 2024.5
Model: Face Detection Retail FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 3.29 |===================================================

OpenVINO 2024.5
Model: Face Detection Retail FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 4791.90 |===================================================

OpenVINO 2024.5
Model: Face Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
hurricane-server . 412.40 |===================================================

OpenVINO 2024.5
Model: Face Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
hurricane-server . 38.67 |===================================================

OpenVINO 2024.5
Model: Vehicle Detection FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 10.37 |===================================================

OpenVINO 2024.5
Model: Vehicle Detection FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 1536.16 |===================================================

OpenVINO 2024.5
Model: Person Detection FP32 - Device: CPU
ms < Lower Is Better
hurricane-server . 82.70 |===================================================

OpenVINO 2024.5
Model: Person Detection FP32 - Device: CPU
FPS > Higher Is Better
hurricane-server . 193.19 |===================================================

OpenVINO 2024.5
Model: Person Detection FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 83.08 |===================================================

OpenVINO 2024.5
Model: Person Detection FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 192.30 |===================================================

OpenVINO 2024.5
Model: Face Detection FP16 - Device: CPU
ms < Lower Is Better
hurricane-server . 790.62 |===================================================

OpenVINO 2024.5
Model: Face Detection FP16 - Device: CPU
FPS > Higher Is Better
hurricane-server . 20.16 |===================================================

XNNPACK b7b048
Model: QS8MobileNetV2
us < Lower Is Better
hurricane-server . 2001 |===================================================

XNNPACK b7b048
Model: FP16MobileNetV3Small
us < Lower Is Better
hurricane-server . 2136 |===================================================

XNNPACK b7b048
Model: FP16MobileNetV3Large
us < Lower Is Better
hurricane-server . 3012 |===================================================

XNNPACK b7b048
Model: FP16MobileNetV2
us < Lower Is Better
hurricane-server . 1985 |===================================================

XNNPACK b7b048
Model: FP16MobileNetV1
us < Lower Is Better
hurricane-server . 1348 |===================================================

XNNPACK b7b048
Model: FP32MobileNetV3Small
us < Lower Is Better
hurricane-server . 2164 |===================================================

XNNPACK b7b048
Model: FP32MobileNetV3Large
us < Lower Is Better
hurricane-server . 3136 |===================================================

XNNPACK b7b048
Model: FP32MobileNetV2
us < Lower Is Better
hurricane-server . 2162 |===================================================

XNNPACK b7b048
Model: FP32MobileNetV1
us < Lower Is Better
hurricane-server . 1306 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: vision_transformer
ms < Lower Is Better
hurricane-server . 67.71 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: regnety_400m
ms < Lower Is Better
hurricane-server . 24.08 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: squeezenet_ssd
ms < Lower Is Better
hurricane-server . 16.92 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: yolov4-tiny
ms < Lower Is Better
hurricane-server . 27.01 |===================================================

NCNN 20230517
Target: Vulkan GPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3
ms < Lower Is Better
hurricane-server . 15.98 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: resnet50
ms < Lower Is Better
hurricane-server . 14.46 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: alexnet
ms < Lower Is Better
hurricane-server . 5.52 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: resnet18
ms < Lower Is Better
hurricane-server . 8.99 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: vgg16
ms < Lower Is Better
hurricane-server . 25.17 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: googlenet
ms < Lower Is Better
hurricane-server . 17.41 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: blazeface
ms < Lower Is Better
hurricane-server . 3.81 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: efficientnet-b0
ms < Lower Is Better
hurricane-server . 9.71 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: mnasnet
ms < Lower Is Better
hurricane-server . 6.89 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: shufflenet-v2
ms < Lower Is Better
hurricane-server . 9.49 |===================================================

NCNN 20230517
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
ms < Lower Is Better
hurricane-server . 7.95 |===================================================

NCNN 20230517
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
ms < Lower Is Better
hurricane-server . 7.43 |===================================================

NCNN 20230517
Target: Vulkan GPU - Model: mobilenet
ms < Lower Is Better
hurricane-server . 15.98 |===================================================

NCNN 20230517
Target: CPU - Model: regnety_400m
ms < Lower Is Better
hurricane-server . 24.06 |===================================================

NCNN 20230517
Target: CPU - Model: squeezenet_ssd
ms < Lower Is Better
hurricane-server . 16.85 |===================================================

NCNN 20230517
Target: CPU - Model: yolov4-tiny
ms < Lower Is Better
hurricane-server . 26.58 |===================================================

NCNN 20230517
Target: CPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3
ms < Lower Is Better
hurricane-server . 15.81 |===================================================

NCNN 20230517
Target: CPU - Model: resnet50
ms < Lower Is Better
hurricane-server . 14.46 |===================================================

NCNN 20230517
Target: CPU - Model: alexnet
ms < Lower Is Better
hurricane-server . 5.51 |===================================================

NCNN 20230517
Target: CPU - Model: resnet18
ms < Lower Is Better
hurricane-server . 9.01 |===================================================

NCNN 20230517
Target: CPU - Model: vgg16
ms < Lower Is Better
hurricane-server . 25.23 |===================================================

NCNN 20230517
Target: CPU - Model: googlenet
ms < Lower Is Better
hurricane-server . 17.39 |===================================================

NCNN 20230517
Target: CPU - Model: blazeface
ms < Lower Is Better
hurricane-server . 3.80 |===================================================

NCNN 20230517
Target: CPU - Model: efficientnet-b0
ms < Lower Is Better
hurricane-server . 9.73 |===================================================

NCNN 20230517
Target: CPU - Model: mnasnet
ms < Lower Is Better
hurricane-server . 6.91 |===================================================

NCNN 20230517
Target: CPU - Model: shufflenet-v2
ms < Lower Is Better
hurricane-server . 9.51 |===================================================

NCNN 20230517
Target: CPU-v3-v3 - Model: mobilenet-v3
ms < Lower Is Better
hurricane-server . 7.93 |===================================================

NCNN 20230517
Target: CPU-v2-v2 - Model: mobilenet-v2
ms < Lower Is Better
hurricane-server . 7.41 |===================================================

NCNN 20230517
Target: CPU - Model: mobilenet
ms < Lower Is Better
hurricane-server . 15.81 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 6.83 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 21.10 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 6.84 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 21.06 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 101.60 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 288.79 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 98.53 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 283.28 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 6.78 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 20.83 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 6.73 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 20.37 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 6.60 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 20.15 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 90.02 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 262.31 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 84.65 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 241.16 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 74.57 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 214.21 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 34.13 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 33.97 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 4.72 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 14.65 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 679.63 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 651.72 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: ResNet-50
images/sec > Higher Is Better
hurricane-server . 17.32 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: GoogLeNet
images/sec > Higher Is Better
hurricane-server . 53.62 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 33.24 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 1.79 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 32.12 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 1.79 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 30.06 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 532.95 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 33.71 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 465.66 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 33.52 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 376.93 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 1.78 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 1.77 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 1.76 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 15.94 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 32.65 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 31.76 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 29.28 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: AlexNet
images/sec > Higher Is Better
hurricane-server . 58.02 |===================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 1.61 |===================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: VGG-16
images/sec > Higher Is Better
hurricane-server . 15.04 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
hurricane-server . 8.16 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
hurricane-server . 8.20 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
hurricane-server . 8.21 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
hurricane-server . 8.10 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
hurricane-server . 8.21 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
hurricane-server . 12.47 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 512 - Model: ResNet-152
batches/sec > Higher Is Better
hurricane-server . 19.50 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 256 - Model: ResNet-152
batches/sec > Higher Is Better
hurricane-server . 19.48 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 64 - Model: ResNet-152
batches/sec > Higher Is Better
hurricane-server . 19.69 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 512 - Model: ResNet-50
batches/sec > Higher Is Better
hurricane-server . 52.09 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 32 - Model: ResNet-152
batches/sec > Higher Is Better
hurricane-server . 19.51 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 256 - Model: ResNet-50
batches/sec > Higher Is Better
hurricane-server . 52.12 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 16 - Model: ResNet-152
batches/sec > Higher Is Better
hurricane-server . 19.55 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 64 - Model: ResNet-50
batches/sec > Higher Is Better
hurricane-server . 52.10 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 32 - Model: ResNet-50
batches/sec > Higher Is Better
hurricane-server . 52.12 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 16 - Model: ResNet-50
batches/sec > Higher Is Better
hurricane-server . 51.39 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 1 - Model: ResNet-152
batches/sec > Higher Is Better
hurricane-server . 24.69 |===================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 1 - Model: ResNet-50
batches/sec > Higher Is Better
hurricane-server . 67.94 |===================================================

TensorFlow Lite 2022-05-18
Model: Inception ResNet V2
Microseconds < Lower Is Better
hurricane-server . 31730.6 |===================================================

TensorFlow Lite 2022-05-18
Model: Mobilenet Quant
Microseconds < Lower Is Better
TensorFlow Lite 2022-05-18 (microseconds, lower is better):
  Model: Mobilenet Quant: 2531.33
  Model: Mobilenet Float: 1306.87
  Model: NASNet Mobile: 24134.5
  Model: Inception V4: 16113.8
  Model: SqueezeNet: 2015.74

LiteRT 2024-10-15 (microseconds, lower is better):
  Model: Quantized COCO SSD MobileNet v1: 2299.81
  Model: Inception ResNet V2: 19022.7
  Model: Mobilenet Quant: 1404.29
  Model: Mobilenet Float: 1332.51
  Model: NASNet Mobile: 31332.4
  Model: Inception V4: 16671.7
  Model: SqueezeNet: 2102.34
  Model: DeepLab V3: 3250.31

RNNoise 0.2, Input: 26 Minute Long Talking Sample: 11.36 seconds (lower is better)

R Benchmark: 0.1707 seconds (lower is better)

DeepSpeech 0.6, Acceleration: CPU: 53.37 seconds (lower is better)

Numpy Benchmark: 513.75 score (higher is better)

oneDNN 3.6, Engine: CPU (ms, lower is better):
  Harness: Recurrent Neural Network Inference: 450.86
  Harness: Recurrent Neural Network Training: 811.25
  Harness: Deconvolution Batch shapes_3d: 1.81557
  Harness: Deconvolution Batch shapes_1d: 5.68471
  Harness: Convolution Batch Shapes Auto: 1.14790
  Harness: IP Shapes 3D: 0.680494
  Harness: IP Shapes 1D: 0.850385

LeelaChessZero 0.31.1, Backend: BLAS (Nodes Per Second, higher is better): (value on the following line)
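Since the TensorFlow Lite 2022-05-18 and LiteRT 2024-10-15 runs above share six models, their inference times can be compared directly. A small sketch with values copied from this report (a ratio below 1 means the LiteRT run was faster):

```python
# Inference times in microseconds (lower is better), copied from this report.
tflite = {"Inception ResNet V2": 31730.6, "Mobilenet Quant": 2531.33,
          "Mobilenet Float": 1306.87, "NASNet Mobile": 24134.5,
          "Inception V4": 16113.8, "SqueezeNet": 2015.74}
litert = {"Inception ResNet V2": 19022.7, "Mobilenet Quant": 1404.29,
          "Mobilenet Float": 1332.51, "NASNet Mobile": 31332.4,
          "Inception V4": 16671.7, "SqueezeNet": 2102.34}

for model in tflite:
    ratio = litert[model] / tflite[model]
    print(f"{model}: LiteRT/TFLite time ratio = {ratio:.2f}")
```

The picture is mixed: the LiteRT run is markedly faster on Inception ResNet V2 and the quantized Mobilenet, but slightly slower on the other four models, most visibly NASNet Mobile.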
LeelaChessZero 0.31.1, Backend: BLAS: 281 Nodes Per Second (higher is better)

SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL (higher is better):
  Benchmark: Texture Read Bandwidth: 588.11 GB/s
  Benchmark: Bus Speed Readback: 13.54 GB/s
  Benchmark: Bus Speed Download: 13.21 GB/s
  Benchmark: Max SP Flops: 9437.51 GFLOPS
  Benchmark: GEMM SGEMM_N: 5521.35 GFLOPS
  Benchmark: Reduction: 257.89 GB/s
  Benchmark: MD5 Hash: 14.49 GHash/s
  Benchmark: FFT SP: 1479.17 GFLOPS
  Benchmark: Triad: 12.90 GB/s
  Benchmark: S3D: 268.85 GFLOPS

OpenVINO GenAI 2024.5, Device: CPU (ms, lower is better):
  Model: Phi-3-mini-128k-instruct-int4-ov - Time Per Output Token: 21.20
  Model: Phi-3-mini-128k-instruct-int4-ov - Time To First Token: 41.41
  Model: Falcon-7b-instruct-int4-ov - Time Per Output Token: 25.40
  Model: Falcon-7b-instruct-int4-ov - Time To First Token: 59.06
  Model: TinyLlama-1.1B-Chat-v1.0 - Time Per Output Token: 15.21
  Model: TinyLlama-1.1B-Chat-v1.0 - Time To First Token: 18.24
  Model: Gemma-7b-int4-ov - Time Per Output Token: 33.21
  Model: Gemma-7b-int4-ov - Time To First Token: 72.72

OpenCV 4.7, Test: DNN - Deep Neural Network (ms, lower is better): (value on the following line)
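As a consistency check on the OpenVINO GenAI numbers: the Time Per Output Token results above are, as expected, simply the reciprocal of the tokens/s results reported earlier (1000 / tokens-per-second, in ms). A small sketch with values copied from this report:

```python
# OpenVINO GenAI 2024.5 CPU results from this report:
# model -> (tokens/s, reported Time Per Output Token in ms).
genai = {
    "Phi-3-mini-128k-instruct-int4-ov": (47.17, 21.20),
    "Falcon-7b-instruct-int4-ov":       (39.37, 25.40),
    "TinyLlama-1.1B-Chat-v1.0":         (65.76, 15.21),
    "Gemma-7b-int4-ov":                 (30.11, 33.21),
}

for model, (tps, tpot_ms) in genai.items():
    derived = 1000.0 / tps  # ms per output token implied by throughput
    print(f"{model}: 1000/{tps} = {derived:.2f} ms (reported {tpot_ms} ms)")
```

All four derived values agree with the reported ones to within rounding, so the two metric families describe the same measurement.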
OpenCV 4.7, Test: DNN - Deep Neural Network: 33303 ms (lower is better)

NCNN 20230517 (ms, lower is better):
  Target: Vulkan GPU - Model: FastestDet: 11.57
  Target: CPU - Model: FastestDet: 11.18
  Target: CPU - Model: vision_transformer: 67.87

The following tests appear in the result file without reported values:
  Llamafile 0.8.16 (wizardcoder-python-34b-v1.0.Q6_K, mistral-7b-instruct-v0.2.Q8_0, llava-v1.5-7b-q4; CPU acceleration)
  Llama.cpp b4154 (llama-2-70b-chat.Q5_0.gguf, llama-2-13b.Q4_0.gguf, llama-2-7b.Q4_0.gguf)
  Scikit-Learn 1.2.2 (Plot Non-Negative Matrix Factorization, Plot Singular Value Decomposition, RCV1 Logreg Convergencet, Plot Fast KMeans, Plot Lasso Path, Glmnet)
  Mlpack Benchmark (scikit_linearridgeregression, scikit_svm, scikit_qda, scikit_ica)
  AI Benchmark Alpha 0.1.2
  ONNX Runtime 1.19 (Faster R-CNN R-50-FPN-int8, ResNet101_DUC_HDC-12, super-resolution-10, ResNet50 v1-12-int8, ArcFace ResNet-100, fcn-resnet101-11, CaffeNet 12-int8, bertsquad-12, T5 Encoder, ZFNet-512, yolov4, GPT-2; each with Standard and Parallel executors on CPU)
  Numenta Anomaly Benchmark 1.1 (Contextual Anomaly Detector OSE, Bayesian Changepoint, Earthgecko Skyline, Windowed Gaussian, Relative Entropy, KNN CAD)
  PlaidML (ResNet 50 and VGG16 inference on CPU, FP16: No)
  TNN 0.3 (SqueezeNet v1.1, SqueezeNet v2, MobileNet v2, DenseNet on CPU)
  Caffe 2020-02-13 (GoogleNet and AlexNet at 100, 200, and 1000 iterations on CPU)
  spaCy 3.4.1
  Neural Magic DeepSparse 1.7 (NLP Token Classification BERT base uncased conll2003; BERT-Large NLP Question Answering Sparse INT8; CV Segmentation 90% Pruned YOLACT Pruned; NLP Text Classification DistilBERT mnli; CV Detection YOLOv5s COCO Sparse INT8; CV Classification ResNet-50 ImageNet; Llama2 Chat 7b Quantized; ResNet-50 Sparse INT8; ResNet-50 Baseline; NLP Text Classification BERT base uncased SST2 Sparse INT8; NLP Document Classification oBERT base uncased on IMDB; each in Synchronous Single-Stream and Asynchronous Multi-Stream scenarios)