phoronix-machine-learning.txt
AMD Ryzen Threadripper 7960X 24-Cores testing with a Gigabyte TRX50 AERO D (FA BIOS) and Sapphire AMD Radeon RX 7900 XTX 24GB on Ubuntu 24.04 via the Phoronix Test Suite.

phoronix-ml.txt:

  Processor: AMD Ryzen Threadripper 7960X 24-Cores @ 7.79GHz (24 Cores / 48 Threads), Motherboard: Gigabyte TRX50 AERO D (FA BIOS), Chipset: AMD Device 14a4, Memory: 4 x 32GB DDR5-5200MT/s Micron MTC20F1045S1RC56BG1, Disk: 1000GB GIGABYTE AG512K1TB, Graphics: Sapphire AMD Radeon RX 7900 XTX 24GB, Audio: AMD Device 14cc, Monitor: HP E273, Network: Aquantia AQC113C NBase-T/IEEE + Realtek RTL8125 2.5GbE + Qualcomm WCN785x Wi-Fi 7

  OS: Ubuntu 24.04, Kernel: 6.8.0-48-generic (x86_64), Desktop: GNOME Shell 46.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.2.0-devel (LLVM 18.1.7 DRM 3.58), OpenCL: OpenCL 2.1 AMD-APP (3625.0), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1080

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: FFT SP
GFLOPS > Higher Is Better
phoronix-ml.txt . 2703.37 |====================================================
phoronix-ml.txt .  752.84 |==============

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Reduction
GB/s > Higher Is Better
phoronix-ml.txt . 595.05 |=====================================================
phoronix-ml.txt .  42.94 |====

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: Triad
GB/s > Higher Is Better
phoronix-ml.txt . 23.05 |======================================================
phoronix-ml.txt . 13.82 |================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: GEMM SGEMM_N
GFLOPS > Higher Is Better
phoronix-ml.txt . 8470.02 |====================================================
phoronix-ml.txt . 7615.35 |===============================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: MD5 Hash
GHash/s > Higher Is Better
phoronix-ml.txt . 49.64 |======================================================
phoronix-ml.txt . 46.51 |===================================================

SHOC Scalable HeterOgeneous Computing 2020-04-17
Target: OpenCL - Benchmark: S3D
GFLOPS > Higher Is Better
phoronix-ml.txt . 298.49 |=====================================================
phoronix-ml.txt . 289.54 |===================================================

Whisper.cpp 1.6.2
Model: ggml-medium.en - Input: 2016 State of the Union
Seconds < Lower Is Better
phoronix-ml.txt . 579.17 |=====================================================

Whisper.cpp 1.6.2
Model: ggml-small.en - Input: 2016 State of the Union
Seconds < Lower Is Better
phoronix-ml.txt . 218.19 |=====================================================

Whisper.cpp 1.6.2
Model: ggml-base.en - Input: 2016 State of the Union
Seconds < Lower Is Better
phoronix-ml.txt . 92.75 |======================================================

Scikit-Learn 1.2.2
Benchmark: Sparse Random Projections / 100 Iterations
Seconds < Lower Is Better
phoronix-ml.txt . 504.83 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Kernel PCA Solvers / Time vs. N Components
Seconds < Lower Is Better
phoronix-ml.txt . 31.04 |======================================================

Scikit-Learn 1.2.2
Benchmark: Kernel PCA Solvers / Time vs. N Samples
Seconds < Lower Is Better
phoronix-ml.txt . 61.61 |======================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Categorical Only
Seconds < Lower Is Better
phoronix-ml.txt . 30.19 |======================================================

Scikit-Learn 1.2.2
Benchmark: Plot Polynomial Kernel Approximation
Seconds < Lower Is Better
phoronix-ml.txt . 104.70 |=====================================================

Scikit-Learn 1.2.2
Benchmark: 20 Newsgroups / Logistic Regression
Seconds < Lower Is Better
phoronix-ml.txt . 10.45 |======================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Higgs Boson
Seconds < Lower Is Better
phoronix-ml.txt . 65.74 |======================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Threading
Seconds < Lower Is Better
phoronix-ml.txt . 52.73 |======================================================

Scikit-Learn 1.2.2
Benchmark: Isotonic / Perturbed Logarithm
Seconds < Lower Is Better
phoronix-ml.txt . 1528.97 |====================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting Adult
Seconds < Lower Is Better
phoronix-ml.txt . 153.34 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Covertype Dataset Benchmark
Seconds < Lower Is Better
phoronix-ml.txt . 320.39 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Sample Without Replacement
Seconds < Lower Is Better
phoronix-ml.txt . 90.64 |======================================================

Scikit-Learn 1.2.2
Benchmark: Isotonic / Pathological
Seconds < Lower Is Better
phoronix-ml.txt . 3843.26 |====================================================

Scikit-Learn 1.2.2
Benchmark: Hist Gradient Boosting
Seconds < Lower Is Better
phoronix-ml.txt . 166.78 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Plot Incremental PCA
Seconds < Lower Is Better
phoronix-ml.txt . 31.21 |======================================================

Scikit-Learn 1.2.2
Benchmark: Isotonic / Logistic
Seconds < Lower Is Better
phoronix-ml.txt . 1406.45 |====================================================

Scikit-Learn 1.2.2
Benchmark: TSNE MNIST Dataset
Seconds < Lower Is Better
phoronix-ml.txt . 247.56 |=====================================================

Scikit-Learn 1.2.2
Benchmark: LocalOutlierFactor
Seconds < Lower Is Better
phoronix-ml.txt . 21.62 |======================================================

Scikit-Learn 1.2.2
Benchmark: Feature Expansions
Seconds < Lower Is Better
phoronix-ml.txt . 100.35 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Plot OMP vs. LARS
Seconds < Lower Is Better
phoronix-ml.txt . 41.48 |======================================================

Scikit-Learn 1.2.2
Benchmark: Plot Hierarchical
Seconds < Lower Is Better
phoronix-ml.txt . 141.46 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Text Vectorizers
Seconds < Lower Is Better
phoronix-ml.txt . 45.34 |======================================================

Scikit-Learn 1.2.2
Benchmark: Isolation Forest
Seconds < Lower Is Better
phoronix-ml.txt . 176.29 |=====================================================

Scikit-Learn 1.2.2
Benchmark: SGDOneClassSVM
Seconds < Lower Is Better
phoronix-ml.txt . 233.41 |=====================================================

Scikit-Learn 1.2.2
Benchmark: SGD Regression
Seconds < Lower Is Better
phoronix-ml.txt . 64.37 |======================================================

Scikit-Learn 1.2.2
Benchmark: Plot Neighbors
Seconds < Lower Is Better
phoronix-ml.txt . 114.84 |=====================================================

Scikit-Learn 1.2.2
Benchmark: MNIST Dataset
Seconds < Lower Is Better
phoronix-ml.txt . 52.74 |======================================================

Scikit-Learn 1.2.2
Benchmark: Plot Ward
Seconds < Lower Is Better
phoronix-ml.txt . 42.10 |======================================================

Scikit-Learn 1.2.2
Benchmark: Sparsify
Seconds < Lower Is Better
phoronix-ml.txt . 108.46 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Lasso
Seconds < Lower Is Better
phoronix-ml.txt . 308.23 |=====================================================

Scikit-Learn 1.2.2
Benchmark: Tree
Seconds < Lower Is Better
phoronix-ml.txt . 46.97 |======================================================

Scikit-Learn 1.2.2
Benchmark: SAGA
Seconds < Lower Is Better
phoronix-ml.txt . 669.12 |=====================================================

Scikit-Learn 1.2.2
Benchmark: GLM
Seconds < Lower Is Better
phoronix-ml.txt . 168.33 |=====================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 2.21086 |====================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 1.29718 |====================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 97.32 |======================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 123.74 |=====================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 328.29 |=====================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 108.88 |=====================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 4.11676 |====================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 639.91 |=====================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 203.06 |=====================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 172.04 |=====================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 272.18 |=====================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 109.86 |=====================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 57.86 |======================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 9.65442 |====================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
phoronix-ml.txt . 5.39684 |====================================================

OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 0.3 |========================================================

OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 67537.77 |===================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 21.58 |======================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 1108.65 |====================================================

OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 0.43 |=======================================================

OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 48433.92 |===================================================

OpenVINO 2024.0
Model: Person Re-Identification Retail FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 4.72 |=======================================================

OpenVINO 2024.0
Model: Person Re-Identification Retail FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 2523.59 |====================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 23.12 |======================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 1035.29 |====================================================

OpenVINO 2024.0
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 11.53 |======================================================

OpenVINO 2024.0
Model: Person Vehicle Bike Detection FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 6.18 |=======================================================

OpenVINO 2024.0
Model: Person Vehicle Bike Detection FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 1930.08 |====================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 6.31 |=======================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 3742.77 |====================================================

OpenVINO 2024.0
Model: Machine Translation EN To DE FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 63.99 |======================================================

OpenVINO 2024.0
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 17.62 |======================================================

OpenVINO 2024.0
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 679.66 |=====================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16-INT8 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 3.58 |=======================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16-INT8 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 6458.38 |====================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 12.25 |======================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 1947.67 |====================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 5.51 |=======================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 2160.57 |====================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 2.76 |=======================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 4273.09 |====================================================

OpenVINO 2024.0
Model: Face Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 320.42 |=====================================================

OpenVINO 2024.0
Model: Face Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 37.36 |======================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 11.12 |======================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 1077.87 |====================================================

OpenVINO 2024.0
Model: Person Detection FP32 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 100.46 |=====================================================

OpenVINO 2024.0
Model: Person Detection FP32 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 119.41 |=====================================================

OpenVINO 2024.0
Model: Face Detection FP16 - Device: CPU
ms < Lower Is Better
phoronix-ml.txt . 607.48 |=====================================================

OpenVINO 2024.0
Model: Face Detection FP16 - Device: CPU
FPS > Higher Is Better
phoronix-ml.txt . 19.69 |======================================================

XNNPACK b7b048
Model: QS8MobileNetV2
us < Lower Is Better
phoronix-ml.txt . 1398 |=======================================================

XNNPACK b7b048
Model: FP16MobileNetV3Small
us < Lower Is Better
phoronix-ml.txt . 1464 |=======================================================

XNNPACK b7b048
Model: FP16MobileNetV3Large
us < Lower Is Better
phoronix-ml.txt . 2128 |=======================================================

XNNPACK b7b048
Model: FP16MobileNetV2
us < Lower Is Better
phoronix-ml.txt . 1495 |=======================================================

XNNPACK b7b048
Model: FP16MobileNetV1
us < Lower Is Better
phoronix-ml.txt . 1144 |=======================================================

XNNPACK b7b048
Model: FP32MobileNetV3Small
us < Lower Is Better
phoronix-ml.txt . 1503 |=======================================================

XNNPACK b7b048
Model: FP32MobileNetV3Large
us < Lower Is Better
phoronix-ml.txt . 2465 |=======================================================

XNNPACK b7b048
Model: FP32MobileNetV2
us < Lower Is Better
phoronix-ml.txt . 1873 |=======================================================

XNNPACK b7b048
Model: FP32MobileNetV1
us < Lower Is Better
phoronix-ml.txt . 1233 |=======================================================

NCNN 20230517
Target: Vulkan GPU - Model: vision_transformer
ms < Lower Is Better
phoronix-ml.txt . 41.05 |======================================================

NCNN 20230517
Target: Vulkan GPU - Model: regnety_400m
ms < Lower Is Better
phoronix-ml.txt . 18.63 |======================================================

NCNN 20230517
Target: Vulkan GPU - Model: yolov4-tiny
ms < Lower Is Better
phoronix-ml.txt . 23.64 |======================================================

NCNN 20230517
Target: Vulkan GPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3
ms < Lower Is Better
phoronix-ml.txt . 13.79 |======================================================

NCNN 20230517
Target: Vulkan GPU - Model: alexnet
ms < Lower Is Better
phoronix-ml.txt . 5.28 |=======================================================

NCNN 20230517
Target: Vulkan GPU - Model: resnet18
ms < Lower Is Better
phoronix-ml.txt . 7.86 |=======================================================

NCNN 20230517
Target: Vulkan GPU - Model: vgg16
ms < Lower Is Better
phoronix-ml.txt . 25.13 |======================================================

NCNN 20230517
Target: Vulkan GPU - Model: googlenet
ms < Lower Is Better
phoronix-ml.txt . 16.01 |======================================================

NCNN 20230517
Target: Vulkan GPU - Model: blazeface
ms < Lower Is Better
phoronix-ml.txt . 3.14 |=======================================================

NCNN 20230517
Target: Vulkan GPU - Model: mnasnet
ms < Lower Is Better
phoronix-ml.txt . 5.99 |=======================================================

NCNN 20230517
Target: Vulkan GPU - Model: shufflenet-v2
ms < Lower Is Better
phoronix-ml.txt . 8.06 |=======================================================

NCNN 20230517
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
ms < Lower Is Better
phoronix-ml.txt . 6.49 |=======================================================

NCNN 20230517
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
ms < Lower Is Better
phoronix-ml.txt . 6.31 |=======================================================

NCNN 20230517
Target: Vulkan GPU - Model: mobilenet
ms < Lower Is Better
phoronix-ml.txt . 13.79 |======================================================

NCNN 20230517
Target: CPU - Model: vision_transformer
ms < Lower Is Better
phoronix-ml.txt . 40.59 |======================================================

NCNN 20230517
Target: CPU - Model: regnety_400m
ms < Lower Is Better
phoronix-ml.txt . 18.58 |======================================================

NCNN 20230517
Target: CPU - Model: squeezenet_ssd
ms < Lower Is Better
phoronix-ml.txt . 14.34 |======================================================

NCNN 20230517
Target: CPU - Model: yolov4-tiny
ms < Lower Is Better
phoronix-ml.txt . 24.14 |======================================================

NCNN 20230517
Target: CPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3
ms < Lower Is Better
phoronix-ml.txt . 13.85 |======================================================

NCNN 20230517
Target: CPU - Model: resnet50
ms < Lower Is Better
phoronix-ml.txt . 13.33 |======================================================

NCNN 20230517
Target: CPU - Model: alexnet
ms < Lower Is Better
phoronix-ml.txt . 5.56 |=======================================================

NCNN 20230517
Target: CPU - Model: resnet18
ms < Lower Is Better
phoronix-ml.txt . 8.02 |=======================================================

NCNN 20230517
Target: CPU - Model: vgg16
ms < Lower Is Better
phoronix-ml.txt . 25.71 |======================================================

NCNN 20230517
Target: CPU - Model: googlenet
ms < Lower Is Better
phoronix-ml.txt . 16.42 |======================================================

NCNN 20230517
Target: CPU - Model: blazeface
ms < Lower Is Better
phoronix-ml.txt . 3.11 |=======================================================

NCNN 20230517
Target: CPU - Model: efficientnet-b0
ms < Lower Is Better
phoronix-ml.txt . 8.28 |=======================================================

NCNN 20230517
Target: CPU - Model: shufflenet-v2
ms < Lower Is Better
phoronix-ml.txt . 8.15 |=======================================================

NCNN 20230517
Target: CPU-v3-v3 - Model: mobilenet-v3
ms < Lower Is Better
phoronix-ml.txt . 6.45 |=======================================================

NCNN 20230517
Target: CPU-v2-v2 - Model: mobilenet-v2
ms < Lower Is Better
phoronix-ml.txt . 6.30 |=======================================================

NCNN 20230517
Target: CPU - Model: mobilenet
ms < Lower Is Better
phoronix-ml.txt . 13.85 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: inception-v3
ms < Lower Is Better
phoronix-ml.txt . 36.45 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: mobilenet-v1-1.0
ms < Lower Is Better
phoronix-ml.txt . 3.784 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: MobileNetV2_224
ms < Lower Is Better
phoronix-ml.txt . 3.268 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: SqueezeNetV1.0
ms < Lower Is Better
phoronix-ml.txt . 6.429 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: resnet-v2-50
ms < Lower Is Better
phoronix-ml.txt . 18.53 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: squeezenetv1.1
ms < Lower Is Better
phoronix-ml.txt . 4.327 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: mobilenetV3
ms < Lower Is Better
phoronix-ml.txt . 2.536 |======================================================

Mobile Neural Network 2.9.b11b7037d
Model: nasnet
ms < Lower Is Better
phoronix-ml.txt . 15.30 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 9.34 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 28.73 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 9.36 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 29.03 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 58.50 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 185.25 |=====================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 59.70 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 227.06 |=====================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 9.24 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 28.53 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 9.14 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 27.91 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 8.94 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 26.91 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 70.89 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 225.16 |=====================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 67.94 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 218.07 |=====================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 62.29 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 198.11 |=====================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 49.28 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 48.92 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 6.69 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 21.03 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 643.44 |=====================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 627.50 |=====================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: ResNet-50
images/sec > Higher Is Better
phoronix-ml.txt . 18.38 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: GoogLeNet
images/sec > Higher Is Better
phoronix-ml.txt . 60.92 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 47.85 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 512 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 2.55 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 46.14 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 256 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 2.55 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 42.43 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 516.18 |=====================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 512 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 30.43 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 409.56 |=====================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 256 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 30.16 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 288.71 |=====================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 64 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 2.55 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 32 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 2.54 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 16 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 2.51 |=======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 15.38 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 64 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 29.05 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 32 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 28.32 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 16 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 27.34 |======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: AlexNet
images/sec > Higher Is Better
phoronix-ml.txt . 30.66 |======================================================

TensorFlow 2.16.1
Device: GPU - Batch Size: 1 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 2.20 |=======================================================

TensorFlow 2.16.1
Device: CPU - Batch Size: 1 - Model: VGG-16
images/sec > Higher Is Better
phoronix-ml.txt . 9.70 |=======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
phoronix-ml.txt . 9.94 |=======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
phoronix-ml.txt . 10.11 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
phoronix-ml.txt . 9.98 |=======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
phoronix-ml.txt . 9.85 |=======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
phoronix-ml.txt . 9.90 |=======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
phoronix-ml.txt . 14.18 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 512 - Model: ResNet-152
batches/sec > Higher Is Better
phoronix-ml.txt . 17.99 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 256 - Model: ResNet-152
batches/sec > Higher Is Better
phoronix-ml.txt . 17.92 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 64 - Model: ResNet-152
batches/sec > Higher Is Better
phoronix-ml.txt . 17.64 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 512 - Model: ResNet-50
batches/sec > Higher Is Better
phoronix-ml.txt . 45.75 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 32 - Model: ResNet-152
batches/sec > Higher Is Better
phoronix-ml.txt . 17.78 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 256 - Model: ResNet-50
batches/sec > Higher Is Better
phoronix-ml.txt . 46.06 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 16 - Model: ResNet-152
batches/sec > Higher Is Better
phoronix-ml.txt . 17.97 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 64 - Model: ResNet-50
batches/sec > Higher Is Better
phoronix-ml.txt . 46.69 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 32 - Model: ResNet-50
batches/sec > Higher Is Better
phoronix-ml.txt . 46.42 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 16 - Model: ResNet-50
batches/sec > Higher Is Better
phoronix-ml.txt . 45.59 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 1 - Model: ResNet-152
batches/sec > Higher Is Better
phoronix-ml.txt . 23.19 |======================================================

PyTorch 2.2.1
Device: CPU - Batch Size: 1 - Model: ResNet-50
batches/sec > Higher Is Better
phoronix-ml.txt . 60.17 |======================================================

TensorFlow Lite 2022-05-18
Model: Inception ResNet V2
Microseconds < Lower Is Better
phoronix-ml.txt . 33356.3 |====================================================

TensorFlow Lite 2022-05-18
Model: Mobilenet Quant
Microseconds < Lower Is Better
phoronix-ml.txt . 2501.21 |====================================================

TensorFlow Lite 2022-05-18
Model: Mobilenet Float
Microseconds < Lower Is Better
phoronix-ml.txt . 1381.25 |====================================================

TensorFlow Lite 2022-05-18
Model: NASNet Mobile
Microseconds < Lower Is Better
phoronix-ml.txt . 33662.5 |====================================================

TensorFlow Lite 2022-05-18
Model: SqueezeNet
Microseconds < Lower Is Better
phoronix-ml.txt . 1836.28 |====================================================

RNNoise 0.2
Input: 26 Minute Long Talking Sample
Seconds < Lower Is Better
phoronix-ml.txt .
7.852 |====================================================== R Benchmark Seconds < Lower Is Better phoronix-ml.txt . 0.1252 |===================================================== DeepSpeech 0.6 Acceleration: CPU Seconds < Lower Is Better phoronix-ml.txt . 46.23 |====================================================== Numpy Benchmark Score > Higher Is Better phoronix-ml.txt . 715.50 |===================================================== oneDNN 3.6 Harness: Recurrent Neural Network Inference - Engine: CPU ms < Lower Is Better phoronix-ml.txt . 736.40 |===================================================== oneDNN 3.6 Harness: Recurrent Neural Network Training - Engine: CPU ms < Lower Is Better phoronix-ml.txt . 1261.40 |==================================================== oneDNN 3.6 Harness: Deconvolution Batch shapes_3d - Engine: CPU ms < Lower Is Better phoronix-ml.txt . 1.85206 |==================================================== oneDNN 3.6 Harness: Deconvolution Batch shapes_1d - Engine: CPU ms < Lower Is Better phoronix-ml.txt . 3.77567 |==================================================== oneDNN 3.6 Harness: Convolution Batch Shapes Auto - Engine: CPU ms < Lower Is Better phoronix-ml.txt . 2.36317 |==================================================== oneDNN 3.6 Harness: IP Shapes 3D - Engine: CPU ms < Lower Is Better phoronix-ml.txt . 1.39591 |==================================================== oneDNN 3.6 Harness: IP Shapes 1D - Engine: CPU ms < Lower Is Better phoronix-ml.txt . 1.13657 |==================================================== SHOC Scalable HeterOgeneous Computing 2020-04-17 Target: OpenCL - Benchmark: Texture Read Bandwidth GB/s > Higher Is Better phoronix-ml.txt . 1003.32 |==================================================== SHOC Scalable HeterOgeneous Computing 2020-04-17 Target: OpenCL - Benchmark: Bus Speed Readback GB/s > Higher Is Better phoronix-ml.txt . 
26.25 |====================================================== SHOC Scalable HeterOgeneous Computing 2020-04-17 Target: OpenCL - Benchmark: Bus Speed Download GB/s > Higher Is Better phoronix-ml.txt . 24.99 |====================================================== SHOC Scalable HeterOgeneous Computing 2020-04-17 Target: OpenCL - Benchmark: Max SP Flops GFLOPS > Higher Is Better phoronix-ml.txt . 93757.3 |==================================================== OpenCV 4.7 Test: DNN - Deep Neural Network ms < Lower Is Better phoronix-ml.txt . 33080 |====================================================== Llamafile 0.8.6 Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU Tokens Per Second > Higher Is Better Llamafile 0.8.6 Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU Tokens Per Second > Higher Is Better Llamafile 0.8.6 Test: llava-v1.5-7b-q4 - Acceleration: CPU Tokens Per Second > Higher Is Better Llama.cpp b3067 Model: llama-2-70b-chat.Q5_0.gguf Tokens Per Second > Higher Is Better Llama.cpp b3067 Model: llama-2-13b.Q4_0.gguf Tokens Per Second > Higher Is Better Llama.cpp b3067 Model: llama-2-7b.Q4_0.gguf Tokens Per Second > Higher Is Better Scikit-Learn 1.2.2 Benchmark: Plot Non-Negative Matrix Factorization Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Singular Value Decomposition Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: RCV1 Logreg Convergencet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Parallel Pairwise Seconds < Lower Is Better phoronix-ml.txt . 
168.00 |===================================================== Scikit-Learn 1.2.2 Benchmark: Plot Fast KMeans Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Lasso Path Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Glmnet Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_linearridgeregression Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_svm Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_qda Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_ica Seconds < Lower Is Better AI Benchmark Alpha 0.1.2 Score > Higher Is Better ONNX Runtime 1.19 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 452.31 |===================================================== ONNX Runtime 1.19 Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 770.91 |===================================================== ONNX Runtime 1.19 Model: super-resolution-10 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 10.28 |====================================================== ONNX Runtime 1.19 Model: super-resolution-10 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 8.08149 |==================================================== ONNX Runtime 1.19 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 
3.04551 |==================================================== ONNX Runtime 1.19 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 9.18292 |==================================================== ONNX Runtime 1.19 Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: fcn-resnet101-11 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 242.91 |===================================================== ONNX Runtime 1.19 Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 834.57 |===================================================== ONNX Runtime 1.19 Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better phoronix-ml.txt . 1.20339 |==================================================== ONNX Runtime 1.19 Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 1.56253 |==================================================== ONNX Runtime 1.19 Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 4.92768 |==================================================== ONNX Runtime 1.19 Model: bertsquad-12 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: bertsquad-12 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: T5 Encoder - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 
5.81489 |==================================================== ONNX Runtime 1.19 Model: T5 Encoder - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 3.67307 |==================================================== ONNX Runtime 1.19 Model: ZFNet-512 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 9.10376 |==================================================== ONNX Runtime 1.19 Model: ZFNet-512 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 17.29 |====================================================== ONNX Runtime 1.19 Model: yolov4 - Device: CPU - Executor: Standard Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 103.58 |===================================================== ONNX Runtime 1.19 Model: yolov4 - Device: CPU - Executor: Parallel Inference Time Cost (ms) < Lower Is Better phoronix-ml.txt . 185.36 |===================================================== ONNX Runtime 1.19 Model: GPT-2 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: GPT-2 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better Numenta Anomaly Benchmark 1.1 Detector: Contextual Anomaly Detector OSE Seconds < Lower Is Better Numenta Anomaly Benchmark 1.1 Detector: Bayesian Changepoint Seconds < Lower Is Better Numenta Anomaly Benchmark 1.1 Detector: Earthgecko Skyline Seconds < Lower Is Better Numenta Anomaly Benchmark 1.1 Detector: Windowed Gaussian Seconds < Lower Is Better Numenta Anomaly Benchmark 1.1 Detector: Relative Entropy Seconds < Lower Is Better Numenta Anomaly Benchmark 1.1 Detector: KNN CAD Seconds < Lower Is Better OpenVINO 2024.0 Model: Noise Suppression Poconet-Like FP16 - Device: CPU FPS > Higher Is Better phoronix-ml.txt . 
2052.02 |==================================================== OpenVINO 2024.0 Model: Machine Translation EN To DE FP16 - Device: CPU FPS > Higher Is Better phoronix-ml.txt . 187.88 |===================================================== OpenVINO 2024.0 Model: Road Segmentation ADAS FP16 - Device: CPU ms < Lower Is Better phoronix-ml.txt . 25.83 |====================================================== OpenVINO 2024.0 Model: Road Segmentation ADAS FP16 - Device: CPU FPS > Higher Is Better phoronix-ml.txt . 465.86 |===================================================== OpenVINO 2024.0 Model: Person Detection FP16 - Device: CPU ms < Lower Is Better phoronix-ml.txt . 93.29 |====================================================== OpenVINO 2024.0 Model: Person Detection FP16 - Device: CPU FPS > Higher Is Better phoronix-ml.txt . 129.45 |===================================================== PlaidML FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU Examples Per Second > Higher Is Better PlaidML FP16: No - Mode: Inference - Network: VGG16 - Device: CPU Examples Per Second > Higher Is Better TNN 0.3 Target: CPU - Model: SqueezeNet v1.1 ms < Lower Is Better TNN 0.3 Target: CPU - Model: SqueezeNet v2 ms < Lower Is Better TNN 0.3 Target: CPU - Model: MobileNet v2 ms < Lower Is Better TNN 0.3 Target: CPU - Model: DenseNet ms < Lower Is Better NCNN 20230517 Target: Vulkan GPU - Model: FastestDet ms < Lower Is Better phoronix-ml.txt . 9.82 |======================================================= NCNN 20230517 Target: Vulkan GPU - Model: squeezenet_ssd ms < Lower Is Better phoronix-ml.txt . 16.04 |====================================================== NCNN 20230517 Target: Vulkan GPU - Model: resnet50 ms < Lower Is Better phoronix-ml.txt . 14.65 |====================================================== NCNN 20230517 Target: Vulkan GPU - Model: efficientnet-b0 ms < Lower Is Better phoronix-ml.txt . 
8.68 |======================================================= NCNN 20230517 Target: CPU - Model: FastestDet ms < Lower Is Better phoronix-ml.txt . 9.80 |======================================================= NCNN 20230517 Target: CPU - Model: mnasnet ms < Lower Is Better phoronix-ml.txt . 6.11 |======================================================= Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 1000 Milli-Seconds < Lower Is Better Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 200 Milli-Seconds < Lower Is Better Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 100 Milli-Seconds < Lower Is Better Caffe 2020-02-13 Model: AlexNet - Acceleration: CPU - Iterations: 1000 Milli-Seconds < Lower Is Better Caffe 2020-02-13 Model: AlexNet - Acceleration: CPU - Iterations: 200 Milli-Seconds < Lower Is Better Caffe 2020-02-13 Model: AlexNet - Acceleration: CPU - Iterations: 100 Milli-Seconds < Lower Is Better spaCy 3.4.1 tokens/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Text Classification, DistilBERT mnli - 
Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream items/sec > Higher Is 
Better Neural Magic DeepSparse 1.7 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better TensorFlow Lite 2022-05-18 Model: Inception V4 Microseconds < Lower Is Better phoronix-ml.txt . 20372.6 |==================================================== LeelaChessZero 0.31.1 Backend: BLAS Nodes Per Second > Higher Is Better phoronix-ml.txt . 184 |========================================================