ryz-5800X-orig-water-build AMD Ryzen 7 5800X 8-Core testing with an ASRock X570 Phantom Gaming-ITX/TB3 (P5.01 BIOS) and Gigabyte NVIDIA GeForce RTX 3080 10GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2307096-NE-RYZ5800XO55&grt.
ryz-5800X-orig-water-build - system details for the result identifier "water cooled build of 5800X":

Processor: AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
Motherboard: ASRock X570 Phantom Gaming-ITX/TB3 (P5.01 BIOS)
Chipset: AMD Starship/Matisse
Memory: 64GB
Disk: 2000GB Samsung SSD 970 EVO Plus 2TB + 4001GB Samsung SSD 870
Graphics: Gigabyte NVIDIA GeForce RTX 3080 10GB
Audio: NVIDIA GA102 HD Audio
Monitor: HP Z27
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-45-generic (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.3
Display Driver: NVIDIA 530.30.02
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.1.68
Vulkan: 1.3.236
Compiler: GCC 11.3.0 + CUDA 12.0
File-System: ext4
Screen Resolution: 3840x2160

OpenBenchmarking.org notes:
Transparent Huge Pages: madvise
Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
CPU Microcode: 0xa201025
BAR1 / Visible vRAM Size: 256 MiB
vBIOS Version: 94.02.42.80.61
GPU Compute Cores: 8704
Python 3.10.9
Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
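These results were generated with the Phoronix Test Suite, and the public result file above can be used as a baseline for a local comparison run. A minimal sketch, assuming the phoronix-test-suite command is installed and on PATH (the result ID is the one from this export; exact prompts and test dependencies will vary):

```python
import subprocess

# Sketch only: have the Phoronix Test Suite fetch this public OpenBenchmarking.org
# result and benchmark the local system against it for a side-by-side comparison.
# Assumes `phoronix-test-suite` is installed and network access is available.
RESULT_ID = "2307096-NE-RYZ5800XO55"

subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
```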
ryz-5800X-orig-water-build - results overview for "water cooled build of 5800X" (values as reported per test; units and better-direction are given in the detailed listings below):

ai-benchmark: Device Inference Score: 1313; Device Training Score: 1247; Device AI Score: 2560
caffe: AlexNet - CPU - 100: 33917; AlexNet - CPU - 200: 68197; AlexNet - CPU - 1000: 339063; GoogleNet - CPU - 100: 89939; GoogleNet - CPU - 200: 178392; GoogleNet - CPU - 1000: 897866
deepspeech: CPU: 54.51380
lczero: BLAS: 1052
mnn: nasnet: 7.876; mobilenetV3: 1.025; squeezenetv1.1: 2.436; resnet-v2-50: 19.250; SqueezeNetV1.0: 4.888; MobileNetV2_224: 2.006; mobilenet-v1-1.0: 1.921; inception-v3: 25.569
ncnn (CPU): mobilenet: 10.81; v2-v2 - mobilenet-v2: 2.80; v3-v3 - mobilenet-v3: 2.16; mnasnet: 2.60; efficientnet-b0: 4.72; blazeface: 0.80; googlenet: 9.91; vgg16: 47.30; resnet18: 10.54; alexnet: 8.62; resnet50: 18.39; yolov4-tiny: 19.17; squeezenet_ssd: 14.61; regnety_400m: 6.95; vision_transformer: 198.97; FastestDet: 2.57; shufflenet-v2: 2.15
ncnn (Vulkan GPU): mobilenet: 4.01; v2-v2 - mobilenet-v2: 1.84; v3-v3 - mobilenet-v3: 2.69; shufflenet-v2: 1.64; mnasnet: 1.56; efficientnet-b0: 11.16; blazeface: 0.95; googlenet: 15.53; vgg16: 6.11; resnet18: 10.13; alexnet: 2.15; resnet50: 3.79; yolov4-tiny: 34.89; squeezenet_ssd: 40.68; vision_transformer: 236.24; FastestDet: 2.70; regnety_400m: 2.02
deepsparse (items/sec, then ms/batch):
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: 8.4566, 471.6171; Synchronous Single-Stream: 8.0906, 123.5932
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream: 98.4622, 40.6114; Synchronous Single-Stream: 69.0496, 14.4739
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: 28.7120, 139.1744; Synchronous Single-Stream: 27.7199, 36.0661
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream: 49.5898, 80.5747; Synchronous Single-Stream: 45.1774, 22.1244
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: 108.2819, 36.9222; Synchronous Single-Stream: 89.9204, 11.1134
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: 76.3226, 52.3861; Synchronous Single-Stream: 61.6332, 16.2182
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream: 10.4672, 381.4113; Synchronous Single-Stream: 10.2513, 97.5339
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream: 37.6722, 106.0402; Synchronous Single-Stream: 30.7738, 32.4883
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: 8.4386, 472.2158; Synchronous Single-Stream: 8.1206, 123.1356
numenta-nab: KNN CAD: 191.575; Relative Entropy: 16.879; Windowed Gaussian: 9.389; Earthgecko Skyline: 94.483; Bayesian Changepoint: 34.994; Contextual Anomaly Detector OSE: 37.643
numpy: 580.78
onednn (Engine: CPU): IP Shapes 1D - f32: 3.41565; IP Shapes 3D - f32: 8.44050; IP Shapes 1D - u8s8f32: 1.42899; IP Shapes 3D - u8s8f32: 1.63297; Convolution Batch Shapes Auto - f32: 15.0031; Deconvolution Batch shapes_1d - f32: 7.07805; Deconvolution Batch shapes_3d - f32: 6.11078; Convolution Batch Shapes Auto - u8s8f32: 14.4191; Deconvolution Batch shapes_1d - u8s8f32: 2.01682; Deconvolution Batch shapes_3d - u8s8f32: 2.87855; Recurrent Neural Network Training - f32: 3195.17; Recurrent Neural Network Inference - f32: 1866.37; Recurrent Neural Network Training - u8s8f32: 3208.11; Recurrent Neural Network Inference - u8s8f32: 1857.48; Recurrent Neural Network Training - bf16bf16bf16: 3219.95; Recurrent Neural Network Inference - bf16bf16bf16: 1862.51
opencv: DNN - Deep Neural Network: 46491
rbenchmark: 0.1092
rnnoise: 15.654
scikit-learn: GLM: 223.170; SAGA: 704.289; Tree: 41.300; Lasso: 291.782; Sparsify: 80.632; Plot Ward: 47.382; MNIST Dataset: 54.736; Plot Neighbors: 133.110; SGD Regression: 73.501; SGDOneClassSVM: 254.497; Plot Lasso Path: 179.090; Isolation Forest: 224.619; Plot Fast KMeans: 200.753; Text Vectorizers: 50.423; Plot Hierarchical: 169.254; Plot OMP vs. LARS: 61.652; Feature Expansions: 108.539; LocalOutlierFactor: 48.563; TSNE MNIST Dataset: 234.677; Isotonic / Logistic: 1293.710; Plot Incremental PCA: 40.088; Hist Gradient Boosting: 91.586; Sample Without Replacement: 111.700; Covertype Dataset Benchmark: 310.452; Hist Gradient Boosting Adult: 75.394; Isotonic / Perturbed Logarithm: 1587.487; Hist Gradient Boosting Threading: 144.780; Plot Singular Value Decomposition: 95.229; Hist Gradient Boosting Higgs Boson: 58.166; 20 Newsgroups / Logistic Regression: 35.315; Plot Polynomial Kernel Approximation: 116.848; Hist Gradient Boosting Categorical Only: 16.443; Kernel PCA Solvers / Time vs. N Samples: 140.840; Kernel PCA Solvers / Time vs. N Components: 41.791; Sparse Rand Projections / 100 Iterations: 520.693
spacy: en_core_web_lg: 15204; en_core_web_trf: 1500
tensorflow (CPU): 16 - VGG-16: 5.57; 32 - VGG-16: 5.79; 64 - VGG-16: 5.91; 16 - AlexNet: 68.68; 256 - VGG-16: 5.87; 32 - AlexNet: 91.11; 512 - VGG-16: 5.87; 64 - AlexNet: 108.11; 256 - AlexNet: 123.61; 512 - AlexNet: 126.25; 16 - GoogLeNet: 36.89; 16 - ResNet-50: 12.60; 32 - GoogLeNet: 36.35; 32 - ResNet-50: 12.14; 64 - GoogLeNet: 35.33; 64 - ResNet-50: 11.80; 256 - GoogLeNet: 34.10; 256 - ResNet-50: 11.81; 512 - GoogLeNet: 34.34; 512 - ResNet-50: 11.87
tensorflow-lite: SqueezeNet: 2994.52; Inception V4: 44873.4; NASNet Mobile: 8458.27; Mobilenet Float: 2122.82; Mobilenet Quant: 3837.99; Inception ResNet V2: 41004.4
tnn (CPU): DenseNet: 2599.271; MobileNet v2: 230.654; SqueezeNet v2: 50.973; SqueezeNet v1.1: 212.955
AI Benchmark Alpha 0.1.2 (Score, More Is Better) - water cooled build of 5800X:
  Device Inference Score: 1313
  Device Training Score: 1247
  Device AI Score: 2560
Caffe 2020-02-13 (Milli-Seconds, Fewer Is Better) - water cooled build of 5800X:
  Model: AlexNet - Acceleration: CPU - Iterations: 100: 33917 (SE +/- 118.32, N = 3)
  Model: AlexNet - Acceleration: CPU - Iterations: 200: 68197 (SE +/- 336.05, N = 3)
  Model: AlexNet - Acceleration: CPU - Iterations: 1000: 339063 (SE +/- 427.76, N = 3)
  Model: GoogleNet - Acceleration: CPU - Iterations: 100: 89939 (SE +/- 291.61, N = 3)
  Model: GoogleNet - Acceleration: CPU - Iterations: 200: 178392 (SE +/- 602.16, N = 3)
  Model: GoogleNet - Acceleration: CPU - Iterations: 1000: 897866 (SE +/- 1792.84, N = 3)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
DeepSpeech 0.6 (Seconds, Fewer Is Better) - water cooled build of 5800X:
  Acceleration: CPU: 54.51 (SE +/- 0.10, N = 3)

LeelaChessZero 0.28 (Nodes Per Second, More Is Better) - water cooled build of 5800X:
  Backend: BLAS: 1052 (SE +/- 9.55, N = 9)
  1. (CXX) g++ options: -flto -pthread
Mobile Neural Network 2.1 (ms, Fewer Is Better) - water cooled build of 5800X:
  Model: nasnet: 7.876 (SE +/- 0.021, N = 3; MIN: 6.52 / MAX: 93.41)
  Model: mobilenetV3: 1.025 (SE +/- 0.003, N = 3; MIN: 0.84 / MAX: 56.03)
  Model: squeezenetv1.1: 2.436 (SE +/- 0.016, N = 3; MIN: 2.02 / MAX: 76.44)
  Model: resnet-v2-50: 19.25 (SE +/- 0.12, N = 3; MIN: 16.18 / MAX: 85.04)
  Model: SqueezeNetV1.0: 4.888 (SE +/- 0.051, N = 3; MIN: 4.05 / MAX: 59.43)
  Model: MobileNetV2_224: 2.006 (SE +/- 0.008, N = 3; MIN: 1.66 / MAX: 50.3)
  Model: mobilenet-v1-1.0: 1.921 (SE +/- 0.013, N = 3; MIN: 1.59 / MAX: 55.99)
  Model: inception-v3: 25.57 (SE +/- 0.02, N = 3; MIN: 21.5 / MAX: 91.34)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN 20220729, Target: CPU (ms, Fewer Is Better) - water cooled build of 5800X:
  Model: mobilenet: 10.81 (SE +/- 0.16, N = 3; MIN: 8.95 / MAX: 71.73)
  CPU-v2-v2 - Model: mobilenet-v2: 2.80 (SE +/- 0.05, N = 3; MIN: 2.17 / MAX: 44.95)
  CPU-v3-v3 - Model: mobilenet-v3: 2.16 (SE +/- 0.02, N = 3; MIN: 1.8 / MAX: 49.8)
  Model: mnasnet: 2.60 (SE +/- 0.09, N = 3; MIN: 1.92 / MAX: 70.04)
  Model: efficientnet-b0: 4.72 (SE +/- 0.07, N = 3; MIN: 3.83 / MAX: 48.8)
  Model: blazeface: 0.80 (SE +/- 0.05, N = 3; MIN: 0.69 / MAX: 17.69)
  Model: googlenet: 9.91 (SE +/- 0.15, N = 3; MIN: 8 / MAX: 66.05)
  Model: vgg16: 47.30 (SE +/- 0.14, N = 3; MIN: 39.92 / MAX: 152.45)
  Model: resnet18: 10.54 (SE +/- 0.16, N = 3; MIN: 8.38 / MAX: 81.97)
  Model: alexnet: 8.62 (SE +/- 0.09, N = 3; MIN: 7.11 / MAX: 52.06)
  Model: resnet50: 18.39 (SE +/- 0.17, N = 3; MIN: 14.64 / MAX: 91.29)
  Model: yolov4-tiny: 19.17 (SE +/- 0.28, N = 3; MIN: 15.69 / MAX: 119.86)
  Model: squeezenet_ssd: 14.61 (SE +/- 0.03, N = 3; MIN: 12.02 / MAX: 125.44)
  Model: regnety_400m: 6.95 (SE +/- 0.14, N = 3; MIN: 5.47 / MAX: 73.44)
  Model: vision_transformer: 198.97 (SE +/- 0.07, N = 3; MIN: 173.67 / MAX: 301.72)
  Model: FastestDet: 2.57 (SE +/- 0.04, N = 3; MIN: 2.17 / MAX: 46.48)
  Model: shufflenet-v2: 2.15 (SE +/- 0.00, N = 2; MIN: 1.81 / MAX: 41.64)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20220729, Target: Vulkan GPU (ms, Fewer Is Better) - water cooled build of 5800X:
  Model: mobilenet: 4.01 (SE +/- 0.08, N = 12; MIN: 3.06 / MAX: 44.01)
  Vulkan GPU-v2-v2 - Model: mobilenet-v2: 1.84 (SE +/- 0.09, N = 12; MIN: 1.09 / MAX: 18.59)
  Vulkan GPU-v3-v3 - Model: mobilenet-v3: 2.69 (SE +/- 0.17, N = 12; MIN: 1.41 / MAX: 23.86)
  Model: shufflenet-v2: 1.64 (SE +/- 0.03, N = 12; MIN: 1.34 / MAX: 17.65)
  Model: mnasnet: 1.56 (SE +/- 0.08, N = 11; MIN: 1.11 / MAX: 20.5)
  Model: efficientnet-b0: 11.16 (SE +/- 0.43, N = 12; MIN: 2.29 / MAX: 28.21)
  Model: blazeface: 0.95 (SE +/- 0.02, N = 12; MIN: 0.82 / MAX: 23.68)
  Model: googlenet: 15.53 (SE +/- 0.15, N = 12; MIN: 5.34 / MAX: 34.04)
  Model: vgg16: 6.11 (SE +/- 0.11, N = 12; MIN: 1.73 / MAX: 32.3)
  Model: resnet18: 10.13 (SE +/- 1.00, N = 12; MIN: 1.16 / MAX: 27.77)
  Model: alexnet: 2.15 (SE +/- 0.45, N = 12; MIN: 1.09 / MAX: 23.67)
  Model: resnet50: 3.79 (SE +/- 0.20, N = 12; MIN: 1.76 / MAX: 22.36)
  Model: yolov4-tiny: 34.89 (SE +/- 0.13, N = 12; MIN: 8.42 / MAX: 53.08)
  Model: squeezenet_ssd: 40.68 (SE +/- 0.08, N = 12; MIN: 16.38 / MAX: 65.59)
  Model: vision_transformer: 236.24 (SE +/- 0.38, N = 12; MIN: 169.87 / MAX: 358.43)
  Model: FastestDet: 2.70 (SE +/- 0.04, N = 9; MIN: 1.54 / MAX: 22.68)
  Model: regnety_400m: 2.02 (SE +/- 0.05, N = 11; MIN: 1.7 / MAX: 18.9)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Neural Magic DeepSparse 1.5 (items/sec, More Is Better; ms/batch, Fewer Is Better) - water cooled build of 5800X:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: 8.4566 items/sec (SE +/- 0.0154, N = 3); 471.62 ms/batch (SE +/- 1.18, N = 3)
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream: 8.0906 items/sec (SE +/- 0.0023, N = 3); 123.59 ms/batch (SE +/- 0.04, N = 3)
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream: 98.46 items/sec (SE +/- 0.20, N = 3); 40.61 ms/batch (SE +/- 0.08, N = 3)
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream: 69.05 items/sec (SE +/- 0.03, N = 3); 14.47 ms/batch (SE +/- 0.01, N = 3)
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: 28.71 items/sec (SE +/- 0.09, N = 3); 139.17 ms/batch (SE +/- 0.42, N = 3)
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream: 27.72 items/sec (SE +/- 0.03, N = 3); 36.07 ms/batch (SE +/- 0.04, N = 3)
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream: 49.59 items/sec (SE +/- 0.04, N = 3); 80.57 ms/batch (SE +/- 0.06, N = 3)
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream: 45.18 items/sec (SE +/- 0.02, N = 3); 22.12 ms/batch (SE +/- 0.01, N = 3)
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: 108.28 items/sec (SE +/- 0.08, N = 3); 36.92 ms/batch (SE +/- 0.03, N = 3)
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream: 89.92 items/sec (SE +/- 0.02, N = 3); 11.11 ms/batch (SE +/- 0.00, N = 3)
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: 76.32 items/sec (SE +/- 0.05, N = 3); 52.39 ms/batch (SE +/- 0.03, N = 3)
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream: 61.63 items/sec (SE +/- 0.03, N = 3); 16.22 ms/batch (SE +/- 0.01, N = 3)
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream: 10.47 items/sec (SE +/- 0.02, N = 3); 381.41 ms/batch (SE +/- 0.46, N = 3)
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream: 10.25 items/sec (SE +/- 0.00, N = 3); 97.53 ms/batch (SE +/- 0.02, N = 3)
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream: 37.67 items/sec (SE +/- 0.02, N = 3); 106.04 ms/batch (SE +/- 0.03, N = 3)
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream: 30.77 items/sec (SE +/- 0.01, N = 3); 32.49 ms/batch (SE +/- 0.02, N = 3)
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: 8.4386 items/sec (SE +/- 0.0152, N = 3); 472.22 ms/batch (SE +/- 0.92, N = 3)
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream: 8.1206 items/sec (SE +/- 0.0031, N = 3); 123.14 ms/batch (SE +/- 0.05, N = 3)
Numenta Anomaly Benchmark 1.1 (Seconds, Fewer Is Better) - water cooled build of 5800X:
  Detector: KNN CAD: 191.58 (SE +/- 0.16, N = 3)
  Detector: Relative Entropy: 16.88 (SE +/- 0.16, N = 3)
  Detector: Windowed Gaussian: 9.389 (SE +/- 0.025, N = 3)
  Detector: Earthgecko Skyline: 94.48 (SE +/- 1.06, N = 15)
  Detector: Bayesian Changepoint: 34.99 (SE +/- 0.33, N = 15)
  Detector: Contextual Anomaly Detector OSE: 37.64 (SE +/- 0.17, N = 3)
Numpy Benchmark (Score, More Is Better) - water cooled build of 5800X: 580.78 (SE +/- 0.90, N = 3)
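The Numpy Benchmark score above is a composite over common NumPy kernels. As a rough, hypothetical illustration of the kind of dense-array work it stresses (not the actual benchmark harness, so these timings are not comparable to the score), one can time a few operations directly:

```python
import time
import numpy as np

def best_of(fn, repeats=5):
    # Return the best wall-clock time over a few repeats of a NumPy kernel.
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

rng = np.random.default_rng(0)
a = rng.standard_normal((2048, 2048))

print(f"matmul 2048x2048:  {best_of(lambda: a @ a) * 1e3:.1f} ms")
print(f"SVD (values only): {best_of(lambda: np.linalg.svd(a, compute_uv=False)) * 1e3:.1f} ms")
print(f"2D FFT:            {best_of(lambda: np.fft.fft2(a)) * 1e3:.1f} ms")
```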
oneDNN 3.1 (ms, Fewer Is Better) - water cooled build of 5800X:
  Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU: 3.41565 (SE +/- 0.03627, N = 3; MIN: 2.83)
  Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU: 8.44050 (SE +/- 0.05589, N = 15; MIN: 7.24)
  Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU: 1.42899 (SE +/- 0.00989, N = 3; MIN: 1.2)
  Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU: 1.63297 (SE +/- 0.00833, N = 3; MIN: 1.27)
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: 15.00 (SE +/- 0.02, N = 3; MIN: 13.39)
  Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU: 7.07805 (SE +/- 0.14290, N = 15; MIN: 4.6)
  Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU: 6.11078 (SE +/- 0.05759, N = 3; MIN: 5.32)
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU: 14.42 (SE +/- 0.18, N = 3; MIN: 12.58)
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU: 2.01682 (SE +/- 0.00889, N = 3; MIN: 1.71)
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU: 2.87855 (SE +/- 0.03425, N = 3; MIN: 2.45)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 3195.17 (SE +/- 8.33, N = 3; MIN: 2979.53)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: 1866.37 (SE +/- 8.97, N = 3; MIN: 1677.27)
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU: 3208.11 (SE +/- 3.95, N = 3; MIN: 2984.61)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU: 1857.48 (SE +/- 11.31, N = 3; MIN: 1671.51)
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU: 3219.95 (SE +/- 33.32, N = 5; MIN: 2980.23)
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU: 1862.51 (SE +/- 9.93, N = 3; MIN: 1676.62)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenCV 4.7 (ms, Fewer Is Better) - water cooled build of 5800X:
  Test: DNN - Deep Neural Network: 46491 (SE +/- 442.90, N = 15)
  1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

R Benchmark (Seconds, Fewer Is Better) - water cooled build of 5800X: 0.1092 (SE +/- 0.0004, N = 3)
  1. R scripting front-end version 4.1.2 (2021-11-01)

RNNoise 2020-06-28 (Seconds, Fewer Is Better) - water cooled build of 5800X: 15.65 (SE +/- 0.02, N = 3)
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
Scikit-Learn Benchmark: GLM OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: GLM water cooled build of 5800X 50 100 150 200 250 SE +/- 0.19, N = 3 223.17 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: SAGA OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: SAGA water cooled build of 5800X 150 300 450 600 750 SE +/- 4.87, N = 3 704.29 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Tree OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Tree water cooled build of 5800X 9 18 27 36 45 SE +/- 0.49, N = 4 41.30 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Lasso OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Lasso water cooled build of 5800X 60 120 180 240 300 SE +/- 0.98, N = 3 291.78 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Sparsify OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Sparsify water cooled build of 5800X 20 40 60 80 100 SE +/- 0.48, N = 3 80.63 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Plot Ward OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Plot Ward water cooled build of 5800X 11 22 33 44 55 SE +/- 0.22, N = 3 47.38 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: MNIST Dataset OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: MNIST Dataset water cooled build of 5800X 12 24 36 48 60 SE +/- 0.19, N = 3 54.74 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Plot Neighbors OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Plot Neighbors water cooled build of 5800X 30 60 90 120 150 SE +/- 0.40, N = 3 133.11 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: SGD Regression OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: SGD Regression water cooled build of 5800X 16 32 48 64 80 SE +/- 0.23, N = 3 73.50 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: SGDOneClassSVM OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: SGDOneClassSVM water cooled build of 5800X 60 120 180 240 300 SE +/- 1.74, N = 3 254.50 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Plot Lasso Path OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Plot Lasso Path water cooled build of 5800X 40 80 120 160 200 SE +/- 0.55, N = 3 179.09 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Isolation Forest OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Isolation Forest water cooled build of 5800X 50 100 150 200 250 SE +/- 1.03, N = 3 224.62 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Plot Fast KMeans OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Plot Fast KMeans water cooled build of 5800X 40 80 120 160 200 SE +/- 1.99, N = 3 200.75 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Text Vectorizers OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Text Vectorizers water cooled build of 5800X 11 22 33 44 55 SE +/- 0.34, N = 3 50.42 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Plot Hierarchical OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Plot Hierarchical water cooled build of 5800X 40 80 120 160 200 SE +/- 0.71, N = 3 169.25 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Plot OMP vs. LARS OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Plot OMP vs. LARS water cooled build of 5800X 14 28 42 56 70 SE +/- 0.14, N = 3 61.65 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: Feature Expansions OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: Feature Expansions water cooled build of 5800X 20 40 60 80 100 SE +/- 0.33, N = 3 108.54 1. (F9X) gfortran options: -O0
Scikit-Learn Benchmark: LocalOutlierFactor OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.2.2 Benchmark: LocalOutlierFactor water cooled build of 5800X 11 22 33 44 55 SE +/- 0.21, N = 3 48.56 1. (F9X) gfortran options: -O0
Scikit-Learn 1.2.2 (Seconds, Fewer Is Better) - water cooled build of 5800X - 1. (F9X) gfortran options: -O0
Benchmark: TSNE MNIST Dataset: 234.68 (SE +/- 2.14, N = 3)
Benchmark: Isotonic / Logistic: 1293.71 (SE +/- 3.73, N = 3)
Benchmark: Plot Incremental PCA: 40.09 (SE +/- 0.49, N = 15)
Benchmark: Hist Gradient Boosting: 91.59 (SE +/- 0.56, N = 3)
Benchmark: Sample Without Replacement: 111.70 (SE +/- 0.87, N = 3)
Benchmark: Covertype Dataset Benchmark: 310.45 (SE +/- 0.62, N = 3)
Benchmark: Hist Gradient Boosting Adult: 75.39 (SE +/- 0.19, N = 3)
Benchmark: Isotonic / Perturbed Logarithm: 1587.49 (SE +/- 2.57, N = 3)
Benchmark: Hist Gradient Boosting Threading: 144.78 (SE +/- 0.26, N = 3)
Benchmark: Plot Singular Value Decomposition: 95.23 (SE +/- 0.19, N = 3)
Benchmark: Hist Gradient Boosting Higgs Boson: 58.17 (SE +/- 0.05, N = 3)
Benchmark: 20 Newsgroups / Logistic Regression: 35.32 (SE +/- 0.07, N = 3)
Benchmark: Plot Polynomial Kernel Approximation: 116.85 (SE +/- 0.18, N = 3)
Benchmark: Hist Gradient Boosting Categorical Only: 16.44 (SE +/- 0.03, N = 3)
Benchmark: Kernel PCA Solvers / Time vs. N Samples: 140.84 (SE +/- 1.28, N = 7)
Benchmark: Kernel PCA Solvers / Time vs. N Components: 41.79 (SE +/- 0.37, N = 15)
Benchmark: Sparse Random Projections / 100 Iterations: 520.69 (SE +/- 1.58, N = 3)
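The Scikit-Learn timings above are end-to-end runtimes of benchmark scripts whose names appear to correspond to scikit-learn's own benchmark suite. As a rough, illustrative analog of the "TSNE MNIST Dataset" workload (not the test profile's actual harness), the sketch below times a t-SNE fit; it uses scikit-learn's small built-in digits set as a stand-in for the full MNIST data, so its runtime is not comparable to the 234.68 s figure.

# Illustrative analog of the "TSNE MNIST Dataset" workload (not the pts harness).
# Uses the small built-in digits set as a stand-in for the 70,000-sample MNIST data.
import time

from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)  # 1797 samples, 64 features

start = time.perf_counter()
embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
elapsed = time.perf_counter() - start

print(f"t-SNE embedded {X.shape[0]} samples in {elapsed:.2f} s -> shape {embedding.shape}")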
spaCy 3.4.1 (tokens/sec, More Is Better) - water cooled build of 5800X
Model: en_core_web_lg: 15204 (SE +/- 36.67, N = 3)
Model: en_core_web_trf: 1500 (SE +/- 1.15, N = 3)
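A rough tokens-per-second probe in the spirit of the spaCy results (synthetic corpus, not the test profile's harness) is sketched below; it assumes the en_core_web_lg model has already been downloaded, e.g. via `python -m spacy download en_core_web_lg`.

# Rough tokens/sec measurement with a synthetic corpus.
import time

import spacy

nlp = spacy.load("en_core_web_lg")
texts = ["The quick brown fox jumps over the lazy dog."] * 2000

start = time.perf_counter()
tokens = sum(len(doc) for doc in nlp.pipe(texts, batch_size=64))
elapsed = time.perf_counter() - start

print(f"{tokens / elapsed:,.0f} tokens/sec over {tokens} tokens")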
TensorFlow 2.12 (images/sec, More Is Better) - Device: CPU - water cooled build of 5800X
Batch Size: 16 - Model: VGG-16: 5.57 (SE +/- 0.00, N = 3)
Batch Size: 32 - Model: VGG-16: 5.79 (SE +/- 0.01, N = 3)
Batch Size: 64 - Model: VGG-16: 5.91 (SE +/- 0.00, N = 3)
Batch Size: 16 - Model: AlexNet: 68.68 (SE +/- 0.02, N = 3)
Batch Size: 256 - Model: VGG-16: 5.87 (SE +/- 0.00, N = 3)
Batch Size: 32 - Model: AlexNet: 91.11 (SE +/- 0.12, N = 3)
Batch Size: 512 - Model: VGG-16: 5.87 (SE +/- 0.01, N = 3)
Batch Size: 64 - Model: AlexNet: 108.11 (SE +/- 0.04, N = 3)
Batch Size: 256 - Model: AlexNet: 123.61 (SE +/- 0.03, N = 3)
Batch Size: 512 - Model: AlexNet: 126.25 (SE +/- 0.09, N = 3)
Batch Size: 16 - Model: GoogLeNet: 36.89 (SE +/- 0.15, N = 3)
Batch Size: 16 - Model: ResNet-50: 12.60 (SE +/- 0.01, N = 3)
Batch Size: 32 - Model: GoogLeNet: 36.35 (SE +/- 0.07, N = 3)
Batch Size: 32 - Model: ResNet-50: 12.14 (SE +/- 0.01, N = 3)
Batch Size: 64 - Model: GoogLeNet: 35.33 (SE +/- 0.05, N = 3)
Batch Size: 64 - Model: ResNet-50: 11.80 (SE +/- 0.01, N = 3)
Batch Size: 256 - Model: GoogLeNet: 34.10 (SE +/- 0.00, N = 3)
Batch Size: 256 - Model: ResNet-50: 11.81 (SE +/- 0.00, N = 3)
Batch Size: 512 - Model: GoogLeNet: 34.34 (SE +/- 0.02, N = 3)
Batch Size: 512 - Model: ResNet-50: 11.87 (SE +/- 0.01, N = 3)
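For context, a minimal Python analog of one CPU configuration above (Batch Size: 16, ResNet-50) is sketched below. It builds an untrained Keras ResNet-50 and times forward passes on random data; it is not the actual test harness, and its absolute numbers will differ from the table.

# Synthetic CPU images/sec probe, loosely mirroring
# "Device: CPU - Batch Size: 16 - Model: ResNet-50".
import time

import numpy as np
import tensorflow as tf

batch_size = 16
model = tf.keras.applications.ResNet50(weights=None)  # untrained network
images = tf.constant(np.random.rand(batch_size, 224, 224, 3), dtype=tf.float32)

model(images, training=False)  # warm-up

iterations = 20
start = time.perf_counter()
for _ in range(iterations):
    model(images, training=False)
elapsed = time.perf_counter() - start

print(f"{batch_size * iterations / elapsed:.2f} images/sec")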
TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better) - water cooled build of 5800X
Model: SqueezeNet: 2994.52 (SE +/- 3.54, N = 3)
Model: Inception V4: 44873.4 (SE +/- 12.63, N = 3)
Model: NASNet Mobile: 8458.27 (SE +/- 6.76, N = 3)
Model: Mobilenet Float: 2122.82 (SE +/- 13.87, N = 3)
Model: Mobilenet Quant: 3837.99 (SE +/- 2.89, N = 3)
Model: Inception ResNet V2: 41004.4 (SE +/- 36.07, N = 3)
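The TensorFlow Lite results are per-inference latencies in microseconds. A hedged approximation using the tf.lite.Interpreter Python API is shown below; the model path is a placeholder, a float32 input is assumed, and the test profile itself likely relies on TensorFlow Lite's native benchmark tooling rather than a Python loop.

# Approximate per-inference latency (microseconds) for a .tflite model.
# "squeezenet.tflite" is a placeholder path; quantized models would need
# a matching (non-float32) input dtype.
import time

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
data = np.random.rand(*inp["shape"]).astype(np.float32)

interpreter.set_tensor(inp["index"], data)
interpreter.invoke()  # warm-up

runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], data)
    interpreter.invoke()
elapsed = time.perf_counter() - start

print(f"{elapsed / runs * 1e6:.1f} microseconds per inference")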
TNN 0.3 (ms, Fewer Is Better) - Target: CPU - water cooled build of 5800X - 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
Model: DenseNet: 2599.27 (SE +/- 2.24, N = 3, MIN: 2505.87 / MAX: 2720.63)
Model: MobileNet v2: 230.65 (SE +/- 0.41, N = 3, MIN: 217.78 / MAX: 251.52)
Model: SqueezeNet v2: 50.97 (SE +/- 0.29, N = 3, MIN: 50.4 / MAX: 51.91)
Model: SqueezeNet v1.1: 212.96 (SE +/- 0.64, N = 3, MIN: 211.79 / MAX: 214.69)
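All of the results above were collected with the Phoronix Test Suite, so they can in principle be re-run locally for comparison. A minimal sketch follows, assuming the phoronix-test-suite CLI is installed and on PATH; the result ID is a placeholder for the one in this export's OpenBenchmarking.org URL.

# Re-run the tests behind a public OpenBenchmarking.org result for comparison.
import subprocess

RESULT_ID = "<openbenchmarking-result-id>"  # placeholder for this export's result ID

# `phoronix-test-suite benchmark <result-id>` installs the same test profiles
# and offers to compare the new numbers against the uploaded run.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)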
Phoronix Test Suite v10.8.5