ryz-5800X-orig-water-build

AMD Ryzen 7 5800X 8-Core testing with an ASRock X570 Phantom Gaming-ITX/TB3 (P5.01 BIOS) and a Gigabyte NVIDIA GeForce RTX 3080 10GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2307096-NE-RYZ5800XO55&grr.
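For readers who want to compare their own hardware against this run, the Phoronix Test Suite can typically replay a published OpenBenchmarking.org result by its identifier. The snippet below is a minimal sketch and is not part of the original result: it assumes phoronix-test-suite is installed and on the PATH, and that the installed version supports the "phoronix-test-suite benchmark <result-id>" invocation.

# Minimal sketch: replay this OpenBenchmarking.org result locally so the
# Phoronix Test Suite can run the same tests and offer a side-by-side comparison.
# Assumptions: phoronix-test-suite is installed and on PATH, and the
# "benchmark <result-id>" sub-command is supported by the installed version.
import subprocess
import sys

RESULT_ID = "2307096-NE-RYZ5800XO55"  # identifier taken from the export URL above

def replay_result(result_id: str) -> int:
    """Invoke the PTS CLI to install and run the tests from a published result."""
    cmd = ["phoronix-test-suite", "benchmark", result_id]
    print("Running:", " ".join(cmd))
    try:
        return subprocess.call(cmd)
    except FileNotFoundError:
        print("phoronix-test-suite not found; install it first.", file=sys.stderr)
        return 1

if __name__ == "__main__":
    sys.exit(replay_result(RESULT_ID))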

ryz-5800X-orig-water-build
water cooled build of 5800X:
Processor: AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
Motherboard: ASRock X570 Phantom Gaming-ITX/TB3 (P5.01 BIOS)
Chipset: AMD Starship/Matisse
Memory: 64GB
Disk: 2000GB Samsung SSD 970 EVO Plus 2TB + 4001GB Samsung SSD 870
Graphics: Gigabyte NVIDIA GeForce RTX 3080 10GB
Audio: NVIDIA GA102 HD Audio
Monitor: HP Z27
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-45-generic (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.3
Display Driver: NVIDIA 530.30.02
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.1.68
Vulkan: 1.3.236
Compiler: GCC 11.3.0 + CUDA 12.0
File-System: ext4
Screen Resolution: 3840x2160

OpenBenchmarking.org notes:
- Transparent Huge Pages: madvise
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201025
- BAR1 / Visible vRAM Size: 256 MiB - vBIOS Version: 94.02.42.80.61
- GPU Compute Cores: 8704
- Python 3.10.9
- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result overview - water cooled build of 5800X: the complete set of results summarized in the original overview table is listed per test in the sections below.

TensorFlow

Device: CPU - Batch Size: 512 - Model: VGG-16

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 5.87 (SE +/- 0.01, N = 3)

TensorFlow

Device: CPU - Batch Size: 256 - Model: VGG-16

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 5.87 (SE +/- 0.00, N = 3)

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 11.87 (SE +/- 0.01, N = 3)

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 11.81 (SE +/- 0.00, N = 3)

Scikit-Learn

Benchmark: Isotonic / Perturbed Logarithm

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 1587.49 (SE +/- 2.57, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Isotonic / Logistic

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 1293.71 (SE +/- 3.73, N = 3). 1. (F9X) gfortran options: -O0

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 34.34 (SE +/- 0.02, N = 3)

TensorFlow

Device: CPU - Batch Size: 64 - Model: VGG-16

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 5.91 (SE +/- 0.00, N = 3)

LeelaChessZero

Backend: BLAS

Nodes Per Second, More Is Better (LeelaChessZero 0.28). water cooled build of 5800X: 1052 (SE +/- 9.55, N = 9). 1. (CXX) g++ options: -flto -pthread

Scikit-Learn

Benchmark: SAGA

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 704.29 (SE +/- 4.87, N = 3). 1. (F9X) gfortran options: -O0

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 1000

Milli-Seconds, Fewer Is Better (Caffe 2020-02-13). water cooled build of 5800X: 897866 (SE +/- 1792.84, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 34.10 (SE +/- 0.00, N = 3)

Scikit-Learn

Benchmark: Sparse Random Projections / 100 Iterations

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 520.69 (SE +/- 1.58, N = 3). 1. (F9X) gfortran options: -O0

NCNN

Target: Vulkan GPU - Model: regnety_400m

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.02 (SE +/- 0.05, N = 11; MIN: 1.7 / MAX: 18.9). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: FastestDet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.70 (SE +/- 0.04, N = 9; MIN: 1.54 / MAX: 22.68). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: vision_transformer

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 236.24 (SE +/- 0.38, N = 12; MIN: 169.87 / MAX: 358.43). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: squeezenet_ssd

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 40.68 (SE +/- 0.08, N = 12; MIN: 16.38 / MAX: 65.59). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: yolov4-tiny

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 34.89 (SE +/- 0.13, N = 12; MIN: 8.42 / MAX: 53.08). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: resnet50

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 3.79 (SE +/- 0.20, N = 12; MIN: 1.76 / MAX: 22.36). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: alexnet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.15 (SE +/- 0.45, N = 12; MIN: 1.09 / MAX: 23.67). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: resnet18

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 10.13 (SE +/- 1.00, N = 12; MIN: 1.16 / MAX: 27.77). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: vgg16

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 6.11 (SE +/- 0.11, N = 12; MIN: 1.73 / MAX: 32.3). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: googlenet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 15.53 (SE +/- 0.15, N = 12; MIN: 5.34 / MAX: 34.04). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: blazeface

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 0.95 (SE +/- 0.02, N = 12; MIN: 0.82 / MAX: 23.68). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: efficientnet-b0

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 11.16 (SE +/- 0.43, N = 12; MIN: 2.29 / MAX: 28.21). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: mnasnet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 1.56 (SE +/- 0.08, N = 11; MIN: 1.11 / MAX: 20.5). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: shufflenet-v2

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 1.64 (SE +/- 0.03, N = 12; MIN: 1.34 / MAX: 17.65). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.69 (SE +/- 0.17, N = 12; MIN: 1.41 / MAX: 23.86). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 1.84 (SE +/- 0.09, N = 12; MIN: 1.09 / MAX: 18.59). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: mobilenet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 4.01 (SE +/- 0.08, N = 12; MIN: 3.06 / MAX: 44.01). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow

Device: CPU - Batch Size: 32 - Model: VGG-16

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 5.79 (SE +/- 0.01, N = 3)

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 11.80 (SE +/- 0.01, N = 3)

Numenta Anomaly Benchmark

Detector: Earthgecko Skyline

Seconds, Fewer Is Better (Numenta Anomaly Benchmark 1.1). water cooled build of 5800X: 94.48 (SE +/- 1.06, N = 15)

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 126.25 (SE +/- 0.09, N = 3)

Scikit-Learn

Benchmark: Covertype Dataset Benchmark

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 310.45 (SE +/- 0.62, N = 3). 1. (F9X) gfortran options: -O0

AI Benchmark Alpha

Device AI Score

Score, More Is Better (AI Benchmark Alpha 0.1.2). water cooled build of 5800X: 2560

AI Benchmark Alpha

Device Training Score

Score, More Is Better (AI Benchmark Alpha 0.1.2). water cooled build of 5800X: 1247

AI Benchmark Alpha

Device Inference Score

Score, More Is Better (AI Benchmark Alpha 0.1.2). water cooled build of 5800X: 1313

Scikit-Learn

Benchmark: Lasso

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 291.78 (SE +/- 0.98, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Samples

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 140.84 (SE +/- 1.28, N = 7). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SGDOneClassSVM

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 254.50 (SE +/- 1.74, N = 3). 1. (F9X) gfortran options: -O0

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 1000

Milli-Seconds, Fewer Is Better (Caffe 2020-02-13). water cooled build of 5800X: 339063 (SE +/- 427.76, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 5.57 (SE +/- 0.00, N = 3)

Scikit-Learn

Benchmark: Isolation Forest

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 224.62 (SE +/- 1.03, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: TSNE MNIST Dataset

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 234.68 (SE +/- 2.14, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: GLM

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 223.17 (SE +/- 0.19, N = 3). 1. (F9X) gfortran options: -O0

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 12.14 (SE +/- 0.01, N = 3)

Scikit-Learn

Benchmark: Plot Fast KMeans

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 200.75 (SE +/- 1.99, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Lasso Path

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 179.09 (SE +/- 0.55, N = 3). 1. (F9X) gfortran options: -O0

OpenCV

Test: DNN - Deep Neural Network

ms, Fewer Is Better (OpenCV 4.7). water cooled build of 5800X: 46491 (SE +/- 442.90, N = 15). 1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 123.61 (SE +/- 0.03, N = 3)

Scikit-Learn

Benchmark: Plot Hierarchical

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 169.25 (SE +/- 0.71, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Components

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 41.79 (SE +/- 0.37, N = 15). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Incremental PCA

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 40.09 (SE +/- 0.49, N = 15). 1. (F9X) gfortran options: -O0

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 35.33 (SE +/- 0.05, N = 3)

Scikit-Learn

Benchmark: Hist Gradient Boosting Threading

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 144.78 (SE +/- 0.26, N = 3). 1. (F9X) gfortran options: -O0

Numenta Anomaly Benchmark

Detector: KNN CAD

Seconds, Fewer Is Better (Numenta Anomaly Benchmark 1.1). water cooled build of 5800X: 191.58 (SE +/- 0.16, N = 3)

TNN

Target: CPU - Model: DenseNet

ms, Fewer Is Better (TNN 0.3). water cooled build of 5800X: 2599.27 (SE +/- 2.24, N = 3; MIN: 2505.87 / MAX: 2720.63). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 200

Milli-Seconds, Fewer Is Better (Caffe 2020-02-13). water cooled build of 5800X: 178392 (SE +/- 602.16, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Scikit-Learn

Benchmark: Plot Neighbors

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 133.11 (SE +/- 0.40, N = 3). 1. (F9X) gfortran options: -O0

Numenta Anomaly Benchmark

Detector: Bayesian Changepoint

Seconds, Fewer Is Better (Numenta Anomaly Benchmark 1.1). water cooled build of 5800X: 34.99 (SE +/- 0.33, N = 15)

Scikit-Learn

Benchmark: Plot Polynomial Kernel Approximation

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 116.85 (SE +/- 0.18, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sample Without Replacement

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 111.70 (SE +/- 0.87, N = 3). 1. (F9X) gfortran options: -O0

Numpy Benchmark

Score, More Is Better (Numpy Benchmark). water cooled build of 5800X: 580.78 (SE +/- 0.90, N = 3)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 3219.95 (SE +/- 33.32, N = 5; MIN: 2980.23). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Scikit-Learn

Benchmark: Feature Expansions

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 108.54 (SE +/- 0.33, N = 3). 1. (F9X) gfortran options: -O0

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 12.60 (SE +/- 0.01, N = 3)

Mobile Neural Network

Model: inception-v3

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 25.57 (SE +/- 0.02, N = 3; MIN: 21.5 / MAX: 91.34). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 1.921 (SE +/- 0.013, N = 3; MIN: 1.59 / MAX: 55.99). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: MobileNetV2_224

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 2.006 (SE +/- 0.008, N = 3; MIN: 1.66 / MAX: 50.3). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: SqueezeNetV1.0

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 4.888 (SE +/- 0.051, N = 3; MIN: 4.05 / MAX: 59.43). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: resnet-v2-50

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 19.25 (SE +/- 0.12, N = 3; MIN: 16.18 / MAX: 85.04). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: squeezenetv1.1

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 2.436 (SE +/- 0.016, N = 3; MIN: 2.02 / MAX: 76.44). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 1.025 (SE +/- 0.003, N = 3; MIN: 0.84 / MAX: 56.03). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: nasnet

ms, Fewer Is Better (Mobile Neural Network 2.1). water cooled build of 5800X: 7.876 (SE +/- 0.021, N = 3; MIN: 6.52 / MAX: 93.41). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Scikit-Learn

Benchmark: Plot Singular Value Decomposition

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 95.23 (SE +/- 0.19, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 91.59 (SE +/- 0.56, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Higgs Boson

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 58.17 (SE +/- 0.05, N = 3). 1. (F9X) gfortran options: -O0

NCNN

Target: CPU - Model: shufflenet-v2

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.15 (SE +/- 0.00, N = 2; MIN: 1.81 / MAX: 41.64). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: FastestDet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.57 (SE +/- 0.04, N = 3; MIN: 2.17 / MAX: 46.48). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vision_transformer

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 198.97 (SE +/- 0.07, N = 3; MIN: 173.67 / MAX: 301.72). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: regnety_400m

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 6.95 (SE +/- 0.14, N = 3; MIN: 5.47 / MAX: 73.44). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: squeezenet_ssd

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 14.61 (SE +/- 0.03, N = 3; MIN: 12.02 / MAX: 125.44). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: yolov4-tiny

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 19.17 (SE +/- 0.28, N = 3; MIN: 15.69 / MAX: 119.86). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet50

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 18.39 (SE +/- 0.17, N = 3; MIN: 14.64 / MAX: 91.29). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: alexnet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 8.62 (SE +/- 0.09, N = 3; MIN: 7.11 / MAX: 52.06). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet18

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 10.54 (SE +/- 0.16, N = 3; MIN: 8.38 / MAX: 81.97). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vgg16

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 47.30 (SE +/- 0.14, N = 3; MIN: 39.92 / MAX: 152.45). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: googlenet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 9.91 (SE +/- 0.15, N = 3; MIN: 8 / MAX: 66.05). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: blazeface

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 0.80 (SE +/- 0.05, N = 3; MIN: 0.69 / MAX: 17.69). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: efficientnet-b0

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 4.72 (SE +/- 0.07, N = 3; MIN: 3.83 / MAX: 48.8). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mnasnet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.60 (SE +/- 0.09, N = 3; MIN: 1.92 / MAX: 70.04). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.16 (SE +/- 0.02, N = 3; MIN: 1.8 / MAX: 49.8). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 2.80 (SE +/- 0.05, N = 3; MIN: 2.17 / MAX: 44.95). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mobilenet

ms, Fewer Is Better (NCNN 20220729). water cooled build of 5800X: 10.81 (SE +/- 0.16, N = 3; MIN: 8.95 / MAX: 71.73). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Scikit-Learn

Benchmark: Sparsify

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 80.63 (SE +/- 0.48, N = 3). 1. (F9X) gfortran options: -O0

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 7.07805 (SE +/- 0.14290, N = 15; MIN: 4.6). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Scikit-Learn

Benchmark: Hist Gradient Boosting Adult

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 75.39 (SE +/- 0.19, N = 3). 1. (F9X) gfortran options: -O0

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 36.35 (SE +/- 0.07, N = 3)

Scikit-Learn

Benchmark: SGD Regression

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 73.50 (SE +/- 0.23, N = 3). 1. (F9X) gfortran options: -O0

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 100

Milli-Seconds, Fewer Is Better (Caffe 2020-02-13). water cooled build of 5800X: 89939 (SE +/- 291.61, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 3195.17 (SE +/- 8.33, N = 3; MIN: 2979.53). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 3208.11 (SE +/- 3.95, N = 3; MIN: 2984.61). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Scikit-Learn

Benchmark: Plot OMP vs. LARS

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 61.65 (SE +/- 0.14, N = 3). 1. (F9X) gfortran options: -O0

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 1862.51 (SE +/- 9.93, N = 3; MIN: 1676.62). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 1866.37 (SE +/- 8.97, N = 3; MIN: 1677.27). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 1857.48 (SE +/- 11.31, N = 3; MIN: 1671.51). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Scikit-Learn

Benchmark: MNIST Dataset

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 54.74 (SE +/- 0.19, N = 3). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Text Vectorizers

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 50.42 (SE +/- 0.34, N = 3). 1. (F9X) gfortran options: -O0

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 200

Milli-Seconds, Fewer Is Better (Caffe 2020-02-13). water cooled build of 5800X: 68197 (SE +/- 336.05, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Scikit-Learn

Benchmark: Tree

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 41.30 (SE +/- 0.49, N = 4). 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: LocalOutlierFactor

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 48.56 (SE +/- 0.21, N = 3). 1. (F9X) gfortran options: -O0

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 108.11 (SE +/- 0.04, N = 3)

Scikit-Learn

Benchmark: Plot Ward

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 47.38 (SE +/- 0.22, N = 3). 1. (F9X) gfortran options: -O0

TensorFlow Lite

Model: Inception V4

Microseconds, Fewer Is Better (TensorFlow Lite 2022-05-18). water cooled build of 5800X: 44873.4 (SE +/- 12.63, N = 3)

TensorFlow Lite

Model: Inception ResNet V2

Microseconds, Fewer Is Better (TensorFlow Lite 2022-05-18). water cooled build of 5800X: 41004.4 (SE +/- 36.07, N = 3)

TensorFlow Lite

Model: NASNet Mobile

Microseconds, Fewer Is Better (TensorFlow Lite 2022-05-18). water cooled build of 5800X: 8458.27 (SE +/- 6.76, N = 3)

TensorFlow Lite

Model: Mobilenet Quant

Microseconds, Fewer Is Better (TensorFlow Lite 2022-05-18). water cooled build of 5800X: 3837.99 (SE +/- 2.89, N = 3)

TensorFlow Lite

Model: Mobilenet Float

Microseconds, Fewer Is Better (TensorFlow Lite 2022-05-18). water cooled build of 5800X: 2122.82 (SE +/- 13.87, N = 3)

TensorFlow Lite

Model: SqueezeNet

Microseconds, Fewer Is Better (TensorFlow Lite 2022-05-18). water cooled build of 5800X: 2994.52 (SE +/- 3.54, N = 3)

spaCy

Model: en_core_web_trf

tokens/sec, More Is Better (spaCy 3.4.1). water cooled build of 5800X: 1500 (SE +/- 1.15, N = 3)

spaCy

Model: en_core_web_lg

tokens/sec, More Is Better (spaCy 3.4.1). water cooled build of 5800X: 15204 (SE +/- 36.67, N = 3)

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 36.89 (SE +/- 0.15, N = 3)

Scikit-Learn

Benchmark: 20 Newsgroups / Logistic Regression

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 35.32 (SE +/- 0.07, N = 3). 1. (F9X) gfortran options: -O0

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 139.17 (SE +/- 0.42, N = 3)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 28.71 (SE +/- 0.09, N = 3)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 40.61 (SE +/- 0.08, N = 3)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 98.46 (SE +/- 0.20, N = 3)

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 8.44050 (SE +/- 0.05589, N = 15; MIN: 7.24). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 471.62 (SE +/- 1.18, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 8.4566 (SE +/- 0.0154, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 472.22 (SE +/- 0.92, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 8.4386 (SE +/- 0.0152, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 106.04 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 37.67 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 381.41 (SE +/- 0.46, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 10.47 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 36.07 (SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 27.72 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 14.47 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 69.05 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 123.59 (SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 8.0906 (SE +/- 0.0023, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 123.14 (SE +/- 0.05, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 8.1206 (SE +/- 0.0031, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 32.49 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 30.77 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 97.53 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 10.25 (SE +/- 0.00, N = 3)

DeepSpeech

Acceleration: CPU

Seconds, Fewer Is Better (DeepSpeech 0.6). water cooled build of 5800X: 54.51 (SE +/- 0.10, N = 3)

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 91.11 (SE +/- 0.12, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 52.39 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 76.32 (SE +/- 0.05, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 16.22 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 61.63 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 80.57 (SE +/- 0.06, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 49.59 (SE +/- 0.04, N = 3)

Numenta Anomaly Benchmark

Detector: Contextual Anomaly Detector OSE

Seconds, Fewer Is Better (Numenta Anomaly Benchmark 1.1). water cooled build of 5800X: 37.64 (SE +/- 0.17, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 36.92 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 108.28 (SE +/- 0.08, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 22.12 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 45.18 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 11.11 (SE +/- 0.00, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.5). water cooled build of 5800X: 89.92 (SE +/- 0.02, N = 3)

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 100

Milli-Seconds, Fewer Is Better (Caffe 2020-02-13). water cooled build of 5800X: 33917 (SE +/- 118.32, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.12). water cooled build of 5800X: 68.68 (SE +/- 0.02, N = 3)

Scikit-Learn

Benchmark: Hist Gradient Boosting Categorical Only

Seconds, Fewer Is Better (Scikit-Learn 1.2.2). water cooled build of 5800X: 16.44 (SE +/- 0.03, N = 3). 1. (F9X) gfortran options: -O0

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 2.01682 (SE +/- 0.00889, N = 3; MIN: 1.71). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

R Benchmark

Seconds, Fewer Is Better (R Benchmark). water cooled build of 5800X: 0.1092 (SE +/- 0.0004, N = 3). 1. R scripting front-end version 4.1.2 (2021-11-01)

Numenta Anomaly Benchmark

Detector: Relative Entropy

Seconds, Fewer Is Better (Numenta Anomaly Benchmark 1.1). water cooled build of 5800X: 16.88 (SE +/- 0.16, N = 3)

TNN

Target: CPU - Model: MobileNet v2

ms, Fewer Is Better (TNN 0.3). water cooled build of 5800X: 230.65 (SE +/- 0.41, N = 3; MIN: 217.78 / MAX: 251.52). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

RNNoise

Seconds, Fewer Is Better (RNNoise 2020-06-28). water cooled build of 5800X: 15.65 (SE +/- 0.02, N = 3). 1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 3.41565 (SE +/- 0.03627, N = 3; MIN: 2.83). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 1.42899 (SE +/- 0.00989, N = 3; MIN: 1.2). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TNN

Target: CPU - Model: SqueezeNet v1.1

ms, Fewer Is Better (TNN 0.3). water cooled build of 5800X: 212.96 (SE +/- 0.64, N = 3; MIN: 211.79 / MAX: 214.69). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Numenta Anomaly Benchmark

Detector: Windowed Gaussian

Seconds, Fewer Is Better (Numenta Anomaly Benchmark 1.1). water cooled build of 5800X: 9.389 (SE +/- 0.025, N = 3)

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 1.63297 (SE +/- 0.00833, N = 3; MIN: 1.27). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 15.00 (SE +/- 0.02, N = 3; MIN: 13.39). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 14.42 (SE +/- 0.18, N = 3; MIN: 12.58). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TNN

Target: CPU - Model: SqueezeNet v2

ms, Fewer Is Better (TNN 0.3). water cooled build of 5800X: 50.97 (SE +/- 0.29, N = 3; MIN: 50.4 / MAX: 51.91). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 6.11078 (SE +/- 0.05759, N = 3; MIN: 5.32). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 3.1). water cooled build of 5800X: 2.87855 (SE +/- 0.03425, N = 3; MIN: 2.45). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl


Phoronix Test Suite v10.8.5