24.03.13.Pop.2204.ML.test1

AMD Ryzen 9 7950X 16-Core testing with a ASUS ProArt X670E-CREATOR WIFI (1710 BIOS) and Zotac NVIDIA GeForce RTX 4070 Ti 12GB on Pop 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403157-NE-240313POP28
Run Management

Result: Initial test 1 No water cool
Date: March 13
Test Duration: 2 Days, 55 Minutes


24.03.13.Pop.2204.ML.test1 - System Details

Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard: ASUS ProArt X670E-CREATOR WIFI (1710 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 16 GB DDR5-4800MT/s G Skill F5-6000J3636F16G
Disk: 1000GB PNY CS2130 1TB SSD
Graphics: Zotac NVIDIA GeForce RTX 4070 Ti 12GB
Audio: NVIDIA Device 22bc
Monitor: 2 x DELL 2001FP
Network: Intel I225-V + Aquantia AQtion AQC113CS NBase-T/IEEE + MEDIATEK MT7922 802.11ax PCI
OS: Pop 22.04
Kernel: 6.6.10-76060610-generic (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA 550.54.14
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.4.89
Vulkan: 1.3.277
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3200x1200

System Notes:
- Transparent Huge Pages: madvise
- Compiler configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance)
- CPU Microcode: 0xa601206
- GLAMOR
- BAR1 / Visible vRAM Size: 16384 MiB
- vBIOS Version: 95.04.31.00.3b
- GPU Compute Cores: 7680
- Python 3.10.12
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

24.03.13.Pop.2204.ML.test1 - Condensed results overview (flattened single-row table). Test suites covered: SHOC, oneDNN, Numpy Benchmark, DeepSpeech, RNNoise, TensorFlow Lite, PyTorch, TensorFlow, Neural Magic DeepSparse, spaCy, MNN, NCNN, TNN, OpenVINO, Numenta Anomaly Benchmark, AI Benchmark Alpha, MLPack, Whisper.cpp, OpenCV. Individual values are reported in the detailed per-test sections below.

SHOC Scalable HeterOgeneous Computing

This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing (SHOC) benchmark suite. SHOC provides a number of benchmark programs for evaluating the performance and stability of compute devices. Learn more via the OpenBenchmarking.org test page.
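Every result in this file reports a mean with "SE +/-" over N runs. Assuming SE here denotes the standard error of the mean, it can be recomputed from raw per-run results with a minimal sketch (the three-run sample below is hypothetical, not data from this file):

```python
import math
import statistics

def mean_and_se(samples):
    """Return (mean, standard error of the mean) for a list of run results."""
    n = len(samples)
    mean = statistics.fmean(samples)
    # Standard error of the mean = sample standard deviation / sqrt(N)
    se = statistics.stdev(samples) / math.sqrt(n)
    return mean, se

# Hypothetical three-run sample, mimicking an "N = 3" result
runs = [299.2, 299.5, 299.7]
mean, se = mean_and_se(runs)
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")
```

A larger N (such as the N = 11 used for GEMM below) shrinks the standard error for the same run-to-run spread.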

SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Initial test 1 No water cool
More is better for every metric; N = 3 unless noted.
Compiled with: (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi

- S3D: 299.47 GFLOPS (SE +/- 0.24)
- Triad: 25.46 GB/s (SE +/- 0.05)
- FFT SP: 1292.53 GFLOPS (SE +/- 1.18)
- MD5 Hash: 47.90 GHash/s (SE +/- 0.04)
- Reduction: 388.93 GB/s (SE +/- 0.08)
- GEMM SGEMM_N: 13212.0 GFLOPS (SE +/- 99.14, N = 11)
- Max SP Flops: 43074.9 GFLOPS (SE +/- 97.97)
- Bus Speed Download: 26.83 GB/s (SE +/- 0.00)
- Bus Speed Readback: 27.07 GB/s (SE +/- 0.00)
- Texture Read Bandwidth: 2985.70 GB/s (SE +/- 3.00)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

Backend: BLAS

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ./lczero: line 4: ./lc0: No such file or directory

oneDNN

oneDNN 3.4 - Initial test 1 No water cool (ms, fewer is better; N = 3 unless noted)
Compiled with: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

- Harness: IP Shapes 1D - Engine: CPU: 1.17351 (SE +/- 0.00923, N = 10, MIN: 1.01)
- Harness: IP Shapes 3D - Engine: CPU: 4.42170 (SE +/- 0.00764, MIN: 4.19)
- Harness: Convolution Batch Shapes Auto - Engine: CPU: 7.16631 (SE +/- 0.01686, MIN: 6.75)
- Harness: Deconvolution Batch shapes_1d - Engine: CPU: 3.06179 (SE +/- 0.03049, N = 5, MIN: 2.28)
- Harness: Deconvolution Batch shapes_3d - Engine: CPU: 2.56519 (SE +/- 0.00599, MIN: 2.37)
- Harness: Recurrent Neural Network Training - Engine: CPU: 1452.96 (SE +/- 7.60, MIN: 1391.34)
- Harness: Recurrent Neural Network Inference - Engine: CPU: 747.50 (SE +/- 1.15, MIN: 714.81)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
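The score aggregates timings of many NumPy kernels. A minimal sketch of the kind of microbenchmark involved, assuming `numpy` is installed (the 512x512 matmul and best-of-5 policy are illustrative choices, not the benchmark's actual workload):

```python
import time
import numpy as np

def time_kernel(fn, repeats=5):
    """Time a kernel several times and return the best wall-clock seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512))
b = rng.standard_normal((512, 512))

elapsed = time_kernel(lambda: a @ b)  # matrix multiply as the kernel under test
print(f"512x512 matmul best of 5: {elapsed * 1e3:.2f} ms")
```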

Numpy Benchmark - Initial test 1 No water cool: 704.52 (Score, more is better; SE +/- 7.50, N = 3)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.
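Since the profile times transcription of a roughly three-minute recording, a useful derived metric is the real-time factor (processing time divided by audio duration). A sketch assuming a 180-second clip, which is an approximation and not a value reported by the test profile:

```python
def real_time_factor(processing_seconds, audio_seconds):
    """RTF < 1.0 means the engine transcribes faster than real time."""
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return processing_seconds / audio_seconds

# 47.04 s is this run's result; 180 s is an assumed clip length.
rtf = real_time_factor(47.04, 180.0)
print(f"RTF: {rtf:.2f}")
```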

DeepSpeech 0.6 - Acceleration: CPU - Initial test 1 No water cool: 47.04 Seconds (fewer is better; SE +/- 0.35, N = 3)

R Benchmark

This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ERROR: Rscript is not found on the system!

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 - Initial test 1 No water cool: 13.71 Seconds (fewer is better; SE +/- 0.16, N = 3)
Compiled with: (CC) gcc options: -O2 -pedantic -fvisibility=hidden

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
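The "average inference time" these graphs report can be reproduced in spirit by timing a single inference call repeatedly and averaging. A minimal sketch using `timeit`, with a placeholder workload standing in for the actual model invocation:

```python
import timeit

def fake_invoke():
    # Placeholder for a TFLite interpreter.invoke() call; just burns CPU.
    return sum(i * i for i in range(1000))

runs = 100
total = timeit.timeit(fake_invoke, number=runs)
avg_us = total / runs * 1e6  # average per-inference time in microseconds
print(f"average inference time: {avg_us:.1f} us over {runs} runs")
```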

TensorFlow Lite 2022-05-18 - Initial test 1 No water cool (Microseconds, fewer is better; N = 3)

- Model: SqueezeNet: 1716.04 (SE +/- 12.41)
- Model: Inception V4: 21139.4 (SE +/- 19.27)
- Model: NASNet Mobile: 10099.3 (SE +/- 20.39)
- Model: Mobilenet Float: 1214.11 (SE +/- 1.58)
- Model: Mobilenet Quant: 1861.53 (SE +/- 11.38)
- Model: Inception ResNet V2: 21857.0 (SE +/- 104.72)

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is catered to CPU-based testing. Learn more via the OpenBenchmarking.org test page.
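The batches/sec figure is simply completed batches divided by elapsed wall time. A stdlib sketch of that measurement, with a dummy workload in place of a real model forward pass:

```python
import time

def measure_batches_per_sec(batch_fn, batches=50):
    """Run batch_fn repeatedly and report throughput in batches/sec."""
    start = time.perf_counter()
    for _ in range(batches):
        batch_fn()
    elapsed = time.perf_counter() - start
    return batches / elapsed

# Dummy workload standing in for, e.g., a ResNet-50 forward pass.
throughput = measure_batches_per_sec(lambda: sum(range(10000)))
print(f"{throughput:.2f} batches/sec")
```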

PyTorch 2.1 - Device: CPU - Initial test 1 No water cool (batches/sec, more is better; N = 3)

- Batch Size: 1 - Model: ResNet-50: 64.81 (SE +/- 0.45, MIN: 58.56 / MAX: 67.34)
- Batch Size: 1 - Model: ResNet-152: 25.64 (SE +/- 0.33, MIN: 23.55 / MAX: 26.92)
- Batch Size: 16 - Model: ResNet-50: 44.08 (SE +/- 0.15, MIN: 38.79 / MAX: 45.8)
- Batch Size: 32 - Model: ResNet-50: 44.08 (SE +/- 0.26, MIN: 41.27 / MAX: 45.58)
- Batch Size: 64 - Model: ResNet-50: 43.35 (SE +/- 0.35, MIN: 38.96 / MAX: 45.4)
- Batch Size: 16 - Model: ResNet-152: 17.66 (SE +/- 0.11, MIN: 17.14 / MAX: 18.3)
- Batch Size: 256 - Model: ResNet-50: 43.49 (SE +/- 0.47, MIN: 37.04 / MAX: 45.45)
- Batch Size: 32 - Model: ResNet-152: 17.64 (SE +/- 0.10, MIN: 14.95 / MAX: 18.12)
- Batch Size: 512 - Model: ResNet-50: 42.91 (SE +/- 0.37, MIN: 37.63 / MAX: 44.8)
- Batch Size: 64 - Model: ResNet-152: 17.66 (SE +/- 0.23, MIN: 14.14 / MAX: 18.38)
- Batch Size: 256 - Model: ResNet-152: 17.69 (SE +/- 0.04, MIN: 14.61 / MAX: 18.11)
- Batch Size: 512 - Model: ResNet-152: 17.59 (SE +/- 0.10, MIN: 16.92 / MAX: 18.27)
- Batch Size: 1 - Model: Efficientnet_v2_l: 14.14 (SE +/- 0.09, MIN: 12.35 / MAX: 14.45)
- Batch Size: 16 - Model: Efficientnet_v2_l: 10.46 (SE +/- 0.07, MIN: 8.62 / MAX: 11.22)
- Batch Size: 32 - Model: Efficientnet_v2_l: 10.59 (SE +/- 0.06, MIN: 8.62 / MAX: 11.44)
- Batch Size: 64 - Model: Efficientnet_v2_l: 10.58 (SE +/- 0.05, MIN: 8.79 / MAX: 11.4)
- Batch Size: 256 - Model: Efficientnet_v2_l: 10.63 (SE +/- 0.08, MIN: 8.45 / MAX: 11.32)
- Batch Size: 512 - Model: Efficientnet_v2_l: 10.44 (SE +/- 0.10, MIN: 8.67 / MAX: 11.33)

PyTorch 2.1 - Device: NVIDIA CUDA GPU - Initial test 1 No water cool (batches/sec, more is better; N = 3)

- Batch Size: 1 - Model: ResNet-50: 387.06 (SE +/- 3.94, MIN: 305.77 / MAX: 401.83)
- Batch Size: 1 - Model: ResNet-152: 137.39 (SE +/- 0.86, MIN: 123.44 / MAX: 140.4)
- Batch Size: 16 - Model: ResNet-50: 380.67 (SE +/- 1.52, MIN: 329.56 / MAX: 390.17)
- Batch Size: 32 - Model: ResNet-50: 380.74 (SE +/- 3.53, MIN: 326.25 / MAX: 392.43)
- Batch Size: 64 - Model: ResNet-50: 379.98 (SE +/- 0.25, MIN: 282.37 / MAX: 389.84)
- Batch Size: 16 - Model: ResNet-152: 138.78 (SE +/- 0.16, MIN: 119.06 / MAX: 141.64)
- Batch Size: 256 - Model: ResNet-50: 380.34 (SE +/- 0.44, MIN: 332.41 / MAX: 387.86)
- Batch Size: 32 - Model: ResNet-152: 139.41 (SE +/- 0.75, MIN: 120.33 / MAX: 143.02)
- Batch Size: 512 - Model: ResNet-50: 383.56 (SE +/- 3.59, MIN: 325.34 / MAX: 393.54)
- Batch Size: 64 - Model: ResNet-152: 138.72 (SE +/- 0.85, MIN: 120.45 / MAX: 142.01)
- Batch Size: 256 - Model: ResNet-152: 138.62 (SE +/- 0.83, MIN: 121.16 / MAX: 142.38)
- Batch Size: 512 - Model: ResNet-152: 140.41 (SE +/- 1.67, MIN: 122.18 / MAX: 145.83)
- Batch Size: 1 - Model: Efficientnet_v2_l: 71.98 (SE +/- 0.41, MIN: 60.96 / MAX: 73.57)
- Batch Size: 16 - Model: Efficientnet_v2_l: 70.63 (SE +/- 0.85, MIN: 59.58 / MAX: 73.6)
- Batch Size: 32 - Model: Efficientnet_v2_l: 70.52 (SE +/- 0.19, MIN: 60 / MAX: 71.86)
- Batch Size: 64 - Model: Efficientnet_v2_l: 69.80 (SE +/- 0.47, MIN: 59.21 / MAX: 72.04)
- Batch Size: 256 - Model: Efficientnet_v2_l: 70.81 (SE +/- 0.55, MIN: 59.4 / MAX: 72.65)
- Batch Size: 512 - Model: Efficientnet_v2_l: 69.97 (SE +/- 0.08, MIN: 59.27 / MAX: 71.36)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
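images/sec scales per-batch throughput by the batch size, so the same per-image cost produces very different numbers at batch 1 versus batch 64. A sketch of the conversion from per-batch latency; the latencies below are hypothetical, not values from this run:

```python
def images_per_sec(batch_size, seconds_per_batch):
    """Convert a per-batch latency into images/sec throughput."""
    if seconds_per_batch <= 0:
        raise ValueError("latency must be positive")
    return batch_size / seconds_per_batch

# Hypothetical latencies for illustration only.
for bs, latency in [(1, 0.08), (64, 3.67)]:
    print(f"batch {bs}: {images_per_sec(bs, latency):.2f} images/sec")
```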

TensorFlow 2.12 (images/sec, More Is Better) - Initial test 1 No water cool - N = 3 for all completed results

Device  Batch Size  Model      images/sec  SE
CPU     1           VGG-16     4.74        0.01
GPU     1           VGG-16     1.46        0.00
CPU     1           AlexNet    13.00       0.01
CPU     16          VGG-16     16.09       0.12
CPU     32          VGG-16     16.89       0.03
CPU     64          VGG-16     17.44       0.07
GPU     1           AlexNet    12.58       0.01
GPU     16          VGG-16     1.70        0.00
GPU     32          VGG-16     1.72        0.01
GPU     64          VGG-16     1.73        0.01
CPU     16          AlexNet    148.72      0.24
CPU     256         VGG-16     18.12       0.01
CPU     32          AlexNet    224.56      0.23
CPU     512         VGG-16     failed (see note)
CPU     64          AlexNet    305.81      1.28
GPU     16          AlexNet    30.67       0.21
GPU     256         VGG-16     1.77        0.00
GPU     32          AlexNet    33.39       0.12
GPU     512         VGG-16     failed (see note)
GPU     64          AlexNet    34.84       0.12
CPU     1           GoogLeNet  47.21       0.14
CPU     1           ResNet-50  12.70       0.02
CPU     256         AlexNet    388.40      3.29
CPU     512         AlexNet    392.16      2.15
GPU     1           GoogLeNet  12.36       0.04
GPU     1           ResNet-50  4.25        0.02
GPU     256         AlexNet    35.82       0.10
GPU     512         AlexNet    35.93       0.09
CPU     16          GoogLeNet  125.83      0.29
CPU     16          ResNet-50  36.43       0.07
CPU     32          GoogLeNet  122.39      0.15
CPU     32          ResNet-50  36.74       0.05
CPU     64          GoogLeNet  119.04      0.13
CPU     64          ResNet-50  36.36       0.02
GPU     16          GoogLeNet  15.10       0.05
GPU     16          ResNet-50  5.42        0.01
GPU     32          GoogLeNet  15.45       0.03
GPU     32          ResNet-50  5.49        0.04
GPU     64          GoogLeNet  15.61       0.04
GPU     64          ResNet-50  5.51        0.01
CPU     256         GoogLeNet  116.33      0.41
CPU     256         ResNet-50  36.15       0.00
CPU     512         GoogLeNet  115.70      0.08
CPU     512         ResNet-50  failed (see note)
GPU     256         GoogLeNet  15.76       0.04
GPU     256         ResNet-50  5.56        0.02
GPU     512         GoogLeNet  15.90       0.02
GPU     512         ResNet-50  failed (see note)

Note: each failed configuration quit with a non-zero exit status (E: Fatal Python error: Segmentation fault).

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Initial test 1 No water cool (items/sec: More Is Better; ms/batch: Fewer Is Better; N = 3 for all results, SE in parentheses)

Model                                                         Scenario                   items/sec        ms/batch
NLP Document Classification, oBERT base uncased on IMDB       Asynchronous Multi-Stream  20.09 (0.12)     397.58 (2.37)
NLP Document Classification, oBERT base uncased on IMDB       Synchronous Single-Stream  17.36 (0.04)     57.60 (0.15)
NLP Text Classification, BERT base uncased SST2, Sparse INT8  Asynchronous Multi-Stream  890.19 (3.22)    8.9714 (0.0324)
NLP Text Classification, BERT base uncased SST2, Sparse INT8  Synchronous Single-Stream  278.31 (0.11)    3.5898 (0.0014)
ResNet-50, Baseline                                           Asynchronous Multi-Stream  264.38 (0.96)    30.24 (0.11)
ResNet-50, Baseline                                           Synchronous Single-Stream  173.38 (0.50)    5.7601 (0.0168)
ResNet-50, Sparse INT8                                        Asynchronous Multi-Stream  2031.88 (10.82)  3.9240 (0.0211)
ResNet-50, Sparse INT8                                        Synchronous Single-Stream  1214.24 (3.33)   0.8214 (0.0023)
CV Detection, YOLOv5s COCO                                    Asynchronous Multi-Stream  110.99 (0.07)    72.04 (0.05)
CV Detection, YOLOv5s COCO                                    Synchronous Single-Stream  90.36 (0.16)     11.06 (0.02)
BERT-Large, NLP Question Answering                            Asynchronous Multi-Stream  26.33 (0.04)     303.33 (0.41)
BERT-Large, NLP Question Answering                            Synchronous Single-Stream  18.51 (0.04)     54.00 (0.12)
CV Classification, ResNet-50 ImageNet                         Asynchronous Multi-Stream  265.20 (1.19)    30.15 (0.14)
CV Classification, ResNet-50 ImageNet                         Synchronous Single-Stream  173.90 (0.56)    5.7436 (0.0188)
CV Detection, YOLOv5s COCO, Sparse INT8                       Asynchronous Multi-Stream  115.11 (0.61)    69.44 (0.38)
CV Detection, YOLOv5s COCO, Sparse INT8                       Synchronous Single-Stream  92.51 (0.10)     10.80 (0.01)
NLP Text Classification, DistilBERT mnli                      Asynchronous Multi-Stream  182.62 (0.39)    43.78 (0.09)
NLP Text Classification, DistilBERT mnli                      Synchronous Single-Stream  96.47 (0.48)     10.36 (0.05)
CV Segmentation, 90% Pruned YOLACT Pruned                     Asynchronous Multi-Stream  33.81 (0.05)     236.31 (0.26)
CV Segmentation, 90% Pruned YOLACT Pruned                     Synchronous Single-Stream  27.53 (0.01)     36.31 (0.01)
BERT-Large, NLP Question Answering, Sparse INT8               Asynchronous Multi-Stream  417.17 (0.37)    19.16 (0.02)
BERT-Large, NLP Question Answering, Sparse INT8               Synchronous Single-Stream  98.08 (0.11)     10.19 (0.01)
NLP Token Classification, BERT base uncased conll2003         Asynchronous Multi-Stream  19.91 (0.08)     400.39 (1.37)
NLP Token Classification, BERT base uncased conll2003         Synchronous Single-Stream  17.29 (0.03)     57.83 (0.09)
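In the Synchronous Single-Stream scenario, the items/sec and ms/batch figures are two views of the same measurement: with batch size 1, throughput is roughly 1000 divided by the per-batch latency in milliseconds. A quick cross-check against the ResNet-50, Baseline single-stream numbers above (5.7601 ms/batch alongside 173.38 items/sec):

```python
def sync_throughput(ms_per_batch, batch_size=1):
    """Items/sec implied by a synchronous per-batch latency in milliseconds."""
    return batch_size * 1000.0 / ms_per_batch

# ResNet-50, Baseline, Synchronous Single-Stream from this result file.
implied = sync_throughput(5.7601)
print(f"{implied:.2f} items/sec")  # → 173.61 items/sec, close to the reported 173.38
```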

spaCy

spaCy is an open-source Python library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 (tokens/sec, More Is Better) - Initial test 1 No water cool

Model            Result  SE      N
en_core_web_lg   18557   175.73  3
en_core_web_trf  2415    29.81   3
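The tokens/sec metric is obtained by timing a pipeline over a corpus. A framework-free sketch of the computation follows; the whitespace tokenizer is a stand-in, since spaCy's models do far more work per token, which is why the transformer pipeline (en_core_web_trf) is roughly an order of magnitude slower than en_core_web_lg:

```python
import time

def tokens_per_sec(texts, tokenize):
    """Time a tokenizer over a corpus and report tokens processed per second."""
    start = time.perf_counter()
    n_tokens = sum(len(tokenize(t)) for t in texts)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Stand-in corpus and whitespace tokenizer, purely to illustrate the metric.
corpus = ["the quick brown fox jumps over the lazy dog"] * 1000
rate = tokens_per_sec(corpus, str.split)
print(f"{rate:,.0f} tokens/sec")
```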

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Every Caffe configuration quit with a non-zero exit status under Initial test 1 No water cool (E: ./caffe: 3: ./tools/caffe: not found):

Model: AlexNet - Acceleration: CPU - Iterations: 100
Model: AlexNet - Acceleration: CPU - Iterations: 200
Model: AlexNet - Acceleration: CPU - Iterations: 1000
Model: GoogleNet - Acceleration: CPU - Iterations: 100
Model: GoogleNet - Acceleration: CPU - Iterations: 200
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
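The identical error across all six Caffe configurations suggests the build itself failed, leaving the wrapper script with no caffe binary to execute. A hypothetical sanity check along these lines (the path is taken from the error message above):

```shell
# Check whether the caffe binary the wrapper expects actually got built.
CAFFE_BIN=./tools/caffe
if [ -x "$CAFFE_BIN" ]; then
  echo "caffe binary present"
else
  echo "caffe binary missing or not executable; the build likely failed"
fi
```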

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 (ms, Fewer Is Better) - Initial test 1 No water cool

Model             Result  SE     N  Min    Max
nasnet            11.30   0.10   3  10.49  27.45
mobilenetV3       1.638   0.026  3  1.47   4.85
squeezenetv1.1    2.542   0.046  3  2.30   9.08
resnet-v2-50      12.12   0.04   3  11.31  29.79
SqueezeNetV1.0    4.141   0.111  3  3.72   10.49
MobileNetV2_224   3.410   0.049  3  3.18   12.53
mobilenet-v1-1.0  2.456   0.038  3  2.27   6.46
inception-v3      23.42   0.58   3  20.69  54.30

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 (ms, Fewer Is Better) - Initial test 1 No water cool - N = 3 for all results

Target: CPU
Model                                              Result  SE    Min    Max
mobilenet                                          9.80    0.08  8.81   23.96
mobilenet-v2 (target CPU-v2-v2)                    3.65    0.02  3.37   7.36
mobilenet-v3 (target CPU-v3-v3)                    3.69    0.03  3.42   8.19
shufflenet-v2                                      3.93    0.01  3.65   7.51
mnasnet                                            3.46    0.01  3.18   18.79
efficientnet-b0                                    4.50    0.02  4.16   8.33
blazeface                                          1.60    0.01  1.48   6.71
googlenet                                          9.56    0.02  8.74   18.10
vgg16                                              32.60   0.24  30.13  77.85
resnet18                                           6.62    0.03  5.93   11.83
alexnet                                            5.52    0.01  5.06   10.11
resnet50                                           13.48   0.34  11.93  33.06
mobilenetv2-yolov3 (target CPUv2-yolov3v2-yolov3)  9.80    0.08  8.81   23.96
yolov4-tiny                                        16.28   0.33  14.44  34.48
squeezenet_ssd                                     8.49    0.16  7.41   14.78
regnety_400m                                       9.87    0.06  9.16   15.63
vision_transformer                                 37.92   0.43  34.53  51.03
FastestDet                                         4.69    0.11  4.25   7.77

Target: Vulkan GPU
Model                                                     Result  SE    Min    Max
mobilenet                                                 9.36    0.03  8.66   15.56
mobilenet-v2 (target Vulkan GPU-v2-v2)                    3.69    0.03  3.36   25.98
mobilenet-v3 (target Vulkan GPU-v3-v3)                    3.72    0.03  3.43   15.14
shufflenet-v2                                             3.90    0.02  3.65   8.48
mnasnet                                                   3.45    0.00  3.20   7.90
efficientnet-b0                                           4.57    0.06  4.16   9.03
blazeface                                                 1.62    0.03  1.47   14.36
googlenet                                                 9.69    0.08  8.75   15.08
vgg16                                                     32.31   0.15  29.89  88.74
resnet18                                                  6.65    0.03  5.90   23.01
alexnet                                                   5.67    0.16  5.06   11.09
resnet50                                                  13.16   0.06  11.90  19.89
mobilenetv2-yolov3 (target Vulkan GPUv2-yolov3v2-yolov3)  9.36    0.03  8.66   15.56
yolov4-tiny                                               15.84   0.04  14.55  30.60
squeezenet_ssd                                            8.38    0.04  7.63   22.04
regnety_400m                                              9.83    0.05  9.06   26.47
vision_transformer                                        38.12   0.62  34.66  105.14
FastestDet                                                4.30    0.11  3.94   7.54

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - ms, fewer is better ("Initial test 1 No water cool"; N = 3)

Model              Result       SE        MIN / MAX
DenseNet          2005.61   +/- 7.45    1929.36 / 2112.89
MobileNet v2       183.28   +/- 0.72     178.67 / 193.66
SqueezeNet v2       42.22   +/- 0.13      41.67 / 45.74
SqueezeNet v1.1    179.68   +/- 1.36     176.67 / 181.68

All TNN results built with: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

Both PlaidML configurations failed to run:

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.10/collections/__init__.py)
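The ImportError above is a Python 3.10 incompatibility rather than a PlaidML bug on this hardware: the abstract base classes (Iterable, Mapping, etc.) were removed from the top-level collections namespace in 3.10 after being deprecated since 3.3, and older PlaidML code still imports them from the old location. A minimal sketch of a compatibility shim (illustrative only, not an official PlaidML patch — the real fix is editing the offending imports to use collections.abc):

```python
# Python 3.10 removed ABC aliases such as Iterable from the top-level
# `collections` namespace; legacy code doing
# `from collections import Iterable` now raises ImportError.
import collections
import collections.abc

# Re-attach the ABCs so unpatched legacy imports keep working.
for _name in ("Iterable", "Mapping", "MutableMapping", "Sequence"):
    if not hasattr(collections, _name):
        setattr(collections, _name, getattr(collections.abc, _name))

# The import that failed in the log now succeeds:
from collections import Iterable
print(isinstance([1, 2, 3], Iterable))  # True
```

Loading such a shim before PlaidML imports would let the two failed configurations above run on Python 3.10.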

OpenVINO

OpenVINO 2024.0 - Device: CPU ("Initial test 1 No water cool"). Throughput in FPS (more is better), latency in ms (fewer is better); N = 3 unless noted.

Model                                           FPS        SE        ms       SE      MIN / MAX (ms)
Face Detection FP16 (N = 7)                      12.75     0.12    625.66    5.40    442.72 / 668.18
Person Detection FP16                            76.65     0.34    104.27    0.46     66.39 / 133.39
Person Detection FP32                            77.54     0.77    103.09    1.01     50.34 / 135.14
Vehicle Detection FP16                          618.41     1.73     12.91    0.04      5.88 / 26.23
Face Detection FP16-INT8                         24.71     0.06    323.12    0.74    174.68 / 361.95
Face Detection Retail FP16                     3062.63    11.24      2.53    0.01      1.29 / 10.52
Road Segmentation ADAS FP16                     272.36     1.80     29.32    0.19     13.59 / 44.55
Vehicle Detection FP16-INT8                    1538.27     4.10      5.18    0.02      3.09 / 14.58
Weld Porosity Detection FP16                   1266.85     3.30     12.61    0.03      6.44 / 25.71
Face Detection Retail FP16-INT8                4335.66    18.97      3.61    0.01      1.95 / 11.37
Road Segmentation ADAS FP16-INT8                448.28     1.04     17.81    0.04      8.88 / 27.30
Machine Translation EN To DE FP16               121.96     0.16     65.52    0.08     34.28 / 85.73
Weld Porosity Detection FP16-INT8              2470.86     8.85      6.44    0.02      3.29 / 17.17
Person Vehicle Bike Detection FP16             1442.07     1.93      5.53    0.01      3.88 / 14.27
Noise Suppression Poconet-Like FP16            1386.93     1.13     11.33    0.01      6.71 / 21.43
Handwritten English Recognition FP16            667.29     1.79     23.94    0.06     14.82 / 35.60
Person Re-Identification Retail FP16           1785.48     2.90      4.46    0.01      2.58 / 13.94
Age Gender Recognition Retail 0013 FP16       32402.42    75.38      0.45    0.00      0.21 / 7.28
Handwritten English Recognition FP16-INT8       729.99     2.28     21.88    0.07     16.69 / 40.53
Age Gender Recognition Retail 0013 FP16-INT8  46025.94    95.25      0.31    0.00      0.16 / 8.12

All OpenVINO results built with: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time use. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Seconds, fewer is better ("Initial test 1 No water cool")

Detector                           Result      SE         N
KNN CAD                           105.00    +/- 0.86      9
Relative Entropy                    8.281   +/- 0.101     4
Windowed Gaussian                   4.984   +/- 0.046    15
Earthgecko Skyline                 55.57    +/- 0.29      3
Bayesian Changepoint               13.21    +/- 0.05      3
Contextual Anomaly Detector OSE    25.40    +/- 0.23      3
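The detectors above differ widely in cost (the KNN CAD run takes roughly 20x as long as the windowed Gaussian). As an illustration of the kind of streaming computation NAB times, here is a toy windowed-Gaussian-style scorer — a simplified sketch, not NAB's actual implementation:

```python
# Toy streaming anomaly scorer: each point is scored against a
# Gaussian fit to the previous `window` points (crude tail-probability
# proxy; NAB's real detectors are more elaborate).
from collections import deque
import math

def windowed_gaussian_scores(stream, window=32):
    buf = deque(maxlen=window)
    scores = []
    for x in stream:
        if len(buf) >= 2:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var) or 1e-9  # avoid division by zero
            z = abs(x - mean) / std
            scores.append(1.0 - math.exp(-0.5 * z * z))
        else:
            scores.append(0.0)  # not enough history yet
        buf.append(x)
    return scores

scores = windowed_gaussian_scores([1.0] * 50 + [10.0])
# the injected spike at the end scores ~1.0; the flat run scores ~0.0
```

NAB runs detectors like this over its 50+ labeled time series, which is why even simple statistical detectors take seconds end to end.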

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

Every ONNX Runtime configuration failed with the same error — the benchmark binary was never built:

Model: GPT-2 - Device: CPU - Executor: Parallel / Standard
Model: yolov4 - Device: CPU - Executor: Parallel / Standard
Model: T5 Encoder - Device: CPU - Executor: Parallel / Standard
Model: bertsquad-12 - Device: CPU - Executor: Parallel / Standard
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel / Standard
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel / Standard
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel / Standard
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel / Standard
Model: super-resolution-10 - Device: CPU - Executor: Parallel / Standard
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel / Standard

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
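The missing-binary error means the ONNX Runtime build step failed or was skipped before the perf runs started. A sketch pre-flight check — the path is taken from the error message above; the commented rebuild command uses onnxruntime's upstream build script and may need flags adjusted for this system:

```shell
#!/bin/sh
# Verify the benchmark binary the PTS wrapper expects actually exists
# before re-running the profile.
BIN=./onnxruntime/build/Linux/Release/onnxruntime_perf_test
if [ -x "$BIN" ]; then
    status="present"
else
    status="missing"
    # Rebuild sketch (run from the onnxruntime checkout):
    #   ./build.sh --config Release --parallel
fi
echo "onnxruntime_perf_test: $status"
```

Re-running the failed install step (or the build script) and confirming the binary exists would let all twenty configurations above produce results.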

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Score, more is better ("Initial test 1 No water cool")

Device Inference Score: 2900
Device Training Score: 3573
Device AI Score: 6473

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Seconds, fewer is better ("Initial test 1 No water cool"; N = 3)

Benchmark                       Result      SE
scikit_ica                       30.12   +/- 0.05
scikit_qda                       34.07   +/- 0.25
scikit_svm                       15.12   +/- 0.05
scikit_linearridgeregression      1.03   +/- 0.00

Scikit-Learn

Scikit-learn is a BSD-licensed Python module for machine learning built on NumPy and SciPy. Learn more via the OpenBenchmarking.org test page.

Every Scikit-Learn benchmark failed with the same BLAS/LAPACK linkage error:

Benchmarks: GLM, SAGA, Tree, Lasso, Glmnet, Sparsify, Plot Ward, MNIST Dataset, Plot Neighbors, SGD Regression, SGDOneClassSVM, Plot Lasso Path, Isolation Forest, Plot Fast KMeans, Text Vectorizers, Plot Hierarchical, Plot OMP vs. LARS, Feature Expansions, LocalOutlierFactor, TSNE MNIST Dataset, Isotonic / Logistic, Plot Incremental PCA, Hist Gradient Boosting, Plot Parallel Pairwise, Isotonic / Pathological, RCV1 Logreg Convergencet, Sample Without Replacement, Covertype Dataset Benchmark, Hist Gradient Boosting Adult, Isotonic / Perturbed Logarithm, Hist Gradient Boosting Threading, Plot Singular Value Decomposition, Hist Gradient Boosting Higgs Boson, 20 Newsgroups / Logistic Regression, Plot Polynomial Kernel Approximation, Plot Non-Negative Matrix Factorization, Hist Gradient Boosting Categorical Only, Kernel PCA Solvers / Time vs. N Samples, Kernel PCA Solvers / Time vs. N Components, Sparse Random Projections / 100 Iterations

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas

(For Plot Singular Value Decomposition the same undefined-symbol error came from /lib/x86_64-linux-gnu/libblas.so.3 instead of liblapack.so.3.)
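An `undefined symbol: gotoblas` import error usually indicates mismatched BLAS/LAPACK providers — e.g. a liblapack.so.3 built against OpenBLAS (whose internal symbols include gotoblas) resolved against a different libblas.so.3. A diagnostic sketch for Debian/Ubuntu-family systems such as this Pop!_OS install — the alternative names below are the distro defaults and may differ elsewhere:

```shell
#!/bin/sh
# Inspect which BLAS/LAPACK providers the alternatives system has
# selected; mismatched pairs are the usual cause of the gotoblas
# symbol error above.
update-alternatives --display libblas.so.3-x86_64-linux-gnu 2>/dev/null || true
update-alternatives --display liblapack.so.3-x86_64-linux-gnu 2>/dev/null || true
# Fix sketch (requires root): point BOTH at the same OpenBLAS pair.
#   update-alternatives --set libblas.so.3-x86_64-linux-gnu \
#       /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
#   update-alternatives --set liblapack.so.3-x86_64-linux-gnu \
#       /usr/lib/x86_64-linux-gnu/openblas-pthread/liblapack.so.3
checked="yes"
echo "alternatives inspected: $checked"
```

After aligning the providers, re-running a single benchmark (e.g. the GLM profile) would confirm whether the whole Scikit-Learn suite is unblocked.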

Whisper.cpp

Whisper.cpp is a port of OpenAI's Whisper model to C/C++. Developed by Georgi Gerganov, it performs speech recognition — transcribing WAV audio files to text — and supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.

Whisper.cpp 1.4 - Input: 2016 State of the Union - Seconds, fewer is better ("Initial test 1 No water cool"; N = 15)

Model             Result        SE
ggml-base.en      0.15350   +/- 0.00750
ggml-small.en     0.34881   +/- 0.02582
ggml-medium.en    0.86615   +/- 0.08461

All Whisper.cpp results built with: (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

Llama.cpp

Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.

Model: llama-2-7b.Q4_0.gguf

Initial test 1 No water cool: The test quit with a non-zero exit status. E: main: error: unable to load model

Model: llama-2-13b.Q4_0.gguf

Initial test 1 No water cool: The test quit with a non-zero exit status. E: main: error: unable to load model

Model: llama-2-70b-chat.Q5_0.gguf

Initial test 1 No water cool: The test quit with a non-zero exit status. E: main: error: unable to load model

Llamafile

Mozilla's Llamafile distributes and runs large language models (LLMs) as a single self-contained file, aiming to make open-source LLMs more accessible to developers and users. It supports a variety of models, CPUs, GPUs, and other options. Learn more via the OpenBenchmarking.org test page.
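The "No such file or directory" failures below indicate the `.llamafile` downloads never landed next to the test's run scripts. Running a llamafile manually is a two-step process once the file is present (flags follow llama.cpp's CLI conventions; check `--help` on your build, as this is a sketch rather than the test profile's exact invocation):

```shell
# A llamafile bundles weights and runtime in one executable:
# mark it executable, then run it with llama.cpp-style arguments
chmod +x ./mistral-7b-instruct-v0.2.Q8_0.llamafile
./mistral-7b-instruct-v0.2.Q8_0.llamafile -p "Hello" -n 32
```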

Test: llava-v1.5-7b-q4 - Acceleration: CPU

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.5-7b-q4.llamafile: No such file or directory

Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ./run-mistral: line 2: ./mistral-7b-instruct-v0.2.Q8_0.llamafile: No such file or directory

Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU

Initial test 1 No water cool: The test quit with a non-zero exit status. E: ./run-wizardcoder: line 2: ./wizardcoder-python-34b-v1.0.Q6_K.llamafile: No such file or directory

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
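The DNN figure below aggregates OpenCV's gtest-based performance binaries. A hedged sketch of invoking the DNN suite directly from an OpenCV build tree (binary location varies by build configuration):

```shell
# From an OpenCV build directory: run the DNN performance suite,
# filtering to a subset of cases and raising the minimum sample count
./bin/opencv_perf_dnn --gtest_filter='*Conv*' --perf_min_samples=10
```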

OpenCV 4.7 - Test: DNN - Deep Neural Network (OpenBenchmarking.org, ms, fewer is better)
Initial test 1 No water cool: 30277 (SE +/- 360.44, N = 15)
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

261 Results Shown

SHOC Scalable HeterOgeneous Computing:
  OpenCL - S3D
  OpenCL - Triad
  OpenCL - FFT SP
  OpenCL - MD5 Hash
  OpenCL - Reduction
  OpenCL - GEMM SGEMM_N
  OpenCL - Max SP Flops
  OpenCL - Bus Speed Download
  OpenCL - Bus Speed Readback
  OpenCL - Texture Read Bandwidth
oneDNN:
  IP Shapes 1D - CPU
  IP Shapes 3D - CPU
  Convolution Batch Shapes Auto - CPU
  Deconvolution Batch shapes_1d - CPU
  Deconvolution Batch shapes_3d - CPU
  Recurrent Neural Network Training - CPU
  Recurrent Neural Network Inference - CPU
Numpy Benchmark
DeepSpeech
RNNoise
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
PyTorch:
  CPU - 1 - ResNet-50
  CPU - 1 - ResNet-152
  CPU - 16 - ResNet-50
  CPU - 32 - ResNet-50
  CPU - 64 - ResNet-50
  CPU - 16 - ResNet-152
  CPU - 256 - ResNet-50
  CPU - 32 - ResNet-152
  CPU - 512 - ResNet-50
  CPU - 64 - ResNet-152
  CPU - 256 - ResNet-152
  CPU - 512 - ResNet-152
  CPU - 1 - Efficientnet_v2_l
  CPU - 16 - Efficientnet_v2_l
  CPU - 32 - Efficientnet_v2_l
  CPU - 64 - Efficientnet_v2_l
  CPU - 256 - Efficientnet_v2_l
  CPU - 512 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 1 - ResNet-50
  NVIDIA CUDA GPU - 1 - ResNet-152
  NVIDIA CUDA GPU - 16 - ResNet-50
  NVIDIA CUDA GPU - 32 - ResNet-50
  NVIDIA CUDA GPU - 64 - ResNet-50
  NVIDIA CUDA GPU - 16 - ResNet-152
  NVIDIA CUDA GPU - 256 - ResNet-50
  NVIDIA CUDA GPU - 32 - ResNet-152
  NVIDIA CUDA GPU - 512 - ResNet-50
  NVIDIA CUDA GPU - 64 - ResNet-152
  NVIDIA CUDA GPU - 256 - ResNet-152
  NVIDIA CUDA GPU - 512 - ResNet-152
  NVIDIA CUDA GPU - 1 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 16 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 32 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 64 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 256 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 512 - Efficientnet_v2_l
TensorFlow:
  CPU - 1 - VGG-16
  GPU - 1 - VGG-16
  CPU - 1 - AlexNet
  CPU - 16 - VGG-16
  CPU - 32 - VGG-16
  CPU - 64 - VGG-16
  GPU - 1 - AlexNet
  GPU - 16 - VGG-16
  GPU - 32 - VGG-16
  GPU - 64 - VGG-16
  CPU - 16 - AlexNet
  CPU - 256 - VGG-16
  CPU - 32 - AlexNet
  CPU - 64 - AlexNet
  GPU - 16 - AlexNet
  GPU - 256 - VGG-16
  GPU - 32 - AlexNet
  GPU - 64 - AlexNet
  CPU - 1 - GoogLeNet
  CPU - 1 - ResNet-50
  CPU - 256 - AlexNet
  CPU - 512 - AlexNet
  GPU - 1 - GoogLeNet
  GPU - 1 - ResNet-50
  GPU - 256 - AlexNet
  GPU - 512 - AlexNet
  CPU - 16 - GoogLeNet
  CPU - 16 - ResNet-50
  CPU - 32 - GoogLeNet
  CPU - 32 - ResNet-50
  CPU - 64 - GoogLeNet
  CPU - 64 - ResNet-50
  GPU - 16 - GoogLeNet
  GPU - 16 - ResNet-50
  GPU - 32 - GoogLeNet
  GPU - 32 - ResNet-50
  GPU - 64 - GoogLeNet
  GPU - 64 - ResNet-50
  CPU - 256 - GoogLeNet
  CPU - 256 - ResNet-50
  CPU - 512 - GoogLeNet
  GPU - 256 - GoogLeNet
  GPU - 256 - ResNet-50
  GPU - 512 - GoogLeNet
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
spaCy:
  en_core_web_lg
  en_core_web_trf
Mobile Neural Network:
  nasnet
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU-v2-yolov3 - mobilenetv2-yolov3
  CPU - yolov4-tiny
  CPU - squeezenet_ssd
  CPU - regnety_400m
  CPU - vision_transformer
  CPU - FastestDet
  Vulkan GPU - mobilenet
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU - shufflenet-v2
  Vulkan GPU - mnasnet
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - blazeface
  Vulkan GPU - googlenet
  Vulkan GPU - vgg16
  Vulkan GPU - resnet18
  Vulkan GPU - alexnet
  Vulkan GPU - resnet50
  Vulkan GPU-v2-yolov3 - mobilenetv2-yolov3
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - regnety_400m
  Vulkan GPU - vision_transformer
  Vulkan GPU - FastestDet
TNN:
  CPU - DenseNet
  CPU - MobileNet v2
  CPU - SqueezeNet v2
  CPU - SqueezeNet v1.1
OpenVINO:
  Face Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP32 - CPU:
    FPS
    ms
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
  Face Detection Retail FP16 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
  Face Detection Retail FP16-INT8 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16-INT8 - CPU:
    FPS
    ms
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
  Noise Suppression Poconet-Like FP16 - CPU:
    FPS
    ms
  Handwritten English Recognition FP16 - CPU:
    FPS
    ms
  Person Re-Identification Retail FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16 - CPU:
    FPS
    ms
  Handwritten English Recognition FP16-INT8 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms
Numenta Anomaly Benchmark:
  KNN CAD
  Relative Entropy
  Windowed Gaussian
  Earthgecko Skyline
  Bayesian Changepoint
  Contextual Anomaly Detector OSE
AI Benchmark Alpha:
  Device Inference Score
  Device Training Score
  Device AI Score
Mlpack Benchmark:
  scikit_ica
  scikit_qda
  scikit_svm
  scikit_linearridgeregression
Whisper.cpp:
  ggml-base.en - 2016 State of the Union
  ggml-small.en - 2016 State of the Union
  ggml-medium.en - 2016 State of the Union
OpenCV