sk3tchy-creator-tst

VMware testing on Debian 11 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2303010-NE-SK3TCHYCR76
Result Identifier: centos-vm
Date: February 28 2023
Test Run Duration: 4 Days, 1 Hour, 11 Minutes


System Details

Processor: 2 x Intel Core i9-9900K (4 Cores)
Motherboard: Intel 440BX (6.00 BIOS)
Chipset: Intel 440BX/ZX/DX
Memory: 1 x 4 GB DRAM
Disk: 107GB Virtual disk
Graphics: VMware SVGA II
Network: VMware VMXNET3
OS: Debian 11
Kernel: 5.10.0-21-amd64 (x86_64)
Vulkan: 1.0.2
Compiler: GCC 10.2.1 20210110
File-System: ext4
Screen Resolution: 1176x885
System Layer: VMware

System Logs

- Transparent Huge Pages: always
- Compiler configure flags: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- CPU Microcode: 0xea
- Python 3.9.2
- Security: itlb_multihit: KVM: Mitigation of VMX unsupported + l1tf: Mitigation of PTE Inversion + mds: Mitigation of Clear buffers; SMT Host state unknown + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT Host state unknown + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Results overview: 129 results were recorded for centos-vm across LeelaChessZero, oneDNN, Numpy Benchmark, DeepSpeech, R Benchmark, RNNoise, TensorFlow Lite, Neural Magic DeepSparse, Caffe, Mobile Neural Network, NCNN, TNN, PlaidML, Numenta Anomaly Benchmark, Scikit-Learn, and OpenCV. The individual results follow below; a full index appears at the end of this page.

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better)
centos-vm: 1268 (SE +/- 9.21, N = 3)
1. (CXX) g++ options: -flto -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
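The "IP Shapes" harnesses exercise inner-product (fully-connected) layers at fixed problem shapes. As a rough, framework-agnostic illustration of what such a timing loop measures (the shapes and iteration count below are assumptions, not the actual benchdnn problem descriptors):

    # Illustrative sketch only -- not the oneDNN benchdnn harness itself.
    # It mimics what an "inner product" (IP) timing loop measures: wall time
    # for repeated fully-connected forward passes at a fixed shape.
    import time
    import numpy as np

    def time_inner_product(batch=128, in_ch=1024, out_ch=1024, iters=100):
        x = np.random.rand(batch, in_ch).astype(np.float32)   # f32 input
        w = np.random.rand(in_ch, out_ch).astype(np.float32)  # f32 weights
        x @ w  # warm-up pass so one-time costs are excluded
        start = time.perf_counter()
        for _ in range(iters):
            x @ w
        return (time.perf_counter() - start) / iters * 1e3    # ms per pass

    print(f"{time_inner_product():.3f} ms")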

oneDNN 3.0 (ms, Fewer Is Better) - results for centos-vm:

- Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU: 6.07470 (SE +/- 0.03230, N = 3; MIN: 5.84)
- Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU: 8.61128 (SE +/- 0.01390, N = 3; MIN: 8.17)
- Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU: 2.64353 (SE +/- 0.00074, N = 3; MIN: 2.53)
- Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU: 2.74872 (SE +/- 0.02400, N = 7; MIN: 2.58)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

centos-vm: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

centos-vm: The test run did not produce a result.

oneDNN 3.0 (ms, Fewer Is Better) - results for centos-vm:

- Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: 16.47 (SE +/- 0.03, N = 3; MIN: 15.97)
- Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU: 10.19 (SE +/- 0.05, N = 3; MIN: 9.8)
- Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU: 10.59 (SE +/- 0.03, N = 3; MIN: 10.26)
- Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU: 16.79 (SE +/- 0.06, N = 3; MIN: 16.19)
- Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU: 4.01290 (SE +/- 0.00809, N = 3; MIN: 3.86)
- Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU: 8.52308 (SE +/- 0.13961, N = 12; MIN: 7.44)
- Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 5169.42 (SE +/- 25.57, N = 3; MIN: 5076.31)
- Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: 3103.29 (SE +/- 17.09, N = 3; MIN: 3032.39)
- Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU: 5203.70 (SE +/- 19.73, N = 3; MIN: 5093.88)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

centos-vm: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

centos-vm: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

centos-vm: The test run did not produce a result.

oneDNN 3.0 (ms, Fewer Is Better) - results for centos-vm:

- Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU: 3114.25 (SE +/- 12.04, N = 3; MIN: 3041.32)
- Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU: 4.20534 (SE +/- 0.01449, N = 3; MIN: 3.92)
- Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU: 5220.57 (SE +/- 21.04, N = 3; MIN: 5124.28)
- Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU: 3114.15 (SE +/- 9.14, N = 3; MIN: 3057.4)
- Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU: 2.27932 (SE +/- 0.00226, N = 3; MIN: 2.13)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

centos-vm: The test run did not produce a result.

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
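As a sense of what a general NumPy score covers, here is a minimal timing sketch over a few representative kernels; the choice of operations and sizes is illustrative, not the test profile's actual workload:

    # Minimal sketch of the kind of kernels a general NumPy benchmark scores.
    import time
    import numpy as np

    def bench(name, fn, iters=10):
        fn()  # warm-up
        start = time.perf_counter()
        for _ in range(iters):
            fn()
        print(f"{name}: {(time.perf_counter() - start) / iters * 1e3:.1f} ms")

    a = np.random.rand(2048, 2048)
    bench("matmul", lambda: a @ a)
    bench("fft", lambda: np.fft.fft2(a))
    bench("svd", lambda: np.linalg.svd(a[:512, :512], full_matrices=False))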

Numpy Benchmark (Score, More Is Better)
centos-vm: 371.44 (SE +/- 1.45, N = 3)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
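A hedged sketch of the timed operation, assuming the DeepSpeech 0.6 Python API; the model and audio file names are placeholders rather than the assets the test profile actually downloads:

    # Hedged sketch assuming the DeepSpeech 0.6 Python API ("output_graph.pbmm"
    # and "recording.wav" are placeholder file names).
    import time
    import wave
    import numpy as np
    import deepspeech

    model = deepspeech.Model("output_graph.pbmm", 500)  # (model path, beam width)

    with wave.open("recording.wav", "rb") as w:         # 16 kHz, 16-bit mono PCM
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

    start = time.perf_counter()
    text = model.stt(audio)                             # the timed speech-to-text pass
    print(f"{time.perf_counter() - start:.2f}s: {text}")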

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
centos-vm: 65.85 (SE +/- 0.16, N = 3)

R Benchmark

This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.

R Benchmark (Seconds, Fewer Is Better)
centos-vm: 0.1319 (SE +/- 0.0002, N = 3)
1. R scripting front-end version 4.0.4 (2021-02-15)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
centos-vm: 22.99 (SE +/- 0.04, N = 3)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
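A minimal sketch of measuring average inference time with the TensorFlow Lite Python interpreter, assuming a float32-input model; the model path is a placeholder:

    # Minimal sketch of timing TFLite inference; assumes a float32-input model
    # at the placeholder path "squeezenet.tflite".
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    x = np.random.rand(*inp["shape"]).astype(np.float32)

    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()  # warm-up pass

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], x)
        interpreter.invoke()
    print(f"{(time.perf_counter() - start) / runs * 1e6:.0f} us average inference")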

TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better) - results for centos-vm:

- Model: SqueezeNet: 6449.66 (SE +/- 26.61, N = 3)
- Model: Inception V4: 95985.4 (SE +/- 137.23, N = 3)
- Model: NASNet Mobile: 14313.6 (SE +/- 62.51, N = 3)
- Model: Mobilenet Float: 4839.42 (SE +/- 8.64, N = 3)
- Model: Mobilenet Quant: 8367.17 (SE +/- 9.73, N = 3)
- Model: Inception ResNet V2: 87461.0 (SE +/- 140.59, N = 3)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

Every TensorFlow configuration failed to run (Device: CPU; Batch Sizes: 16, 32, 64, 256, 512; Models: VGG-16, AlexNet, GoogLeNet, ResNet-50).

centos-vm: The test quit with a non-zero exit status. E: AttributeError: module 'numpy' has no attribute 'typeDict'
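The AttributeError above stems from numpy.typeDict, a long-deprecated alias of numpy.sctypeDict that NumPy removed in its 1.24 release; the benchmark scripts still reference the old name. Pinning numpy<1.24 is the cleaner fix; a stopgap shim (an assumption-level workaround, not part of the test profile) would be:

    # Stopgap only -- pinning numpy<1.24 is the proper fix. typeDict was an
    # alias of sctypeDict before NumPy 1.24 removed it.
    import numpy as np

    if not hasattr(np, "typeDict"):
        np.typeDict = np.sctypeDict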

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
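A hedged sketch of driving a model through DeepSparse's Python engine API (the test profile itself invokes the deepsparse.benchmark utility); the model path and input shape below are placeholders for a SparseZoo download:

    # Hedged sketch of the DeepSparse Python engine API; "model.onnx" and the
    # 3x224x224 input shape are placeholders, not a specific SparseZoo model.
    import numpy as np
    from deepsparse import compile_model

    batch = 1
    engine = compile_model("model.onnx", batch_size=batch)

    inputs = [np.random.rand(batch, 3, 224, 224).astype(np.float32)]
    outputs = engine.run(inputs)  # one forward pass on CPU
    print([o.shape for o in outputs])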

Neural Magic DeepSparse 1.3.2 - results for centos-vm (items/sec: More Is Better; ms/batch: Fewer Is Better):

- Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 3.9171 items/sec (SE +/- 0.0138, N = 3); 508.49 ms/batch (SE +/- 1.59, N = 3)
- Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 2.0387 items/sec (SE +/- 0.0081, N = 3); 490.51 ms/batch (SE +/- 1.95, N = 3)
- Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream: 45.71 items/sec (SE +/- 0.24, N = 3); 43.72 ms/batch (SE +/- 0.22, N = 3)
- Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream: 25.49 items/sec (SE +/- 0.10, N = 3); 39.23 ms/batch (SE +/- 0.16, N = 3)
- Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream: 14.61 items/sec (SE +/- 0.05, N = 3); 136.77 ms/batch (SE +/- 0.44, N = 3)
- Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream: 8.1049 items/sec (SE +/- 0.0303, N = 3); 123.37 ms/batch (SE +/- 0.46, N = 3)
- Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream: 21.60 items/sec (SE +/- 0.07, N = 3); 92.55 ms/batch (SE +/- 0.28, N = 3)
- Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream: 11.40 items/sec (SE +/- 0.00, N = 3); 87.74 ms/batch (SE +/- 0.03, N = 3)
- Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 44.26 items/sec (SE +/- 0.09, N = 3); 45.15 ms/batch (SE +/- 0.09, N = 3)
- Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: 23.10 items/sec (SE +/- 0.03, N = 3); 43.29 ms/batch (SE +/- 0.06, N = 3)
- Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 32.80 items/sec (SE +/- 0.01, N = 3); 60.92 ms/batch (SE +/- 0.01, N = 3)
- Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: 16.87 items/sec (SE +/- 0.03, N = 3); 59.26 ms/batch (SE +/- 0.10, N = 3)
- Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: 5.7598 items/sec (SE +/- 0.0086, N = 3); 345.84 ms/batch (SE +/- 0.68, N = 3)
- Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: 3.0853 items/sec (SE +/- 0.0010, N = 3); 324.10 ms/batch (SE +/- 0.10, N = 3)
- Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream: 16.41 items/sec (SE +/- 0.01, N = 3); 121.81 ms/batch (SE +/- 0.04, N = 3)
- Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream: 8.4059 items/sec (SE +/- 0.0205, N = 3); 118.96 ms/batch (SE +/- 0.29, N = 3)
- Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 3.9141 items/sec (SE +/- 0.0187, N = 3); 509.88 ms/batch (SE +/- 2.54, N = 3)
- Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: 2.0309 items/sec (SE +/- 0.0058, N = 3); 492.40 ms/batch (SE +/- 1.41, N = 3)

spaCy

spaCy is an open-source Python library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

centos-vm: The test quit with a non-zero exit status. E: OSError: [E050] Can't find model 'en_core_web_trf'. It doesn't seem to be a Python package or a valid path to a data directory.
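The [E050] error means the en_core_web_trf model package was never installed; spaCy distributes its models as pip-installable packages, typically fetched with `python -m spacy download en_core_web_trf`. A minimal sketch of the kind of call the profile times, assuming the model is present:

    # Assumes en_core_web_trf has been installed via:
    #   python -m spacy download en_core_web_trf
    import spacy

    nlp = spacy.load("en_core_web_trf")  # raises OSError [E050] when missing
    doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
    print([(ent.text, ent.label_) for ent in doc.ents])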

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
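A hedged pycaffe sketch of the timed workload, forward passes over a fixed iteration count on CPU; the prototxt/caffemodel paths are placeholders for the test profile's bundled model definitions:

    # Hedged pycaffe sketch of timing forward passes on CPU; file paths are
    # placeholders, not the test profile's actual assets.
    import time
    import caffe

    caffe.set_mode_cpu()
    net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

    iterations = 100
    start = time.perf_counter()
    for _ in range(iterations):
        net.forward()
    print(f"{(time.perf_counter() - start) * 1e3:.0f} ms total for {iterations} iterations")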

Caffe 2020-02-13 (Milli-Seconds, Fewer Is Better) - results for centos-vm:

- Model: AlexNet - Acceleration: CPU - Iterations: 100: 25273 (SE +/- 62.69, N = 3)
- Model: AlexNet - Acceleration: CPU - Iterations: 200: 50387 (SE +/- 110.33, N = 3)
- Model: AlexNet - Acceleration: CPU - Iterations: 1000: 253259 (SE +/- 438.44, N = 3)
- Model: GoogleNet - Acceleration: CPU - Iterations: 100: 63778 (SE +/- 100.74, N = 3)
- Model: GoogleNet - Acceleration: CPU - Iterations: 200: 127696 (SE +/- 238.00, N = 3)
- Model: GoogleNet - Acceleration: CPU - Iterations: 1000: 637873 (SE +/- 1015.06, N = 3)

1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 (ms, Fewer Is Better) - results for centos-vm:

- Model: nasnet: 9.598 (SE +/- 0.092, N = 12; MIN: 8.18 / MAX: 34.79)
- Model: mobilenetV3: 1.533 (SE +/- 0.028, N = 12; MIN: 1.29 / MAX: 23.28)
- Model: squeezenetv1.1: 2.637 (SE +/- 0.026, N = 12; MIN: 2.34 / MAX: 23.41)
- Model: resnet-v2-50: 22.81 (SE +/- 0.04, N = 12; MIN: 21.78 / MAX: 50.81)
- Model: SqueezeNetV1.0: 4.380 (SE +/- 0.016, N = 12; MIN: 4.14 / MAX: 26.23)
- Model: MobileNetV2_224: 2.966 (SE +/- 0.027, N = 12; MIN: 2.55 / MAX: 23.15)
- Model: mobilenet-v1-1.0: 3.435 (SE +/- 0.017, N = 12; MIN: 3.19 / MAX: 24.99)
- Model: inception-v3: 29.08 (SE +/- 0.07, N = 12; MIN: 27.99 / MAX: 75.44)

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 (ms, Fewer Is Better) - results for centos-vm:

- Target: CPU - Model: mobilenet: 16.62 (SE +/- 0.04, N = 3; MIN: 15.88 / MAX: 65.96)
- Target: CPU-v2-v2 - Model: mobilenet-v2: 4.63 (SE +/- 0.04, N = 3; MIN: 4.28 / MAX: 47.25)
- Target: CPU-v3-v3 - Model: mobilenet-v3: 3.72 (SE +/- 0.04, N = 3; MIN: 3.54 / MAX: 15.71)
- Target: CPU - Model: shufflenet-v2: 3.53 (SE +/- 0.07, N = 3; MIN: 3.18 / MAX: 63.3)
- Target: CPU - Model: mnasnet: 4.02 (SE +/- 0.09, N = 3; MIN: 3.67 / MAX: 46.49)
- Target: CPU - Model: efficientnet-b0: 8.88 (SE +/- 0.06, N = 3; MIN: 8.3 / MAX: 52.38)
- Target: CPU - Model: blazeface: 0.91 (SE +/- 0.04, N = 3; MIN: 0.81 / MAX: 1.23)
- Target: CPU - Model: googlenet: 14.52 (SE +/- 0.16, N = 3; MIN: 13.55 / MAX: 69.15)
- Target: CPU - Model: vgg16: 62.68 (SE +/- 0.08, N = 3; MIN: 60.77 / MAX: 111.11)
- Target: CPU - Model: resnet18: 12.49 (SE +/- 0.47, N = 3; MIN: 11.31 / MAX: 60.02)
- Target: CPU - Model: alexnet: 9.79 (SE +/- 0.12, N = 3; MIN: 9.18 / MAX: 21.88)
- Target: CPU - Model: resnet50: 25.44 (SE +/- 0.29, N = 3; MIN: 24.18 / MAX: 71.04)
- Target: CPU - Model: yolov4-tiny: 25.51 (SE +/- 0.04, N = 2; MIN: 24.44 / MAX: 71.97)
- Target: CPU - Model: squeezenet_ssd: 17.66 (SE +/- 0.11, N = 3; MIN: 16.93 / MAX: 62.51)
- Target: CPU - Model: regnety_400m: 9.64 (SE +/- 0.14, N = 3; MIN: 8.91 / MAX: 53.1)
- Target: CPU - Model: vision_transformer: 470.88 (SE +/- 0.30, N = 3; MIN: 463.72 / MAX: 515.25)
- Target: CPU - Model: FastestDet: 3.63 (SE +/- 0.04, N = 3; MIN: 3.22 / MAX: 19.88)
- Target: Vulkan GPU - Model: mobilenet: 654.98 (SE +/- 3.23, N = 3; MIN: 631.79 / MAX: 701.19)
- Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2: 223.54 (SE +/- 0.66, N = 3; MIN: 210.98 / MAX: 240.95)
- Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: 205.42 (SE +/- 0.25, N = 3; MIN: 195.53 / MAX: 221.1)
- Target: Vulkan GPU - Model: shufflenet-v2: 170.09 (SE +/- 0.27, N = 3; MIN: 162.43 / MAX: 186.48)
- Target: Vulkan GPU - Model: mnasnet: 230.97 (SE +/- 0.56, N = 3; MIN: 219.26 / MAX: 253.22)
- Target: Vulkan GPU - Model: efficientnet-b0: 343.00 (SE +/- 1.25, N = 3; MIN: 328.3 / MAX: 362.9)
- Target: Vulkan GPU - Model: blazeface: 51.27 (SE +/- 0.05, N = 3; MIN: 46.95 / MAX: 63.64)
- Target: Vulkan GPU - Model: googlenet: 578.67 (SE +/- 1.32, N = 3; MIN: 556.51 / MAX: 638.88)
- Target: Vulkan GPU - Model: vgg16: 3020.88 (SE +/- 1.29, N = 3; MIN: 2995.08 / MAX: 3117.69)
- Target: Vulkan GPU - Model: resnet18: 537.33 (SE +/- 0.32, N = 3; MIN: 523.52 / MAX: 592.64)
- Target: Vulkan GPU - Model: alexnet: 667.05 (SE +/- 0.18, N = 3; MIN: 655.5 / MAX: 714.13)
- Target: Vulkan GPU - Model: resnet50: 1298.50 (SE +/- 2.05, N = 3; MIN: 1271.83 / MAX: 1359.76)
- Target: Vulkan GPU - Model: yolov4-tiny: 944.39 (SE +/- 0.37, N = 3; MIN: 927.08 / MAX: 987.54)
- Target: Vulkan GPU - Model: squeezenet_ssd: 620.86 (SE +/- 0.98, N = 3; MIN: 605.38 / MAX: 648.33)
- Target: Vulkan GPU - Model: regnety_400m: 281.72 (SE +/- 1.77, N = 3; MIN: 271.06 / MAX: 308.78)
- Target: Vulkan GPU - Model: vision_transformer: 8709.22 (SE +/- 0.52, N = 3; MIN: 8652.35 / MAX: 8861.69)
- Target: Vulkan GPU - Model: FastestDet: 162.71 (SE +/- 0.16, N = 3; MIN: 156.62 / MAX: 177.3)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 (ms, Fewer Is Better) - results for centos-vm:

- Target: CPU - Model: DenseNet: 3875.53 (SE +/- 5.36, N = 3; MIN: 3780.95 / MAX: 3989.33)
- Target: CPU - Model: MobileNet v2: 289.27 (SE +/- 0.12, N = 3; MIN: 285.71 / MAX: 298.98)
- Target: CPU - Model: SqueezeNet v2: 66.23 (SE +/- 0.09, N = 3; MIN: 65.38 / MAX: 78.77)
- Target: CPU - Model: SqueezeNet v1.1: 285.86 (SE +/- 0.25, N = 3; MIN: 284.22 / MAX: 293.74)

1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.
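A hedged sketch of the PlaidML Keras-backend pattern this benchmark builds on: install the backend before importing Keras, then time inference. The model choice and iteration count here are assumptions:

    # Hedged sketch: PlaidML must be installed as the Keras backend before
    # keras itself is imported.
    import plaidml.keras
    plaidml.keras.install_backend()

    import time
    import numpy as np
    import keras

    model = keras.applications.VGG16(weights=None)  # weights omitted: timing only
    x = np.random.rand(1, 224, 224, 3).astype("float32")
    model.predict(x)                                # warm-up / compile

    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(x)
    print(f"{runs / (time.perf_counter() - start):.2f} FPS")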

PlaidML (FPS, More Is Better) - results for centos-vm:

- FP16: No - Mode: Inference - Network: VGG16 - Device: CPU: 8.93 (SE +/- 0.09, N = 3)
- FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU: 5.74 (SE +/- 0.02, N = 3)

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

Benchmarks: P1B2, P3B1, P3B2

centos-vm: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
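As an illustration of one of the simpler detectors timed here, a minimal windowed-Gaussian scorer; this is not NAB's own implementation, just the idea of scoring each new value by its z-score against a sliding window:

    # Not NAB's code -- a minimal illustration of what a "Windowed Gaussian"
    # detector computes: how far each new value sits from the mean/std of a
    # sliding window of recent values.
    from collections import deque
    import math

    def windowed_gaussian_scores(values, window=64):
        history = deque(maxlen=window)
        scores = []
        for v in values:
            if len(history) >= 2:
                mean = sum(history) / len(history)
                var = sum((h - mean) ** 2 for h in history) / len(history)
                std = math.sqrt(var) or 1e-9
                scores.append(abs(v - mean) / std)  # z-score as anomaly score
            else:
                scores.append(0.0)                  # not enough history yet
            history.append(v)
        return scores

    print(windowed_gaussian_scores([1, 1, 1, 1, 10, 1], window=4))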

Numenta Anomaly Benchmark 1.1 (Seconds, Fewer Is Better) - results for centos-vm:

- Detector: KNN CAD: 340.66 (SE +/- 2.45, N = 3)
- Detector: Relative Entropy: 36.77 (SE +/- 0.31, N = 3)
- Detector: Windowed Gaussian: 20.90 (SE +/- 0.18, N = 15)
- Detector: Earthgecko Skyline: 226.20 (SE +/- 1.22, N = 3)
- Detector: Bayesian Changepoint: 62.59 (SE +/- 0.40, N = 3)
- Detector: Contextual Anomaly Detector OSE: 61.83 (SE +/- 1.43, N = 15)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

Every ONNX Runtime configuration failed to run (Device: CPU; Executors: Parallel, Standard; Models: GPT-2, yolov4, bertsquad-12, CaffeNet 12-int8, fcn-resnet101-11, ArcFace ResNet-100, ResNet50 v1-12-int8, super-resolution-10, Faster R-CNN R-50-FPN-int8).

centos-vm: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory

The missing onnxruntime_perf_test binary indicates the ONNX Runtime build step did not complete on this system.
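For reference, an equivalent CPU latency measurement can be sketched with the onnxruntime Python package; this is hedged, "model.onnx" stands in for an ONNX Model Zoo download, and it is not the perf_test tool the profile normally runs:

    # Hedged latency sketch with the onnxruntime Python API.
    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # pin dynamic dims
    x = np.random.rand(*shape).astype(np.float32)

    sess.run(None, {inp.name: x})  # warm-up
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: x})
    print(f"{(time.perf_counter() - start) / runs * 1e3:.2f} ms/inference")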

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

centos-vm: The test quit with a non-zero exit status. E: AttributeError: module 'numpy' has no attribute 'typeDict'
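For context, the package's documented entry point is brief; a hedged sketch follows (on this system it would fail for the same NumPy typeDict reason noted under TensorFlow, and the result attribute name is assumed from the package docs):

    # Hedged sketch of ai_benchmark's documented usage; attribute names are
    # assumed from the package documentation, not verified on this system.
    from ai_benchmark import AIBenchmark

    benchmark = AIBenchmark()
    results = benchmark.run()   # runs the bundled inference/training tests
    print(results.ai_score)     # overall device score (assumed attribute)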

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Benchmarks: scikit_ica, scikit_qda, scikit_svm, scikit_linearridgeregression

centos-vm: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'yaml'

Scikit-Learn

Scikit-learn is a Python module for machine learning. Learn more via the OpenBenchmarking.org test page.
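Not the pts test itself, but a minimal sketch of the kind of workload the MNIST benchmark times, fitting a classifier on digit images; the smaller load_digits set stands in for MNIST:

    # Minimal sketch of an MNIST-style scikit-learn workload (illustrative;
    # load_digits is a small stand-in for the full MNIST dataset).
    import time
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)
    clf = LogisticRegression(max_iter=1000)

    start = time.perf_counter()
    clf.fit(X, y)
    print(f"fit: {time.perf_counter() - start:.2f}s, acc: {clf.score(X, y):.3f}")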

Scikit-Learn 1.1.3 (Seconds, Fewer Is Better) - results for centos-vm:

- Benchmark: MNIST Dataset: 109.97 (SE +/- 1.23, N = 3)
- Benchmark: TSNE MNIST Dataset: 37.24 (SE +/- 0.40, N = 15)
- Benchmark: Sparse Random Projections, 100 Iterations: 165.89 (SE +/- 0.24, N = 3)

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
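The pts test drives OpenCV's built-in C++ performance tests; as a hedged Python approximation of exercising the same DNN module ("model.onnx" is a placeholder network):

    # Hedged sketch of timing OpenCV DNN inference from Python; the real pts
    # test runs OpenCV's bundled C++ performance tests instead.
    import time
    import numpy as np
    import cv2

    net = cv2.dnn.readNetFromONNX("model.onnx")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

    blob = np.random.rand(1, 3, 224, 224).astype(np.float32)
    net.setInput(blob)
    net.forward()  # warm-up

    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        net.forward()
    print(f"{(time.perf_counter() - start) / runs * 1e3:.1f} ms/forward")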

OpenCV 4.6 - Test: DNN - Deep Neural Network (ms, Fewer Is Better)
centos-vm: 13471 (SE +/- 187.15, N = 15)
1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

129 Results Shown

LeelaChessZero
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
Numpy Benchmark
DeepSpeech
R Benchmark
RNNoise
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
Caffe:
  AlexNet - CPU - 100
  AlexNet - CPU - 200
  AlexNet - CPU - 1000
  GoogleNet - CPU - 100
  GoogleNet - CPU - 200
  GoogleNet - CPU - 1000
Mobile Neural Network:
  nasnet
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
  CPU - squeezenet_ssd
  CPU - regnety_400m
  CPU - vision_transformer
  CPU - FastestDet
  Vulkan GPU - mobilenet
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU - shufflenet-v2
  Vulkan GPU - mnasnet
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - blazeface
  Vulkan GPU - googlenet
  Vulkan GPU - vgg16
  Vulkan GPU - resnet18
  Vulkan GPU - alexnet
  Vulkan GPU - resnet50
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - regnety_400m
  Vulkan GPU - vision_transformer
  Vulkan GPU - FastestDet
TNN:
  CPU - DenseNet
  CPU - MobileNet v2
  CPU - SqueezeNet v2
  CPU - SqueezeNet v1.1
PlaidML:
  No - Inference - VGG16 - CPU
  No - Inference - ResNet 50 - CPU
Numenta Anomaly Benchmark:
  KNN CAD
  Relative Entropy
  Windowed Gaussian
  Earthgecko Skyline
  Bayesian Changepoint
  Contextual Anomaly Detector OSE
Scikit-Learn:
  MNIST Dataset
  TSNE MNIST Dataset
  Sparse Random Projections, 100 Iterations
OpenCV