
Benchmarks for a future article.

HTML result view exported from: https://openbenchmarking.org/result/2312143-NE-A8154652071&gru&sor.

System under test (identical for runs a, b, c, and d):

Processor: 2 x Intel Xeon Platinum 8592+ @ 3.90GHz (128 Cores / 256 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS)
Chipset: Intel Device 1bce
Memory: 1008GB
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x21000161
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Compiler Details - d: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Result overview: Neural Magic DeepSparse 1.6 across 12 model configurations, each run in two scenarios (Asynchronous Multi-Stream and Synchronous Single-Stream) and reported as both throughput (items/sec) and latency (ms/batch), plus NWChem 7.0.2 (C240 Buckyball) and WRF 4.2.2 (conus 2.5km) run times in seconds. The per-test results for runs a, b, c, and d follow below.
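Since all four runs used identical hardware and software, the spread between a, b, c, and d is mostly run-to-run variance. A minimal sketch for the article draft (plain Python, no dependencies; the RESULTS entries are hand-copied from the tables below and would need to be extended to the remaining tests):

# Flag tests where the spread across runs a-d exceeds a chosen threshold.
RESULTS = {
    "DeepSparse oBERT IMDB, Async Multi-Stream (items/sec)": {
        "a": 133.52, "b": 133.72, "c": 133.48, "d": 133.01,
    },
    "NWChem C240 Buckyball (seconds)": {
        "a": 1744.0, "b": 1730.7, "c": 1757.3, "d": 1748.0,
    },
}

def spread_pct(values):
    """Max-to-min spread as a percentage of the minimum value."""
    lo, hi = min(values), max(values)
    return (hi - lo) / lo * 100.0

THRESHOLD_PCT = 2.0  # arbitrary cut-off for "worth discussing in the article"

for name, runs in RESULTS.items():
    pct = spread_pct(runs.values())
    flag = "  <-- above threshold" if pct > THRESHOLD_PCT else ""
    print(f"{name}: spread {pct:.2f}%{flag}")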

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  b: 133.72  (SE +/- 0.13, N = 3)
  a: 133.52  (SE +/- 0.15, N = 3)
  c: 133.48  (SE +/- 0.16, N = 3)
  d: 133.01  (SE +/- 0.11, N = 3)
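Each table reports the mean over N repeated runs together with its standard error (SE), i.e. the sample standard deviation divided by the square root of N. A quick reference check in Python (the sample list is illustrative only; the raw per-run values are not part of this export):

import statistics, math

samples = [133.4, 133.6, 133.8]  # illustrative per-run throughputs, items/sec

mean = statistics.mean(samples)
se = statistics.stdev(samples) / math.sqrt(len(samples))  # SE = s / sqrt(N)
print(f"{mean:.2f} items/sec, SE +/- {se:.2f}, N = {len(samples)}")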

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  d: 32.89  (SE +/- 0.26, N = 15)
  c: 31.63  (SE +/- 0.32, N = 15)
  b: 31.23  (SE +/- 0.25, N = 15)
  a: 31.19  (SE +/- 0.38, N = 4)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  b: 4563.51  (SE +/- 40.39, N = 7)
  c: 4563.43  (SE +/- 43.60, N = 6)
  d: 4556.02  (SE +/- 42.83, N = 6)
  a: 4554.98  (SE +/- 49.40, N = 5)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  d: 240.23  (SE +/- 3.16, N = 3)
  a: 238.65  (SE +/- 4.61, N = 15)
  b: 237.05  (SE +/- 3.82, N = 15)
  c: 230.76  (SE +/- 3.66, N = 15)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  d: 1837.50  (SE +/- 14.66, N = 3)
  a: 1829.96  (SE +/- 4.85, N = 3)
  c: 1827.98  (SE +/- 17.94, N = 3)
  b: 1824.38  (SE +/- 13.19, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 338.96  (SE +/- 2.76, N = 9)
  c: 338.55  (SE +/- 2.56, N = 12)
  d: 338.15  (SE +/- 2.94, N = 12)
  b: 337.68  (SE +/- 3.44, N = 6)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 11228.66  (SE +/- 71.77, N = 13)
  d: 11221.14  (SE +/- 77.36, N = 12)
  b: 11169.85  (SE +/- 89.80, N = 9)
  c: 11163.66  (SE +/- 101.43, N = 7)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  c: 1043.28  (SE +/- 5.37, N = 3)
  d: 1032.24  (SE +/- 9.64, N = 7)
  a: 1030.32  (SE +/- 11.12, N = 4)
  b: 1027.51  (SE +/- 9.47, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  b: 828.65  (SE +/- 6.37, N = 3)
  c: 828.45  (SE +/- 6.81, N = 3)
  a: 828.04  (SE +/- 7.51, N = 3)
  d: 825.08  (SE +/- 9.22, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 190.84  (SE +/- 1.29, N = 12)
  d: 190.14  (SE +/- 1.52, N = 12)
  b: 190.04  (SE +/- 1.73, N = 12)
  c: 189.82  (SE +/- 1.69, N = 7)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 156.06  (SE +/- 0.66, N = 3)
  c: 155.77  (SE +/- 1.19, N = 3)
  b: 155.72  (SE +/- 0.66, N = 3)
  d: 155.60  (SE +/- 1.10, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  b: 30.39  (SE +/- 0.27, N = 3)
  c: 30.29  (SE +/- 0.15, N = 3)
  a: 29.94  (SE +/- 0.30, N = 6)
  d: 29.52  (SE +/- 0.10, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  c: 1834.48  (SE +/- 5.61, N = 3)
  a: 1832.79  (SE +/- 22.80, N = 3)
  d: 1819.69  (SE +/- 21.20, N = 3)
  b: 1817.99  (SE +/- 7.71, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  d: 339.37  (SE +/- 2.24, N = 12)
  a: 339.04  (SE +/- 2.66, N = 10)
  b: 338.86  (SE +/- 2.82, N = 9)
  c: 338.22  (SE +/- 3.30, N = 6)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  c: 854.25  (SE +/- 9.10, N = 3)
  a: 854.05  (SE +/- 8.70, N = 3)
  b: 852.95  (SE +/- 10.26, N = 3)
  d: 850.89  (SE +/- 10.96, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 194.25  (SE +/- 1.36, N = 12)
  d: 193.82  (SE +/- 1.62, N = 12)
  b: 193.60  (SE +/- 1.53, N = 13)
  c: 193.09  (SE +/- 1.70, N = 12)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  b: 1231.29  (SE +/- 14.60, N = 4)
  c: 1231.27  (SE +/- 13.91, N = 3)
  a: 1231.20  (SE +/- 14.83, N = 3)
  d: 1226.80  (SE +/- 17.38, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 199.69  (SE +/- 1.45, N = 12)
  b: 199.34  (SE +/- 2.05, N = 12)
  c: 199.33  (SE +/- 1.76, N = 12)
  d: 196.79  (SE +/- 1.69, N = 8)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 183.50  (SE +/- 1.57, N = 8)
  d: 181.20  (SE +/- 1.48, N = 9)
  b: 180.32  (SE +/- 2.09, N = 3)
  c: 177.15  (SE +/- 1.83, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  b: 35.11  (SE +/- 0.34, N = 6)
  c: 35.01  (SE +/- 0.35, N = 6)
  d: 35.00  (SE +/- 0.23, N = 12)
  a: 34.96  (SE +/- 0.33, N = 7)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  a: 1880.77  (SE +/- 21.42, N = 3)
  c: 1879.84  (SE +/- 21.19, N = 3)
  b: 1875.90  (SE +/- 26.46, N = 3)
  d: 1865.02  (SE +/- 21.54, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  c: 96.55  (SE +/- 0.16, N = 3)
  a: 96.43  (SE +/- 0.11, N = 3)
  b: 96.38  (SE +/- 0.30, N = 3)
  d: 95.70  (SE +/- 0.72, N = 10)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  b: 133.82  (SE +/- 0.14, N = 3)
  a: 133.60  (SE +/- 0.13, N = 3)
  c: 133.60  (SE +/- 0.19, N = 3)
  d: 133.60  (SE +/- 0.26, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.6):
  d: 33.05  (SE +/- 0.33, N = 15)
  c: 31.72  (SE +/- 0.33, N = 15)
  a: 31.70  (SE +/- 0.31, N = 15)
  b: 31.53  (SE +/- 0.25, N = 15)
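The sections that follow repeat the same model/scenario matrix as latency (ms/batch, fewer is better) rather than throughput (items/sec). For the Synchronous Single-Stream scenario the two views are roughly reciprocal, assuming a batch size of 1 per request; for Asynchronous Multi-Stream the latency additionally depends on how many streams run concurrently. A quick sanity check in Python using the ResNet-50 Baseline, Synchronous Single-Stream numbers from the tables above and below:

# Rough reciprocity check for Synchronous Single-Stream
# (assumes batch size 1, so ms/batch ~= 1000 / items_per_sec).
throughput_items_per_sec = 338.96   # ResNet-50 Baseline, run a (throughput table)
reported_ms_per_batch    = 2.9492   # ResNet-50 Baseline, run a (latency table)

estimated_ms = 1000.0 / throughput_items_per_sec
print(f"estimated {estimated_ms:.4f} ms/batch vs reported {reported_ms_per_batch:.4f} ms/batch")
# ~2.9502 vs 2.9492: both tables describe the same measurement from different angles.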

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  b: 474.19  (SE +/- 0.47, N = 3)
  c: 475.68  (SE +/- 0.52, N = 3)
  a: 475.70  (SE +/- 0.61, N = 3)
  d: 477.61  (SE +/- 0.44, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  d: 30.42  (SE +/- 0.25, N = 15)
  c: 31.65  (SE +/- 0.32, N = 15)
  b: 32.04  (SE +/- 0.27, N = 15)
  a: 32.06  (SE +/- 0.40, N = 4)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  b: 14.01  (SE +/- 0.13, N = 7)
  c: 14.01  (SE +/- 0.14, N = 6)
  d: 14.03  (SE +/- 0.14, N = 6)
  a: 14.04  (SE +/- 0.16, N = 5)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  d: 4.1604  (SE +/- 0.0540, N = 3)
  a: 4.2083  (SE +/- 0.0821, N = 15)
  b: 4.2295  (SE +/- 0.0666, N = 15)
  c: 4.3436  (SE +/- 0.0661, N = 15)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  d: 34.76  (SE +/- 0.29, N = 3)
  a: 34.92  (SE +/- 0.08, N = 3)
  c: 34.97  (SE +/- 0.35, N = 3)
  b: 35.01  (SE +/- 0.25, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 2.9492  (SE +/- 0.0255, N = 9)
  c: 2.9530  (SE +/- 0.0242, N = 12)
  d: 2.9573  (SE +/- 0.0282, N = 12)
  b: 2.9602  (SE +/- 0.0315, N = 6)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 5.6852  (SE +/- 0.0393, N = 13)
  d: 5.6908  (SE +/- 0.0425, N = 12)
  b: 5.7140  (SE +/- 0.0484, N = 9)
  c: 5.7165  (SE +/- 0.0544, N = 7)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  c: 0.9556  (SE +/- 0.0048, N = 3)
  d: 0.9667  (SE +/- 0.0094, N = 7)
  a: 0.9681  (SE +/- 0.0105, N = 4)
  b: 0.9706  (SE +/- 0.0090, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  b: 77.09  (SE +/- 0.63, N = 3)
  c: 77.12  (SE +/- 0.64, N = 3)
  a: 77.18  (SE +/- 0.70, N = 3)
  d: 77.46  (SE +/- 0.87, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 5.2382  (SE +/- 0.0382, N = 12)
  d: 5.2589  (SE +/- 0.0457, N = 12)
  b: 5.2617  (SE +/- 0.0526, N = 12)
  c: 5.2651  (SE +/- 0.0492, N = 7)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 407.28  (SE +/- 1.62, N = 3)
  d: 408.16  (SE +/- 2.06, N = 3)
  b: 408.63  (SE +/- 1.91, N = 3)
  c: 408.97  (SE +/- 2.55, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  b: 32.90  (SE +/- 0.30, N = 3)
  c: 33.00  (SE +/- 0.16, N = 3)
  a: 33.40  (SE +/- 0.34, N = 6)
  d: 33.86  (SE +/- 0.12, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  c: 34.82  (SE +/- 0.11, N = 3)
  a: 34.87  (SE +/- 0.43, N = 3)
  d: 35.10  (SE +/- 0.39, N = 3)
  b: 35.14  (SE +/- 0.14, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  d: 2.9455  (SE +/- 0.0208, N = 12)
  a: 2.9484  (SE +/- 0.0247, N = 10)
  b: 2.9500  (SE +/- 0.0260, N = 9)
  c: 2.9555  (SE +/- 0.0300, N = 6)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  c: 74.79  (SE +/- 0.80, N = 3)
  a: 74.82  (SE +/- 0.77, N = 3)
  b: 74.94  (SE +/- 0.92, N = 3)
  d: 75.08  (SE +/- 0.97, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 5.1485  (SE +/- 0.0389, N = 12)
  d: 5.1614  (SE +/- 0.0471, N = 12)
  b: 5.1659  (SE +/- 0.0445, N = 13)
  c: 5.1802  (SE +/- 0.0500, N = 12)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  c: 51.92  (SE +/- 0.62, N = 3)
  b: 51.93  (SE +/- 0.60, N = 4)
  a: 51.93  (SE +/- 0.63, N = 3)
  d: 52.07  (SE +/- 0.70, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 5.0073  (SE +/- 0.0391, N = 12)
  c: 5.0179  (SE +/- 0.0484, N = 12)
  b: 5.0196  (SE +/- 0.0573, N = 12)
  d: 5.0808  (SE +/- 0.0457, N = 8)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 347.66  (SE +/- 3.00, N = 8)
  d: 352.22  (SE +/- 2.84, N = 9)
  b: 353.69  (SE +/- 4.19, N = 3)
  c: 359.89  (SE +/- 3.47, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  b: 28.47  (SE +/- 0.29, N = 6)
  c: 28.55  (SE +/- 0.29, N = 6)
  d: 28.56  (SE +/- 0.20, N = 12)
  a: 28.59  (SE +/- 0.28, N = 7)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  a: 33.98  (SE +/- 0.38, N = 3)
  c: 33.99  (SE +/- 0.38, N = 3)
  b: 34.08  (SE +/- 0.48, N = 3)
  d: 34.25  (SE +/- 0.40, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  c: 10.35  (SE +/- 0.02, N = 3)
  a: 10.36  (SE +/- 0.01, N = 3)
  b: 10.37  (SE +/- 0.03, N = 3)
  d: 10.45  (SE +/- 0.08, N = 10)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  b: 474.37  (SE +/- 0.39, N = 3)
  a: 475.13  (SE +/- 0.32, N = 3)
  c: 475.35  (SE +/- 0.55, N = 3)
  d: 475.45  (SE +/- 0.75, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.6):
  d: 30.29  (SE +/- 0.31, N = 15)
  c: 31.56  (SE +/- 0.32, N = 15)
  a: 31.58  (SE +/- 0.31, N = 15)
  b: 31.73  (SE +/- 0.25, N = 15)

NWChem

Input: C240 Buckyball

Seconds, Fewer Is Better (NWChem 7.0.2):
  b: 1730.7
  a: 1744.0
  d: 1748.0
  c: 1757.3
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

WRF

Input: conus 2.5km

Seconds, Fewer Is Better (WRF 4.2.2):
  a: 5566.73
  c: 5583.11
  b: 5600.98
  d: 5617.20
1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz


Phoronix Test Suite v10.8.5