a

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312143-NE-A8154652071
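For reference, this is roughly what that comparison workflow looks like from a terminal. This is a minimal sketch: the install step assumes a Debian/Ubuntu-style package named phoronix-test-suite (use your distribution's equivalent), and only the benchmark command itself is taken from this result file.

  # Install the Phoronix Test Suite (package name assumed; adjust for your distribution)
  sudo apt-get install phoronix-test-suite

  # Run the same test selection as this result file and compare your system against results a-d
  phoronix-test-suite benchmark 2312143-NE-A8154652071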

Tests in this result file by category:
BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 Tests
Fortran Tests: 2 Tests
HPC - High Performance Computing: 3 Tests
LAPACK (Linear Algebra Pack) Tests: 2 Tests
OpenMPI Tests: 2 Tests


Run Management

Result Identifier / Date Run / Test Duration:
a - December 12 2023 - 6 Hours, 14 Minutes
b - December 12 2023 - 6 Hours, 8 Minutes
c - December 12 2023 - 5 Hours, 57 Minutes
d - December 13 2023 - 6 Hours, 19 Minutes
Average test duration: 6 Hours, 9 Minutes




HTML result view exported from: https://openbenchmarking.org/result/2312143-NE-A8154652071&export=pdf&grs&sor&rro.

System configuration for results a, b, c, d:
Processor: 2 x INTEL XEON PLATINUM 8592+ @ 3.90GHz (128 Cores / 256 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS)
Chipset: Intel Device 1bce
Memory: 1008GB
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x21000161
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Compiler Details (d): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

[Consolidated table of all Neural Magic DeepSparse, NWChem and WRF results for configurations a, b, c and d. The individual values, standard errors and run counts are listed under each test below.]

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
a: 31.19 (SE +/- 0.38, N = 4) | b: 31.23 (SE +/- 0.25, N = 15) | c: 31.63 (SE +/- 0.32, N = 15) | d: 32.89 (SE +/- 0.26, N = 15)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
a: 32.06 (SE +/- 0.40, N = 4) | b: 32.04 (SE +/- 0.27, N = 15) | c: 31.65 (SE +/- 0.32, N = 15) | d: 30.42 (SE +/- 0.25, N = 15)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
b: 31.53 (SE +/- 0.25, N = 15) | a: 31.70 (SE +/- 0.31, N = 15) | c: 31.72 (SE +/- 0.33, N = 15) | d: 33.05 (SE +/- 0.33, N = 15)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
b: 31.73 (SE +/- 0.25, N = 15) | a: 31.58 (SE +/- 0.31, N = 15) | c: 31.56 (SE +/- 0.32, N = 15) | d: 30.29 (SE +/- 0.31, N = 15)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
c: 177.15 (SE +/- 1.83, N = 3) | b: 180.32 (SE +/- 2.09, N = 3) | d: 181.20 (SE +/- 1.48, N = 9) | a: 183.50 (SE +/- 1.57, N = 8)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
c: 359.89 (SE +/- 3.47, N = 3) | b: 353.69 (SE +/- 4.19, N = 3) | d: 352.22 (SE +/- 2.84, N = 9) | a: 347.66 (SE +/- 3.00, N = 8)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 29.52 (SE +/- 0.10, N = 3) | a: 29.94 (SE +/- 0.30, N = 6) | c: 30.29 (SE +/- 0.15, N = 3) | b: 30.39 (SE +/- 0.27, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 33.86 (SE +/- 0.12, N = 3) | a: 33.40 (SE +/- 0.34, N = 6) | c: 33.00 (SE +/- 0.16, N = 3) | b: 32.90 (SE +/- 0.30, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
b: 0.9706 (SE +/- 0.0090, N = 3) | a: 0.9681 (SE +/- 0.0105, N = 4) | d: 0.9667 (SE +/- 0.0094, N = 7) | c: 0.9556 (SE +/- 0.0048, N = 3)

NWChem

Input: C240 Buckyball

NWChem 7.0.2 - Seconds, Fewer Is Better
c: 1757.3 | d: 1748.0 | a: 1744.0 | b: 1730.7
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
b: 1027.51 (SE +/- 9.47, N = 3) | a: 1030.32 (SE +/- 11.12, N = 4) | d: 1032.24 (SE +/- 9.64, N = 7) | c: 1043.28 (SE +/- 5.37, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 196.79 (SE +/- 1.69, N = 8) | c: 199.33 (SE +/- 1.76, N = 12) | b: 199.34 (SE +/- 2.05, N = 12) | a: 199.69 (SE +/- 1.45, N = 12)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 5.0808 (SE +/- 0.0457, N = 8) | b: 5.0196 (SE +/- 0.0573, N = 12) | c: 5.0179 (SE +/- 0.0484, N = 12) | a: 5.0073 (SE +/- 0.0391, N = 12)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 10.45 (SE +/- 0.08, N = 10) | b: 10.37 (SE +/- 0.03, N = 3) | a: 10.36 (SE +/- 0.01, N = 3) | c: 10.35 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
b: 35.14 (SE +/- 0.14, N = 3) | d: 35.10 (SE +/- 0.39, N = 3) | a: 34.87 (SE +/- 0.43, N = 3) | c: 34.82 (SE +/- 0.11, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
b: 1817.99 (SE +/- 7.71, N = 3) | d: 1819.69 (SE +/- 21.20, N = 3) | a: 1832.79 (SE +/- 22.80, N = 3) | c: 1834.48 (SE +/- 5.61, N = 3)

WRF

Input: conus 2.5km

WRF 4.2.2 - Seconds, Fewer Is Better
d: 5617.20 | b: 5600.98 | c: 5583.11 | a: 5566.73
1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 95.70 (SE +/- 0.72, N = 10) | b: 96.38 (SE +/- 0.30, N = 3) | a: 96.43 (SE +/- 0.11, N = 3) | c: 96.55 (SE +/- 0.16, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 1865.02 (SE +/- 21.54, N = 3) | b: 1875.90 (SE +/- 26.46, N = 3) | c: 1879.84 (SE +/- 21.19, N = 3) | a: 1880.77 (SE +/- 21.42, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 34.25 (SE +/- 0.40, N = 3) | b: 34.08 (SE +/- 0.48, N = 3) | c: 33.99 (SE +/- 0.38, N = 3) | a: 33.98 (SE +/- 0.38, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 477.61 (SE +/- 0.44, N = 3) | a: 475.70 (SE +/- 0.61, N = 3) | c: 475.68 (SE +/- 0.52, N = 3) | b: 474.19 (SE +/- 0.47, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
b: 1824.38 (SE +/- 13.19, N = 3) | c: 1827.98 (SE +/- 17.94, N = 3) | a: 1829.96 (SE +/- 4.85, N = 3) | d: 1837.50 (SE +/- 14.66, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
b: 35.01 (SE +/- 0.25, N = 3) | c: 34.97 (SE +/- 0.35, N = 3) | a: 34.92 (SE +/- 0.08, N = 3) | d: 34.76 (SE +/- 0.29, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
c: 5.1802 (SE +/- 0.0500, N = 12) | b: 5.1659 (SE +/- 0.0445, N = 13) | d: 5.1614 (SE +/- 0.0471, N = 12) | a: 5.1485 (SE +/- 0.0389, N = 12)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
c: 193.09 (SE +/- 1.70, N = 12) | b: 193.60 (SE +/- 1.53, N = 13) | d: 193.82 (SE +/- 1.62, N = 12) | a: 194.25 (SE +/- 1.36, N = 12)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
c: 11163.66 (SE +/- 101.43, N = 7) | b: 11169.85 (SE +/- 89.80, N = 9) | d: 11221.14 (SE +/- 77.36, N = 12) | a: 11228.66 (SE +/- 71.77, N = 13)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
c: 5.7165 (SE +/- 0.0544, N = 7) | b: 5.7140 (SE +/- 0.0484, N = 9) | d: 5.6908 (SE +/- 0.0425, N = 12) | a: 5.6852 (SE +/- 0.0393, N = 13)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
c: 189.82 (SE +/- 1.69, N = 7) | b: 190.04 (SE +/- 1.73, N = 12) | d: 190.14 (SE +/- 1.52, N = 12) | a: 190.84 (SE +/- 1.29, N = 12)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 133.01 (SE +/- 0.11, N = 3) | c: 133.48 (SE +/- 0.16, N = 3) | a: 133.52 (SE +/- 0.15, N = 3) | b: 133.72 (SE +/- 0.13, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
c: 5.2651 (SE +/- 0.0492, N = 7) | b: 5.2617 (SE +/- 0.0526, N = 12) | d: 5.2589 (SE +/- 0.0457, N = 12) | a: 5.2382 (SE +/- 0.0382, N = 12)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 77.46 (SE +/- 0.87, N = 3) | a: 77.18 (SE +/- 0.70, N = 3) | c: 77.12 (SE +/- 0.64, N = 3) | b: 77.09 (SE +/- 0.63, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 825.08 (SE +/- 9.22, N = 3) | a: 828.04 (SE +/- 7.51, N = 3) | c: 828.45 (SE +/- 6.81, N = 3) | b: 828.65 (SE +/- 6.37, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
c: 408.97 (SE +/- 2.55, N = 3) | b: 408.63 (SE +/- 1.91, N = 3) | d: 408.16 (SE +/- 2.06, N = 3) | a: 407.28 (SE +/- 1.62, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
a: 34.96 (SE +/- 0.33, N = 7) | d: 35.00 (SE +/- 0.23, N = 12) | c: 35.01 (SE +/- 0.35, N = 6) | b: 35.11 (SE +/- 0.34, N = 6)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
a: 28.59 (SE +/- 0.28, N = 7) | d: 28.56 (SE +/- 0.20, N = 12) | c: 28.55 (SE +/- 0.29, N = 6) | b: 28.47 (SE +/- 0.29, N = 6)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 850.89 (SE +/- 10.96, N = 3) | b: 852.95 (SE +/- 10.26, N = 3) | a: 854.05 (SE +/- 8.70, N = 3) | c: 854.25 (SE +/- 9.10, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 75.08 (SE +/- 0.97, N = 3) | b: 74.94 (SE +/- 0.92, N = 3) | a: 74.82 (SE +/- 0.77, N = 3) | c: 74.79 (SE +/- 0.80, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
b: 337.68 (SE +/- 3.44, N = 6) | d: 338.15 (SE +/- 2.94, N = 12) | c: 338.55 (SE +/- 2.56, N = 12) | a: 338.96 (SE +/- 2.76, N = 9)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
b: 2.9602 (SE +/- 0.0315, N = 6) | d: 2.9573 (SE +/- 0.0282, N = 12) | c: 2.9530 (SE +/- 0.0242, N = 12) | a: 2.9492 (SE +/- 0.0255, N = 9)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 1226.80 (SE +/- 17.38, N = 3) | a: 1231.20 (SE +/- 14.83, N = 3) | c: 1231.27 (SE +/- 13.91, N = 3) | b: 1231.29 (SE +/- 14.60, N = 4)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
c: 2.9555 (SE +/- 0.0300, N = 6) | b: 2.9500 (SE +/- 0.0260, N = 9) | a: 2.9484 (SE +/- 0.0247, N = 10) | d: 2.9455 (SE +/- 0.0208, N = 12)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
c: 338.22 (SE +/- 3.30, N = 6) | b: 338.86 (SE +/- 2.82, N = 9) | a: 339.04 (SE +/- 2.66, N = 10) | d: 339.37 (SE +/- 2.24, N = 12)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 155.60 (SE +/- 1.10, N = 3) | b: 155.72 (SE +/- 0.66, N = 3) | c: 155.77 (SE +/- 1.19, N = 3) | a: 156.06 (SE +/- 0.66, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 52.07 (SE +/- 0.70, N = 3) | a: 51.93 (SE +/- 0.63, N = 3) | b: 51.93 (SE +/- 0.60, N = 4) | c: 51.92 (SE +/- 0.62, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 475.45 (SE +/- 0.75, N = 3) | c: 475.35 (SE +/- 0.55, N = 3) | a: 475.13 (SE +/- 0.32, N = 3) | b: 474.37 (SE +/- 0.39, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
a: 14.04 (SE +/- 0.16, N = 5) | d: 14.03 (SE +/- 0.14, N = 6) | c: 14.01 (SE +/- 0.14, N = 6) | b: 14.01 (SE +/- 0.13, N = 7)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
a: 4554.98 (SE +/- 49.40, N = 5) | d: 4556.02 (SE +/- 42.83, N = 6) | c: 4563.43 (SE +/- 43.60, N = 6) | b: 4563.51 (SE +/- 40.39, N = 7)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 133.60 (SE +/- 0.26, N = 3) | c: 133.60 (SE +/- 0.19, N = 3) | a: 133.60 (SE +/- 0.13, N = 3) | b: 133.82 (SE +/- 0.14, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
c: 4.3436 (SE +/- 0.0661, N = 15) | b: 4.2295 (SE +/- 0.0666, N = 15) | a: 4.2083 (SE +/- 0.0821, N = 15) | d: 4.1604 (SE +/- 0.0540, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
c: 230.76 (SE +/- 3.66, N = 15) | b: 237.05 (SE +/- 3.82, N = 15) | a: 238.65 (SE +/- 4.61, N = 15) | d: 240.23 (SE +/- 3.16, N = 3)


Phoronix Test Suite v10.8.4