a

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312143-NE-A8154652071
Result Identifier | Date Run | Test Duration
a | December 12 2023 | 6 Hours, 14 Minutes
b | December 12 2023 | 6 Hours, 8 Minutes
c | December 12 2023 | 5 Hours, 57 Minutes
d | December 13 2023 | 6 Hours, 19 Minutes
Average | | 6 Hours, 9 Minutes



HTML result view exported from: https://openbenchmarking.org/result/2312143-NE-A8154652071&export=pdf&gru&sro&rro.

All four runs (a, b, c, d) used the same system:

Processor: 2 x INTEL XEON PLATINUM 8592+ @ 3.90GHz (128 Cores / 256 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS)
Chipset: Intel Device 1bce
Memory: 1008GB
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x21000161
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Compiler Details (d): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

(Flattened side-by-side summary table removed; every DeepSparse, NWChem, and WRF result it contained is listed individually below.)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 133.01 (SE +/- 0.11, N = 3)
c: 133.48 (SE +/- 0.16, N = 3)
b: 133.72 (SE +/- 0.13, N = 3)
a: 133.52 (SE +/- 0.15, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 32.89 (SE +/- 0.26, N = 15)
c: 31.63 (SE +/- 0.32, N = 15)
b: 31.23 (SE +/- 0.25, N = 15)
a: 31.19 (SE +/- 0.38, N = 4)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 4556.02 (SE +/- 42.83, N = 6)
c: 4563.43 (SE +/- 43.60, N = 6)
b: 4563.51 (SE +/- 40.39, N = 7)
a: 4554.98 (SE +/- 49.40, N = 5)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 240.23 (SE +/- 3.16, N = 3)
c: 230.76 (SE +/- 3.66, N = 15)
b: 237.05 (SE +/- 3.82, N = 15)
a: 238.65 (SE +/- 4.61, N = 15)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 1837.50 (SE +/- 14.66, N = 3)
c: 1827.98 (SE +/- 17.94, N = 3)
b: 1824.38 (SE +/- 13.19, N = 3)
a: 1829.96 (SE +/- 4.85, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 338.15 (SE +/- 2.94, N = 12)
c: 338.55 (SE +/- 2.56, N = 12)
b: 337.68 (SE +/- 3.44, N = 6)
a: 338.96 (SE +/- 2.76, N = 9)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 11221.14 (SE +/- 77.36, N = 12)
c: 11163.66 (SE +/- 101.43, N = 7)
b: 11169.85 (SE +/- 89.80, N = 9)
a: 11228.66 (SE +/- 71.77, N = 13)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 1032.24 (SE +/- 9.64, N = 7)
c: 1043.28 (SE +/- 5.37, N = 3)
b: 1027.51 (SE +/- 9.47, N = 3)
a: 1030.32 (SE +/- 11.12, N = 4)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 825.08 (SE +/- 9.22, N = 3)
c: 828.45 (SE +/- 6.81, N = 3)
b: 828.65 (SE +/- 6.37, N = 3)
a: 828.04 (SE +/- 7.51, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 190.14 (SE +/- 1.52, N = 12)
c: 189.82 (SE +/- 1.69, N = 7)
b: 190.04 (SE +/- 1.73, N = 12)
a: 190.84 (SE +/- 1.29, N = 12)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 155.60 (SE +/- 1.10, N = 3)
c: 155.77 (SE +/- 1.19, N = 3)
b: 155.72 (SE +/- 0.66, N = 3)
a: 156.06 (SE +/- 0.66, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 29.52 (SE +/- 0.10, N = 3)
c: 30.29 (SE +/- 0.15, N = 3)
b: 30.39 (SE +/- 0.27, N = 3)
a: 29.94 (SE +/- 0.30, N = 6)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 1819.69 (SE +/- 21.20, N = 3)
c: 1834.48 (SE +/- 5.61, N = 3)
b: 1817.99 (SE +/- 7.71, N = 3)
a: 1832.79 (SE +/- 22.80, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 339.37 (SE +/- 2.24, N = 12)
c: 338.22 (SE +/- 3.30, N = 6)
b: 338.86 (SE +/- 2.82, N = 9)
a: 339.04 (SE +/- 2.66, N = 10)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 850.89 (SE +/- 10.96, N = 3)
c: 854.25 (SE +/- 9.10, N = 3)
b: 852.95 (SE +/- 10.26, N = 3)
a: 854.05 (SE +/- 8.70, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 193.82 (SE +/- 1.62, N = 12)
c: 193.09 (SE +/- 1.70, N = 12)
b: 193.60 (SE +/- 1.53, N = 13)
a: 194.25 (SE +/- 1.36, N = 12)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 1226.80 (SE +/- 17.38, N = 3)
c: 1231.27 (SE +/- 13.91, N = 3)
b: 1231.29 (SE +/- 14.60, N = 4)
a: 1231.20 (SE +/- 14.83, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 196.79 (SE +/- 1.69, N = 8)
c: 199.33 (SE +/- 1.76, N = 12)
b: 199.34 (SE +/- 2.05, N = 12)
a: 199.69 (SE +/- 1.45, N = 12)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 181.20 (SE +/- 1.48, N = 9)
c: 177.15 (SE +/- 1.83, N = 3)
b: 180.32 (SE +/- 2.09, N = 3)
a: 183.50 (SE +/- 1.57, N = 8)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 35.00 (SE +/- 0.23, N = 12)
c: 35.01 (SE +/- 0.35, N = 6)
b: 35.11 (SE +/- 0.34, N = 6)
a: 34.96 (SE +/- 0.33, N = 7)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 1865.02 (SE +/- 21.54, N = 3)
c: 1879.84 (SE +/- 21.19, N = 3)
b: 1875.90 (SE +/- 26.46, N = 3)
a: 1880.77 (SE +/- 21.42, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 95.70 (SE +/- 0.72, N = 10)
c: 96.55 (SE +/- 0.16, N = 3)
b: 96.38 (SE +/- 0.30, N = 3)
a: 96.43 (SE +/- 0.11, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 133.60 (SE +/- 0.26, N = 3)
c: 133.60 (SE +/- 0.19, N = 3)
b: 133.82 (SE +/- 0.14, N = 3)
a: 133.60 (SE +/- 0.13, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - items/sec, More Is Better
d: 33.05 (SE +/- 0.33, N = 15)
c: 31.72 (SE +/- 0.33, N = 15)
b: 31.53 (SE +/- 0.25, N = 15)
a: 31.70 (SE +/- 0.31, N = 15)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 477.61 (SE +/- 0.44, N = 3)
c: 475.68 (SE +/- 0.52, N = 3)
b: 474.19 (SE +/- 0.47, N = 3)
a: 475.70 (SE +/- 0.61, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 30.42 (SE +/- 0.25, N = 15)
c: 31.65 (SE +/- 0.32, N = 15)
b: 32.04 (SE +/- 0.27, N = 15)
a: 32.06 (SE +/- 0.40, N = 4)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 14.03 (SE +/- 0.14, N = 6)
c: 14.01 (SE +/- 0.14, N = 6)
b: 14.01 (SE +/- 0.13, N = 7)
a: 14.04 (SE +/- 0.16, N = 5)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 4.1604 (SE +/- 0.0540, N = 3)
c: 4.3436 (SE +/- 0.0661, N = 15)
b: 4.2295 (SE +/- 0.0666, N = 15)
a: 4.2083 (SE +/- 0.0821, N = 15)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 34.76 (SE +/- 0.29, N = 3)
c: 34.97 (SE +/- 0.35, N = 3)
b: 35.01 (SE +/- 0.25, N = 3)
a: 34.92 (SE +/- 0.08, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 2.9573 (SE +/- 0.0282, N = 12)
c: 2.9530 (SE +/- 0.0242, N = 12)
b: 2.9602 (SE +/- 0.0315, N = 6)
a: 2.9492 (SE +/- 0.0255, N = 9)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 5.6908 (SE +/- 0.0425, N = 12)
c: 5.7165 (SE +/- 0.0544, N = 7)
b: 5.7140 (SE +/- 0.0484, N = 9)
a: 5.6852 (SE +/- 0.0393, N = 13)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 0.9667 (SE +/- 0.0094, N = 7)
c: 0.9556 (SE +/- 0.0048, N = 3)
b: 0.9706 (SE +/- 0.0090, N = 3)
a: 0.9681 (SE +/- 0.0105, N = 4)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 77.46 (SE +/- 0.87, N = 3)
c: 77.12 (SE +/- 0.64, N = 3)
b: 77.09 (SE +/- 0.63, N = 3)
a: 77.18 (SE +/- 0.70, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 5.2589 (SE +/- 0.0457, N = 12)
c: 5.2651 (SE +/- 0.0492, N = 7)
b: 5.2617 (SE +/- 0.0526, N = 12)
a: 5.2382 (SE +/- 0.0382, N = 12)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 408.16 (SE +/- 2.06, N = 3)
c: 408.97 (SE +/- 2.55, N = 3)
b: 408.63 (SE +/- 1.91, N = 3)
a: 407.28 (SE +/- 1.62, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 33.86 (SE +/- 0.12, N = 3)
c: 33.00 (SE +/- 0.16, N = 3)
b: 32.90 (SE +/- 0.30, N = 3)
a: 33.40 (SE +/- 0.34, N = 6)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 35.10 (SE +/- 0.39, N = 3)
c: 34.82 (SE +/- 0.11, N = 3)
b: 35.14 (SE +/- 0.14, N = 3)
a: 34.87 (SE +/- 0.43, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 2.9455 (SE +/- 0.0208, N = 12)
c: 2.9555 (SE +/- 0.0300, N = 6)
b: 2.9500 (SE +/- 0.0260, N = 9)
a: 2.9484 (SE +/- 0.0247, N = 10)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 75.08 (SE +/- 0.97, N = 3)
c: 74.79 (SE +/- 0.80, N = 3)
b: 74.94 (SE +/- 0.92, N = 3)
a: 74.82 (SE +/- 0.77, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 5.1614 (SE +/- 0.0471, N = 12)
c: 5.1802 (SE +/- 0.0500, N = 12)
b: 5.1659 (SE +/- 0.0445, N = 13)
a: 5.1485 (SE +/- 0.0389, N = 12)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 52.07 (SE +/- 0.70, N = 3)
c: 51.92 (SE +/- 0.62, N = 3)
b: 51.93 (SE +/- 0.60, N = 4)
a: 51.93 (SE +/- 0.63, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 5.0808 (SE +/- 0.0457, N = 8)
c: 5.0179 (SE +/- 0.0484, N = 12)
b: 5.0196 (SE +/- 0.0573, N = 12)
a: 5.0073 (SE +/- 0.0391, N = 12)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 352.22 (SE +/- 2.84, N = 9)
c: 359.89 (SE +/- 3.47, N = 3)
b: 353.69 (SE +/- 4.19, N = 3)
a: 347.66 (SE +/- 3.00, N = 8)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 28.56 (SE +/- 0.20, N = 12)
c: 28.55 (SE +/- 0.29, N = 6)
b: 28.47 (SE +/- 0.29, N = 6)
a: 28.59 (SE +/- 0.28, N = 7)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 34.25 (SE +/- 0.40, N = 3)
c: 33.99 (SE +/- 0.38, N = 3)
b: 34.08 (SE +/- 0.48, N = 3)
a: 33.98 (SE +/- 0.38, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 10.45 (SE +/- 0.08, N = 10)
c: 10.35 (SE +/- 0.02, N = 3)
b: 10.37 (SE +/- 0.03, N = 3)
a: 10.36 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 475.45 (SE +/- 0.75, N = 3)
c: 475.35 (SE +/- 0.55, N = 3)
b: 474.37 (SE +/- 0.39, N = 3)
a: 475.13 (SE +/- 0.32, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
d: 30.29 (SE +/- 0.31, N = 15)
c: 31.56 (SE +/- 0.32, N = 15)
b: 31.73 (SE +/- 0.25, N = 15)
a: 31.58 (SE +/- 0.31, N = 15)

NWChem

Input: C240 Buckyball

NWChem 7.0.2 - Seconds, Fewer Is Better
d: 1748.0
c: 1757.3
b: 1730.7
a: 1744.0
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

WRF

Input: conus 2.5km

WRF 4.2.2 - Seconds, Fewer Is Better
d: 5617.20
c: 5583.11
b: 5600.98
a: 5566.73
1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
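The result viewer offers a "Show Overall Geometric Mean" option for collapsing runs like these into a single figure of merit. As a minimal sketch of that calculation, the snippet below normalizes the two lower-is-better wall-clock results above (NWChem C240 Buckyball and WRF conus 2.5km) to run "a" and takes the geometric mean per run; the dictionary layout and helper names are illustrative, not part of the Phoronix Test Suite.

```python
from math import prod

# Wall-clock times in seconds from the NWChem and WRF results above
# (lower is better); keys are the run identifiers a-d.
times = {
    "nwchem_c240": {"a": 1744.0, "b": 1730.7, "c": 1757.3, "d": 1748.0},
    "wrf_conus25km": {"a": 5566.73, "b": 5600.98, "c": 5583.11, "d": 5617.20},
}

def geomean(values):
    """Geometric mean of a sequence of positive numbers."""
    values = list(values)
    return prod(values) ** (1.0 / len(values))

# Normalize each test to run "a", then aggregate per run with the
# geometric mean (the aggregation the viewer's option refers to).
runs = ["a", "b", "c", "d"]
relative = {r: geomean(t[r] / t["a"] for t in times.values()) for r in runs}

for run, ratio in sorted(relative.items(), key=lambda kv: kv[1]):
    print(f"{run}: {ratio:.4f}x the run time of a")
```

On these two tests the spread between runs is well under one percent, which matches the flat per-test graphs above.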


Phoronix Test Suite v10.8.4