ddfg

2 x AMD EPYC 9684X 96-Core testing with an AMD Titanite_4G motherboard (RTI1007B BIOS) and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403164-NE-DDFG2505160
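
A minimal sketch of that workflow, assuming the Phoronix Test Suite is already installed and that result ID 2403164-NE-DDFG2505160 remains available on OpenBenchmarking.org; the second command is a general PTS helper rather than anything specific to this result file:

  # Fetch the test profiles referenced by this result, run them locally,
  # and offer to merge your numbers into a side-by-side comparison with runs a-d
  phoronix-test-suite benchmark 2403164-NE-DDFG2505160

  # List the locally saved results afterwards
  phoronix-test-suite list-saved-results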

Run Management

Result Identifier  Date Run  Test Duration
a                  March 16  36 Minutes
b                  March 16  1 Hour, 48 Minutes
c                  March 16  2 Hours, 38 Minutes
d                  March 16  36 Minutes
Average                      1 Hour, 25 Minutes

ddfg Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processor: 2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1007B BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS + 257GB Flash Drive
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10
Kernel: 6.5.0-25-generic (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 640x480

System Logs
- Transparent Huge Pages: madvise
- Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10113e
- Python 3.11.6
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
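
To check your own machine against the table above before running the full comparison, the Phoronix Test Suite can print the same detected hardware and software fields. A small sketch, assuming a working phoronix-test-suite installation:

  # Print the detected processor, motherboard, chipset, memory, disk, OS,
  # kernel, compiler and related details for the local system
  phoronix-test-suite system-info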

Result Overview (OpenBenchmarking.org / Phoronix Test Suite): relative performance of runs a, b, c and d across SVT-AV1, Primesieve and Neural Magic DeepSparse; the four runs land within roughly 100% to 102% of one another.

ddfg result table: per-run (a, b, c, d) values for all 54 SVT-AV1, Primesieve and Neural Magic DeepSparse results, broken out per test in the sections below.

SVT-AV1

SVT-AV1 2.0, Frames Per Second, More Is Better. Runs b and c report the mean of repeated samples; the standard error (SE) and sample count (N) recorded in the original graphs are shown next to those values.

Preset 4 - Bosphorus 4K:      a 8.557,   b 8.575 (SE +/- 0.017, N = 3),   c 8.484 (SE +/- 0.020, N = 3),   d 8.634
Preset 8 - Bosphorus 4K:      a 89.28,   b 91.29 (SE +/- 0.83, N = 3),    c 90.44 (SE +/- 0.91, N = 3),    d 86.59
Preset 12 - Bosphorus 4K:     a 149.07,  b 163.18 (SE +/- 1.72, N = 5),   c 160.56 (SE +/- 1.52, N = 15),  d 163.50
Preset 13 - Bosphorus 4K:     a 159.22,  b 161.65 (SE +/- 0.74, N = 3),   c 162.02 (SE +/- 1.62, N = 5),   d 162.38
Preset 4 - Bosphorus 1080p:   a 22.96,   b 22.90 (SE +/- 0.17, N = 3),    c 23.47 (SE +/- 0.24, N = 3),    d 22.71
Preset 8 - Bosphorus 1080p:   a 176.67,  b 182.53 (SE +/- 0.56, N = 3),   c 184.13 (SE +/- 2.01, N = 3),   d 183.29
Preset 12 - Bosphorus 1080p:  a 572.35,  b 566.99 (SE +/- 3.34, N = 3),   c 568.21 (SE +/- 5.33, N = 3),   d 577.44
Preset 13 - Bosphorus 1080p:  a 564.41,  b 606.38 (SE +/- 4.06, N = 3),   c 588.45 (SE +/- 6.58, N = 3),   d 598.04

Compiler options (all SVT-AV1 results): (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
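
The SVT-AV1 figures above come from the pts/svt-av1 test profile. A hedged sketch for running just this encoder test locally; the profile name and the interactive preset/input prompts are assumptions based on typical Phoronix Test Suite profiles rather than anything stated in this result file:

  # Install and run only the SVT-AV1 test profile; PTS will prompt for the
  # encoder preset and input video (e.g. Bosphorus 4K or 1080p) to benchmark
  phoronix-test-suite benchmark svt-av1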

Primesieve

Primesieve 12.1, Seconds, Fewer Is Better. SE and N for the multi-sample runs b and c are noted as above.

Length: 1e12  a 1.164,  b 1.164 (SE +/- 0.009, N = 3),  c 1.156 (SE +/- 0.008, N = 3),  d 1.142
Length: 1e13  a 11.79,  b 11.85 (SE +/- 0.02, N = 3),   c 11.87 (SE +/- 0.01, N = 3),   d 11.84

Compiler options (all Primesieve results): (CXX) g++ options: -O3

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7. Throughput is given in items/sec (More Is Better) and latency in ms/batch (Fewer Is Better). Values for runs b and c are means of three samples per test (N = 3), except the Llama2 Chat 7b Quantized - Asynchronous Multi-Stream measurements on run c, which were sampled twelve times and showed high run-to-run variance (latency between roughly 24,220 and 38,563 ms/batch across samples).

Model: NLP Document Classification, oBERT base uncased on IMDB
  Asynchronous Multi-Stream, items/sec:  a 132.95, b 132.98, c 132.88, d 133.01
  Asynchronous Multi-Stream, ms/batch:   a 713.54, b 717.49, c 715.88, d 714.46
  Synchronous Single-Stream, items/sec:  a 48.52, b 48.36, c 48.52, d 48.51
  Synchronous Single-Stream, ms/batch:   a 20.60, b 20.67, c 20.60, d 20.61

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8
  Asynchronous Multi-Stream, items/sec:  a 5566.39, b 5538.98, c 5534.26, d 5554.03
  Asynchronous Multi-Stream, ms/batch:   a 17.22, b 17.30, c 17.32, d 17.26
  Synchronous Single-Stream, items/sec:  a 191.51, b 191.77, c 192.23, d 194.47
  Synchronous Single-Stream, ms/batch:   a 5.2180, b 5.2109, c 5.1985, d 5.1383

Model: ResNet-50, Baseline
  Asynchronous Multi-Stream, items/sec:  a 1762.53, b 1758.81, c 1759.96, d 1763.42
  Asynchronous Multi-Stream, ms/batch:   a 54.37, b 54.49, c 54.46, d 54.36
  Synchronous Single-Stream, items/sec:  a 206.82, b 209.77, c 208.74, d 208.98
  Synchronous Single-Stream, ms/batch:   a 4.8321, b 4.7642, c 4.7875, d 4.7823

Model: ResNet-50, Sparse INT8
  Asynchronous Multi-Stream, items/sec:  a 17421.47, b 17350.46, c 17333.35, d 17400.58
  Asynchronous Multi-Stream, ms/batch:   a 5.4946, b 5.5170, c 5.5224, d 5.5020
  Synchronous Single-Stream, items/sec:  a 807.17, b 804.40, c 806.23, d 806.32
  Synchronous Single-Stream, ms/batch:   a 1.2364, b 1.2409, c 1.2381, d 1.2375

Model: Llama2 Chat 7b Quantized
  Asynchronous Multi-Stream, items/sec:  a 2.9358, b 2.8607, c 2.5008, d 2.9271
  Asynchronous Multi-Stream, ms/batch:   a 24242.38, b 24888.16, c 28968.95, d 24331.06
  Synchronous Single-Stream, items/sec:  a 20.83, b 20.81, c 20.73, d 20.46
  Synchronous Single-Stream, ms/batch:   a 47.97, b 48.03, c 48.22, d 48.84

Model: CV Classification, ResNet-50 ImageNet
  Asynchronous Multi-Stream, items/sec:  a 1763.19, b 1756.17, c 1756.56, d 1758.31
  Asynchronous Multi-Stream, ms/batch:   a 54.35, b 54.57, c 54.56, d 54.51
  Synchronous Single-Stream, items/sec:  a 207.67, b 208.72, c 208.63, d 209.19
  Synchronous Single-Stream, ms/batch:   a 4.8123, b 4.7882, c 4.7905, d 4.7773

Model: CV Detection, YOLOv5s COCO, Sparse INT8
  Asynchronous Multi-Stream, items/sec:  a 798.62, b 796.60, c 795.03, d 797.71
  Asynchronous Multi-Stream, ms/batch:   a 119.91, b 120.18, c 120.40, d 120.00
  Synchronous Single-Stream, items/sec:  a 212.18, b 212.49, c 212.64, d 213.56
  Synchronous Single-Stream, ms/batch:   a 4.7104, b 4.7035, c 4.7002, d 4.6801

Model: NLP Text Classification, DistilBERT mnli
  Asynchronous Multi-Stream, items/sec:  a 1144.50, b 1142.99, c 1141.24, d 1144.01
  Asynchronous Multi-Stream, ms/batch:   a 83.66, b 83.85, c 83.93, d 83.77
  Synchronous Single-Stream, items/sec:  a 225.47, b 225.45, c 225.44, d 225.56
  Synchronous Single-Stream, ms/batch:   a 4.4325, b 4.4329, c 4.4329, d 4.4307

Model: CV Segmentation, 90% Pruned YOLACT Pruned
  Asynchronous Multi-Stream, items/sec:  a 249.72, b 249.40, c 249.09, d 249.78
  Asynchronous Multi-Stream, ms/batch:   a 381.04, b 382.12, c 382.17, d 381.12
  Synchronous Single-Stream, items/sec:  a 64.89, b 64.86, c 64.92, d 65.00
  Synchronous Single-Stream, ms/batch:   a 15.39, b 15.40, c 15.38, d 15.37

Model: BERT-Large, NLP Question Answering, Sparse INT8
  Asynchronous Multi-Stream, items/sec:  a 2598.21, b 2595.18, c 2595.48, d 2592.85
  Asynchronous Multi-Stream, ms/batch:   a 36.89, b 36.93, c 36.92, d 36.96
  Synchronous Single-Stream, items/sec:  a 68.58, b 68.40, c 68.56, d 68.77
  Synchronous Single-Stream, ms/batch:   a 14.57, b 14.61, c 14.58, d 14.53

Model: NLP Token Classification, BERT base uncased conll2003
  Asynchronous Multi-Stream, items/sec:  a 132.95, b 132.80, c 133.81, d 132.91
  Asynchronous Multi-Stream, ms/batch:   a 715.46, b 717.26, c 709.26, d 715.43
  Synchronous Single-Stream, items/sec:  a 48.53, b 48.42, c 48.38, d 48.38
  Synchronous Single-Stream, ms/batch:   a 20.60, b 20.64, c 20.66, d 20.66

54 Results Shown

SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
Primesieve:
  1e12
  1e13
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  Llama2 Chat 7b Quantized - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch