net

Intel Xeon Platinum 8592+ testing with a Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS) and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312187-NE-NET50813920
Result runs:

  Identifier   Date               Test Duration
  a            December 18 2023   3 Hours, 20 Minutes
  b            December 18 2023   3 Hours, 20 Minutes
  c            December 18 2023   3 Hours, 16 Minutes



net - OpenBenchmarking.org - Phoronix Test Suite

  Processor:          Intel Xeon Platinum 8592+ @ 3.90GHz (64 Cores / 128 Threads)
  Motherboard:        Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS)
  Chipset:            Intel Device 1bce
  Memory:             512GB
  Disk:               3201GB Micron_7450_MTFDKCC3T2TFS + 0GB Virtual HDisk0 + 0GB Virtual HDisk2 + 0GB Virtual HDisk1 + 0GB Virtual HDisk3
  Graphics:           ASPEED
  OS:                 Ubuntu 23.10
  Kernel:             6.6.0-rc5-phx-patched (x86_64)
  Desktop:            GNOME Shell 45.0
  Display Server:     X Server 1.21.1.7
  Compiler:           GCC 13.2.0
  File-System:        ext4
  Screen Resolution:  1920x1200

System notes:
  - Transparent Huge Pages: madvise
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_pstate performance (EPP: performance)
  - CPU Microcode: 0x21000161
  - a: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu1)
  - Python 3.11.6
  - Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (runs a/b/c, normalized; all three runs land within roughly 2% of each other): LeelaChessZero, SVT-AV1, Neural Magic DeepSparse, Xmrig.
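The overview bars summarize each run across many tests. As a minimal sketch of how such a per-run summary can be computed, the snippet below takes the geometric mean of per-test ratios; the ratio values here are illustrative placeholders, not figures from this result file.

```python
import math

# Hypothetical per-test ratios of one run relative to a baseline run.
# These are made-up illustrative values, not data from this result file.
ratios = [1.012, 0.998, 1.005, 1.010]

# Overview-style summaries typically use a geometric mean, which treats a
# 2x gain and a 0.5x loss symmetrically, unlike an arithmetic mean.
geo_mean = math.prod(ratios) ** (1 / len(ratios))
print(f"{geo_mean:.4f}")
```

A geometric mean just above 1.0 indicates the compared run is, on balance, marginally faster than the baseline.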

Benchmarks in this comparison (per-run figures appear in the detailed results below):
  - LeelaChessZero: BLAS and Eigen backends
  - Xmrig: KawPow, Monero, Wownero, GhostRider, CryptoNight-Heavy, and CryptoNight-Femto UPX2, all at a 1M hash count
  - SVT-AV1: Presets 4, 8, 12, and 13 at Bosphorus 4K and Bosphorus 1080p
  - Neural Magic DeepSparse: NLP Document Classification (oBERT base uncased on IMDB), NLP Text Classification (BERT base uncased SST2 Sparse INT8; DistilBERT mnli), NLP Token Classification (BERT base uncased conll2003), BERT-Large NLP Question Answering (baseline and Sparse INT8), ResNet-50 (Baseline and Sparse INT8), CV Classification (ResNet-50 ImageNet), CV Detection (YOLOv5s COCO, baseline and Sparse INT8), and CV Segmentation (90% Pruned YOLACT), each in Asynchronous Multi-Stream and Synchronous Single-Stream scenarios

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.30 results (Nodes Per Second; more is better):

Backend: BLAS
  a: 67   (SE +/- 0.67, N = 3; Min: 66 / Avg: 67.33 / Max: 68)
  b: 67   (SE +/- 0.67, N = 3; Min: 66 / Avg: 67.33 / Max: 68)
  c: 68   (SE +/- 0.58, N = 3; Min: 67 / Avg: 68.00 / Max: 69)

Backend: Eigen
  a: 796  (SE +/- 21.82, N = 9; Min: 688 / Avg: 795.56 / Max: 890)
  b: 805  (SE +/- 11.08, N = 9; Min: 747 / Avg: 805.00 / Max: 846)
  c: 817  (SE +/- 32.94, N = 6; Min: 683 / Avg: 817.33 / Max: 894)

Compiler notes: (CXX) g++ options: -flto -pthread
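Each figure above is the average of several runs, reported with a standard error. As a minimal sketch of how the SE relates to the raw per-run samples, using made-up sample values rather than this file's raw data:

```python
import statistics

# Hypothetical raw nodes-per-second samples from three runs of one test.
samples = [66, 66, 70]
n = len(samples)

avg = statistics.mean(samples)
# Standard error of the mean: sample standard deviation divided by sqrt(N).
se = statistics.stdev(samples) / n ** 0.5

print(min(samples), round(avg, 2), max(samples), round(se, 2))
```

This is the same Min / Avg / Max plus SE shape that the result blocks above report for each run.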

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
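With three runs on identical hardware, the interesting number is the spread between the best and worst run. A quick way to quantify it, using the KawPow hashrates reported in this file:

```python
# KawPow 1M hashrates (H/s) for runs a, b, c, as reported in this file.
results = {"a": 39936.2, "b": 40344.7, "c": 40242.5}

best = max(results, key=results.get)
worst = min(results, key=results.get)

# Relative spread of the fastest run over the slowest, in percent.
spread_pct = (results[best] / results[worst] - 1) * 100
print(best, worst, f"{spread_pct:.2f}%")
```

A spread of about 1% is within typical run-to-run noise for CPU mining workloads.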

Xmrig 6.21 results (H/s; more is better; Hash Count: 1M):

Variant: KawPow
  a: 39936.2  (SE +/- 185.86, N = 3; Min: 39577.3 / Avg: 39936.23 / Max: 40199.4)
  b: 40344.7  (SE +/- 97.76, N = 3; Min: 40151.0 / Avg: 40344.73 / Max: 40464.5)
  c: 40242.5  (SE +/- 102.95, N = 3; Min: 40131.6 / Avg: 40242.50 / Max: 40448.2)

Variant: Monero
  a: 40329.6  (SE +/- 106.27, N = 3; Min: 40162.3 / Avg: 40329.63 / Max: 40526.8)
  b: 40153.2  (SE +/- 234.97, N = 3; Min: 39684.1 / Avg: 40153.20 / Max: 40412.2)
  c: 40072.2  (SE +/- 234.22, N = 3; Min: 39605.5 / Avg: 40072.20 / Max: 40340.5)

Variant: Wownero
  a: 43088.3  (SE +/- 76.66, N = 3; Min: 42944.3 / Avg: 43088.27 / Max: 43205.9)
  b: 42998.7  (SE +/- 81.91, N = 3; Min: 42835.7 / Avg: 42998.73 / Max: 43094.2)
  c: 43080.4  (SE +/- 104.95, N = 3; Min: 42889.0 / Avg: 43080.43 / Max: 43250.7)

Variant: GhostRider
  a: 6862.9   (SE +/- 5.37, N = 3; Min: 6852.3 / Avg: 6862.93 / Max: 6869.5)
  b: 6857.8   (SE +/- 29.78, N = 3; Min: 6813.9 / Avg: 6857.77 / Max: 6914.6)
  c: 6820.8   (SE +/- 18.86, N = 3; Min: 6783.2 / Avg: 6820.77 / Max: 6842.4)

Variant: CryptoNight-Heavy
  a: 40394.9  (SE +/- 42.10, N = 3; Min: 40325.8 / Avg: 40394.87 / Max: 40471.1)
  b: 40282.2  (SE +/- 64.66, N = 3; Min: 40159.0 / Avg: 40282.17 / Max: 40377.9)
  c: 40286.6  (SE +/- 169.87, N = 3; Min: 40008.0 / Avg: 40286.63 / Max: 40594.3)

Variant: CryptoNight-Femto UPX2
  a: 40425.9  (SE +/- 55.21, N = 3; Min: 40350.2 / Avg: 40425.93 / Max: 40533.4)
  b: 40080.4  (SE +/- 72.11, N = 3; Min: 39966.4 / Avg: 40080.40 / Max: 40213.9)
  c: 40331.1  (SE +/- 88.91, N = 3; Min: 40236.6 / Avg: 40331.10 / Max: 40508.8)

Compiler notes: (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. This test runs SVT-AV1, a CPU-based multi-threaded encoder for the AV1 video format, against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
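SVT-AV1's presets trade compression efficiency for speed. Using run a's Bosphorus 4K frame rates reported in this file, the speedup from the slowest to the fastest preset tested can be computed directly:

```python
# Run "a" Bosphorus 4K frame rates from this result file (frames per second).
fps = {"preset4": 7.156, "preset8": 75.14, "preset12": 228.87, "preset13": 227.29}

# Ratio of the fastest preset tested to the slowest.
speedup = fps["preset13"] / fps["preset4"]
print(f"{speedup:.1f}x")
```

The roughly 32x gap between Preset 4 and Preset 13 is why preset choice dominates encode-time planning far more than hardware differences of a percent or two.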

SVT-AV1 1.8 results (Frames Per Second; more is better):

Encoder Mode: Preset 4 - Input: Bosphorus 4K
  a: 7.156   (SE +/- 0.022, N = 3; Min: 7.12 / Avg: 7.16 / Max: 7.19)
  b: 7.168   (SE +/- 0.025, N = 3; Min: 7.12 / Avg: 7.17 / Max: 7.21)
  c: 7.151   (SE +/- 0.071, N = 3; Min: 7.02 / Avg: 7.15 / Max: 7.26)

Encoder Mode: Preset 8 - Input: Bosphorus 4K
  a: 75.14   (SE +/- 0.38, N = 3; Min: 74.40 / Avg: 75.14 / Max: 75.64)
  b: 75.74   (SE +/- 0.27, N = 3; Min: 75.40 / Avg: 75.74 / Max: 76.28)
  c: 74.93   (SE +/- 0.15, N = 3; Min: 74.63 / Avg: 74.93 / Max: 75.10)

Encoder Mode: Preset 12 - Input: Bosphorus 4K
  a: 228.87  (SE +/- 0.42, N = 3; Min: 228.35 / Avg: 228.87 / Max: 229.71)
  b: 229.51  (SE +/- 1.07, N = 3; Min: 227.46 / Avg: 229.51 / Max: 231.04)
  c: 228.52  (SE +/- 1.46, N = 3; Min: 225.79 / Avg: 228.52 / Max: 230.78)

Encoder Mode: Preset 13 - Input: Bosphorus 4K
  a: 227.29  (SE +/- 2.23, N = 3; Min: 225.05 / Avg: 227.29 / Max: 231.74)
  b: 229.56  (SE +/- 0.75, N = 3; Min: 228.43 / Avg: 229.56 / Max: 230.99)
  c: 229.08  (SE +/- 0.19, N = 3; Min: 228.74 / Avg: 229.07 / Max: 229.41)

Encoder Mode: Preset 4 - Input: Bosphorus 1080p
  a: 20.77   (SE +/- 0.08, N = 3; Min: 20.62 / Avg: 20.77 / Max: 20.88)
  b: 20.59   (SE +/- 0.01, N = 3; Min: 20.58 / Avg: 20.59 / Max: 20.61)
  c: 20.49   (SE +/- 0.25, N = 3; Min: 20.03 / Avg: 20.49 / Max: 20.86)

Encoder Mode: Preset 8 - Input: Bosphorus 1080p
  a: 139.81  (SE +/- 0.16, N = 3; Min: 139.64 / Avg: 139.81 / Max: 140.13)
  b: 141.98  (SE +/- 1.29, N = 7; Min: 138.22 / Avg: 141.98 / Max: 148.85)
  c: 141.87  (SE +/- 1.86, N = 3; Min: 138.17 / Avg: 141.87 / Max: 143.95)

Encoder Mode: Preset 12 - Input: Bosphorus 1080p
  a: 545.91  (SE +/- 4.95, N = 3; Min: 537.41 / Avg: 545.91 / Max: 554.55)
  b: 543.73  (SE +/- 3.52, N = 3; Min: 539.20 / Avg: 543.73 / Max: 550.66)
  c: 546.88  (SE +/- 4.85, N = 3; Min: 537.96 / Avg: 546.88 / Max: 554.64)

Encoder Mode: Preset 13 - Input: Bosphorus 1080p
  a: 632.46  (SE +/- 3.46, N = 3; Min: 627.09 / Avg: 632.46 / Max: 638.92)
  b: 636.91  (SE +/- 4.83, N = 3; Min: 628.71 / Avg: 636.91 / Max: 645.42)
  c: 637.34  (SE +/- 4.46, N = 3; Min: 632.81 / Avg: 637.34 / Max: 646.25)

Compiler notes: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
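In the synchronous single-stream scenario, the items/sec and ms/batch columns below are two views of the same measurement: with one item in flight at a time (which the near-reciprocal relationship in the data suggests), per-item latency in milliseconds is simply 1000 divided by throughput. Checking with run a's oBERT IMDB figures from this file:

```python
# Run "a", NLP Document Classification (oBERT base uncased on IMDB),
# synchronous single-stream figures as reported in this file.
items_per_sec = 34.47   # throughput
reported_ms = 29.00     # latency, ms/batch

# Assuming one item per batch, latency is the reciprocal of throughput.
derived_ms = 1000 / items_per_sec
print(f"{derived_ms:.2f}")
```

The derived value matches the reported ms/batch to within rounding, which is why throughput and latency rankings mirror each other in the single-stream results.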

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamabc1632486480SE +/- 0.16, N = 3SE +/- 0.17, N = 3SE +/- 0.20, N = 371.9971.6871.97
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamabc1428425670Min: 71.69 / Avg: 71.99 / Max: 72.21Min: 71.37 / Avg: 71.68 / Max: 71.94Min: 71.57 / Avg: 71.97 / Max: 72.24

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamabc100200300400500SE +/- 0.57, N = 3SE +/- 0.20, N = 3SE +/- 0.77, N = 3442.31443.40442.75
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamabc80160240320400Min: 441.42 / Avg: 442.31 / Max: 443.38Min: 443.13 / Avg: 443.4 / Max: 443.78Min: 441.67 / Avg: 442.75 / Max: 444.25

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamabc816243240SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 334.4734.4834.24
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamabc714212835Min: 34.44 / Avg: 34.47 / Max: 34.52Min: 34.38 / Avg: 34.48 / Max: 34.55Min: 34.1 / Avg: 34.24 / Max: 34.31

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamabc714212835SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 329.0029.0029.20
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamabc612182430Min: 28.97 / Avg: 29 / Max: 29.03Min: 28.94 / Avg: 29 / Max: 29.08Min: 29.14 / Avg: 29.2 / Max: 29.32

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc5001000150020002500SE +/- 2.42, N = 3SE +/- 2.92, N = 3SE +/- 0.54, N = 32323.982324.742322.58
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc400800120016002000Min: 2319.16 / Avg: 2323.98 / Max: 2326.85Min: 2321.73 / Avg: 2324.74 / Max: 2330.57Min: 2321.51 / Avg: 2322.58 / Max: 2323.29

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc48121620SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 313.7513.7513.76
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc48121620Min: 13.73 / Avg: 13.75 / Max: 13.78Min: 13.71 / Avg: 13.75 / Max: 13.76Min: 13.76 / Avg: 13.76 / Max: 13.77

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streamabc50100150200250SE +/- 1.74, N = 3SE +/- 0.83, N = 3SE +/- 0.49, N = 3210.96210.81209.61
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streamabc4080120160200Min: 207.53 / Avg: 210.96 / Max: 213.17Min: 209.15 / Avg: 210.81 / Max: 211.73Min: 208.69 / Avg: 209.61 / Max: 210.35

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streamabc1.07222.14443.21664.28885.361SE +/- 0.0394, N = 3SE +/- 0.0188, N = 3SE +/- 0.0109, N = 34.73544.73864.7654
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streamabc246810Min: 4.69 / Avg: 4.74 / Max: 4.81Min: 4.72 / Avg: 4.74 / Max: 4.78Min: 4.75 / Avg: 4.77 / Max: 4.79

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamabc2004006008001000SE +/- 3.60, N = 3SE +/- 8.90, N = 3SE +/- 0.83, N = 3934.65928.45939.54
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamabc160320480640800Min: 928.23 / Avg: 934.65 / Max: 940.69Min: 911.86 / Avg: 928.45 / Max: 942.35Min: 938.19 / Avg: 939.54 / Max: 941.06

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamabc816243240SE +/- 0.13, N = 3SE +/- 0.33, N = 3SE +/- 0.03, N = 334.2134.4534.04
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamabc714212835Min: 33.99 / Avg: 34.21 / Max: 34.45Min: 33.94 / Avg: 34.45 / Max: 35.07Min: 33.98 / Avg: 34.04 / Max: 34.08

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Synchronous Single-Streamabc60120180240300SE +/- 0.98, N = 3SE +/- 0.65, N = 3SE +/- 1.23, N = 3286.89290.28287.35
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Synchronous Single-Streamabc50100150200250Min: 285.15 / Avg: 286.89 / Max: 288.53Min: 289.25 / Avg: 290.28 / Max: 291.48Min: 285.41 / Avg: 287.35 / Max: 289.63

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Synchronous Single-Streamabc0.78361.56722.35083.13443.918SE +/- 0.0119, N = 3SE +/- 0.0077, N = 3SE +/- 0.0147, N = 33.48253.44173.4767
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Synchronous Single-Streamabc246810Min: 3.46 / Avg: 3.48 / Max: 3.5Min: 3.43 / Avg: 3.44 / Max: 3.45Min: 3.45 / Avg: 3.48 / Max: 3.5

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc12002400360048006000SE +/- 15.58, N = 3SE +/- 10.52, N = 3SE +/- 16.89, N = 35706.705717.525704.95
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc10002000300040005000Min: 5686.31 / Avg: 5706.7 / Max: 5737.3Min: 5698.34 / Avg: 5717.52 / Max: 5734.59Min: 5672.42 / Avg: 5704.95 / Max: 5729.12

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc1.25922.51843.77765.03686.296SE +/- 0.0153, N = 3SE +/- 0.0102, N = 3SE +/- 0.0160, N = 35.59535.58445.5965
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamabc246810Min: 5.57 / Avg: 5.6 / Max: 5.62Min: 5.57 / Avg: 5.58 / Max: 5.6Min: 5.57 / Avg: 5.6 / Max: 5.63

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Streamabc150300450600750SE +/- 2.07, N = 3SE +/- 2.96, N = 3SE +/- 0.26, N = 3701.79705.83701.71
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Streamabc120240360480600Min: 699.56 / Avg: 701.79 / Max: 705.94Min: 701.85 / Avg: 705.83 / Max: 711.62Min: 701.26 / Avg: 701.71 / Max: 702.14

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Streamabc0.31960.63920.95881.27841.598SE +/- 0.0038, N = 3SE +/- 0.0057, N = 3SE +/- 0.0007, N = 31.42001.41231.4205
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Streamabc246810Min: 1.41 / Avg: 1.42 / Max: 1.42Min: 1.4 / Avg: 1.41 / Max: 1.42Min: 1.42 / Avg: 1.42 / Max: 1.42

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamabc90180270360450SE +/- 0.33, N = 3SE +/- 0.34, N = 3SE +/- 0.19, N = 3436.52436.33435.49
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamabc80160240320400Min: 436.16 / Avg: 436.52 / Max: 437.18Min: 435.73 / Avg: 436.33 / Max: 436.9Min: 435.13 / Avg: 435.49 / Max: 435.78

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamabc1632486480SE +/- 0.05, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 373.2773.2973.43
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamabc1428425670Min: 73.17 / Avg: 73.27 / Max: 73.32Min: 73.21 / Avg: 73.29 / Max: 73.38Min: 73.38 / Avg: 73.43 / Max: 73.49

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamabc4080120160200SE +/- 0.67, N = 3SE +/- 0.12, N = 3SE +/- 0.23, N = 3164.94164.90163.97
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamabc306090120150Min: 163.61 / Avg: 164.94 / Max: 165.77Min: 164.68 / Avg: 164.9 / Max: 165.08Min: 163.51 / Avg: 163.97 / Max: 164.22

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamabc246810SE +/- 0.0247, N = 3SE +/- 0.0043, N = 3SE +/- 0.0081, N = 36.05776.05896.0922
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamabc246810Min: 6.03 / Avg: 6.06 / Max: 6.11Min: 6.05 / Avg: 6.06 / Max: 6.07Min: 6.08 / Avg: 6.09 / Max: 6.11

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamabc20406080100SE +/- 0.14, N = 3SE +/- 0.09, N = 3SE +/- 0.05, N = 386.3086.2986.17
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamabc1632486480Min: 86.05 / Avg: 86.3 / Max: 86.52Min: 86.18 / Avg: 86.29 / Max: 86.46Min: 86.1 / Avg: 86.17 / Max: 86.27

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamabc80160240320400SE +/- 0.59, N = 3SE +/- 0.36, N = 3SE +/- 0.22, N = 3370.73370.78371.30
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamabc70140210280350Min: 369.78 / Avg: 370.73 / Max: 371.81Min: 370.07 / Avg: 370.78 / Max: 371.27Min: 370.88 / Avg: 371.3 / Max: 371.62

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

  items/sec (more is better):
    a: 31.22  (SE +/- 0.02, N = 3; min 31.19, max 31.25)
    b: 31.15  (SE +/- 0.03, N = 3; min 31.09, max 31.20)
    c: 31.19  (SE +/- 0.08, N = 3; min 31.04, max 31.28)

  ms/batch (fewer is better):
    a: 32.02  (SE +/- 0.02, N = 3; min 31.99, max 32.05)
    b: 32.10  (SE +/- 0.03, N = 3; min 32.05, max 32.16)
    c: 32.06  (SE +/- 0.08, N = 3; min 31.96, max 32.21)

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

  items/sec (more is better):
    a: 933.01  (SE +/- 3.85, N = 3; min 925.69, max 938.73)
    b: 934.52  (SE +/- 4.66, N = 3; min 928.56, max 943.70)
    c: 940.22  (SE +/- 0.19, N = 3; min 939.84, max 940.42)

  ms/batch (fewer is better):
    a: 34.28  (SE +/- 0.14, N = 3; min 34.07, max 34.55)
    b: 34.22  (SE +/- 0.17, N = 3; min 33.89, max 34.43)
    c: 34.01  (SE +/- 0.01, N = 3; min 34.00, max 34.03)

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

  items/sec (more is better):
    a: 290.01  (SE +/- 1.63, N = 3; min 287.75, max 293.17)
    b: 288.68  (SE +/- 0.99, N = 3; min 286.70, max 289.69)
    c: 289.81  (SE +/- 0.98, N = 3; min 288.54, max 291.75)

  ms/batch (fewer is better):
    a: 3.4452  (SE +/- 0.0193, N = 3; min 3.41, max 3.47)
    b: 3.4609  (SE +/- 0.0119, N = 3; min 3.45, max 3.48)
    c: 3.4473  (SE +/- 0.0117, N = 3; min 3.42, max 3.46)

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

  items/sec (more is better):
    a: 452.20  (SE +/- 0.30, N = 3; min 451.61, max 452.59)
    b: 452.16  (SE +/- 0.02, N = 3; min 452.13, max 452.19)
    c: 450.49  (SE +/- 0.54, N = 3; min 449.43, max 451.16)

  ms/batch (fewer is better):
    a: 70.74  (SE +/- 0.05, N = 3; min 70.68, max 70.83)
    b: 70.74  (SE +/- 0.00, N = 3; min 70.74, max 70.74)
    c: 71.00  (SE +/- 0.09, N = 3; min 70.90, max 71.17)

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

  items/sec (more is better):
    a: 166.66  (SE +/- 0.33, N = 3; min 166.06, max 167.21)
    b: 166.51  (SE +/- 0.25, N = 3; min 166.03, max 166.85)
    c: 166.31  (SE +/- 0.19, N = 3; min 165.95, max 166.61)

  ms/batch (fewer is better):
    a: 5.9970  (SE +/- 0.0119, N = 3; min 5.98, max 6.02)
    b: 6.0023  (SE +/- 0.0087, N = 3; min 5.99, max 6.02)
    c: 6.0096  (SE +/- 0.0069, N = 3; min 6.00, max 6.02)

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

  items/sec (more is better):
    a: 641.10  (SE +/- 0.45, N = 3; min 640.20, max 641.60)
    b: 637.61  (SE +/- 1.94, N = 3; min 633.74, max 639.86)
    c: 637.42  (SE +/- 2.11, N = 3; min 634.38, max 641.47)

  ms/batch (fewer is better):
    a: 49.90  (SE +/- 0.03, N = 3; min 49.86, max 49.97)
    b: 50.17  (SE +/- 0.15, N = 3; min 49.99, max 50.48)
    c: 50.19  (SE +/- 0.17, N = 3; min 49.87, max 50.43)

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

  items/sec (more is better):
    a: 186.03  (SE +/- 1.59, N = 3; min 183.76, max 189.09)
    b: 188.54  (SE +/- 0.69, N = 3; min 187.70, max 189.90)
    c: 188.23  (SE +/- 0.41, N = 3; min 187.41, max 188.69)

  ms/batch (fewer is better):
    a: 5.3729  (SE +/- 0.0459, N = 3; min 5.28, max 5.44)
    b: 5.3005  (SE +/- 0.0193, N = 3; min 5.26, max 5.32)
    c: 5.3092  (SE +/- 0.0116, N = 3; min 5.30, max 5.33)

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

  items/sec (more is better):
    a: 96.13  (SE +/- 0.29, N = 3; min 95.74, max 96.70)
    b: 95.63  (SE +/- 0.27, N = 3; min 95.09, max 95.90)
    c: 96.85  (SE +/- 0.08, N = 3; min 96.74, max 97.01)

  ms/batch (fewer is better):
    a: 332.81  (SE +/- 1.00, N = 3; min 330.86, max 334.17)
    b: 334.32  (SE +/- 0.71, N = 3; min 333.60, max 335.73)
    c: 330.34  (SE +/- 0.28, N = 3; min 329.79, max 330.71)

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

  items/sec (more is better):
    a: 35.10  (SE +/- 0.16, N = 3; min 34.79, max 35.29)
    b: 34.62  (SE +/- 0.35, N = 3; min 34.04, max 35.25)
    c: 33.72  (SE +/- 0.38, N = 15; min 31.26, max 35.09)

  ms/batch (fewer is better):
    a: 28.46  (SE +/- 0.12, N = 3; min 28.31, max 28.71)
    b: 28.87  (SE +/- 0.29, N = 3; min 28.35, max 29.35)
    c: 29.68  (SE +/- 0.34, N = 15; min 28.48, max 31.97)

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

  items/sec (more is better):
    a: 938.07  (SE +/- 0.34, N = 3; min 937.51, max 938.68)
    b: 937.58  (SE +/- 0.09, N = 3; min 937.46, max 937.76)
    c: 937.86  (SE +/- 0.07, N = 3; min 937.76, max 938.01)

  ms/batch (fewer is better):
    a: 34.07  (SE +/- 0.01, N = 3; min 34.05, max 34.09)
    b: 34.09  (SE +/- 0.00, N = 3; min 34.09, max 34.10)
    c: 34.08  (SE +/- 0.00, N = 3; min 34.07, max 34.09)

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

  items/sec (more is better):
    a: 67.87  (SE +/- 0.27, N = 3; min 67.51, max 68.39)
    b: 68.18  (SE +/- 0.12, N = 3; min 67.95, max 68.37)
    c: 67.58  (SE +/- 0.11, N = 3; min 67.47, max 67.80)

  ms/batch (fewer is better):
    a: 14.73  (SE +/- 0.06, N = 3; min 14.62, max 14.81)
    b: 14.66  (SE +/- 0.03, N = 3; min 14.62, max 14.71)
    c: 14.79  (SE +/- 0.02, N = 3; min 14.74, max 14.81)

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

  items/sec (more is better):
    a: 72.05  (SE +/- 0.17, N = 3; min 71.86, max 72.39)
    b: 72.09  (SE +/- 0.04, N = 3; min 72.03, max 72.16)
    c: 71.78  (SE +/- 0.07, N = 3; min 71.64, max 71.88)

  ms/batch (fewer is better):
    a: 442.22  (SE +/- 0.73, N = 3; min 440.98, max 443.50)
    b: 441.48  (SE +/- 0.59, N = 3; min 440.86, max 442.65)
    c: 443.16  (SE +/- 0.25, N = 3; min 442.83, max 443.66)

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

  items/sec (more is better):
    a: 34.61  (SE +/- 0.04, N = 3; min 34.53, max 34.66)
    b: 34.63  (SE +/- 0.02, N = 3; min 34.60, max 34.67)
    c: 34.45  (SE +/- 0.03, N = 3; min 34.40, max 34.51)

  ms/batch (fewer is better):
    a: 28.89  (SE +/- 0.03, N = 3; min 28.84, max 28.95)
    b: 28.88  (SE +/- 0.02, N = 3; min 28.84, max 28.90)
    c: 29.02  (SE +/- 0.03, N = 3; min 28.97, max 29.07)
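Each figure above is the average of N benchmark runs, reported with a standard error of the mean and the observed min/max. As a rough sketch of how those per-result fields (and the opt-in overall geometric mean) are derived from the raw run samples, assuming the conventional sample-standard-deviation definition of SE (the function names here are illustrative, not Phoronix Test Suite APIs):

```python
import math

def summarize(samples):
    """Return (avg, SE, min, max) for one result's raw run samples,
    matching the Avg / SE +/- / Min / Max fields shown per result."""
    n = len(samples)
    avg = sum(samples) / n
    # Sample variance (ddof=1), then standard error of the mean.
    var = sum((x - avg) ** 2 for x in samples) / (n - 1)
    se = math.sqrt(var / n)
    return avg, se, min(samples), max(samples)

def geometric_mean(values):
    """Overall geometric mean across many tests, as in the
    'Show Overall Geometric Mean' view of the result file."""
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

For example, `summarize([163.61, 165.44, 165.77])` reproduces the shape of a three-run result line: an average near 164.94 with its SE and the 163.61/165.77 extremes.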

64 Results Shown

LeelaChessZero:
  BLAS
  Eigen
Xmrig:
  KawPow - 1M
  Monero - 1M
  Wownero - 1M
  GhostRider - 1M
  CryptoNight-Heavy - 1M
  CryptoNight-Femto UPX2 - 1M
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch