Neural Magic DeepSparse 1.6.0
pts/deepsparse-1.6.0
- 11 December 2023 -
Update against deepsparse 1.6 upstream.
install.sh
#!/bin/sh
rm -rf ~/.cache/
pip3 install --user deepsparse==1.6.0 sparsezoo==1.6.0
~/.local/bin/sparsezoo.download zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none
~/.local/bin/sparsezoo.download zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none
~/.local/bin/sparsezoo.download zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none
~/.local/bin/sparsezoo.download zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none
~/.local/bin/sparsezoo.download zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none
~/.local/bin/sparsezoo.download zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none
~/.local/bin/sparsezoo.download zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none
~/.local/bin/sparsezoo.download zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none
~/.local/bin/sparsezoo.download zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none
~/.local/bin/sparsezoo.download zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none
~/.local/bin/sparsezoo.download zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/base-none
~/.local/bin/sparsezoo.download zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none
echo $? > ~/install-exit-status

echo "#!/bin/sh
~/.local/bin/deepsparse.benchmark \$@ > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status" > deepsparse
chmod +x deepsparse
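As a usage sketch, the generated deepsparse wrapper just forwards its arguments to deepsparse.benchmark, so a single run of the baseline ResNet-50 model in the synchronous scenario with this profile's default post-arguments would look roughly like the lines below. The LOG_FILE path is a hypothetical placeholder; the Phoronix Test Suite normally provides it when it invokes the wrapper.

    # hypothetical manual invocation of the wrapper created above
    LOG_FILE=/tmp/deepsparse-resnet50.log \
      ./deepsparse zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync -t 30 -w 5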
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>Throughput (items/sec): #_RESULT_#</OutputTemplate>
    <ResultScale>items/sec</ResultScale>
    <ResultProportion>HIB</ResultProportion>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>Latency Mean (ms/batch): #_RESULT_#</OutputTemplate>
    <ResultScale>ms/batch</ResultScale>
    <ResultProportion>LIB</ResultProportion>
  </ResultsParser>
</PhoronixTestSuite>
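For reference, these two parsers pick out the throughput and mean-latency lines from the deepsparse.benchmark log using the templates above; the numbers in the example below are hypothetical placeholders, not measured results.

    # example log lines matched by the parsers (placeholder values)
    Throughput (items/sec): 123.4567
    Latency Mean (ms/batch): 8.1000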
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>Neural Magic DeepSparse</Title>
    <AppVersion>1.6</AppVersion>
    <Description>This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/).</Description>
    <ResultScale>items/sec</ResultScale>
    <Proportion>HIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.6.0</Version>
    <SupportedPlatforms>Linux</SupportedPlatforms>
    <SoftwareType>Benchmark</SoftwareType>
    <TestType>System</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>python</ExternalDependencies>
    <InstallRequiresInternet>TRUE</InstallRequiresInternet>
    <EnvironmentSize>9200</EnvironmentSize>
    <ProjectURL>https://neuralmagic.com/deepsparse-engine/</ProjectURL>
    <RepositoryURL>https://github.com/neuralmagic/deepsparse</RepositoryURL>
    <Maintainer>Michael Larabel</Maintainer>
    <SystemDependencies>pip3</SystemDependencies>
  </TestProfile>
  <TestSettings>
    <Default>
      <PostArguments> -t 30 -w 5</PostArguments>
    </Default>
    <Option>
      <DisplayName>Model</DisplayName>
      <Identifier>model</Identifier>
      <Menu>
        <Entry>
          <Name>NLP Text Classification, BERT base uncased SST2, Sparse INT8</Name>
          <Value>zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]'</Value>
        </Entry>
        <Entry>
          <Name>NLP Text Classification, DistilBERT mnli</Name>
          <Value>zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none</Value>
        </Entry>
        <Entry>
          <Name>CV Classification, ResNet-50 ImageNet</Name>
          <Value>zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none</Value>
        </Entry>
        <Entry>
          <Name>NLP Token Classification, BERT base uncased conll2003</Name>
          <Value>zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none</Value>
        </Entry>
        <Entry>
          <Name>CV Detection, YOLOv5s COCO</Name>
          <Value>zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none</Value>
        </Entry>
        <Entry>
          <Name>CV Detection, YOLOv5s COCO, Sparse INT8</Name>
          <Value>zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none</Value>
        </Entry>
        <Entry>
          <Name>NLP Document Classification, oBERT base uncased on IMDB</Name>
          <Value>zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none</Value>
        </Entry>
        <Entry>
          <Name>CV Segmentation, 90% Pruned YOLACT Pruned</Name>
          <Value>zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none</Value>
        </Entry>
        <Entry>
          <Name>ResNet-50, Baseline</Name>
          <Value>zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none</Value>
        </Entry>
        <Entry>
          <Name>ResNet-50, Sparse INT8</Name>
          <Value>zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none</Value>
        </Entry>
        <Entry>
          <Name>BERT-Large, NLP Question Answering</Name>
          <Value>zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/base-none --input_shapes='[1,128]'</Value>
        </Entry>
        <Entry>
          <Name>BERT-Large, NLP Question Answering, Sparse INT8</Name>
          <Value>zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]'</Value>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Scenario</DisplayName>
      <Identifier>scenario</Identifier>
      <ArgumentPrefix>--scenario </ArgumentPrefix>
      <Menu>
        <Entry>
          <Name>Synchronous Single-Stream</Name>
          <Value>sync</Value>
        </Entry>
        <Entry>
          <Name>Asynchronous Multi-Stream</Name>
          <Value>async</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
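As a brief usage note, the profile is normally installed and run through the Phoronix Test Suite, which then prompts for the Model and Scenario options defined in the menus above:

    # install the profile and run the benchmark interactively
    phoronix-test-suite install pts/deepsparse-1.6.0
    phoronix-test-suite benchmark pts/deepsparse-1.6.0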