NCNN 1.1.0
pts/ncnn-1.1.0 - 18 December 2020
Update against new upstream NCNN 20201218.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.2.0m2-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/Tencent/ncnn/archive/20201218.tar.gz</URL>
      <MD5>0d3abc60b767b969baa1a4ac65276754</MD5>
      <SHA256>94a2ce7d6ba4eb76d99f9f0a0e74e08a0d51b973346b091254dde844c95168ac</SHA256>
      <FileName>ncnn-20201218.tar.gz</FileName>
      <FileSize>11194495</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
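For reference, the archive can also be verified by hand against the checksums listed above; the Phoronix Test Suite performs this check automatically when it fetches the package. A minimal sketch:

# Optional manual verification of the ncnn archive against the checksums above
wget -O ncnn-20201218.tar.gz https://github.com/Tencent/ncnn/archive/20201218.tar.gz
echo "94a2ce7d6ba4eb76d99f9f0a0e74e08a0d51b973346b091254dde844c95168ac  ncnn-20201218.tar.gz" | sha256sum -c -
md5sum ncnn-20201218.tar.gz    # expected: 0d3abc60b767b969baa1a4ac65276754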
install.sh
#!/bin/sh
tar -xf ncnn-20201218.tar.gz
cd ncnn-20201218

# remove int8 tests
sed -i -e "/benchmark(\".*_int8\"/d" benchmark/benchncnn.cpp

mkdir build
cd build
cmake -DNCNN_VULKAN=ON -DNCNN_BUILD_TOOLS=OFF -DNCNN_BUILD_EXAMPLES=OFF ..
# fall back to a CPU-only build on systems without the Vulkan development files
is_cmake_ok=$?
if [ $is_cmake_ok -ne 0 ]; then
    cmake -DNCNN_VULKAN=OFF -DNCNN_BUILD_TOOLS=OFF -DNCNN_BUILD_EXAMPLES=OFF ..
fi
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cp ../benchmark/*.param benchmark/

cd ~/

cat > ncnn << EOT
#!/bin/sh
cd ncnn-20201218/build/benchmark
./benchncnn 200 \$NUM_CPU_CORES 0 \$@ 0 > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status
EOT
chmod +x ncnn
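The generated ~/ncnn wrapper is what the test suite executes at run time; NUM_CPU_CORES and LOG_FILE are environment variables the Phoronix Test Suite exports for install and run scripts. A typical way to exercise this profile from the CLI (a sketch; the profile can also be referenced by its full pts/ncnn-1.1.0 identifier):

# Fetch the sources and run the install.sh above
phoronix-test-suite install pts/ncnn
# Prompt for the Target option, run the ncnn wrapper, and parse the results
phoronix-test-suite benchmark pts/ncnn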
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.2.0m2-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate> mobilenet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenet </LineHint>
    <AppendToArgumentsDescription>Model: mobilenet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mobilenet_v2 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenet_v2 </LineHint>
    <AppendToArgumentsDescription>Model: mobilenet-v2</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mobilenet_v3 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenet_v3 </LineHint>
    <AppendToArgumentsDescription>Model: mobilenet-v3</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> shufflenet_v2 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> shufflenet_v2 </LineHint>
    <AppendToArgumentsDescription>Model: shufflenet-v2</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mnasnet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mnasnet </LineHint>
    <AppendToArgumentsDescription>Model: mnasnet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> efficientnet_b0 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> efficientnet_b0 </LineHint>
    <AppendToArgumentsDescription>Model: efficientnet-b0</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> blazeface min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> blazeface </LineHint>
    <AppendToArgumentsDescription>Model: blazeface</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> googlenet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> googlenet </LineHint>
    <AppendToArgumentsDescription>Model: googlenet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> vgg16 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> vgg16 </LineHint>
    <AppendToArgumentsDescription>Model: vgg16</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> resnet18 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> resnet18 </LineHint>
    <AppendToArgumentsDescription>Model: resnet18</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> alexnet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> alexnet </LineHint>
    <AppendToArgumentsDescription>Model: alexnet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> resnet50 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> resnet50 </LineHint>
    <AppendToArgumentsDescription>Model: resnet50</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mobilenetv2_yolov3 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenetv2_yolov3 </LineHint>
    <AppendToArgumentsDescription>Model: mobilenetv2-yolov3</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> yolov4-tiny min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> yolov4-tiny </LineHint>
    <AppendToArgumentsDescription>Model: yolov4-tiny</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> squeezenet_ssd min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> squeezenet_ssd </LineHint>
    <AppendToArgumentsDescription>Model: squeezenet_ssd</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> regnety_400m min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> regnety_400m </LineHint>
    <AppendToArgumentsDescription>Model: regnety_400m</AppendToArgumentsDescription>
  </ResultsParser>
</PhoronixTestSuite>
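Each ResultsParser block matches one line of benchncnn's log: LineHint selects the line, and the #_MIN_RESULT_#, #_MAX_RESULT_#, and #_RESULT_# placeholders in OutputTemplate capture the minimum, maximum, and average latency in milliseconds. A representative log line (the timings shown here are purely illustrative) looks like:

          mobilenet  min =   25.31  max =   26.54  avg =   25.88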
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.2.0m2-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>NCNN</Title>
    <AppVersion>20201218</AppVersion>
    <Description>NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent.</Description>
    <ResultScale>ms</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.1.0</Version>
    <SupportedPlatforms>Linux, MacOSX</SupportedPlatforms>
    <SoftwareType>Scientific</SoftwareType>
    <TestType>System</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>cmake, build-utilities, vulkan-development</ExternalDependencies>
    <EnvironmentSize>140</EnvironmentSize>
    <ProjectURL>https://github.com/Tencent/ncnn</ProjectURL>
    <Maintainer>Michael Larabel</Maintainer>
    <SystemDependencies>glslang/Include/Common.h, glslangValidator</SystemDependencies>
  </TestProfile>
  <TestSettings>
    <Option>
      <DisplayName>Target</DisplayName>
      <Identifier>target</Identifier>
      <Menu>
        <Entry>
          <Name>CPU</Name>
          <Value>-1</Value>
        </Entry>
        <Entry>
          <Name>Vulkan GPU</Name>
          <Value>0</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
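The selected Target value is appended to the wrapper's $@ and lands in benchncnn's GPU-device argument: -1 keeps inference on the CPU, while 0 selects the first Vulkan device. A sketch of the effective invocations, with the argument order inferred from benchncnn's usage at this release (loop count, threads, powersave, GPU device, cooling-down):

./benchncnn 200 $NUM_CPU_CORES 0 -1 0    # Target = CPU
./benchncnn 200 $NUM_CPU_CORES 0 0 0     # Target = Vulkan GPU (device 0)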