NCNN 1.2.0
pts/ncnn-1.2.0
- 18 June 2021 -
Update against NCNN 20210525 release.
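To reproduce this profile locally, the test can be fetched and executed through the Phoronix Test Suite client. A minimal sketch, assuming a working phoronix-test-suite installation (the versioned identifier pins this exact revision; plain pts/ncnn would resolve to the latest one):

# install the test profile plus its external dependencies (cmake, build utilities, Vulkan headers)
phoronix-test-suite install pts/ncnn-1.2.0
# run it; the Target prompt (CPU / Vulkan GPU) is defined in test-definition.xml below
phoronix-test-suite benchmark pts/ncnn-1.2.0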
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.4.0-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/Tencent/ncnn/archive/refs/tags/20210525.tar.gz</URL>
      <MD5>e1f127f24bfedc1dacbbe95e3eb274fb</MD5>
      <SHA256>a385eb5505f09e59ae486fa89584e5a15d4c45e7463927bbdddf9060d81b9a18</SHA256>
      <FileName>ncnn-20210525.tar.gz</FileName>
      <FileSize>11578821</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
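If the source tarball is fetched manually rather than through the client, it can be checked against the hashes above. A minimal sketch using standard coreutils:

wget -O ncnn-20210525.tar.gz https://github.com/Tencent/ncnn/archive/refs/tags/20210525.tar.gz
# verify the SHA256 listed in downloads.xml (sha256sum -c expects "hash  filename")
echo "a385eb5505f09e59ae486fa89584e5a15d4c45e7463927bbdddf9060d81b9a18  ncnn-20210525.tar.gz" | sha256sum -c -
# optional MD5 cross-check; should print e1f127f24bfedc1dacbbe95e3eb274fb
md5sum ncnn-20210525.tar.gz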
install.sh
#!/bin/sh
tar -xf ncnn-20210525.tar.gz
cd ncnn-20210525

# remove int8 tests
sed -i -e "/benchmark(\".*_int8\"/d" benchmark/benchncnn.cpp

mkdir build
cd build
cmake -DNCNN_VULKAN=ON -DNCNN_BUILD_TOOLS=OFF -DNCNN_BUILD_EXAMPLES=OFF ..
# try to build cpu-only test on system without vulkan development files
is_cmake_ok=$?
if [ $is_cmake_ok -ne 0 ]; then
	cmake -DNCNN_VULKAN=OFF -DNCNN_BUILD_TOOLS=OFF -DNCNN_BUILD_EXAMPLES=OFF ..
fi
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status

cp ../benchmark/*.param benchmark/

cd ~/

cat>ncnn<<EOT
#!/bin/sh
cd ncnn-20210525/build/benchmark
./benchncnn 200 \$NUM_CPU_CORES 0 \$@ 0 > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status
EOT
chmod +x ncnn
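The generated ncnn wrapper forwards the selected Target value as benchncnn's GPU-device argument; the remaining positional arguments, matching the wrapper above, are loop count (200), thread count, powersave mode (0), and cooling-down interval (0). A minimal sketch of invoking the built benchmark directly, outside the wrapper, assuming the build tree produced by install.sh above:

cd ncnn-20210525/build/benchmark
./benchncnn 200 $(nproc) 0 -1 0   # CPU inference (GPU device -1)
./benchncnn 200 $(nproc) 0 0 0    # first Vulkan device (requires the -DNCNN_VULKAN=ON build)

The -1 and 0 device values correspond to the CPU and Vulkan GPU entries in test-definition.xml further down.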
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.4.0-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate> mobilenet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenet </LineHint>
    <AppendToArgumentsDescription>Model: mobilenet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mobilenet_v2 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenet_v2 </LineHint>
    <AppendToArgumentsDescription>Model: mobilenet-v2</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mobilenet_v3 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenet_v3 </LineHint>
    <AppendToArgumentsDescription>Model: mobilenet-v3</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> shufflenet_v2 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> shufflenet_v2 </LineHint>
    <AppendToArgumentsDescription>Model: shufflenet-v2</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mnasnet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mnasnet </LineHint>
    <AppendToArgumentsDescription>Model: mnasnet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> efficientnet_b0 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> efficientnet_b0 </LineHint>
    <AppendToArgumentsDescription>Model: efficientnet-b0</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> blazeface min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> blazeface </LineHint>
    <AppendToArgumentsDescription>Model: blazeface</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> googlenet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> googlenet </LineHint>
    <AppendToArgumentsDescription>Model: googlenet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> vgg16 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> vgg16 </LineHint>
    <AppendToArgumentsDescription>Model: vgg16</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> resnet18 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> resnet18 </LineHint>
    <AppendToArgumentsDescription>Model: resnet18</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> alexnet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> alexnet </LineHint>
    <AppendToArgumentsDescription>Model: alexnet</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> resnet50 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> resnet50 </LineHint>
    <AppendToArgumentsDescription>Model: resnet50</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> mobilenetv2_yolov3 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> mobilenetv2_yolov3 </LineHint>
    <AppendToArgumentsDescription>Model: mobilenetv2-yolov3</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> yolov4-tiny min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> yolov4-tiny </LineHint>
    <AppendToArgumentsDescription>Model: yolov4-tiny</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> squeezenet_ssd min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> squeezenet_ssd </LineHint>
    <AppendToArgumentsDescription>Model: squeezenet_ssd</AppendToArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate> regnety_400m min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate>
    <LineHint> regnety_400m </LineHint>
    <AppendToArgumentsDescription>Model: regnety_400m</AppendToArgumentsDescription>
  </ResultsParser>
</PhoronixTestSuite>
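Each ResultsParser block matches one per-model line of benchncnn output, with the min/max/avg placeholders capturing latencies in milliseconds (the ms ResultScale in test-definition.xml below). For illustration only, a matched line with hypothetical timing values would look like:

 mobilenet min = 9.21 max = 9.58 avg = 9.35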
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.4.0-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>NCNN</Title>
    <AppVersion>20210525</AppVersion>
    <Description>NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent.</Description>
    <ResultScale>ms</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.2.0</Version>
    <SupportedPlatforms>Linux, MacOSX</SupportedPlatforms>
    <SoftwareType>Scientific</SoftwareType>
    <TestType>System</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>cmake, build-utilities, vulkan-development</ExternalDependencies>
    <EnvironmentSize>140</EnvironmentSize>
    <ProjectURL>https://github.com/Tencent/ncnn</ProjectURL>
    <Maintainer>Michael Larabel</Maintainer>
    <SystemDependencies>glslang/Include/Common.h, glslangValidator</SystemDependencies>
  </TestProfile>
  <TestSettings>
    <Option>
      <DisplayName>Target</DisplayName>
      <Identifier>target</Identifier>
      <Menu>
        <Entry>
          <Name>CPU</Name>
          <Value>-1</Value>
        </Entry>
        <Entry>
          <Name>Vulkan GPU</Name>
          <Value>0</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
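The selected Target entry's Value is passed through to the ncnn wrapper, so -1 selects CPU inference and 0 the first Vulkan device in benchncnn. For unattended runs, the Phoronix Test Suite's PRESET_OPTIONS environment variable can preselect a menu entry instead of prompting; a minimal sketch, assuming the standard test.option=value syntax from the PTS batch-mode documentation:

PRESET_OPTIONS="ncnn.target=CPU" phoronix-test-suite benchmark pts/ncnn-1.2.0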