Mobile Neural Network 2.0.0
pts/mnn-2.0.0 - 13 August 2022
Update against MNN 2.0 upstream.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/alibaba/MNN/archive/refs/tags/2.0.0.tar.gz</URL>
      <MD5>ad1799691917c490e4a9f97900b72ba8</MD5>
      <SHA256>5e189c79ec6f85805c9bb7121ed17ac6aeda58cf2ecd86c04d768645d7aab83e</SHA256>
      <FileName>MNN-2.0.0.tar.gz</FileName>
      <FileSize>107202695</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
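The Phoronix Test Suite validates the downloaded tarball against the `<MD5>` and `<SHA256>` entries above before installing. The same check can be sketched with `sha256sum -c`; the stand-in file and its contents below are hypothetical, used only so the snippet runs without fetching the 107 MB archive (substitute MNN-2.0.0.tar.gz and the hash from downloads.xml in practice):

```shell
# Create a stand-in file in place of MNN-2.0.0.tar.gz (hypothetical payload).
printf 'example payload' > MNN-stand-in.tar.gz

# Compute its SHA256 the way downloads.xml records it for the real tarball...
expected=$(sha256sum MNN-stand-in.tar.gz | awk '{print $1}')

# ...then verify the file against that hash, as PTS does post-download.
echo "${expected}  MNN-stand-in.tar.gz" | sha256sum -c -
# prints: MNN-stand-in.tar.gz: OK
```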
install.sh
#!/bin/bash
rm -rf MNN-2.0.0
tar -xf MNN-2.0.0.tar.gz
cd MNN-2.0.0
cd schema
./generate.sh
cd ..
mkdir build
cd build
EXTRA_CMAKE_FLAGS=""
if [ "$OS_TYPE" = "Linux" ]
then
	# Enable AVX-512 code paths only when the CPU advertises them.
	if grep avx512 /proc/cpuinfo > /dev/null
	then
		EXTRA_CMAKE_FLAGS="$EXTRA_CMAKE_FLAGS -DMNN_AVX512=ON"
	fi
	if grep avx512_vnni /proc/cpuinfo > /dev/null
	then
		EXTRA_CMAKE_FLAGS="$EXTRA_CMAKE_FLAGS -DMNN_AVX512_VNNI=ON"
	fi
fi
cmake .. -DMNN_BUILD_BENCHMARK=true -DCMAKE_BUILD_TYPE=Release -DMNN_OPENMP=ON $EXTRA_CMAKE_FLAGS
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cd ~/
# Generate the wrapper script that PTS invokes for each test run.
cat > mnn <<EOT
#!/bin/sh
cd MNN-2.0.0/build
./benchmark.out ../benchmark/models/ 1200 100 0 \$NUM_CPU_CORES > \$LOG_FILE
echo \$? > ~/test-exit-status
EOT
chmod +x mnn
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>[ - ] nasnet.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>nasnet.mnn</LineHint>
    <ArgumentsDescription>Model: nasnet</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>[ - ] mobilenetV3.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>mobilenetV3.mnn</LineHint>
    <ArgumentsDescription>Model: mobilenetV3</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>[ - ] squeezenetv1.1.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>squeezenetv1.1.mnn</LineHint>
    <ArgumentsDescription>Model: squeezenetv1.1</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>[ - ] resnet-v2-50.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>resnet-v2-50.mnn</LineHint>
    <ArgumentsDescription>Model: resnet-v2-50</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>[ - ] SqueezeNetV1.0.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>SqueezeNetV1.0.mnn</LineHint>
    <ArgumentsDescription>Model: SqueezeNetV1.0</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>[ - ] MobileNetV2_224.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>MobileNetV2_224.mnn</LineHint>
    <ArgumentsDescription>Model: MobileNetV2_224</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>[ - ] mobilenet-v1-1.0.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>mobilenet-v1-1.0.mnn</LineHint>
    <ArgumentsDescription>Model: mobilenet-v1-1.0</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>[ - ] inception-v3.mnn max = #_MAX_RESULT_# ms min = #_MIN_RESULT_# ms avg = #_RESULT_# ms</OutputTemplate>
    <LineHint>inception-v3.mnn</LineHint>
    <ArgumentsDescription>Model: inception-v3</ArgumentsDescription>
  </ResultsParser>
</PhoronixTestSuite>
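Each `OutputTemplate` above is matched against one line of benchmark.out's log, with `#_RESULT_#` bound to the `avg` field. A sketch of that extraction on a sample line (the timing values are hypothetical; the line layout follows the template format in the profile):

```shell
# Hypothetical raw line as benchmark.out would write it to $LOG_FILE.
line='[ - ] mobilenet-v1-1.0.mnn    max =    4.210 ms  min =    3.980 ms  avg =    4.050 ms'

# #_RESULT_# corresponds to the avg field of the matched line.
avg=$(echo "$line" | sed -n 's/.*avg = *\([0-9.]*\) ms.*/\1/p')
echo "avg=${avg}"
# prints: avg=4.050
```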
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>Mobile Neural Network</Title>
    <AppVersion>2.0</AppVersion>
    <Description>MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated path.</Description>
    <ResultScale>ms</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>2.0.0</Version>
    <SupportedPlatforms>Linux, MacOSX</SupportedPlatforms>
    <SoftwareType>Scientific</SoftwareType>
    <TestType>System</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>cmake, build-utilities</ExternalDependencies>
    <EnvironmentSize>2700</EnvironmentSize>
    <ProjectURL>https://www.mnn.zone/</ProjectURL>
    <RepositoryURL>https://github.com/alibaba/MNN/</RepositoryURL>
    <Maintainer>Michael Larabel</Maintainer>
  </TestProfile>
</PhoronixTestSuite>