MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated configuration. MNN does allow making use of AVX-512 extensions.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark mnn.
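For unattended runs, the Phoronix Test Suite also provides a batch mode alongside the basic command above; the sketch below assumes a stock Phoronix Test Suite installation:

```shell
# Interactive run with prompts for test options and result saving:
phoronix-test-suite benchmark mnn

# One-time setup of batch-mode defaults (result saving/upload behavior, etc.):
phoronix-test-suite batch-setup

# Non-interactive run using those batch defaults:
phoronix-test-suite batch-benchmark mnn
```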
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021. Data updated weekly as of 18 November 2024.
Revision History
pts/mnn-3.0.0 [View Source] Sun, 17 Nov 2024 23:52:11 GMT Update against MNN 3.0 upstream.
pts/mnn-2.9.0 [View Source] Sun, 11 Aug 2024 09:28:57 GMT Update against MNN upstream Git to fix build problems on modern compilers.
pts/mnn-2.1.0 [View Source] Wed, 31 Aug 2022 10:53:57 GMT Update against MNN 2.1 upstream.
pts/mnn-2.0.0 [View Source] Sat, 13 Aug 2022 09:41:19 GMT Update against MNN 2.0 upstream.
pts/mnn-1.3.0 [View Source] Fri, 18 Jun 2021 06:27:34 GMT Update against new upstream MNN 1.2.0 release.
pts/mnn-1.2.0 [View Source] Fri, 12 Mar 2021 07:05:09 GMT Update against upstream MNN 1.1.3.
pts/mnn-1.1.1 [View Source] Tue, 12 Jan 2021 16:25:37 GMT Test builds fine on macOS.
pts/mnn-1.1.0 [View Source] Wed, 06 Jan 2021 12:46:43 GMT Update against MNN 1.1.1 upstream.
pts/mnn-1.0.1 [View Source] Thu, 17 Sep 2020 20:25:29 GMT Add min/max reporting to result parser.
pts/mnn-1.0.0 [View Source] Thu, 17 Sep 2020 18:57:32 GMT Initial commit of Alibaba MNN deep learning framework benchmark.
OpenBenchmarking.org metrics for this test profile configuration based on 36 public results since 18 November 2024 with the latest data as of 19 November 2024.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.
Based on OpenBenchmarking.org data, the selected test / test configuration (Mobile Neural Network 3.0 - Model: MobileNetV2_224) has an average run-time of 14 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
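The default run-count behavior can be overridden. As a hedged sketch, the Phoronix Test Suite honors the FORCE_TIMES_TO_RUN environment variable to pin the number of runs regardless of variance:

```shell
# Force exactly 5 runs instead of the default dynamic run count:
FORCE_TIMES_TO_RUN=5 phoronix-test-suite benchmark mnn
```

This can be useful when comparing systems where a fixed, identical run count is preferred over variance-driven run extension.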
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 1.5%.
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
The test / benchmark does honor compiler flag changes.
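Since the test honors compiler flag changes, build-time flags can be supplied via the standard environment variables the Phoronix Test Suite passes through to the build. The flags below are illustrative, not a recommendation:

```shell
# Example: build the MNN test profile with host-specific optimizations.
# -march=native enables all instruction set extensions of the local CPU
# (including AVX-512 where available), so results are not portable across machines.
CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native" \
  phoronix-test-suite benchmark mnn
```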
Last automated analysis: 23 August 2024
This test profile binary relies on the shared libraries libMNN.so, libm.so.6, libc.so.6, libmvec.so.1.
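The shared-library dependencies listed above can be verified locally with ldd. The install path below is an assumption based on the usual Phoronix Test Suite layout; the actual binary location may differ by profile version:

```shell
# Hypothetical path: inspect runtime library dependencies of the built benchmark,
# which should list libMNN.so, libm.so.6, libc.so.6, and libmvec.so.1.
ldd ~/.phoronix-test-suite/installed-tests/pts/mnn-3.0.0/MNN/build/benchmark.out
```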
Tested CPU Architectures
This benchmark has been successfully tested on the architectures mentioned below. The CPU architectures listed are those where successful OpenBenchmarking.org result uploads occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.