Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated configuration. MNN can also make use of AVX-512 extensions where supported.
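For reference, a comparable CPU-only, OpenMP-threaded build can be configured manually along the following lines. This is only a sketch: the CMake option names (MNN_OPENMP, MNN_BUILD_BENCHMARK, MNN_AVX512) are assumptions based on common MNN build configurations and should be verified against the CMakeLists.txt of the MNN release being built.

  git clone https://github.com/alibaba/MNN.git && cd MNN
  mkdir build && cd build
  # CPU-only build with OpenMP threading and the benchmark tool; option names may vary by release
  cmake .. -DCMAKE_BUILD_TYPE=Release -DMNN_OPENMP=ON -DMNN_BUILD_BENCHMARK=ON -DMNN_AVX512=ON
  make -j$(nproc)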

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark mnn.
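A minimal sketch of typical usage with a standard Phoronix Test Suite installation:

  # Install the test profile (fetches the sources and builds the MNN benchmark)
  phoronix-test-suite install mnn
  # Run the benchmark; uploading results to OpenBenchmarking.org remains optional
  phoronix-test-suite benchmark mnn
  # A specific test profile revision can also be requested, e.g.
  phoronix-test-suite benchmark pts/mnn-3.0.0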

Project Site

mnn.zone

Source Repository

github.com

Test Created

17 September 2020

Last Updated

17 November 2024

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

59 Seconds

Average Run Time

8 Minutes, 24 Seconds

Test Dependencies

CMake + C/C++ Compiler Toolchain

Accolades

70k+ Downloads, Recently Updated Test Profile

Supported Platforms


Mobile Neural Network Popularity Statistics (pts/mnn), September 2020 - November 2024: public result uploads*, reported installs**, reported test completions**, and test profile page views*** on OpenBenchmarking.org.
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 18 November 2024.

Revision History

pts/mnn-3.0.0   [View Source]   Sun, 17 Nov 2024 23:52:11 GMT
Update against MNN 3.0 upstream.

pts/mnn-2.9.0   [View Source]   Sun, 11 Aug 2024 09:28:57 GMT
Update against MNN upstream Git to fix build problems on modern compilers.

pts/mnn-2.1.0   [View Source]   Wed, 31 Aug 2022 10:53:57 GMT
Update against MNN 2.1 upstream.

pts/mnn-2.0.0   [View Source]   Sat, 13 Aug 2022 09:41:19 GMT
Update against MNN 2.0 upstream.

pts/mnn-1.3.0   [View Source]   Fri, 18 Jun 2021 06:27:34 GMT
Update against new upstream MNN 1.2.0 release.

pts/mnn-1.2.0   [View Source]   Fri, 12 Mar 2021 07:05:09 GMT
Update against upstream MNN 1.1.3.

pts/mnn-1.1.1   [View Source]   Tue, 12 Jan 2021 16:25:37 GMT
Test builds fine on macOS.

pts/mnn-1.1.0   [View Source]   Wed, 06 Jan 2021 12:46:43 GMT
Update against MNN 1.1.1 upstream.

pts/mnn-1.0.1   [View Source]   Thu, 17 Sep 2020 20:25:29 GMT
Add min/max reporting to result parser.

pts/mnn-1.0.0   [View Source]   Thu, 17 Sep 2020 18:57:32 GMT
Initial commit of Alibaba MNN deep learning framework benchmark.

Suites Using This Test

Machine Learning

HPC - High Performance Computing


Performance Metrics

Analyze Test Configuration:

Mobile Neural Network 3.0

Model: MobileNetV2_224

OpenBenchmarking.org metrics for this test profile configuration based on 36 public results since 18 November 2024 with the latest data as of 19 November 2024.

Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.
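As a rough guide to reading the percentile ranks in the table below for this lower-is-better metric (milliseconds), a component's rank roughly corresponds to the share of compatible public results that it outperforms:

\[
\text{Percentile Rank}(x) \approx \frac{\#\{\text{results slower than } x\}}{\#\{\text{all results}\}} \times 100
\]

Under that reading, a configuration averaging about 1.013 ms at the 98th percentile is faster than roughly 98% of the uploaded results for this test configuration.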

Component    Percentile Rank    # Compatible Public Results    ms (Average)
             98th               4                              1.013 +/- 0.005
             84th               5                              1.112 +/- 0.017
Mid-Tier     75th                                              > 1.903
             70th               3                              1.919 +/- 0.017
             62nd               3                              2.880 +/- 0.027
             53rd               3                              3.422 +/- 0.072
Median       50th                                              3.468
             42nd               4                              3.500 +/- 0.022
Low-Tier     25th                                              > 4.115
             23rd               4                              4.116 +/- 0.049
             17th               3                              4.564 +/- 0.127
             3rd                4                              4.998 +/- 0.123
Distribution Of Public Results - Model: MobileNetV2_224 (30 Results, Range From 1 To 6 ms)

Based on OpenBenchmarking.org data, the selected test / test configuration (Mobile Neural Network 3.0 - Model: MobileNetV2_224) has an average run-time of 14 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.

Time Required To Complete Benchmark (Minutes) - Model: MobileNetV2_224 - Run-Time: Min: 4 / Avg: 13.36 / Max: 49

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 1.5%.

Average Deviation Between Runs (Percent, Fewer Is Better) - Model: MobileNetV2_224 - Deviation: Min: 0 / Avg: 1.52 / Max: 7
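The reported deviation is, in effect, the relative standard deviation (coefficient of variation) across the individual runs that make up a result; a minimal sketch of that calculation, assuming n run times t_1, ..., t_n with mean \bar{t}:

\[
\text{Deviation (\%)} = \frac{\sigma}{\bar{t}} \times 100, \qquad \sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(t_i - \bar{t}\right)^2}
\]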

Notable Instruction Set Usage

Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.

Instruction Set
Support
Instructions Detected
SSE2 (SSE2)
Used by default on supported hardware.
 
CVTSS2SD MOVDQA CVTSD2SS MOVDQU PUNPCKLQDQ MOVD CVTSI2SD CVTTSD2SI COMISD MULSD ANDPD MINSD MAXSD SUBSD CMPNLESD ADDSD MOVAPD CVTDQ2PS PSHUFD PMULUDQ PSRLDQ PADDQ CVTDQ2PD CMPLTPD ADDPD CVTPS2PD UNPCKLPD MOVUPD MAXPD UNPCKHPD DIVSD SHUFPD CVTTPS2DQ CVTPS2DQ CVTTPD2DQ ORPD XORPD PSHUFLW SQRTSD ANDNPD CVTPD2PS MOVHPD MULPD SUBPD
SSE3 (SSE3)
Used by default on supported hardware.
 
MOVDDUP MOVSLDUP
SSSE3 (SSSE3)
Used by default on supported hardware.
 
PHADDD PSHUFB
Advanced Vector Extensions (AVX)
Used by default on supported hardware.
Found on Intel processors since Sandy Bridge (2011).
Found on AMD processors since Bulldozer (2011).

 
VBROADCASTSS VEXTRACTF128 VZEROUPPER VZEROALL VPERM2F128 VINSERTF128 VMASKMOVPS VPERMILPS VBROADCASTSD
Advanced Vector Extensions 2 (AVX2)
Used by default on supported hardware.
Found on Intel processors since Haswell (2013).
Found on AMD processors since Excavator (2016).

 
VINSERTI128 VPBROADCASTD VPERMQ VPBROADCASTQ VEXTRACTI128 VPERM2I128 VPBROADCASTW VPBROADCASTB VGATHERQPS VPMASKMOVD VPSLLVD VPERMD VPSRAVD VPERMPD
Advanced Vector Extensions 512 (AVX512)
Used by default on supported hardware.
 
(ZMM REGISTER USE)
AVX Vector Neural Network Instructions (AVX-VNNI)
Used by default on supported hardware.
 
VPDPBUSDS
FMA (FMA)
Used by default on supported hardware.
Found on Intel processors since Haswell (2013).
Found on AMD processors since Bulldozer (2011).

 
VFMADD231PS VFMADD231SS VFMADD132PS VFNMADD132PS VFNMADD231PS VFNMADD213PS VFMADD213PS VFMSUB132PS VFMSUB231PS VFMADD132SD VFMADD132SS VFNMADD213SS VFNMADD132SS VFNMADD231SS VFMSUB132SS VFMSUB231SS VFMSUB132SD VFMADD213SS VFMADD231SD VFMSUB231SD VFNMADD132SD VFNMADD231PD VFNMADD132PD VFMADD132PD VFMADD213PD VFNMADD231SD VFMADD231PD
The test / benchmark does honor compiler flag changes.
Last automated analysis: 23 August 2024
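Since the benchmark honors compiler flag changes, custom optimization flags can be supplied through the environment and picked up when the test is (re)built; a minimal sketch, assuming the exported CFLAGS/CXXFLAGS are applied to the MNN build:

  # Rebuild and run MNN with architecture-specific tuning (flags shown are only an example)
  export CFLAGS="-O3 -march=native"
  export CXXFLAGS="-O3 -march=native"
  phoronix-test-suite force-install mnn
  phoronix-test-suite benchmark mnn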

This test profile binary relies on the shared libraries libMNN.so, libm.so.6, libc.so.6, libmvec.so.1.
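To verify which shared libraries the installed benchmark binary actually links against, ldd can be run on it; the path and binary name below are hypothetical and depend on the local Phoronix Test Suite installation layout and the MNN build output:

  # Hypothetical location; adjust to the locally installed test directory
  ldd ~/.phoronix-test-suite/installed-tests/pts/mnn-3.0.0/MNN/build/benchmark.out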

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. The CPU architectures shown are those where successful OpenBenchmarking.org result uploads have occurred, namely to help determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture          Kernel Identifier    Verified On
Intel / AMD x86 64-bit    x86_64               (Many Processors)
ARMv8 64-bit              aarch64              ARMv8 Neoverse-N1 128-Core, ARMv8 Neoverse-V2 72-Core

Recent Test Results

OpenBenchmarking.org Results Compare

3 Systems - 9 Benchmark Results

AMD EPYC 9655P 96-Core - Supermicro Super Server H13SSL-N v1.01 - AMD 1Ah

Ubuntu 24.10 - 6.12.0-rc7-linux-pm-next-phx - GNOME Shell 47.0

4 Systems - 9 Benchmark Results

AMD Ryzen AI 9 365 - ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 - AMD Device 1507

Ubuntu 24.10 - 6.12.0-rc7-phx-eraps - GNOME Shell 47.0

3 Systems - 8 Benchmark Results

AMD Ryzen Threadripper 7980X 64-Cores - System76 Thelio Major - AMD Device 14a4

Ubuntu 24.04 - 6.8.0-48-generic - GNOME Shell 46.0

4 Systems - 8 Benchmark Results

AMD Ryzen AI 9 HX 370 - ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 - AMD Device 1507

Ubuntu 24.10 - 6.11.0-rc6-phx - GNOME Shell 47.0

3 Systems - 8 Benchmark Results

ARMv8 Neoverse-N1 - System76 Thelio Astra - Ampere Computing LLC Altra PCI Root Complex A

Ubuntu 24.04 - 6.8.0-48-generic-64k - GNOME Shell 46.0

4 Systems - 8 Benchmark Results

ARMv8 Neoverse-V2 - Pegatron JIMBO P4352 - 1 x 480GB LPDDR5-6400MT

Ubuntu 24.04 - 6.8.0-48-generic-64k - GCC 13.2.0 + Clang 18.1.3 + CUDA 11.8

4 Systems - 8 Benchmark Results

Intel Core Ultra 7 256V - ASUS Zenbook S 14 UX5406SA_UX5406SA UX5406SA v1.0 - Intel Device a87f

Ubuntu 24.10 - 6.12.0-rc3-phx-aipt - GNOME Shell 47.0

3 Systems - 8 Benchmark Results

AMD Ryzen 7 7840HS - Framework Laptop 16 - AMD Device 14e8

Ubuntu 24.04 - 6.8.0-48-generic - GNOME Shell 46.0

5 Systems - 8 Benchmark Results

AMD Ryzen 7 9800X3D 8-Core - ASRock X870E Taichi - AMD Device 14d8

Ubuntu 24.04 - 6.10.0-phx - GNOME Shell 46.0

Find More Test Results