Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated configuration. MNN can also make use of AVX-512 extensions where supported.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark mnn.
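As a sketch of a typical workflow (assuming the Phoronix Test Suite is installed; prompts and available options may vary by version), the commands below are environment-dependent and require a working phoronix-test-suite installation:

```shell
# Install the MNN test profile and its dependencies first (optional;
# `benchmark` will install automatically if needed):
phoronix-test-suite install mnn

# Run the benchmark interactively; you will be prompted to choose a model
# (e.g. SqueezeNetV1.0) or to run all of them:
phoronix-test-suite benchmark mnn

# For unattended runs, configure batch mode once, then use batch-benchmark:
phoronix-test-suite batch-setup
phoronix-test-suite batch-benchmark mnn
```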

Project Site

mnn.zone

Source Repository

github.com

Test Created

17 September 2020

Last Updated

31 August 2022

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

1 Minute, 7 Seconds

Average Run Time

8 Minutes, 24 Seconds

Test Dependencies

CMake + C/C++ Compiler Toolchain

Accolades

50k+ Downloads

Supported Platforms


Mobile Neural Network Popularity Statistics - pts/mnn (OpenBenchmarking.org): chart of public result uploads*, reported installs**, reported test completions**, and test profile page views*** from September 2020 through May 2024.
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 15 June 2024.
Model Option Popularity (OpenBenchmarking.org): mobilenet-v1-1.0 12.3%, SqueezeNetV1.0 12.9%, squeezenetv1.1 12.6%, mobilenetV3 12.5%, inception-v3 12.5%, nasnet 12.3%, MobileNetV2_224 12.3%, resnet-v2-50 12.7%.

Revision History

pts/mnn-2.1.0   [View Source]   Wed, 31 Aug 2022 10:53:57 GMT
Update against MNN 2.1 upstream.

pts/mnn-2.0.0   [View Source]   Sat, 13 Aug 2022 09:41:19 GMT
Update against MNN 2.0 upstream.

pts/mnn-1.3.0   [View Source]   Fri, 18 Jun 2021 06:27:34 GMT
Update against new upstream MNN 1.2.0 release.

pts/mnn-1.2.0   [View Source]   Fri, 12 Mar 2021 07:05:09 GMT
Update against upstream MNN 1.1.3.

pts/mnn-1.1.1   [View Source]   Tue, 12 Jan 2021 16:25:37 GMT
Test builds fine on macOS.

pts/mnn-1.1.0   [View Source]   Wed, 06 Jan 2021 12:46:43 GMT
Update against MNN 1.1.1 upstream.

pts/mnn-1.0.1   [View Source]   Thu, 17 Sep 2020 20:25:29 GMT
Add min/max reporting to result parser.

pts/mnn-1.0.0   [View Source]   Thu, 17 Sep 2020 18:57:32 GMT
Initial commit of Alibaba MNN deep learning framework benchmark.

Suites Using This Test

Machine Learning

HPC - High Performance Computing


Performance Metrics

Analyze Test Configuration:

Mobile Neural Network 2.1

Model: SqueezeNetV1.0

OpenBenchmarking.org metrics for this test profile configuration based on 344 public results since 31 August 2022 with the latest data as of 8 May 2024.

Below is an overview of generalized performance for components with sufficient statistically significant data from user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary greatly, so this overview is intended only as general guidance on performance expectations.

Component   Percentile Rank   # Compatible Public Results   ms (Average)
Mid-Tier    75th              -                             > 4
Median      50th              -                             6
            30th              7                             7
Low-Tier    25th              -                             > 8
            24th              4                             8 +/- 1

Distribution Of Public Results - Model: SqueezeNetV1.0 (OpenBenchmarking.org): 344 results ranging from 2 to 501 ms (histogram).

Based on OpenBenchmarking.org data, the selected test / test configuration (Mobile Neural Network 2.1 - Model: SqueezeNetV1.0) has an average run-time of 25 minutes. By default this test profile is set to run at least 3 times but may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.

Time Required To Complete Benchmark - Model: SqueezeNetV1.0 (OpenBenchmarking.org, Minutes): Min: 5 / Avg: 24.73 / Max: 186.

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.5%.

Average Deviation Between Runs - Model: SqueezeNetV1.0 (OpenBenchmarking.org, Percent, fewer is better): Min: 0 / Avg: 0.53 / Max: 6.

Does It Scale Well With Increasing Cores?

No. Based on automated analysis of the collected public benchmark data, this test / test configuration does not generally scale well with increasing CPU core counts. This determination is based on publicly available results for this test / test settings, separated by vendor, with each result divided by the reference CPU clock speed, grouped by matching physical CPU core count, and normalized against the smallest core count tested from each vendor, for each CPU having a sufficient number of test samples and statistically significant data.

Mobile Neural Network CPU Core Scaling - Model: SqueezeNetV1.0 (OpenBenchmarking.org): relative core scaling to base across 4 to 16 cores for Intel and AMD (chart).

Notable Instruction Set Usage

Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.

Instruction Set
Support
Instructions Detected
Streaming SIMD Extensions 2 (SSE2)
Used by default on supported hardware.
 
CVTSS2SD MOVAPD CVTSD2SS MOVDQU PUNPCKLQDQ MOVDQA CVTSI2SD ADDSD MULSD MOVD CVTDQ2PS ANDPD COMISD CVTTSD2SI CMPNLESD SUBSD PSHUFD PMULUDQ PSRLDQ CVTTPS2DQ PADDQ CVTDQ2PD CMPLTPD ADDPD CVTPS2PD UNPCKLPD MOVUPD MAXPD UNPCKHPD MAXSD DIVSD SHUFPD ORPD XORPD PSHUFLW DIVPD CVTPD2PS ANDNPD SQRTSD CVTPS2DQ
Streaming SIMD Extensions 3 (SSE3)
Used by default on supported hardware.
 
MOVSLDUP MOVSHDUP HADDPS
Supplemental Streaming SIMD Extensions 3 (SSSE3)
Used by default on supported hardware.
 
PSHUFB PALIGNR PHADDD
Advanced Vector Extensions (AVX)
Used by default on supported hardware.
Found on Intel processors since Sandy Bridge (2011).
Found on AMD processors since Bulldozer (2011).

 
VBROADCASTSS VEXTRACTF128 VZEROUPPER VZEROALL VPERM2F128 VINSERTF128 VBROADCASTF128 VPERMILPS VMASKMOVPS VBROADCASTSD
Advanced Vector Extensions 2 (AVX2)
Used by default on supported hardware.
Found on Intel processors since Haswell (2013).
Found on AMD processors since Excavator (2016).

 
VINSERTI128 VPBROADCASTD VPERMQ VPBROADCASTQ VEXTRACTI128 VPERM2I128 VPBROADCASTB VGATHERQPS VPGATHERQD VPERMD VPMASKMOVD VPBROADCASTW VPGATHERQQ
Fused Multiply-Add (FMA)
Used by default on supported hardware.
Found on Intel processors since Haswell (2013).
Found on AMD processors since Bulldozer (2011).

 
VFMADD231PS VFMADD231SS VFMADD132SS VFMADD132PS VFNMADD231SS VFMADD231SD VFMADD132SD VFMSUB132SS VFNMADD132SS VFMSUB231SS VFMSUB132SD VFMADD213SS VFMSUB231SD VFNMADD213PS VFNMADD132PS VFNMADD213SS VFMADD213PS VFNMADD132SD VFMSUB132PS VFMADD213SD VFNMADD231PS VFMSUB213PS
Advanced Vector Extensions 512 (AVX512)
Requires passing a supported compiler/build flag (verified with targets: tigerlake, cascadelake, sapphirerapids).
 
(ZMM REGISTER USE)
The test / benchmark does honor compiler flag changes.
Last automated analysis: 17 January 2022
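Since the AVX-512 path requires a suitable compiler target and the test honors compiler flag changes, one way to request it is sketched below (assuming a Linux system; `cascadelake` is just one of the verified targets listed above, and the Phoronix Test Suite invocation is shown as a comment because it requires phoronix-test-suite to be installed):

```shell
#!/bin/sh
# Detect AVX-512 Foundation support on the local CPU. This reads
# /proc/cpuinfo on Linux; on other systems it simply reports "no".
if grep -qm1 avx512f /proc/cpuinfo 2>/dev/null; then
    echo "avx512: yes"
else
    echo "avx512: no"
fi

# If supported, a matching -march target can be passed to the build through
# the CFLAGS/CXXFLAGS environment variables that the Phoronix Test Suite
# honors when compiling test profiles, e.g.:
#   CFLAGS="-O3 -march=cascadelake" CXXFLAGS="-O3 -march=cascadelake" \
#       phoronix-test-suite benchmark mnn
```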

This test profile binary relies on the shared libraries libMNN.so, libc.so.6, libm.so.6, libmvec.so.1.

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. The CPU architectures listed are those where successful OpenBenchmarking.org result uploads occurred, helping to determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture         Kernel Identifier   Verified On
Intel / AMD x86 64-bit   x86_64              (Many Processors)
RISC-V 64-bit            riscv64             SiFive RISC-V, rv64imafdcvsu
ARMv8 64-bit             arm64               Apple M1, Apple M2 Pro
ARMv8 64-bit             aarch64             ARMv8 Cortex-A72 4-Core, ARMv8 Cortex-A76 4-Core, ARMv8 Cortex-A78E 6-Core, ARMv8 Neoverse-N1 128-Core, Apple, Apple M1, Apple M2, Rockchip ARMv8 Cortex-A76 4-Core

Recent Test Results

OpenBenchmarking.org Results Compare

1 System - 8 Benchmark Results

Intel Core i5-10210U - HUAWEI NBLB-WAX9N-PCB - Intel Comet Lake PCH-LP

Debian 12 - 6.1.0-18-amd64 - KDE Plasma 5.27.5

1 System - 8 Benchmark Results

rv64imafdcvsu - T-HEAD Light Lichee Pi 4A configuration for 16GB DDR board - 16GB

Debian 12 - 5.10.113-lpi4a - Xfce

1 System - 8 Benchmark Results

SiFive RISC-V - StarFive VisionFive V2 - 8GB

Debian - 5.15.0-starfive - X Server 1.21.1.5

1 System - 341 Benchmark Results

AMD Ryzen 9 7950X 16-Core - ASUS ProArt X670E-CREATOR WIFI - AMD Device 14d8

Pop 22.04 - 6.6.10-76060610-generic - GNOME Shell 42.5

1 System - 8 Benchmark Results

SiFive RISC-V - StarFive VisionFive V2 - 8GB

Debian - 5.15.0-starfive - GNOME Shell 43.1

1 System - 304 Benchmark Results

AMD A4-5300 APU - ASRock FM2A88M-HD+ R3.0 - AMD 15h

Ubuntu 20.04 - 5.15.0-89-generic - GNOME Shell 3.36.9

1 System - 295 Benchmark Results

AMD Ryzen 9 7950X3D 16-Core - ASUS PRIME X670E-PRO WIFI - AMD Device 14d8

Ubuntu 22.04 - 6.2.0-39-generic - GNOME Shell 42.9

1 System - 301 Benchmark Results

AMD A8-9600 RADEON R7 10 COMPUTE CORES 4C+6G - ASRock A320M-HDV R4.0 - AMD 15h

Ubuntu 20.04 - 5.15.0-89-generic - GNOME Shell 3.36.9

Find More Test Results