Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated configuration. MNN can make use of AVX-512 extensions where available.


Mobile Neural Network 3.0

Model: SqueezeNetV1.0

OpenBenchmarking.org metrics for this test profile configuration based on 39 public results since 18 November 2024 with the latest data as of 20 January 2025.

Below is an overview of generalized performance for components with sufficient statistically significant data from user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary widely; this overview is intended only as general guidance on performance expectations.

Component                             Percentile Rank   # Compatible Public Results   ms (Average)
Zen 5 [8 Cores / 16 Threads]          93rd              5                             1.348 +/- 0.009
                                      85th              4                             1.781 +/- 0.012
Mid-Tier                              75th                                            > 3.121
                                      67th              3                             3.176 +/- 0.028
Zen 5 [96 Cores / 192 Threads]        62nd              6                             3.213 +/- 0.085
Zen 4 [8 Cores / 16 Threads]          52nd              3                             4.318 +/- 0.081
Median                                50th                                            4.354
Zen 5 [10 Cores / 20 Threads]        39th              4                             5.250 +/- 0.163
Lunar Lake [8 Cores / 8 Threads]      31st              4                             5.692 +/- 0.016
Low-Tier                              25th                                            > 6.044
Zen 4 [64 Cores / 128 Threads]        21st              3                             6.078 +/- 0.052
Meteor Lake [16 Cores / 22 Threads]   11th              3                             6.380 +/- 0.074
Zen 5 [12 Cores / 24 Threads]         6th               4                             6.580 +/- 0.190
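Since this benchmark reports latency in milliseconds, a lower time earns a higher percentile rank. The ranking idea can be sketched as follows; this is an illustrative computation under that lower-is-better assumption, not OpenBenchmarking.org's exact aggregation method, and the sample times are made up:

```python
def percentile_rank(all_ms: list[float], ms: float) -> float:
    """Percentile rank of a result in a lower-is-better metric:
    the percentage of public results slower (higher ms) than this one."""
    slower = sum(1 for t in all_ms if t > ms)
    return 100.0 * slower / len(all_ms)


if __name__ == "__main__":
    # Hypothetical pool of uploaded SqueezeNetV1.0 average times (ms)
    public_results = [1.35, 1.78, 3.18, 3.21, 4.32, 4.35, 5.25, 5.69, 6.08, 6.38]
    print(f"1.35 ms ranks at the {percentile_rank(public_results, 1.35):.0f}th percentile")
    print(f"6.38 ms ranks at the {percentile_rank(public_results, 6.38):.0f}th percentile")
```

The median result lands at the 50th percentile by construction, which matches how the Median and tier rows in the table above partition the result pool.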