mnn 7zip

Apple M2 testing with an Apple MacBook Air (13″ M2 2022) and llvmpipe on Arch rolling via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209018-NE-MNN7ZIP8200
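
A minimal command-line sketch of that comparison workflow, assuming the Phoronix Test Suite is already installed; the pts/mnn and pts/compress-7zip profile names are an assumption inferred from the "mnn:" and "compress-7zip:" result identifiers in this file:

    # Fetch this result file and run the same tests locally for a side-by-side comparison
    phoronix-test-suite benchmark 2209018-NE-MNN7ZIP8200

    # Or install and benchmark the individual test profiles directly
    phoronix-test-suite install pts/mnn pts/compress-7zip
    phoronix-test-suite benchmark pts/mnn pts/compress-7zip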

Result Identifier    Date                 Test Duration
A                    September 01 2022    3 Hours, 25 Minutes
B                    September 01 2022    5 Hours, 13 Minutes
C                    September 01 2022    36 Minutes
D                    September 01 2022    34 Minutes
Average Test Duration: 2 Hours, 27 Minutes



System Information

  Processor:          Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
  Motherboard:        Apple MacBook Air (13″ M2 2022)
  Memory:             8GB
  Disk:               251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
  Graphics:           llvmpipe
  Network:            Broadcom Device 4433 + Broadcom Device 5f71
  OS:                 Arch rolling
  Kernel:             5.19.0-rc7-asahi-2-1-ARCH (aarch64)
  Desktop:            KDE Plasma 5.25.4
  Display Server:     X Server 1.21.1.4
  OpenGL:             4.5 Mesa 22.1.6 (LLVM 14.0.6 128 bits)
  Compiler:           GCC 12.1.0 + Clang 14.0.6
  File-System:        ext4
  Screen Resolution:  2560x1600

System Logs

  Compiler Notes: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu
  Processor Notes: Scaling Governor: apple-cpufreq schedutil
  Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview: relative performance of runs A, B, C, and D (normalized, roughly 100% to 139%) across the eight Mobile Neural Network models (MobileNetV2_224, nasnet, SqueezeNetV1.0, mobilenet-v1-1.0, mobilenetV3, inception-v3, resnet-v2-50, squeezenetv1.1) and the two 7-Zip Compression results (Compression Rating, Decompression Rating).

Result Summary (mnn results in ms, fewer is better; compress-7zip ratings in MIPS, more is better)

  Test                                  A        B        C        D
  mnn: inception-v3                     47.119   49.503   44.873   44.076
  mnn: mobilenet-v1-1.0                 6.338    6.323    7.245    6.164
  mnn: MobileNetV2_224                  4.720    4.917    6.576    5.081
  mnn: SqueezeNetV1.0                   8.116    8.102    7.829    9.288
  mnn: resnet-v2-50                     34.227   34.913   33.667   33.036
  mnn: squeezenetv1.1                   4.642    4.668    4.577    4.463
  mnn: mobilenetV3                      1.776    1.760    1.716    1.993
  mnn: nasnet                           13.170   13.379   16.459   11.827
  compress-7zip: Decompression Rating   30673    30518    30488    30675
  compress-7zip: Compression Rating     53393    52824    53278    52859

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can also make use of AVX-512 extensions where available. Learn more via the OpenBenchmarking.org test page.
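
As a usage sketch, the MNN test profile can also be run on its own through the Phoronix Test Suite. The commands below assume the profile is published as pts/mnn (an assumption based on the "mnn:" identifiers in this result file); the batch-mode variant is just one way to avoid the interactive model-selection prompts:

    # Interactive run; the test suite prompts for which model(s) to benchmark
    phoronix-test-suite benchmark pts/mnn

    # Non-interactive alternative: configure batch defaults once, then run unattended
    phoronix-test-suite batch-setup
    phoronix-test-suite batch-benchmark pts/mnn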

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better)
  D: 44.08   (MIN: 37.69 / MAX: 54.94)
  C: 44.87   (MIN: 37.43 / MAX: 54.33)
  A: 47.12   (SE +/- 1.82, N = 6; Min: 39.91 / Avg: 47.12 / Max: 51.8; MIN: 36.63 / MAX: 164.89)
  B: 49.50   (SE +/- 1.21, N = 9; Min: 42.15 / Avg: 49.5 / Max: 53.71; MIN: 37.44 / MAX: 146.03)
  Compiler options (all Mobile Neural Network results): (CXX) g++ -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  D: 6.164   (MIN: 5.07 / MAX: 6.85)
  B: 6.323   (SE +/- 0.245, N = 9; Min: 5.82 / Avg: 6.32 / Max: 7.8; MIN: 4.89 / MAX: 15.9)
  A: 6.338   (SE +/- 0.422, N = 6; Min: 5.63 / Avg: 6.34 / Max: 8.42; MIN: 4.91 / MAX: 40.57)
  C: 7.245   (MIN: 4.96 / MAX: 19.62)

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better)
  A: 4.720   (SE +/- 0.252, N = 6; Min: 4.08 / Avg: 4.72 / Max: 5.59; MIN: 3.52 / MAX: 19.07)
  B: 4.917   (SE +/- 0.343, N = 9; Min: 4.17 / Avg: 4.92 / Max: 6.97; MIN: 3.5 / MAX: 44.16)
  D: 5.081   (MIN: 4.23 / MAX: 5.91)
  C: 6.576   (MIN: 4.14 / MAX: 156.14)

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better)
  C: 7.829   (MIN: 7.07 / MAX: 17.35)
  B: 8.102   (SE +/- 0.370, N = 9; Min: 7.15 / Avg: 8.1 / Max: 10.14; MIN: 6.09 / MAX: 62.5)
  A: 8.116   (SE +/- 0.392, N = 6; Min: 7.16 / Avg: 8.12 / Max: 9.79; MIN: 5.99 / MAX: 62.46)
  D: 9.288   (MIN: 6.36 / MAX: 22.66)

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better)
  D: 33.04   (MIN: 30.96 / MAX: 42.21)
  C: 33.67   (MIN: 31.92 / MAX: 41.07)
  A: 34.23   (SE +/- 1.31, N = 6; Min: 30.11 / Avg: 34.23 / Max: 38.66; MIN: 27.54 / MAX: 138.11)
  B: 34.91   (SE +/- 0.67, N = 9; Min: 31.4 / Avg: 34.91 / Max: 38.4; MIN: 27.54 / MAX: 132.86)

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better)
  D: 4.463   (MIN: 3.58 / MAX: 6.16)
  C: 4.577   (MIN: 3.56 / MAX: 6.21)
  A: 4.642   (SE +/- 0.295, N = 6; Min: 4.27 / Avg: 4.64 / Max: 6.11; MIN: 3.56 / MAX: 14.48)
  B: 4.668   (SE +/- 0.231, N = 9; Min: 4.18 / Avg: 4.67 / Max: 6.46; MIN: 3.54 / MAX: 13.41)

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better)
  C: 1.716   (MIN: 1.53 / MAX: 2.79)
  B: 1.760   (SE +/- 0.025, N = 9; Min: 1.65 / Avg: 1.76 / Max: 1.9; MIN: 1.61 / MAX: 3)
  A: 1.776   (SE +/- 0.045, N = 6; Min: 1.7 / Avg: 1.78 / Max: 1.93; MIN: 1.53 / MAX: 3.99)
  D: 1.993   (MIN: 1.76 / MAX: 2.66)

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better)
  D: 11.83   (MIN: 11.04 / MAX: 23.25)
  A: 13.17   (SE +/- 1.26, N = 6; Min: 11.27 / Avg: 13.17 / Max: 19.2; MIN: 10.26 / MAX: 69.23)
  B: 13.38   (SE +/- 0.71, N = 9; Min: 11.46 / Avg: 13.38 / Max: 17.65; MIN: 10.7 / MAX: 71.42)
  C: 16.46   (MIN: 15.07 / MAX: 21.7)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
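
Outside the Phoronix Test Suite wrapper, 7-Zip's integrated benchmark can be invoked directly via its b command; a minimal sketch (the -mmt switch for pinning the thread count is an assumption about how one might mirror the 8 threads reported for this system):

    # Run 7-Zip's built-in benchmark; it reports compression and decompression MIPS ratings
    7z b

    # Optionally fix the benchmark to 8 threads
    7z b -mmt8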

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better)
  D: 30675
  A: 30673   (SE +/- 6.57, N = 3; Min: 30664 / Avg: 30673.33 / Max: 30686)
  B: 30518   (SE +/- 90.67, N = 3; Min: 30400 / Avg: 30517.67 / Max: 30696)
  C: 30488
  Compiler options (both 7-Zip results): (CXX) g++ -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better)
  A: 53393   (SE +/- 623.15, N = 3; Min: 52491 / Avg: 53393.33 / Max: 54589)
  C: 53278
  D: 52859
  B: 52824   (SE +/- 478.23, N = 3; Min: 52332 / Avg: 52823.67 / Max: 53780)

10 Results Shown

Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
  nasnet
7-Zip Compression:
  Decompression Rating
  Compression Rating