mnn 7zip

Apple M2 testing with an Apple MacBook Air (13" M2 2022) and llvmpipe graphics on Arch rolling, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209018-NE-MNN7ZIP8200
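For reference, that comparison is a single shell command; a minimal sketch, assuming the phoronix-test-suite client is already installed from your distribution's packages:

    # Fetch this public result file, install the required test profiles when prompted,
    # and run the same benchmarks for a side-by-side comparison
    phoronix-test-suite benchmark 2209018-NE-MNN7ZIP8200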

Test Runs

Result Identifier   Date Run            Test Duration
A                   September 01 2022   3 Hours, 25 Minutes
B                   September 01 2022   5 Hours, 13 Minutes
C                   September 01 2022   36 Minutes
D                   September 01 2022   34 Minutes
(Average)                               2 Hours, 27 Minutes



mnn 7zip Benchmarks - System Information (OpenBenchmarking.org / Phoronix Test Suite)

  Processor:          Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
  Motherboard:        Apple MacBook Air (13" M2 2022)
  Memory:             8GB
  Disk:               251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
  Graphics:           llvmpipe
  Network:            Broadcom Device 4433 + Broadcom Device 5f71
  OS:                 Arch rolling
  Kernel:             5.19.0-rc7-asahi-2-1-ARCH (aarch64)
  Desktop:            KDE Plasma 5.25.4
  Display Server:     X Server 1.21.1.4
  OpenGL:             4.5 Mesa 22.1.6 (LLVM 14.0.6 128 bits)
  Compiler:           GCC 12.1.0 + Clang 14.0.6
  File-System:        ext4
  Screen Resolution:  2560x1600

System Logs:
  - Compiler configuration: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu
  - Scaling Governor: apple-cpufreq schedutil
  - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, runs A-D, relative performance on a 100% to 139% scale): Mobile Neural Network (MobileNetV2_224, nasnet, SqueezeNetV1.0, mobilenet-v1-1.0, mobilenetV3, inception-v3, resnet-v2-50, squeezenetv1.1) and 7-Zip Compression (Compression Rating, Decompression Rating).

mnn 7zip - Results Summary (OpenBenchmarking.org)

Test (unit)                                    A         B         C         D
compress-7zip: Compression Rating (MIPS)       53393     52824     53278     52859
compress-7zip: Decompression Rating (MIPS)     30673     30518     30488     30675
mnn: inception-v3 (ms)                         47.119    49.503    44.873    44.076
mnn: mobilenet-v1-1.0 (ms)                     6.338     6.323     7.245     6.164
mnn: MobileNetV2_224 (ms)                      4.720     4.917     6.576     5.081
mnn: SqueezeNetV1.0 (ms)                       8.116     8.102     7.829     9.288
mnn: resnet-v2-50 (ms)                         34.227    34.913    33.667    33.036
mnn: squeezenetv1.1 (ms)                       4.642     4.668     4.577     4.463
mnn: mobilenetV3 (ms)                          1.776     1.760     1.716     1.993
mnn: nasnet (ms)                               13.170    13.379    16.459    11.827

(MIPS ratings: more is better. ms times: fewer is better.)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
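These ratings come from 7-Zip's own built-in benchmark mode, so they can also be sanity-checked outside the Phoronix Test Suite; a minimal sketch, assuming a 7z binary is on the PATH (the g++ options footnote on each result below suggests the compress-7zip test profile compiled its own 7-Zip 22.01 for these numbers):

    # 7-Zip's integrated benchmark: prints compression and decompression speeds and MIPS ratings
    7z b

    # Run the same test profile used for these results via the Phoronix Test Suite
    phoronix-test-suite benchmark compress-7zip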

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better):
  A: 53393   SE +/- 623.15, N = 3   (Min: 52491 / Avg: 53393.33 / Max: 54589)
  B: 52824   SE +/- 478.23, N = 3   (Min: 52332 / Avg: 52823.67 / Max: 53780)
  C: 53278
  D: 52859
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better):
  A: 30673   SE +/- 6.57, N = 3    (Min: 30664 / Avg: 30673.33 / Max: 30686)
  B: 30518   SE +/- 90.67, N = 3   (Min: 30400 / Avg: 30517.67 / Max: 30696)
  C: 30488
  D: 30675
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated path. MNN can also make use of AVX-512 extensions where available. Learn more via the OpenBenchmarking.org test page.
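To reproduce only the MNN portion of this comparison, the corresponding test profile can be run on its own; a minimal sketch, assuming the profile identifier matches the mnn key used in the results table and that the Phoronix Test Suite prompts for which of the bundled models to benchmark:

    # Build and run the Mobile Neural Network test profile (CPU / OpenMP build)
    phoronix-test-suite benchmark mnn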

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better):
  A: 47.12   SE +/- 1.82, N = 6   (Min: 39.91 / Avg: 47.12 / Max: 51.8)    MIN: 36.63 / MAX: 164.89
  B: 49.50   SE +/- 1.21, N = 9   (Min: 42.15 / Avg: 49.5 / Max: 53.71)    MIN: 37.44 / MAX: 146.03
  C: 44.87   MIN: 37.43 / MAX: 54.33
  D: 44.08   MIN: 37.69 / MAX: 54.94
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  A: 6.338   SE +/- 0.422, N = 6   (Min: 5.63 / Avg: 6.34 / Max: 8.42)   MIN: 4.91 / MAX: 40.57
  B: 6.323   SE +/- 0.245, N = 9   (Min: 5.82 / Avg: 6.32 / Max: 7.8)    MIN: 4.89 / MAX: 15.9
  C: 7.245   MIN: 4.96 / MAX: 19.62
  D: 6.164   MIN: 5.07 / MAX: 6.85
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better):
  A: 4.720   SE +/- 0.252, N = 6   (Min: 4.08 / Avg: 4.72 / Max: 5.59)   MIN: 3.52 / MAX: 19.07
  B: 4.917   SE +/- 0.343, N = 9   (Min: 4.17 / Avg: 4.92 / Max: 6.97)   MIN: 3.5 / MAX: 44.16
  C: 6.576   MIN: 4.14 / MAX: 156.14
  D: 5.081   MIN: 4.23 / MAX: 5.91
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better):
  A: 8.116   SE +/- 0.392, N = 6   (Min: 7.16 / Avg: 8.12 / Max: 9.79)    MIN: 5.99 / MAX: 62.46
  B: 8.102   SE +/- 0.370, N = 9   (Min: 7.15 / Avg: 8.1 / Max: 10.14)    MIN: 6.09 / MAX: 62.5
  C: 7.829   MIN: 7.07 / MAX: 17.35
  D: 9.288   MIN: 6.36 / MAX: 22.66
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better):
  A: 34.23   SE +/- 1.31, N = 6   (Min: 30.11 / Avg: 34.23 / Max: 38.66)   MIN: 27.54 / MAX: 138.11
  B: 34.91   SE +/- 0.67, N = 9   (Min: 31.4 / Avg: 34.91 / Max: 38.4)     MIN: 27.54 / MAX: 132.86
  C: 33.67   MIN: 31.92 / MAX: 41.07
  D: 33.04   MIN: 30.96 / MAX: 42.21
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better):
  A: 4.642   SE +/- 0.295, N = 6   (Min: 4.27 / Avg: 4.64 / Max: 6.11)   MIN: 3.56 / MAX: 14.48
  B: 4.668   SE +/- 0.231, N = 9   (Min: 4.18 / Avg: 4.67 / Max: 6.46)   MIN: 3.54 / MAX: 13.41
  C: 4.577   MIN: 3.56 / MAX: 6.21
  D: 4.463   MIN: 3.58 / MAX: 6.16
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better):
  A: 1.776   SE +/- 0.045, N = 6   (Min: 1.7 / Avg: 1.78 / Max: 1.93)    MIN: 1.53 / MAX: 3.99
  B: 1.760   SE +/- 0.025, N = 9   (Min: 1.65 / Avg: 1.76 / Max: 1.9)    MIN: 1.61 / MAX: 3
  C: 1.716   MIN: 1.53 / MAX: 2.79
  D: 1.993   MIN: 1.76 / MAX: 2.66
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better):
  A: 13.17   SE +/- 1.26, N = 6   (Min: 11.27 / Avg: 13.17 / Max: 19.2)    MIN: 10.26 / MAX: 69.23
  B: 13.38   SE +/- 0.71, N = 9   (Min: 11.46 / Avg: 13.38 / Max: 17.65)   MIN: 10.7 / MAX: 71.42
  C: 16.46   MIN: 15.07 / MAX: 21.7
  D: 11.83   MIN: 11.04 / MAX: 23.25
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

10 Results Shown

7-Zip Compression:
  Compression Rating
  Decompression Rating
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
  nasnet