mnn 7zip

Apple M2 testing with an Apple MacBook Air (13" M2 2022) and llvmpipe on Arch rolling via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209018-NE-MNN7ZIP8200
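
For reference, the comparison can be reproduced locally along these lines; the pts/compress-7zip and pts/mnn profile names are an assumption about which OpenBenchmarking.org test profiles correspond to the results shown here:

  # Fetch this result file, install the same tests, and append your own
  # system as an additional identifier for side-by-side comparison:
  phoronix-test-suite benchmark 2209018-NE-MNN7ZIP8200

  # Or run the two underlying test profiles on their own (profile names assumed):
  phoronix-test-suite benchmark pts/compress-7zip pts/mnn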

Result Identifier   Date                Test Duration
A                   September 01 2022   3 Hours, 25 Minutes
B                   September 01 2022   5 Hours, 13 Minutes
C                   September 01 2022   36 Minutes
D                   September 01 2022   34 Minutes
Average                                 2 Hours, 27 Minutes



Processor:          Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
Motherboard:        Apple MacBook Air (13" M2 2022)
Memory:             8GB
Disk:               251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
Graphics:           llvmpipe
Network:            Broadcom Device 4433 + Broadcom Device 5f71
OS:                 Arch rolling
Kernel:             5.19.0-rc7-asahi-2-1-ARCH (aarch64)
Desktop:            KDE Plasma 5.25.4
Display Server:     X Server 1.21.1.4
OpenGL:             4.5 Mesa 22.1.6 (LLVM 14.0.6 128 bits)
Compiler:           GCC 12.1.0 + Clang 14.0.6
File-System:        ext4
Screen Resolution:  2560x1600

System Logs
- Compiler configuration: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu
- Scaling Governor: apple-cpufreq schedutil
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
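
Outside the Phoronix Test Suite, the same integrated benchmark can be launched directly from the 7-Zip command-line tool; a minimal sketch, assuming the -mmt thread switch is supported by the installed 7-Zip build:

  # Run 7-Zip's built-in LZMA benchmark, which reports compression and
  # decompression throughput in MIPS (pinned to 8 threads here as an example):
  7z b -mmt8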

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  D: 52859
  C: 53278
  B: 52824
  A: 53393
  Reported SE: +/- 478.23 (N = 3); +/- 623.15 (N = 3)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  D: 30675
  C: 30488
  B: 30518
  A: 30673
  Reported SE: +/- 90.67 (N = 3); +/- 6.57 (N = 3)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN can also make use of AVX-512 extensions where available. Learn more via the OpenBenchmarking.org test page.
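
As a rough sketch of how a CPU-threaded MNN benchmark build is typically produced from source (the CMake option names below are assumptions based on common MNN builds, not settings recorded in this result file):

  # Clone MNN and build the OpenMP/CPU benchmark tooling; verify option
  # names against the MNN documentation for the version under test:
  git clone https://github.com/alibaba/MNN.git
  cd MNN && mkdir build && cd build
  cmake .. -DMNN_OPENMP=ON -DMNN_BUILD_BENCHMARK=ON
  make -j"$(nproc)"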

Mobile Neural Network 2.1 - Model: nasnet (ms, Fewer Is Better)
  D: 11.83 (MIN: 11.04 / MAX: 23.25)
  C: 16.46 (MIN: 15.07 / MAX: 21.7)
  B: 13.38 (MIN: 10.7 / MAX: 71.42)
  A: 13.17 (MIN: 10.26 / MAX: 69.23)
  Reported SE: +/- 0.71 (N = 9); +/- 1.26 (N = 6)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl (applies to all Mobile Neural Network results below)

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, Fewer Is Better)
  D: 1.993 (MIN: 1.76 / MAX: 2.66)
  C: 1.716 (MIN: 1.53 / MAX: 2.79)
  B: 1.760 (MIN: 1.61 / MAX: 3)
  A: 1.776 (MIN: 1.53 / MAX: 3.99)
  Reported SE: +/- 0.025 (N = 9); +/- 0.045 (N = 6)

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better)
  D: 4.463 (MIN: 3.58 / MAX: 6.16)
  C: 4.577 (MIN: 3.56 / MAX: 6.21)
  B: 4.668 (MIN: 3.54 / MAX: 13.41)
  A: 4.642 (MIN: 3.56 / MAX: 14.48)
  Reported SE: +/- 0.231 (N = 9); +/- 0.295 (N = 6)

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, Fewer Is Better)
  D: 33.04 (MIN: 30.96 / MAX: 42.21)
  C: 33.67 (MIN: 31.92 / MAX: 41.07)
  B: 34.91 (MIN: 27.54 / MAX: 132.86)
  A: 34.23 (MIN: 27.54 / MAX: 138.11)
  Reported SE: +/- 0.67 (N = 9); +/- 1.31 (N = 6)

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  D: 9.288 (MIN: 6.36 / MAX: 22.66)
  C: 7.829 (MIN: 7.07 / MAX: 17.35)
  B: 8.102 (MIN: 6.09 / MAX: 62.5)
  A: 8.116 (MIN: 5.99 / MAX: 62.46)
  Reported SE: +/- 0.370 (N = 9); +/- 0.392 (N = 6)

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  D: 5.081 (MIN: 4.23 / MAX: 5.91)
  C: 6.576 (MIN: 4.14 / MAX: 156.14)
  B: 4.917 (MIN: 3.5 / MAX: 44.16)
  A: 4.720 (MIN: 3.52 / MAX: 19.07)
  Reported SE: +/- 0.343 (N = 9); +/- 0.252 (N = 6)

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  D: 6.164 (MIN: 5.07 / MAX: 6.85)
  C: 7.245 (MIN: 4.96 / MAX: 19.62)
  B: 6.323 (MIN: 4.89 / MAX: 15.9)
  A: 6.338 (MIN: 4.91 / MAX: 40.57)
  Reported SE: +/- 0.245 (N = 9); +/- 0.422 (N = 6)

Mobile Neural Network 2.1 - Model: inception-v3 (ms, Fewer Is Better)
  D: 44.08 (MIN: 37.69 / MAX: 54.94)
  C: 44.87 (MIN: 37.43 / MAX: 54.33)
  B: 49.50 (MIN: 37.44 / MAX: 146.03)
  A: 47.12 (MIN: 36.63 / MAX: 164.89)
  Reported SE: +/- 1.21 (N = 9); +/- 1.82 (N = 6)

10 Results Shown

7-Zip Compression:
  Compression Rating
  Decompression Rating
Mobile Neural Network:
  nasnet
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3