mnn 7zip

Apple M2 testing with an Apple MacBook Air (13" M2 2022) and llvmpipe graphics on Arch rolling via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209018-NE-MNN7ZIP8200

Run Management

  Result Identifier    Date Run             Test Duration
  A                    September 01 2022    3 Hours, 25 Minutes
  B                    September 01 2022    5 Hours, 13 Minutes
  C                    September 01 2022    36 Minutes
  D                    September 01 2022    34 Minutes
  Average                                   2 Hours, 27 Minutes



mnn 7zip Benchmarks - System Information

  Processor:          Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
  Motherboard:        Apple MacBook Air (13" M2 2022)
  Memory:             8GB
  Disk:               251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
  Graphics:           llvmpipe
  Network:            Broadcom Device 4433 + Broadcom Device 5f71
  OS:                 Arch rolling
  Kernel:             5.19.0-rc7-asahi-2-1-ARCH (aarch64)
  Desktop:            KDE Plasma 5.25.4
  Display Server:     X Server 1.21.1.4
  OpenGL:             4.5 Mesa 22.1.6 (LLVM 14.0.6 128 bits)
  Compiler:           GCC 12.1.0 + Clang 14.0.6
  File-System:        ext4
  Screen Resolution:  2560x1600

System Logs
  - Compiler configuration: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu
  - Scaling Governor: apple-cpufreq schedutil
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, runs A/B/C/D): relative performance across Mobile Neural Network (MobileNetV2_224, nasnet, SqueezeNetV1.0, mobilenet-v1-1.0, mobilenetV3, inception-v3, resnet-v2-50, squeezenetv1.1) and 7-Zip Compression (Compression Rating, Decompression Rating).

mnn 7zip - Summary of Results (7-Zip ratings in MIPS, more is better; MNN times in ms, fewer is better)

  Test                                    A         B         C         D
  compress-7zip: Compression Rating       53393     52824     53278     52859
  compress-7zip: Decompression Rating     30673     30518     30488     30675
  mnn: nasnet                             13.170    13.379    16.459    11.827
  mnn: mobilenetV3                        1.776     1.760     1.716     1.993
  mnn: squeezenetv1.1                     4.642     4.668     4.577     4.463
  mnn: resnet-v2-50                       34.227    34.913    33.667    33.036
  mnn: SqueezeNetV1.0                     8.116     8.102     7.829     9.288
  mnn: MobileNetV2_224                    4.720     4.917     6.576     5.081
  mnn: mobilenet-v1-1.0                   6.338     6.323     7.245     6.164
  mnn: inception-v3                       47.119    49.503    44.873    44.076

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  A: 53393  (SE +/- 623.15, N = 3; Min: 52491 / Avg: 53393.33 / Max: 54589)
  B: 52824  (SE +/- 478.23, N = 3; Min: 52332 / Avg: 52823.67 / Max: 53780)
  C: 53278
  D: 52859
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  A: 30673  (SE +/- 6.57, N = 3; Min: 30664 / Avg: 30673.33 / Max: 30686)
  B: 30518  (SE +/- 90.67, N = 3; Min: 30400 / Avg: 30517.67 / Max: 30696)
  C: 30488
  D: 30675
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated variant. MNN can also make use of AVX-512 extensions on CPUs that support them (not applicable to this aarch64 system). Learn more via the OpenBenchmarking.org test page.
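
As a rough illustration of what the per-model MNN numbers below measure, here is a minimal C++ timing loop against MNN's Interpreter API. This is a sketch under assumptions, not the harness the test profile actually uses: the model file name ("nasnet.mnn"), thread count, and iteration count are placeholders chosen to match this 4-core / 8-thread M2.

    // Minimal sketch of a CPU inference timing loop with MNN's C++ API.
    // Assumptions: "nasnet.mnn" model path, 8 threads, 50 iterations.
    #include <MNN/Interpreter.hpp>
    #include <chrono>
    #include <cstdio>
    #include <memory>

    int main() {
        // Load a converted .mnn model (path is an illustrative assumption).
        std::shared_ptr<MNN::Interpreter> net(
            MNN::Interpreter::createFromFile("nasnet.mnn"));
        if (!net) return 1;

        // CPU backend, as used by this CPU-threaded test profile.
        MNN::ScheduleConfig config;
        config.type = MNN_FORWARD_CPU;
        config.numThread = 8;  // matches the M2's 8 hardware threads
        MNN::Session* session = net->createSession(config);

        // Time a fixed number of forward passes and report the mean latency.
        const int iterations = 50;
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < iterations; ++i) {
            net->runSession(session);  // one inference per iteration
        }
        auto end = std::chrono::steady_clock::now();
        double totalMs =
            std::chrono::duration<double, std::milli>(end - start).count();
        std::printf("avg latency: %.3f ms\n", totalMs / iterations);

        net->releaseSession(session);
        return 0;
    }

Each figure in the graphs below is a per-inference latency of this kind, which is why fewer milliseconds is better.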

Mobile Neural Network 2.1 - Model: nasnet (ms, Fewer Is Better)
  A: 13.17  (SE +/- 1.26, N = 6; Min: 11.27 / Avg: 13.17 / Max: 19.2; MIN: 10.26 / MAX: 69.23)
  B: 13.38  (SE +/- 0.71, N = 9; Min: 11.46 / Avg: 13.38 / Max: 17.65; MIN: 10.7 / MAX: 71.42)
  C: 16.46  (MIN: 15.07 / MAX: 21.7)
  D: 11.83  (MIN: 11.04 / MAX: 23.25)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, Fewer Is Better)
  A: 1.776  (SE +/- 0.045, N = 6; Min: 1.7 / Avg: 1.78 / Max: 1.93; MIN: 1.53 / MAX: 3.99)
  B: 1.760  (SE +/- 0.025, N = 9; Min: 1.65 / Avg: 1.76 / Max: 1.9; MIN: 1.61 / MAX: 3)
  C: 1.716  (MIN: 1.53 / MAX: 2.79)
  D: 1.993  (MIN: 1.76 / MAX: 2.66)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better)
  A: 4.642  (SE +/- 0.295, N = 6; Min: 4.27 / Avg: 4.64 / Max: 6.11; MIN: 3.56 / MAX: 14.48)
  B: 4.668  (SE +/- 0.231, N = 9; Min: 4.18 / Avg: 4.67 / Max: 6.46; MIN: 3.54 / MAX: 13.41)
  C: 4.577  (MIN: 3.56 / MAX: 6.21)
  D: 4.463  (MIN: 3.58 / MAX: 6.16)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, Fewer Is Better)
  A: 34.23  (SE +/- 1.31, N = 6; Min: 30.11 / Avg: 34.23 / Max: 38.66; MIN: 27.54 / MAX: 138.11)
  B: 34.91  (SE +/- 0.67, N = 9; Min: 31.4 / Avg: 34.91 / Max: 38.4; MIN: 27.54 / MAX: 132.86)
  C: 33.67  (MIN: 31.92 / MAX: 41.07)
  D: 33.04  (MIN: 30.96 / MAX: 42.21)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  A: 8.116  (SE +/- 0.392, N = 6; Min: 7.16 / Avg: 8.12 / Max: 9.79; MIN: 5.99 / MAX: 62.46)
  B: 8.102  (SE +/- 0.370, N = 9; Min: 7.15 / Avg: 8.1 / Max: 10.14; MIN: 6.09 / MAX: 62.5)
  C: 7.829  (MIN: 7.07 / MAX: 17.35)
  D: 9.288  (MIN: 6.36 / MAX: 22.66)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  A: 4.720  (SE +/- 0.252, N = 6; Min: 4.08 / Avg: 4.72 / Max: 5.59; MIN: 3.52 / MAX: 19.07)
  B: 4.917  (SE +/- 0.343, N = 9; Min: 4.17 / Avg: 4.92 / Max: 6.97; MIN: 3.5 / MAX: 44.16)
  C: 6.576  (MIN: 4.14 / MAX: 156.14)
  D: 5.081  (MIN: 4.23 / MAX: 5.91)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  A: 6.338  (SE +/- 0.422, N = 6; Min: 5.63 / Avg: 6.34 / Max: 8.42; MIN: 4.91 / MAX: 40.57)
  B: 6.323  (SE +/- 0.245, N = 9; Min: 5.82 / Avg: 6.32 / Max: 7.8; MIN: 4.89 / MAX: 15.9)
  C: 7.245  (MIN: 4.96 / MAX: 19.62)
  D: 6.164  (MIN: 5.07 / MAX: 6.85)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: inception-v3 (ms, Fewer Is Better)
  A: 47.12  (SE +/- 1.82, N = 6; Min: 39.91 / Avg: 47.12 / Max: 51.8; MIN: 36.63 / MAX: 164.89)
  B: 49.50  (SE +/- 1.21, N = 9; Min: 42.15 / Avg: 49.5 / Max: 53.71; MIN: 37.44 / MAX: 146.03)
  C: 44.87  (MIN: 37.43 / MAX: 54.33)
  D: 44.08  (MIN: 37.69 / MAX: 54.94)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

10 Results Shown

7-Zip Compression:
  Compression Rating
  Decompression Rating
Mobile Neural Network:
  nasnet
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3