128-p

Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 15GB graphics on Arch rolling via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208318-NE-128P9487891

Run Management

  Identifier   Date             Test Duration
  A            August 31 2022   2 Hours, 38 Minutes
  B            August 31 2022   2 Hours, 47 Minutes
  cc           August 31 2022   2 Hours, 42 Minutes
  D            August 31 2022   2 Hours, 45 Minutes


128-p - OpenBenchmarking.org - Phoronix Test Suite

  Processor:          Intel Core i7-1280P @ 4.80GHz (14 Cores / 20 Threads)
  Motherboard:        MSI MS-14C6 (E14C6IMS.115 BIOS)
  Chipset:            Intel Alder Lake PCH
  Memory:             16GB
  Disk:               1024GB Micron_3400_MTFDKBA1T0TFH
  Graphics:           MSI Intel ADL GT2 15GB (1450MHz)
  Audio:              Intel Alder Lake PCH-P HD Audio
  Network:            Intel Alder Lake-P PCH CNVi WiFi
  OS:                 Arch rolling
  Kernel:             5.19.1-arch2-1 (x86_64)
  Desktop:            KDE Plasma 5.25.4
  Display Server:     X Server 1.21.1.4 + Wayland
  OpenGL:             4.6 Mesa 22.1.6
  Vulkan:             1.3.211
  Compiler:           GCC 12.1.1 20220730
  File-System:        ext4
  Screen Resolution:  1920x1080

128-p Benchmarks - System Notes
  - Transparent Huge Pages: always
  - Compiler configured with: --disable-libssp --disable-libstdcxx-pch --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++,d --enable-libstdcxx-backtrace --enable-link-serialization=1 --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-build-config=bootstrap-lto --with-linker-hash-style=gnu
  - Scaling Governor: intel_cpufreq schedutil
  - CPU Microcode: 0x421
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
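The security line in the system notes above is the flattened contents of the kernel's /sys/devices/system/cpu/vulnerabilities files, joined with " + ". As a minimal sketch (the function name is my own, and the sample string is a shortened copy of the line above), such a dump can be turned back into a mapping:

```python
# Parse a Phoronix-style CPU vulnerability summary
# ("name: status + name: status + ...") back into a dict.

def parse_mitigations(summary: str) -> dict:
    entries = {}
    for part in summary.split(" + "):
        # partition on the first ": " only, since a status string
        # (e.g. spectre_v2's) may itself contain colons
        name, _, status = part.partition(": ")
        entries[name] = status
    return entries

# Shortened sample copied from this result file's notes
sample = ("itlb_multihit: Not affected + l1tf: Not affected + "
          "mds: Not affected + meltdown: Not affected")

parsed = parse_mitigations(sample)
print(parsed["meltdown"])  # -> Not affected
```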

Result Overview (Phoronix Test Suite; runs A / B / cc / D, normalized, spread roughly 100%-107%): the tests with the most movement were 7-Zip Compression (Compression and Decompression Ratings) and the Mobile Neural Network models squeezenetv1.1, resnet-v2-50, mobilenet-v1-1.0, SqueezeNetV1.0, inception-v3, nasnet, MobileNetV2_224, and mobilenetV3.

128-p Result Summary (MNN in ms, fewer is better; 7-Zip in MIPS, more is better)

  Test                                  A        B        cc       D
  mnn: squeezenetv1.1                   6.186    6.687    6.323    6.654
  mnn: nasnet                           19.490   20.017   19.445   19.743
  mnn: MobileNetV2_224                  5.831    5.900    5.855    5.819
  mnn: mobilenetV3                      2.845    2.860    2.831    2.854
  mnn: inception-v3                     55.754   57.962   57.622   57.731
  mnn: mobilenet-v1-1.0                 6.826    7.101    7.204    7.175
  mnn: SqueezeNetV1.0                   10.182   10.640   10.483   10.629
  mnn: resnet-v2-50                     51.671   55.721   53.170   55.318
  compress-7zip: Decompression Rating   38667    35410    38353    37230
  compress-7zip: Compression Rating     64414    61045    63925    62914
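A common way to collapse a mixed table like this into a single score per run is a geometric mean of results normalized against one baseline run, after inverting lower-is-better metrics so that higher is always better. This is a sketch of that idea with two illustrative values in the shape of the table above, not an official Phoronix Test Suite computation:

```python
import math

# Geometric mean of normalized scores: each result is expressed relative
# to baseline run A (so >1.0 means better than A), then averaged
# multiplicatively. Test names and values are illustrative.

def normalized_score(value, baseline, lower_is_better):
    return baseline / value if lower_is_better else value / baseline

def geo_mean(scores):
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

baseline_a   = {"squeezenetv1.1": 6.186, "compress-7zip": 64414.0}
run_b        = {"squeezenetv1.1": 6.687, "compress-7zip": 61045.0}
lower_better = {"squeezenetv1.1": True,  "compress-7zip": False}

scores = [normalized_score(run_b[t], baseline_a[t], lower_better[t])
          for t in baseline_a]
print(round(geo_mean(scores), 3))  # overall score of B relative to A
```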

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated configuration. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max    Sample Min / Max
  B     6.687   0.176      6.34 / 6.69 / 6.87     6.05 / 11.02
  D     6.654   0.183      6.29 / 6.65 / 6.85     5.97 / 11.96
  cc    6.323   0.010      6.31 / 6.32 / 6.34     6.04 / 9.35
  A     6.186   0.192      5.80 / 6.19 / 6.38     5.67 / 12.01

  Compiled with: g++ -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl (all MNN tests below use the same flags)
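The SE figures in these tables are standard errors of the mean over N = 3 runs, i.e. the sample standard deviation divided by sqrt(N). A short sketch of that computation, using hypothetical run times rather than values from this file:

```python
import math
import statistics

# Standard error of the mean over a small number of benchmark runs,
# matching the "SE +/- x, N = 3" figures reported in the tables:
# SE = sample standard deviation / sqrt(N).

def standard_error(samples):
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [6.31, 6.32, 6.34]  # hypothetical N = 3 timings in ms
print(round(statistics.mean(runs), 3), round(standard_error(runs), 3))
# -> 6.323 0.009
```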

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max       Sample Min / Max
  B     20.02   0.08       19.89 / 20.02 / 20.18     19.57 / 29.15
  D     19.74   0.04       19.69 / 19.74 / 19.82     19.47 / 33.15
  A     19.49   0.15       19.19 / 19.49 / 19.69     18.12 / 29.37
  cc    19.45   0.11       19.31 / 19.44 / 19.67     19.03 / 28.67

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max    Sample Min / Max
  B     5.900   0.017      5.87 / 5.90 / 5.93     5.77 / 11.46
  cc    5.855   0.071      5.72 / 5.86 / 5.96     5.54 / 6.96
  A     5.831   0.054      5.73 / 5.83 / 5.92     5.59 / 11.12
  D     5.819   0.101      5.62 / 5.82 / 5.96     5.38 / 11.76

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max    Sample Min / Max
  B     2.860   0.006      2.85 / 2.86 / 2.87     2.77 / 6.81
  D     2.854   0.029      2.80 / 2.85 / 2.90     2.70 / 8.24
  A     2.845   0.010      2.83 / 2.84 / 2.86     2.73 / 3.24
  cc    2.831   0.008      2.82 / 2.83 / 2.84     2.72 / 7.89

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max       Sample Min / Max
  B     57.96   0.06       57.86 / 57.96 / 58.06     57.26 / 65.83
  D     57.73   0.01       57.72 / 57.73 / 57.74     57.34 / 70.53
  cc    57.62   0.17       57.42 / 57.62 / 57.95     56.17 / 70.09
  A     55.75   2.09       51.58 / 55.75 / 57.88     51.22 / 67.53

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max    Sample Min / Max
  cc    7.204   0.009      7.19 / 7.20 / 7.22     6.76 / 12.77
  D     7.175   0.040      7.10 / 7.18 / 7.22     6.74 / 11.41
  B     7.101   0.051      7.03 / 7.10 / 7.20     6.72 / 11.89
  A     6.826   0.251      6.32 / 6.83 / 7.09     6.15 / 11.17

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max       Sample Min / Max
  B     10.64   0.07       10.49 / 10.64 / 10.73     10.27 / 16.48
  D     10.63   0.10       10.45 / 10.63 / 10.79     10.19 / 16.72
  cc    10.48   0.03       10.42 / 10.48 / 10.52     10.24 / 24.75
  A     10.18   0.43       9.32 / 10.18 / 10.62      9.18 / 18.59

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better)

  Run   Avg     SE (N=3)   Run Min / Avg / Max       Sample Min / Max
  B     55.72   0.69       54.51 / 55.72 / 56.89     30.39 / 64.06
  D     55.32   1.14       53.03 / 55.32 / 56.54     26.38 / 63.58
  cc    53.17   1.16       50.88 / 53.17 / 54.67     26.81 / 62.26
  A     51.67   2.45       46.77 / 51.67 / 54.39     25.79 / 61.87

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better)

  Run   Avg     SE (N=15)   Run Min / Avg / Max
  B     35410   1011.42     32642 / 35409.6 / 48109
  D     37230   1064.39     32521 / 37230.2 / 48884
  cc    38353   924.94      35874 / 38353 / 49246
  A     38667   908.40      35743 / 38667.13 / 49327

  Compiled with: g++ -lpthread -ldl -O2 -fPIC (both 7-Zip tests use the same flags)

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better)

  Run   Avg     SE (N=15)   Run Min / Avg / Max
  B     61045   1381.58     57524 / 61045.4 / 79269
  D     62914   1364.42     57683 / 62913.87 / 79508
  cc    63925   1334.45     60781 / 63924.6 / 81298
  A     64414   1410.04     60855 / 64414.47 / 82737
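For rate-style metrics such as the MIPS ratings above, an overall average across tests is usually taken as a harmonic rather than arithmetic mean, since the harmonic mean correctly weights the slower results. A small sketch of that calculation; the two values are illustrative, shaped like run A's 7-Zip ratings:

```python
# Harmonic mean of throughput ratings (e.g. MIPS): the reciprocal of the
# mean of reciprocals. Compared with an arithmetic mean, it pulls the
# overall figure toward the slower results.

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

# Illustrative: run A's decompression and compression MIPS from above
ratings = [38667.0, 64414.0]
print(round(harmonic_mean(ratings), 1))
```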

10 Results Shown

Mobile Neural Network:
  squeezenetv1.1
  nasnet
  MobileNetV2_224
  mobilenetV3
  inception-v3
  mobilenet-v1-1.0
  SqueezeNetV1.0
  resnet-v2-50
7-Zip Compression:
  Decompression Rating
  Compression Rating