2023-02-23-1703

Docker testing on Ubuntu 22.04.1 LTS via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302238-FBEE-202302260
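
If the Phoronix Test Suite is not already installed, a minimal sketch of reproducing this comparison on Ubuntu is shown below (this assumes the phoronix-test-suite package from the Ubuntu repositories; the result identifier is taken from this file):

    # Install the Phoronix Test Suite (package name assumed from Ubuntu's repositories)
    sudo apt-get install -y phoronix-test-suite

    # Run the same tests and compare your system against this result file
    phoronix-test-suite benchmark 2302238-FBEE-202302260
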
Result Identifier: warmup
Date: February 23 2023
Run Test Duration: 2 Hours, 9 Minutes


2023-02-23-1703 Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Unknown (64 Cores / 256 Threads)
Motherboard: HPE C01_APACHE_MB (L50_5.13_1.14 BIOS)
Memory: 128GB
Disk: 1920GB Toshiba MK001920GWUGK + 0GB Virtual HDisk0 + 0GB Virtual HDisk1 + 0GB Virtual HDisk2 + 0GB Virtual HDisk3 + 0GB Virtual Floppy0 + 0GB Virtual Floppy1 + 0GB Virtual Floppy2 + 0GB Virtual Floppy3
Graphics: astdrmfb
OS: Ubuntu 22.04.1 LTS
Kernel: 5.4.0-137-generic (aarch64)
Compiler: GCC 11.2.0
File-System: nfs
Screen Resolution: 1024x768
System Layer: docker

System Logs:
- Transparent Huge Pages: madvise
- Compiler configure options: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Vulnerable + srbds: Not affected + tsx_async_abort: Not affected

2023-02-23-1703 Results Overview (identifier: warmup; OpenBenchmarking.org)

Mobile Neural Network (ms, fewer is better):
  nasnet: 20.741
  mobilenetV3: 2.289
  squeezenetv1.1: 5.101
  resnet-v2-50: 47.794
  SqueezeNetV1.0: 10.717
  MobileNetV2_224: 5.814
  mobilenet-v1-1.0: 3.848
  inception-v3: 54.587

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated variant. MNN can also make use of AVX-512 extensions where available. Learn more via the OpenBenchmarking.org test page. A sketch of running only this test profile on your own system is shown below.
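
To run just the Mobile Neural Network tests rather than the full result comparison, something like the following should work (this assumes the test profile identifier is pts/mnn, the name used on OpenBenchmarking.org):

    # Install and run only the MNN test profile (identifier assumed to be pts/mnn)
    phoronix-test-suite install pts/mnn
    phoronix-test-suite benchmark pts/mnn
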

Mobile Neural Network 2.1 results (OpenBenchmarking.org; ms, fewer is better; identifier: warmup):

  Model: nasnet            20.74   (SE +/- 0.08, N = 3; MIN: 20.38 / MAX: 39.04)
  Model: mobilenetV3       2.289   (SE +/- 0.040, N = 3; MIN: 2.05 / MAX: 8.25)
  Model: squeezenetv1.1    5.101   (SE +/- 0.023, N = 3; MIN: 4.97 / MAX: 15.46)
  Model: resnet-v2-50      47.79   (SE +/- 1.95, N = 3; MIN: 43.19 / MAX: 152.02)
  Model: SqueezeNetV1.0    10.72   (SE +/- 0.40, N = 3; MIN: 9.21 / MAX: 24.7)
  Model: MobileNetV2_224   5.814   (SE +/- 0.030, N = 3; MIN: 5.66 / MAX: 13.5)
  Model: mobilenet-v1-1.0  3.848   (SE +/- 0.015, N = 3; MIN: 3.79 / MAX: 17.26)
  Model: inception-v3      54.59   (SE +/- 0.73, N = 3; MIN: 52.15 / MAX: 209.57)

All MNN tests built with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

8 Results Shown

Mobile Neural Network:
  nasnet
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3