nn

Intel Core i3-12100 testing with an ASRock B660M-HDV (6.02 BIOS) and Intel ADL-S GT1 16GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208136-NE-NN882134066
This result file includes tests from the following categories: HPC - High Performance Computing (2 tests), Machine Learning (2 tests).

Test Runs

Run   Date             Test Duration
A     August 13 2022   5 Hours, 31 Minutes
B     August 13 2022   15 Hours, 26 Minutes
C     August 13 2022   1 Hour, 8 Minutes

Average Run Duration: 7 Hours, 22 Minutes
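The average run duration on this page is simply the mean of the three per-run durations; a quick check (run labels and durations transcribed from the table above):

```python
# Run durations from this result file, as (hours, minutes).
durations = {"A": (5, 31), "B": (15, 26), "C": (1, 8)}

total_min = sum(h * 60 + m for h, m in durations.values())
avg_min = round(total_min / len(durations))  # mean duration in minutes, rounded
print(f"average: {avg_min // 60} Hours, {avg_min % 60} Minutes")
```

This reproduces the 7 Hours, 22 Minutes figure shown above.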



System Details (Phoronix Test Suite / OpenBenchmarking.org)

Processor: Intel Core i3-12100 @ 5.50GHz (4 Cores / 8 Threads)
Motherboard: ASRock B660M-HDV (6.02 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 512GB Sabrent
Graphics: Intel ADL-S GT1 16GB (1400MHz)
Audio: Realtek ALC897
Monitor: MX279
Network: Intel
OS: Ubuntu 22.04
Kernel: 5.18.0-051800-generic (x86_64)
Desktop: GNOME Shell 42.1
Display Server: X Server 1.20.14 + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Notes
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x1f
- Thermald 2.4.9
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[Result overview chart: relative performance of runs A, B, and C (scale 100% to 170%) across the 24 Mobile Neural Network and NCNN results detailed below.]

Summary of Results (ms, fewer is better; source: OpenBenchmarking.org)

Test                               A         B         C
mnn: resnet-v2-50                  34.595    37.947    30.452
ncnn: CPU - resnet50               25.70     27.72     23.84
mnn: SqueezeNetV1.0                7.129     7.652     6.787
mnn: inception-v3                  47.205    51.896    47.064
ncnn: CPU - yolov4-tiny            25.97     26.35     24.07
ncnn: CPU - vision_transformer     449.95    459.64    420.67
ncnn: CPU-v3-v3 - mobilenet-v3     2.75      2.84      2.61
ncnn: CPU - squeezenet_ssd         15.93     16.00     14.83
ncnn: CPU-v2-v2 - mobilenet-v2     3.38      3.44      3.28
ncnn: CPU - FastestDet             4.91      4.62      4.65
ncnn: CPU - regnety_400m           8.15      10.26     7.54
ncnn: CPU - alexnet                7.55      8.55      7.22
ncnn: CPU - resnet18               9.04      10.30     8.68
ncnn: CPU - vgg16                  70.98     59.31     59.38
ncnn: CPU - googlenet              11.57     12.77     10.91
ncnn: CPU - blazeface              0.91      1.01      0.81
ncnn: CPU - efficientnet-b0        7.11      10.18     6.52
ncnn: CPU - mnasnet                3.11      3.50      2.76
ncnn: CPU - shufflenet-v2          2.86      3.30      2.64
ncnn: CPU - mobilenet              16.86     18.01     16.38
mnn: mobilenet-v1-1.0              3.919     4.265     3.011
mnn: MobileNetV2_224               3.690     4.363     2.623
mnn: squeezenetv1.1                3.730     4.739     2.453
mnn: mobilenetV3                   1.349     1.394     1.208
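One statistic the result viewer can report is an overall geometric mean per run, which summarizes many tests with different magnitudes into a single figure. A minimal sketch of that calculation over the 24 results above (values transcribed from this file):

```python
import math

# Per-test results in ms (fewer is better), transcribed from this result file,
# in the order the summary lists them.
results = {
    "A": [34.595, 25.70, 7.129, 47.205, 25.97, 449.95, 2.75, 15.93, 3.38,
          4.91, 8.15, 7.55, 9.04, 70.98, 11.57, 0.91, 7.11, 3.11, 2.86,
          16.86, 3.919, 3.690, 3.730, 1.349],
    "B": [37.947, 27.72, 7.652, 51.896, 26.35, 459.64, 2.84, 16.00, 3.44,
          4.62, 10.26, 8.55, 10.30, 59.31, 12.77, 1.01, 10.18, 3.50, 3.30,
          18.01, 4.265, 4.363, 4.739, 1.394],
    "C": [30.452, 23.84, 6.787, 47.064, 24.07, 420.67, 2.61, 14.83, 3.28,
          4.65, 7.54, 7.22, 8.68, 59.38, 10.91, 0.81, 6.52, 2.76, 2.64,
          16.38, 3.011, 2.623, 2.453, 1.208],
}

def geomean(values):
    """Geometric mean: exp of the mean of the logs (n-th root of the product)."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

for run, vals in results.items():
    print(f"{run}: geometric mean {geomean(vals):.3f} ms")
```

Since every result here is a latency, a lower geometric mean indicates the faster run overall; on these numbers C comes out fastest and B slowest.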

Mobile Neural Network

Mobile Neural Network 2.0 - Model: resnet-v2-50 (ms, fewer is better)
  C: 30.45  (MIN: 26.66 / MAX: 46.58)
  B: 37.95  (MIN: 31.59 / MAX: 107.25)
  A: 34.60  (MIN: 26.54 / MAX: 133.01)
  Reported SE: +/- 0.12 (N = 12), +/- 0.52 (N = 9)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better)
  C: 23.84  (MIN: 20.77 / MAX: 28.8)
  B: 27.72  (MIN: 20.81 / MAX: 46.79)
  A: 25.70  (MIN: 23.9 / MAX: 35.82)
  Reported SE: +/- 0.41 (N = 12), +/- 0.19 (N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

Mobile Neural Network 2.0 - Model: SqueezeNetV1.0 (ms, fewer is better)
  C: 6.787  (MIN: 5.26 / MAX: 17.3)
  B: 7.652  (MIN: 6.24 / MAX: 16.93)
  A: 7.129  (MIN: 5.2 / MAX: 12.61)
  Reported SE: +/- 0.108 (N = 12), +/- 0.140 (N = 9)

Mobile Neural Network 2.0 - Model: inception-v3 (ms, fewer is better)
  C: 47.06  (MIN: 37.93 / MAX: 66.24)
  B: 51.90  (MIN: 41.75 / MAX: 124.79)
  A: 47.21  (MIN: 35.29 / MAX: 96.23)
  Reported SE: +/- 0.59 (N = 12), +/- 0.75 (N = 9)

NCNN


NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  C: 24.07  (MIN: 22.42 / MAX: 26.41)
  B: 26.35  (MIN: 22.43 / MAX: 37.69)
  A: 25.97  (MIN: 22.44 / MAX: 35.93)
  Reported SE: +/- 0.34 (N = 12), +/- 0.52 (N = 3)

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better)
  C: 420.67  (MIN: 376.95 / MAX: 464.75)
  B: 459.64  (MIN: 376.48 / MAX: 695.19)
  A: 449.95  (MIN: 379.37 / MAX: 693.84)
  Reported SE: +/- 1.09 (N = 12), +/- 0.77 (N = 3)

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  C: 2.61  (MIN: 2.57 / MAX: 3.52)
  B: 2.84  (MIN: 2.66 / MAX: 5)
  A: 2.75  (MIN: 2.61 / MAX: 4.09)
  Reported SE: +/- 0.01 (N = 12), +/- 0.04 (N = 3)

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  C: 14.83  (MIN: 14.05 / MAX: 15.96)
  B: 16.00  (MIN: 14.08 / MAX: 27.47)
  A: 15.93  (MIN: 14.04 / MAX: 29.65)
  Reported SE: +/- 0.24 (N = 12), +/- 0.39 (N = 3)

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  C: 3.28  (MIN: 3.2 / MAX: 4.3)
  B: 3.44  (MIN: 3.25 / MAX: 4.83)
  A: 3.38  (MIN: 3.24 / MAX: 4.7)
  Reported SE: +/- 0.01 (N = 12), +/- 0.02 (N = 3)

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better)
  C: 4.65  (MIN: 4 / MAX: 5.03)
  B: 4.62  (MIN: 4 / MAX: 7.19)
  A: 4.91  (MIN: 4 / MAX: 14.64)
  Reported SE: +/- 0.14 (N = 12), +/- 0.38 (N = 3)

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  C: 7.54  (MIN: 7.27 / MAX: 8.98)
  B: 10.26  (MIN: 7.81 / MAX: 21.35)
  A: 8.15  (MIN: 7.68 / MAX: 9.55)
  Reported SE: +/- 0.43 (N = 12), +/- 0.10 (N = 3)

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better)
  C: 7.22  (MIN: 7.01 / MAX: 9.3)
  B: 8.55  (MIN: 7.3 / MAX: 15.37)
  A: 7.55  (MIN: 7.27 / MAX: 9.68)
  Reported SE: +/- 0.22 (N = 12), +/- 0.01 (N = 3)

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better)
  C: 8.68  (MIN: 8.4 / MAX: 9.74)
  B: 10.30  (MIN: 8.79 / MAX: 15.07)
  A: 9.04  (MIN: 8.6 / MAX: 11.12)
  Reported SE: +/- 0.23 (N = 12), +/- 0.04 (N = 3)

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better)
  C: 59.38  (MIN: 46.99 / MAX: 117.49)
  B: 59.31  (MIN: 46.88 / MAX: 137.98)
  A: 70.98  (MIN: 47.17 / MAX: 137.25)
  Reported SE: +/- 2.47 (N = 12), +/- 2.68 (N = 3)

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better)
  C: 10.91  (MIN: 10.52 / MAX: 12.2)
  B: 12.77  (MIN: 11.2 / MAX: 20.53)
  A: 11.57  (MIN: 10.91 / MAX: 13.35)
  Reported SE: +/- 0.25 (N = 12), +/- 0.10 (N = 3)

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better)
  C: 0.81  (MIN: 0.76 / MAX: 2.03)
  B: 1.01  (MIN: 0.83 / MAX: 2.27)
  A: 0.91  (MIN: 0.85 / MAX: 1.85)
  Reported SE: +/- 0.03 (N = 12), +/- 0.01 (N = 3)

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  C: 6.52  (MIN: 6.27 / MAX: 7.55)
  B: 10.18  (MIN: 6.61 / MAX: 31.25)
  A: 7.11  (MIN: 6.64 / MAX: 13.76)
  Reported SE: +/- 0.70 (N = 12), +/- 0.10 (N = 3)

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better)
  C: 2.76  (MIN: 2.69 / MAX: 3.7)
  B: 3.50  (MIN: 2.97 / MAX: 5.66)
  A: 3.11  (MIN: 2.9 / MAX: 4.21)
  Reported SE: +/- 0.07 (N = 11), +/- 0.06 (N = 3)

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  C: 2.64  (MIN: 2.61 / MAX: 3.54)
  B: 3.30  (MIN: 2.7 / MAX: 9.4)
  A: 2.86  (MIN: 2.64 / MAX: 3.87)
  Reported SE: +/- 0.09 (N = 12), +/- 0.06 (N = 3)

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better)
  C: 16.38  (MIN: 15.17 / MAX: 17.84)
  B: 18.01  (MIN: 15.17 / MAX: 33.51)
  A: 16.86  (MIN: 15.2 / MAX: 24.82)
  Reported SE: +/- 0.37 (N = 12), +/- 0.13 (N = 3)

Mobile Neural Network

Mobile Neural Network 2.0 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  C: 3.011  (MIN: 2.97 / MAX: 3.94)
  B: 4.265  (MIN: 3.21 / MAX: 15.87)
  A: 3.919  (MIN: 2.95 / MAX: 12.42)
  Reported SE: +/- 0.170 (N = 11), +/- 0.192 (N = 9)

Mobile Neural Network 2.0 - Model: MobileNetV2_224 (ms, fewer is better)
  C: 2.623  (MIN: 2.49 / MAX: 3.77)
  B: 4.363  (MIN: 3.43 / MAX: 10.2)
  A: 3.690  (MIN: 2.91 / MAX: 47.88)
  Reported SE: +/- 0.178 (N = 12), +/- 0.129 (N = 9)

Mobile Neural Network 2.0 - Model: squeezenetv1.1 (ms, fewer is better)
  C: 2.453  (MIN: 2.42 / MAX: 3.51)
  B: 4.739  (MIN: 3.31 / MAX: 15.54)
  A: 3.730  (MIN: 2.8 / MAX: 12.38)
  Reported SE: +/- 0.298 (N = 12), +/- 0.217 (N = 9)

Mobile Neural Network 2.0 - Model: mobilenetV3 (ms, fewer is better)
  C: 1.208  (MIN: 1.19 / MAX: 2.13)
  B: 1.394  (MIN: 1.18 / MAX: 3.73)
  A: 1.349  (MIN: 1.16 / MAX: 14.49)
  Reported SE: +/- 0.025 (N = 12), +/- 0.030 (N = 9)
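Each result above reports a standard error (SE) and a trial count (N). Assuming the conventional definition of the standard error of the mean (sample standard deviation divided by the square root of N), the figure can be reproduced as follows; the trial values below are hypothetical, not taken from this file:

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (divide by n - 1).
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance / n)

# Hypothetical per-trial latencies (ms) for a single benchmark run.
trials = [23.9, 24.1, 23.5]
print(f"mean {sum(trials) / len(trials):.2f} ms, SE +/- {standard_error(trials):.2f}")
```

A small SE relative to the mean (as in most results here) indicates the trials were consistent; the larger SEs on tests like vgg16 reflect run-to-run variance.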