nn

Intel Core i3-12100 testing with an ASRock B660M-HDV (6.02 BIOS) and Intel ADL-S GT1 16GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208136-NE-NN882134066
Test categories: HPC - High Performance Computing (2 tests), Machine Learning (2 tests)


Test Runs

  Run   Date             Test Duration
  A     August 13 2022   5 Hours, 31 Minutes
  B     August 13 2022   15 Hours, 26 Minutes
  C     August 13 2022   1 Hour, 8 Minutes

  Average test duration: 7 Hours, 22 Minutes
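As a quick consistency check, the three per-run test durations listed above do average out to 7 hours 22 minutes when rounded to the nearest minute; a minimal sketch:

```python
from datetime import timedelta

# Per-run test durations as reported for this result file
durations = [
    timedelta(hours=5, minutes=31),   # run A
    timedelta(hours=15, minutes=26),  # run B
    timedelta(hours=1, minutes=8),    # run C
]

# Mean duration across the three runs
avg = sum(durations, timedelta()) / len(durations)
print(avg)  # 7:21:40
```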


nn Benchmarks - System Details

Processor: Intel Core i3-12100 @ 5.50GHz (4 Cores / 8 Threads)
Motherboard: ASRock B660M-HDV (6.02 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 512GB Sabrent
Graphics: Intel ADL-S GT1 16GB (1400MHz)
Audio: Realtek ALC897
Monitor: MX279
Network: Intel
OS: Ubuntu 22.04
Kernel: 5.18.0-051800-generic (x86_64)
Desktop: GNOME Shell 42.1
Display Server: X Server 1.20.14 + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x1f
- Thermald 2.4.9
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart: relative performance of runs A, B, and C, scaled 100% to 170%, across the Mobile Neural Network and NCNN results detailed below]

nn - Results Summary (all values in ms; fewer is better)

Test                              A         B         C
mnn: resnet-v2-50                 34.595    37.947    30.452
ncnn: CPU - resnet50              25.70     27.72     23.84
mnn: SqueezeNetV1.0               7.129     7.652     6.787
mnn: inception-v3                 47.205    51.896    47.064
ncnn: CPU - yolov4-tiny           25.97     26.35     24.07
ncnn: CPU - vision_transformer    449.95    459.64    420.67
ncnn: CPU-v3-v3 - mobilenet-v3    2.75      2.84      2.61
ncnn: CPU - squeezenet_ssd        15.93     16.00     14.83
ncnn: CPU-v2-v2 - mobilenet-v2    3.38      3.44      3.28
ncnn: CPU - FastestDet            4.91      4.62      4.65
ncnn: CPU - regnety_400m          8.15      10.26     7.54
ncnn: CPU - alexnet               7.55      8.55      7.22
ncnn: CPU - resnet18              9.04      10.30     8.68
ncnn: CPU - vgg16                 70.98     59.31     59.38
ncnn: CPU - googlenet             11.57     12.77     10.91
ncnn: CPU - blazeface             0.91      1.01      0.81
ncnn: CPU - efficientnet-b0       7.11      10.18     6.52
ncnn: CPU - mnasnet               3.11      3.50      2.76
ncnn: CPU - shufflenet-v2         2.86      3.30      2.64
ncnn: CPU - mobilenet             16.86     18.01     16.38
mnn: mobilenet-v1-1.0             3.919     4.265     3.011
mnn: MobileNetV2_224              3.690     4.363     2.623
mnn: squeezenetv1.1               3.730     4.739     2.453
mnn: mobilenetV3                  1.349     1.394     1.208
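All 24 results above are latencies in milliseconds (fewer is better), so a single overall figure per run is conventionally summarized as a geometric mean. A minimal sketch using Python's standard library, with the values copied from the per-run summary above:

```python
import statistics

# Per-run results in ms (fewer is better), in the order listed in the
# summary table: 24 tests across Mobile Neural Network and NCNN.
a = [34.595, 25.70, 7.129, 47.205, 25.97, 449.95, 2.75, 15.93, 3.38, 4.91,
     8.15, 7.55, 9.04, 70.98, 11.57, 0.91, 7.11, 3.11, 2.86, 16.86,
     3.919, 3.690, 3.730, 1.349]
b = [37.947, 27.72, 7.652, 51.896, 26.35, 459.64, 2.84, 16.00, 3.44, 4.62,
     10.26, 8.55, 10.30, 59.31, 12.77, 1.01, 10.18, 3.50, 3.30, 18.01,
     4.265, 4.363, 4.739, 1.394]
c = [30.452, 23.84, 6.787, 47.064, 24.07, 420.67, 2.61, 14.83, 3.28, 4.65,
     7.54, 7.22, 8.68, 59.38, 10.91, 0.81, 6.52, 2.76, 2.64, 16.38,
     3.011, 2.623, 2.453, 1.208]

# Geometric mean rewards consistent wins and is scale-independent,
# which suits mixing fast (sub-ms) and slow (400+ ms) tests.
for name, run in (("A", a), ("B", b), ("C", c)):
    print(name, round(statistics.geometric_mean(run), 2))
```

On these values, C has the lowest (fastest) overall mean and B the highest, consistent with C winning most individual tests.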

Mobile Neural Network

Mobile Neural Network (MNN) is a lightweight deep learning inference engine developed by Alibaba.

Mobile Neural Network 2.0, Model: resnet-v2-50 (ms, fewer is better)
  C: 30.45   (MIN: 26.66 / MAX: 46.58)
  A: 34.60   (MIN: 26.54 / MAX: 133.01)
  B: 37.95   (MIN: 31.59 / MAX: 107.25)
  Std. error as reported: SE +/- 0.52, N = 9; SE +/- 0.12, N = 12
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
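The per-result annotations of the form "SE +/- x, N = y" report the standard error of the mean across N recorded runs of a test: the sample standard deviation divided by sqrt(N). A small illustration (the sample values below are hypothetical, not taken from this result file):

```python
import math
import statistics

def standard_error(samples):
    # Standard error of the mean: sample standard deviation / sqrt(N)
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run latencies in ms, for illustration only
samples = [34.2, 35.1, 33.8, 34.9]
print(f"SE +/- {standard_error(samples):.3f}, N = {len(samples)}")
```

A smaller SE relative to the mean indicates the run-to-run variation was low, so differences between runs of that size are more likely to be real.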

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: resnet50 (ms, fewer is better)
  C: 23.84   (MIN: 20.77 / MAX: 28.8)
  A: 25.70   (MIN: 23.9 / MAX: 35.82)
  B: 27.72   (MIN: 20.81 / MAX: 46.79)
  Std. error as reported: SE +/- 0.19, N = 3; SE +/- 0.41, N = 12
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

Mobile Neural Network 2.0, Model: SqueezeNetV1.0 (ms, fewer is better)
  C: 6.787   (MIN: 5.26 / MAX: 17.3)
  A: 7.129   (MIN: 5.2 / MAX: 12.61)
  B: 7.652   (MIN: 6.24 / MAX: 16.93)
  Std. error as reported: SE +/- 0.140, N = 9; SE +/- 0.108, N = 12

Mobile Neural Network 2.0, Model: inception-v3 (ms, fewer is better)
  C: 47.06   (MIN: 37.93 / MAX: 66.24)
  A: 47.21   (MIN: 35.29 / MAX: 96.23)
  B: 51.90   (MIN: 41.75 / MAX: 124.79)
  Std. error as reported: SE +/- 0.75, N = 9; SE +/- 0.59, N = 12

NCNN


NCNN 20220729, Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  C: 24.07   (MIN: 22.42 / MAX: 26.41)
  A: 25.97   (MIN: 22.44 / MAX: 35.93)
  B: 26.35   (MIN: 22.43 / MAX: 37.69)
  Std. error as reported: SE +/- 0.52, N = 3; SE +/- 0.34, N = 12

NCNN 20220729, Target: CPU - Model: vision_transformer (ms, fewer is better)
  C: 420.67   (MIN: 376.95 / MAX: 464.75)
  A: 449.95   (MIN: 379.37 / MAX: 693.84)
  B: 459.64   (MIN: 376.48 / MAX: 695.19)
  Std. error as reported: SE +/- 0.77, N = 3; SE +/- 1.09, N = 12

NCNN 20220729, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  C: 2.61   (MIN: 2.57 / MAX: 3.52)
  A: 2.75   (MIN: 2.61 / MAX: 4.09)
  B: 2.84   (MIN: 2.66 / MAX: 5)
  Std. error as reported: SE +/- 0.04, N = 3; SE +/- 0.01, N = 12

NCNN 20220729, Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  C: 14.83   (MIN: 14.05 / MAX: 15.96)
  A: 15.93   (MIN: 14.04 / MAX: 29.65)
  B: 16.00   (MIN: 14.08 / MAX: 27.47)
  Std. error as reported: SE +/- 0.39, N = 3; SE +/- 0.24, N = 12

NCNN 20220729, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  C: 3.28   (MIN: 3.2 / MAX: 4.3)
  A: 3.38   (MIN: 3.24 / MAX: 4.7)
  B: 3.44   (MIN: 3.25 / MAX: 4.83)
  Std. error as reported: SE +/- 0.02, N = 3; SE +/- 0.01, N = 12

NCNN 20220729, Target: CPU - Model: FastestDet (ms, fewer is better)
  B: 4.62   (MIN: 4 / MAX: 7.19)
  C: 4.65   (MIN: 4 / MAX: 5.03)
  A: 4.91   (MIN: 4 / MAX: 14.64)
  Std. error as reported: SE +/- 0.14, N = 12; SE +/- 0.38, N = 3

NCNN 20220729, Target: CPU - Model: regnety_400m (ms, fewer is better)
  C: 7.54    (MIN: 7.27 / MAX: 8.98)
  A: 8.15    (MIN: 7.68 / MAX: 9.55)
  B: 10.26   (MIN: 7.81 / MAX: 21.35)
  Std. error as reported: SE +/- 0.10, N = 3; SE +/- 0.43, N = 12

NCNN 20220729, Target: CPU - Model: alexnet (ms, fewer is better)
  C: 7.22   (MIN: 7.01 / MAX: 9.3)
  A: 7.55   (MIN: 7.27 / MAX: 9.68)
  B: 8.55   (MIN: 7.3 / MAX: 15.37)
  Std. error as reported: SE +/- 0.01, N = 3; SE +/- 0.22, N = 12

NCNN 20220729, Target: CPU - Model: resnet18 (ms, fewer is better)
  C: 8.68    (MIN: 8.4 / MAX: 9.74)
  A: 9.04    (MIN: 8.6 / MAX: 11.12)
  B: 10.30   (MIN: 8.79 / MAX: 15.07)
  Std. error as reported: SE +/- 0.04, N = 3; SE +/- 0.23, N = 12

NCNN 20220729, Target: CPU - Model: vgg16 (ms, fewer is better)
  B: 59.31   (MIN: 46.88 / MAX: 137.98)
  C: 59.38   (MIN: 46.99 / MAX: 117.49)
  A: 70.98   (MIN: 47.17 / MAX: 137.25)
  Std. error as reported: SE +/- 2.47, N = 12; SE +/- 2.68, N = 3

NCNN 20220729, Target: CPU - Model: googlenet (ms, fewer is better)
  C: 10.91   (MIN: 10.52 / MAX: 12.2)
  A: 11.57   (MIN: 10.91 / MAX: 13.35)
  B: 12.77   (MIN: 11.2 / MAX: 20.53)
  Std. error as reported: SE +/- 0.10, N = 3; SE +/- 0.25, N = 12

NCNN 20220729, Target: CPU - Model: blazeface (ms, fewer is better)
  C: 0.81   (MIN: 0.76 / MAX: 2.03)
  A: 0.91   (MIN: 0.85 / MAX: 1.85)
  B: 1.01   (MIN: 0.83 / MAX: 2.27)
  Std. error as reported: SE +/- 0.01, N = 3; SE +/- 0.03, N = 12

NCNN 20220729, Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  C: 6.52    (MIN: 6.27 / MAX: 7.55)
  A: 7.11    (MIN: 6.64 / MAX: 13.76)
  B: 10.18   (MIN: 6.61 / MAX: 31.25)
  Std. error as reported: SE +/- 0.10, N = 3; SE +/- 0.70, N = 12

NCNN 20220729, Target: CPU - Model: mnasnet (ms, fewer is better)
  C: 2.76   (MIN: 2.69 / MAX: 3.7)
  A: 3.11   (MIN: 2.9 / MAX: 4.21)
  B: 3.50   (MIN: 2.97 / MAX: 5.66)
  Std. error as reported: SE +/- 0.06, N = 3; SE +/- 0.07, N = 11

NCNN 20220729, Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  C: 2.64   (MIN: 2.61 / MAX: 3.54)
  A: 2.86   (MIN: 2.64 / MAX: 3.87)
  B: 3.30   (MIN: 2.7 / MAX: 9.4)
  Std. error as reported: SE +/- 0.06, N = 3; SE +/- 0.09, N = 12

NCNN 20220729, Target: CPU - Model: mobilenet (ms, fewer is better)
  C: 16.38   (MIN: 15.17 / MAX: 17.84)
  A: 16.86   (MIN: 15.2 / MAX: 24.82)
  B: 18.01   (MIN: 15.17 / MAX: 33.51)
  Std. error as reported: SE +/- 0.13, N = 3; SE +/- 0.37, N = 12

Mobile Neural Network

Mobile Neural Network 2.0, Model: mobilenet-v1-1.0 (ms, fewer is better)
  C: 3.011   (MIN: 2.97 / MAX: 3.94)
  A: 3.919   (MIN: 2.95 / MAX: 12.42)
  B: 4.265   (MIN: 3.21 / MAX: 15.87)
  Std. error as reported: SE +/- 0.192, N = 9; SE +/- 0.170, N = 11

Mobile Neural Network 2.0, Model: MobileNetV2_224 (ms, fewer is better)
  C: 2.623   (MIN: 2.49 / MAX: 3.77)
  A: 3.690   (MIN: 2.91 / MAX: 47.88)
  B: 4.363   (MIN: 3.43 / MAX: 10.2)
  Std. error as reported: SE +/- 0.129, N = 9; SE +/- 0.178, N = 12

Mobile Neural Network 2.0, Model: squeezenetv1.1 (ms, fewer is better)
  C: 2.453   (MIN: 2.42 / MAX: 3.51)
  A: 3.730   (MIN: 2.8 / MAX: 12.38)
  B: 4.739   (MIN: 3.31 / MAX: 15.54)
  Std. error as reported: SE +/- 0.217, N = 9; SE +/- 0.298, N = 12

Mobile Neural Network 2.0, Model: mobilenetV3 (ms, fewer is better)
  C: 1.208   (MIN: 1.19 / MAX: 2.13)
  A: 1.349   (MIN: 1.16 / MAX: 14.49)
  B: 1.394   (MIN: 1.18 / MAX: 3.73)
  Std. error as reported: SE +/- 0.030, N = 9; SE +/- 0.025, N = 12