ncnn mnn 2022

AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS) and an AMD Radeon RX 6700/6700 XT / 6800M on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208131-PTS-NCNNMNN216

Run Management

Result Identifier    Date Run           Test Duration
A                    August 13 2022     3 Hours, 26 Minutes
B                    August 13 2022     9 Hours, 13 Minutes
C                    August 13 2022     8 Hours, 56 Minutes
D                    August 13 2022     11 Hours, 18 Minutes
E                    August 13 2022     11 Hours, 19 Minutes
Average test duration: 8 Hours, 51 Minutes

ncnn mnn 2022 - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Sabrent Rocket 4.0 Plus
Graphics: AMD Radeon RX 6700/6700 XT / 6800M (2880/1124MHz)
Audio: AMD Navi 21 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.15.0-46-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42)
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Ncnn Mnn 2022 Benchmarks - System Logs
- Transparent Huge Pages: madvise
- GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa201016
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (OpenBenchmarking.org - Phoronix Test Suite): bar chart of the relative performance of runs A through E on each NCNN and Mobile Neural Network test configuration, normalized per test; the per-test spread ranges from 100% up to roughly 121%.

ncnn mnn 2022 - combined results (all values in ms; fewer is better). ncnn entries use NCNN 20220729, mnn entries use Mobile Neural Network 2.0.

Test configuration                          A         B         C         D         E
ncnn: Vulkan GPU - FastestDet               2.50      2.28      2.33      2.38      2.37
ncnn: Vulkan GPU - vision_transformer       220.75    219.90    219.57    220.31    220.85
ncnn: Vulkan GPU - regnety_400m             3.11      3.03      3.09      3.08      3.06
ncnn: Vulkan GPU - squeezenet_ssd           5.23      5.14      5.47      5.50      5.55
ncnn: Vulkan GPU - yolov4-tiny              14.58     14.48     14.56     14.59     14.77
ncnn: Vulkan GPU - resnet50                 4.99      4.94      5.00      5.00      4.97
ncnn: Vulkan GPU - alexnet                  2.08      2.24      2.09      2.08      2.12
ncnn: Vulkan GPU - resnet18                 2.63      2.54      2.60      2.61      2.62
ncnn: Vulkan GPU - vgg16                    6.77      6.93      6.75      6.75      6.80
ncnn: Vulkan GPU - googlenet                3.67      3.71      3.66      3.71      3.72
ncnn: Vulkan GPU - blazeface                1.64      1.60      1.65      1.64      1.66
ncnn: Vulkan GPU - efficientnet-b0          4.93      4.89      4.90      4.86      4.86
ncnn: Vulkan GPU - mnasnet                  1.97      2.04      2.01      2.04      2.04
ncnn: Vulkan GPU - shufflenet-v2            2.04      2.12      2.09      2.09      2.09
ncnn: Vulkan GPU-v3-v3 - mobilenet-v3       2.35      2.36      2.34      2.36      2.41
ncnn: Vulkan GPU-v2-v2 - mobilenet-v2       1.92      1.95      1.99      1.98      2.00
ncnn: Vulkan GPU - mobilenet                9.45      9.34      9.84      9.90      10.05
mnn: inception-v3                           26.074    25.806    25.905    25.924    25.990
mnn: mobilenet-v1-1.0                       2.625     2.542     2.556     2.592     2.581
mnn: MobileNetV2_224                        3.449     3.422     3.468     3.456     3.472
mnn: SqueezeNetV1.0                         5.234     5.144     5.350     5.302     5.382
mnn: resnet-v2-50                           21.956    21.564    21.098    21.704    21.536
mnn: squeezenetv1.1                         3.209     3.227     3.278     3.298     3.296
mnn: mobilenetV3                            1.903     1.860     1.865     1.913     1.902
ncnn: CPU - FastestDet                      4.98      4.90      4.93      4.86      4.84
ncnn: CPU - vision_transformer              123.55    122.89    122.63    123.87    123.44
ncnn: CPU - regnety_400m                    12.83     12.81     12.90     12.85     12.79
ncnn: CPU - squeezenet_ssd                  18.32     18.53     18.59     18.39     18.17
ncnn: CPU - yolov4-tiny                     21.51     21.33     21.57     21.30     21.43
ncnn: CPU - resnet50                        21.47     20.98     21.70     21.48     21.39
ncnn: CPU - alexnet                         7.79      7.73      7.67      7.72      9.30
ncnn: CPU - resnet18                        12.17     12.21     12.08     12.16     12.14
ncnn: CPU - vgg16                           47.58     47.52     47.58     47.34     47.94
ncnn: CPU - googlenet                       12.19     11.69     11.38     11.56     11.53
ncnn: CPU - blazeface                       1.82      1.80      1.82      1.88      1.83
ncnn: CPU - efficientnet-b0                 5.91      5.97      5.90      6.02      5.90
ncnn: CPU - mnasnet                         3.89      3.97      3.89      3.90      3.90
ncnn: CPU - shufflenet-v2                   4.30      4.30      4.30      4.26      4.28
ncnn: CPU-v3-v3 - mobilenet-v3              3.78      3.82      3.77      3.77      3.76
ncnn: CPU-v2-v2 - mobilenet-v2              4.29      4.33      4.28      4.31      4.30
ncnn: CPU - mobilenet                       11.46     11.92     11.71     11.84     11.64

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
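
The Vulkan GPU figures below come from NCNN's Vulkan compute backend. As a rough, hedged illustration of the API being exercised (not the benchmark harness itself), the sketch below loads a model and runs one inference with the Vulkan backend enabled; the file names, blob names ("data", "prob"), and input size are placeholder assumptions rather than anything taken from this result file.

```cpp
// Minimal sketch: one NCNN inference with the Vulkan compute backend.
// Requires an ncnn build with Vulkan support; model/blob names are placeholders.
#include <cstdio>
#include "net.h"   // ncnn

int main()
{
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;   // the "Vulkan GPU" targets on this page

    if (net.load_param("model.param") != 0 || net.load_model("model.bin") != 0)
    {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Dummy 227x227x3 input; a real run would fill this from an image.
    ncnn::Mat in(227, 227, 3);
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);        // input blob name depends on the model

    ncnn::Mat out;
    ex.extract("prob", out);     // output blob name depends on the model

    printf("output elements: %d\n", out.w * out.h * out.c);
    return 0;
}
```

Leaving net.opt.use_vulkan_compute at its default (false) keeps the same code on the CPU path, which is the distinction between the Vulkan GPU and CPU result groups on this page.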

NCNN 20220729 results for the Vulkan GPU targets, in ms (fewer is better). All NCNN results were built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread.

Target / Model                          A         B         C         D         E
Vulkan GPU - FastestDet                 2.50      2.28      2.33      2.38      2.37
Vulkan GPU - vision_transformer         220.75    219.90    219.57    220.31    220.85
Vulkan GPU - regnety_400m               3.11      3.03      3.09      3.08      3.06
Vulkan GPU - squeezenet_ssd             5.23      5.14      5.47      5.50      5.55
Vulkan GPU - yolov4-tiny                14.58     14.48     14.56     14.59     14.77
Vulkan GPU - resnet50                   4.99      4.94      5.00      5.00      4.97
Vulkan GPU - alexnet                    2.08      2.24      2.09      2.08      2.12
Vulkan GPU - resnet18                   2.63      2.54      2.60      2.61      2.62
Vulkan GPU - vgg16                      6.77      6.93      6.75      6.75      6.80
Vulkan GPU - googlenet                  3.67      3.71      3.66      3.71      3.72
Vulkan GPU - blazeface                  1.64      1.60      1.65      1.64      1.66
Vulkan GPU - efficientnet-b0            4.93      4.89      4.90      4.86      4.86
Vulkan GPU - mnasnet                    1.97      2.04      2.01      2.04      2.04
Vulkan GPU - shufflenet-v2              2.04      2.12      2.09      2.09      2.09
Vulkan GPU-v3-v3 - mobilenet-v3         2.35      2.36      2.34      2.36      2.41
Vulkan GPU-v2-v2 - mobilenet-v2         1.92      1.95      1.99      1.98      2.00
Vulkan GPU - mobilenet                  9.45      9.34      9.84      9.90      10.05

Mobile Neural Network
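
Mobile Neural Network (MNN) is Alibaba's lightweight deep learning inference engine; the results below were produced with MNN 2.0. As a rough, hedged sketch of what a single inference looks like through MNN's C++ Interpreter API (not the actual test harness), assuming a converted model file named mobilenet_v2.mnn and a 16-thread CPU schedule:

```cpp
// Minimal sketch: one CPU inference through MNN's Interpreter/Session API.
// The model path and thread count are illustrative assumptions.
#include <cstdio>
#include <memory>
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>
#include <MNN/Tensor.hpp>

int main()
{
    std::shared_ptr<MNN::Interpreter> net(MNN::Interpreter::createFromFile("mobilenet_v2.mnn"));
    if (!net) { fprintf(stderr, "failed to load model\n"); return 1; }

    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;   // CPU backend
    config.numThread = 16;
    MNN::Session* session = net->createSession(config);

    // Fill the input with dummy data via a host-side tensor.
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    MNN::Tensor hostInput(input, MNN::Tensor::CAFFE);
    for (int i = 0; i < hostInput.elementSize(); ++i)
        hostInput.host<float>()[i] = 0.5f;
    input->copyFromHostTensor(&hostInput);

    net->runSession(session);        // roughly the step the MNN timings on this page measure

    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    printf("output elements: %d\n", output->elementSize());

    net->releaseSession(session);
    return 0;
}
```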

Mobile Neural Network 2.0 results, in ms (fewer is better). All MNN results were built with (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl.

Model                 A         B         C         D         E
inception-v3          26.074    25.806    25.905    25.924    25.990
mobilenet-v1-1.0      2.625     2.542     2.556     2.592     2.581
MobileNetV2_224       3.449     3.422     3.468     3.456     3.472
SqueezeNetV1.0        5.234     5.144     5.350     5.302     5.382
resnet-v2-50          21.956    21.564    21.098    21.704    21.536
squeezenetv1.1        3.209     3.227     3.278     3.298     3.296
mobilenetV3           1.903     1.860     1.865     1.913     1.902

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
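
For the CPU-targeted NCNN results that follow, the relevant difference from the Vulkan sketch earlier is backend selection and thread count. A minimal hedged sketch follows; the model and blob names are again placeholders, and the thread count is simply queried from ncnn rather than taken from this test profile.

```cpp
// Minimal sketch: one NCNN inference on the CPU backend with an explicit thread count.
#include <cstdio>
#include "net.h"   // ncnn
#include "cpu.h"   // ncnn::get_cpu_count()

int main()
{
    ncnn::Net net;
    net.opt.use_vulkan_compute = false;            // the "CPU" targets on this page
    net.opt.num_threads = ncnn::get_cpu_count();   // e.g. 32 logical CPUs on this 5950X

    if (net.load_param("model.param") != 0 || net.load_model("model.bin") != 0)
        return 1;

    ncnn::Mat in(224, 224, 3);   // dummy input, sized arbitrarily for illustration
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);        // blob names depend on the model
    ncnn::Mat out;
    ex.extract("prob", out);

    printf("ran with %d threads, output elements: %d\n",
           net.opt.num_threads, out.w * out.h * out.c);
    return 0;
}
```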

NCNN 20220729 results for the CPU targets, in ms (fewer is better). All NCNN results were built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread.

Target / Model                   A         B         C         D         E
CPU - FastestDet                 4.98      4.90      4.93      4.86      4.84
CPU - vision_transformer         123.55    122.89    122.63    123.87    123.44
CPU - regnety_400m               12.83     12.81     12.90     12.85     12.79
CPU - squeezenet_ssd             18.32     18.53     18.59     18.39     18.17
CPU - yolov4-tiny                21.51     21.33     21.57     21.30     21.43
CPU - resnet50                   21.47     20.98     21.70     21.48     21.39
CPU - alexnet                    7.79      7.73      7.67      7.72      9.30
CPU - resnet18                   12.17     12.21     12.08     12.16     12.14
CPU - vgg16                      47.58     47.52     47.58     47.34     47.94
CPU - googlenet                  12.19     11.69     11.38     11.56     11.53
CPU - blazeface                  1.82      1.80      1.82      1.88      1.83
CPU - efficientnet-b0            5.91      5.97      5.90      6.02      5.90
CPU - mnasnet                    3.89      3.97      3.89      3.90      3.90
CPU - shufflenet-v2              4.30      4.30      4.30      4.26      4.28
CPU-v3-v3 - mobilenet-v3         3.78      3.82      3.77      3.77      3.76
CPU-v2-v2 - mobilenet-v2         4.29      4.33      4.28      4.31      4.30
CPU - mobilenet                  11.46     11.92     11.71     11.84     11.64