ncnn mnn 2022

AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS) motherboard and AMD Radeon RX 6700/6700 XT / 6800M graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208131-PTS-NCNNMNN216
Run Management

  Identifier   Date             Test Duration
  A            August 13 2022   3 Hours, 26 Minutes
  B            August 13 2022   9 Hours, 13 Minutes
  C            August 13 2022   8 Hours, 56 Minutes
  D            August 13 2022   11 Hours, 18 Minutes
  E            August 13 2022   11 Hours, 19 Minutes
  Average                       8 Hours, 51 Minutes

ncnn mnn 2022 Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

System Details
  Processor:          AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
  Motherboard:        ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS)
  Chipset:            AMD Starship/Matisse
  Memory:             32GB
  Disk:               1000GB Sabrent Rocket 4.0 Plus
  Graphics:           AMD Radeon RX 6700/6700 XT / 6800M (2880/1124MHz)
  Audio:              AMD Navi 21 HDMI Audio
  Monitor:            ASUS MG28U
  Network:            Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
  OS:                 Ubuntu 22.04
  Kernel:             5.15.0-46-generic (x86_64)
  Desktop:            GNOME Shell 42.2
  Display Server:     X Server 1.21.1.3 + Wayland
  OpenGL:             4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42)
  Vulkan:             1.3.204
  Compiler:           GCC 11.2.0
  File-System:        ext4
  Screen Resolution:  3840x2160

System Logs
  - Transparent Huge Pages: madvise
  - Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
  - CPU Microcode: 0xa201016
  - Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart: per-test results for runs A-E, normalized as percentages (scale 100% to 121%), covering all 41 NCNN and Mobile Neural Network tests. The largest spreads between runs appear in CPU - alexnet, Vulkan GPU - FastestDet, Vulkan GPU - squeezenet_ssd, Vulkan GPU - alexnet, and Vulkan GPU - mobilenet.]
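The overview chart expresses each test relative to the fastest run for that test. A minimal sketch of that normalization, using the CPU - alexnet times from the summary table below (lower is better); the one-decimal rounding is an assumption for illustration:

```python
# Normalize one test's per-run times so the fastest run scores 100%
# and slower runs score proportionally higher (times in ms, lower is
# better). Values are the CPU - alexnet results for runs A..E.
times = {"A": 7.79, "B": 7.73, "C": 7.67, "D": 7.72, "E": 9.30}

best = min(times.values())  # fastest (lowest) time across runs
relative = {run: round(100 * t / best, 1) for run, t in times.items()}
print(relative)  # run E lands around 121% of the fastest run, C
```

This matches the 100%-121% axis of the overview chart, with run E's slow alexnet result setting the upper bound.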

ncnn mnn 2022 - Result Summary (all results in ms; fewer is better)

  Test                                          A        B        C        D        E
  ncnn: CPU - googlenet                      12.19    11.69    11.38    11.56    11.53
  mnn: SqueezeNetV1.0                         5.234    5.144    5.350    5.302    5.382
  ncnn: CPU - blazeface                       1.82     1.80     1.82     1.88     1.83
  mnn: resnet-v2-50                          21.956   21.564   21.098   21.704   21.536
  ncnn: Vulkan GPU - blazeface                1.64     1.60     1.65     1.64     1.66
  ncnn: Vulkan GPU - mnasnet                  1.97     2.04     2.01     2.04     2.04
  ncnn: Vulkan GPU - resnet18                 2.63     2.54     2.60     2.61     2.62
  ncnn: CPU - resnet50                       21.47    20.98    21.70    21.48    21.39
  mnn: mobilenet-v1-1.0                       2.625    2.542    2.556    2.592    2.581
  ncnn: Vulkan GPU-v3-v3 - mobilenet-v3       2.35     2.36     2.34     2.36     2.41
  mnn: mobilenetV3                            1.903    1.860    1.865    1.913    1.902
  mnn: squeezenetv1.1                         3.209    3.227    3.278    3.298    3.296
  ncnn: Vulkan GPU - vgg16                    6.77     6.93     6.75     6.75     6.80
  ncnn: CPU - squeezenet_ssd                 18.32    18.53    18.59    18.39    18.17
  ncnn: CPU - mnasnet                         3.89     3.97     3.89     3.90     3.90
  ncnn: CPU - efficientnet-b0                 5.91     5.97     5.90     6.02     5.90
  ncnn: Vulkan GPU - googlenet                3.67     3.71     3.66     3.71     3.72
  ncnn: CPU-v3-v3 - mobilenet-v3              3.78     3.82     3.77     3.77     3.76
  mnn: MobileNetV2_224                        3.449    3.422    3.468    3.456    3.472
  ncnn: Vulkan GPU - efficientnet-b0          4.93     4.89     4.90     4.86     4.86
  ncnn: CPU - yolov4-tiny                    21.51    21.33    21.57    21.30    21.43
  ncnn: CPU - vgg16                          47.58    47.52    47.58    47.34    47.94
  ncnn: Vulkan GPU - resnet50                 4.99     4.94     5.00     5.00     4.97
  ncnn: CPU-v2-v2 - mobilenet-v2              4.29     4.33     4.28     4.31     4.30
  ncnn: CPU - resnet18                       12.17    12.21    12.08    12.16    12.14
  mnn: inception-v3                          26.074   25.806   25.905   25.924   25.990
  ncnn: CPU - vision_transformer            123.55   122.89   122.63   123.87   123.44
  ncnn: CPU - shufflenet-v2                   4.30     4.30     4.30     4.26     4.28
  ncnn: CPU - regnety_400m                   12.83    12.81    12.90    12.85    12.79
  ncnn: Vulkan GPU - vision_transformer     220.75   219.90   219.57   220.31   220.85
  ncnn: Vulkan GPU - FastestDet               2.50     2.28     2.33     2.38     2.37
  ncnn: Vulkan GPU - regnety_400m             3.11     3.03     3.09     3.08     3.06
  ncnn: Vulkan GPU - squeezenet_ssd           5.23     5.14     5.47     5.50     5.55
  ncnn: Vulkan GPU - yolov4-tiny             14.58    14.48    14.56    14.59    14.77
  ncnn: Vulkan GPU - alexnet                  2.08     2.24     2.09     2.08     2.12
  ncnn: Vulkan GPU - shufflenet-v2            2.04     2.12     2.09     2.09     2.09
  ncnn: Vulkan GPU-v2-v2 - mobilenet-v2       1.92     1.95     1.99     1.98     2.00
  ncnn: Vulkan GPU - mobilenet                9.45     9.34     9.84     9.90    10.05
  ncnn: CPU - FastestDet                      4.98     4.90     4.93     4.86     4.84
  ncnn: CPU - alexnet                         7.79     7.73     7.67     7.72     9.30
  ncnn: CPU - mobilenet                      11.46    11.92    11.71    11.84    11.64
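The Phoronix Test Suite can also condense a table like this into one composite number per run ("Show Overall Geometric Mean" in the viewer). A minimal sketch of that calculation, using three of the NCNN CPU results above for runs A and C; the three-test subset is chosen here purely for illustration:

```python
import math

# Geometric mean of per-test times (ms, lower is better) gives one
# composite figure per run; unlike the arithmetic mean it weights
# proportional changes equally across fast and slow tests.
run_a = [12.19, 47.58, 7.79]   # A: CPU googlenet, vgg16, alexnet
run_c = [11.38, 47.58, 7.67]   # C: the same three tests

def geomean(values):
    # exp(mean(log(x))) == nth root of the product of the values
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(f"A: {geomean(run_a):.2f} ms")
print(f"C: {geomean(run_c):.2f} ms")
```

On this subset, run C's composite comes out slightly lower (faster) than run A's, matching the per-test numbers.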

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better)
  A: 12.19 (SE +/- 0.31, N = 3; min 10.69 / max 42.59)
  B: 11.69 (SE +/- 0.10, N = 15; min 10.57 / max 54.38)
  C: 11.38 (SE +/- 0.01, N = 3; min 10.66 / max 20.39)
  D: 11.56 (SE +/- 0.14, N = 3; min 10.58 / max 20.15)
  E: 11.53 (SE +/- 0.11, N = 3; min 10.64 / max 33.91)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
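Each chart reports a mean together with a standard error (SE) over N trial runs. A minimal sketch of how such a figure is computed; the raw samples below are hypothetical, chosen to reproduce run C's published 11.38 ms mean with SE 0.01 over N = 3:

```python
import statistics

# Standard error of the mean = sample standard deviation / sqrt(N).
samples = [11.37, 11.38, 11.39]  # hypothetical raw timings, in ms

n = len(samples)
mean = statistics.fmean(samples)
se = statistics.stdev(samples) / n ** 0.5
print(f"{mean:.2f} ms, SE +/- {se:.2f}, N = {n}")
# -> 11.38 ms, SE +/- 0.01, N = 3
```

A small SE relative to the mean (here under 0.1%) is why these runs are directly comparable despite per-trial jitter visible in the min/max columns.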

Mobile Neural Network

Mobile Neural Network (MNN) is a lightweight deep learning inference engine developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.0 - Model: SqueezeNetV1.0 (ms, fewer is better)
  A: 5.234 (SE +/- 0.029, N = 3; min 4.93 / max 13.77)
  B: 5.144 (SE +/- 0.059, N = 3; min 4.8 / max 17.58)
  C: 5.350 (SE +/- 0.070, N = 3; min 5.01 / max 13.89)
  D: 5.302 (SE +/- 0.047, N = 15; min 4.58 / max 14.4)
  E: 5.382 (SE +/- 0.042, N = 15; min 4.9 / max 40.68)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl


NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better)
  A: 1.82 (SE +/- 0.01, N = 3; min 1.7 / max 9.71)
  B: 1.80 (SE +/- 0.01, N = 15; min 1.63 / max 9.64)
  C: 1.82 (SE +/- 0.00, N = 3; min 1.71 / max 4.41)
  D: 1.88 (SE +/- 0.04, N = 3; min 1.68 / max 49.69)
  E: 1.83 (SE +/- 0.01, N = 3; min 1.69 / max 10.06)


Mobile Neural Network 2.0 - Model: resnet-v2-50 (ms, fewer is better)
  A: 21.96 (SE +/- 0.06, N = 3; min 19.8 / max 82.59)
  B: 21.56 (SE +/- 0.49, N = 3; min 19.55 / max 34.8)
  C: 21.10 (SE +/- 0.14, N = 3; min 19.73 / max 46.88)
  D: 21.70 (SE +/- 0.10, N = 15; min 19.47 / max 39.03)
  E: 21.54 (SE +/- 0.08, N = 15; min 19.51 / max 34.69)


NCNN 20220729 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
  A: 1.64 (SE +/- 0.04, N = 3; min 1.18 / max 11.1)
  B: 1.60 (SE +/- 0.02, N = 3; min 1.25 / max 10.18)
  C: 1.65 (SE +/- 0.01, N = 15; min 1.15 / max 12.54)
  D: 1.64 (SE +/- 0.01, N = 15; min 1.16 / max 14.86)
  E: 1.66 (SE +/- 0.02, N = 15; min 1.15 / max 22.72)

NCNN 20220729 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
  A: 1.97 (SE +/- 0.02, N = 3; min 1.73 / max 6.43)
  B: 2.04 (SE +/- 0.03, N = 3; min 1.74 / max 9.58)
  C: 2.01 (SE +/- 0.01, N = 15; min 1.73 / max 9.57)
  D: 2.04 (SE +/- 0.02, N = 13; min 1.73 / max 14.09)
  E: 2.04 (SE +/- 0.02, N = 14; min 1.73 / max 11.29)

NCNN 20220729 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
  A: 2.63 (SE +/- 0.03, N = 3; min 2.15 / max 16.21)
  B: 2.54 (SE +/- 0.02, N = 3; min 2.16 / max 11.84)
  C: 2.60 (SE +/- 0.02, N = 15; min 2.15 / max 15.22)
  D: 2.61 (SE +/- 0.03, N = 15; min 2.15 / max 21.7)
  E: 2.62 (SE +/- 0.03, N = 13; min 2.15 / max 20.98)

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better)
  A: 21.47 (SE +/- 0.28, N = 3; min 19.56 / max 30.76)
  B: 20.98 (SE +/- 0.11, N = 15; min 18.95 / max 79.65)
  C: 21.70 (SE +/- 0.15, N = 3; min 19.94 / max 30.5)
  D: 21.48 (SE +/- 0.10, N = 3; min 19.75 / max 39.98)
  E: 21.39 (SE +/- 0.19, N = 3; min 19.66 / max 37.46)


Mobile Neural Network 2.0 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  A: 2.625 (SE +/- 0.021, N = 3; min 2.41 / max 11.15)
  B: 2.542 (SE +/- 0.018, N = 3; min 2.36 / max 11.09)
  C: 2.556 (SE +/- 0.031, N = 3; min 2.32 / max 11.13)
  D: 2.592 (SE +/- 0.025, N = 15; min 2.32 / max 11.27)
  E: 2.581 (SE +/- 0.018, N = 15; min 2.32 / max 11.34)


NCNN 20220729 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  A: 2.35 (SE +/- 0.02, N = 3; min 2.07 / max 8.2)
  B: 2.36 (SE +/- 0.02, N = 3; min 2.09 / max 9.49)
  C: 2.34 (SE +/- 0.01, N = 15; min 2.07 / max 14.62)
  D: 2.36 (SE +/- 0.01, N = 15; min 2.08 / max 15.13)
  E: 2.41 (SE +/- 0.03, N = 15; min 2.07 / max 19.36)


Mobile Neural Network 2.0 - Model: mobilenetV3 (ms, fewer is better)
  A: 1.903 (SE +/- 0.020, N = 3; min 1.71 / max 13.23)
  B: 1.860 (SE +/- 0.010, N = 3; min 1.74 / max 11.33)
  C: 1.865 (SE +/- 0.014, N = 3; min 1.74 / max 10.98)
  D: 1.913 (SE +/- 0.015, N = 15; min 1.69 / max 11.76)
  E: 1.902 (SE +/- 0.015, N = 15; min 1.69 / max 11.39)

Mobile Neural Network 2.0 - Model: squeezenetv1.1 (ms, fewer is better)
  A: 3.209 (SE +/- 0.079, N = 3; min 2.96 / max 11.86)
  B: 3.227 (SE +/- 0.058, N = 3; min 2.95 / max 11.74)
  C: 3.278 (SE +/- 0.046, N = 3; min 3.03 / max 11.65)
  D: 3.298 (SE +/- 0.036, N = 15; min 2.83 / max 11.8)
  E: 3.296 (SE +/- 0.028, N = 15; min 2.88 / max 12.56)


NCNN 20220729 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  A: 6.77 (SE +/- 0.05, N = 3; min 6.25 / max 25.32)
  B: 6.93 (SE +/- 0.06, N = 3; min 6.25 / max 28.94)
  C: 6.75 (SE +/- 0.03, N = 15; min 6.25 / max 36.3)
  D: 6.75 (SE +/- 0.02, N = 15; min 6.25 / max 28.85)
  E: 6.80 (SE +/- 0.02, N = 15; min 6.24 / max 30.4)

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  A: 18.32 (SE +/- 0.19, N = 3; min 15.5 / max 34.97)
  B: 18.53 (SE +/- 0.13, N = 15; min 15.19 / max 66.5)
  C: 18.59 (SE +/- 0.14, N = 3; min 16.26 / max 43.69)
  D: 18.39 (SE +/- 0.17, N = 3; min 15.88 / max 27.52)
  E: 18.17 (SE +/- 0.19, N = 3; min 16.13 / max 27.12)

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better)
  A: 3.89 (SE +/- 0.01, N = 3; min 3.68 / max 11.81)
  B: 3.97 (SE +/- 0.05, N = 15; min 3.69 / max 14.19)
  C: 3.89 (SE +/- 0.01, N = 3; min 3.69 / max 11.45)
  D: 3.90 (SE +/- 0.02, N = 3; min 3.68 / max 11.86)
  E: 3.90 (SE +/- 0.01, N = 3; min 3.65 / max 24.96)

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  A: 5.91 (SE +/- 0.02, N = 3; min 5.57 / max 17)
  B: 5.97 (SE +/- 0.03, N = 15; min 5.56 / max 78.45)
  C: 5.90 (SE +/- 0.01, N = 3; min 5.59 / max 14.02)
  D: 6.02 (SE +/- 0.08, N = 3; min 5.56 / max 47.25)
  E: 5.90 (SE +/- 0.01, N = 3; min 5.53 / max 32.65)

NCNN 20220729 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  A: 3.67 (SE +/- 0.01, N = 3; min 3.25 / max 17.16)
  B: 3.71 (SE +/- 0.08, N = 3; min 3.26 / max 16.59)
  C: 3.66 (SE +/- 0.03, N = 15; min 3.24 / max 16.69)
  D: 3.71 (SE +/- 0.04, N = 15; min 3.24 / max 25.35)
  E: 3.72 (SE +/- 0.03, N = 15; min 3.24 / max 18.21)

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  A: 3.78 (SE +/- 0.00, N = 3; min 3.56 / max 11.69)
  B: 3.82 (SE +/- 0.04, N = 15; min 3.55 / max 35.21)
  C: 3.77 (SE +/- 0.01, N = 3; min 3.56 / max 11.78)
  D: 3.77 (SE +/- 0.01, N = 3; min 3.55 / max 11.83)
  E: 3.76 (SE +/- 0.01, N = 3; min 3.56 / max 12.52)


Mobile Neural Network 2.0 - Model: MobileNetV2_224 (ms, fewer is better)
  A: 3.449 (SE +/- 0.011, N = 3; min 3.24 / max 13.32)
  B: 3.422 (SE +/- 0.019, N = 3; min 3.2 / max 12.93)
  C: 3.468 (SE +/- 0.008, N = 3; min 3.25 / max 13.17)
  D: 3.456 (SE +/- 0.026, N = 15; min 3.05 / max 17.71)
  E: 3.472 (SE +/- 0.024, N = 15; min 3.04 / max 13.37)


NCNN 20220729 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
  A: 4.93 (SE +/- 0.04, N = 3; min 4.52 / max 29.24)
  B: 4.89 (SE +/- 0.05, N = 3; min 4.53 / max 27.46)
  C: 4.90 (SE +/- 0.02, N = 15; min 4.48 / max 29.81)
  D: 4.86 (SE +/- 0.01, N = 15; min 4.49 / max 18.62)
  E: 4.86 (SE +/- 0.01, N = 15; min 4.52 / max 18.87)

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  A: 21.51 (SE +/- 0.14, N = 3; min 19.61 / max 83.91)
  B: 21.33 (SE +/- 0.09, N = 15; min 19.29 / max 47.85)
  C: 21.57 (SE +/- 0.14, N = 3; min 19.56 / max 36.88)
  D: 21.30 (SE +/- 0.04, N = 3; min 19.44 / max 33.91)
  E: 21.43 (SE +/- 0.14, N = 3; min 19.88 / max 29.43)

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better)
  A: 47.58 (SE +/- 0.13, N = 3; min 44.3 / max 69.79)
  B: 47.52 (SE +/- 0.12, N = 15; min 43.88 / max 109.37)
  C: 47.58 (SE +/- 0.04, N = 3; min 44.41 / max 87.25)
  D: 47.34 (SE +/- 0.08, N = 3; min 43.83 / max 71.03)
  E: 47.94 (SE +/- 0.40, N = 3; min 44.37 / max 85.57)

NCNN 20220729 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
  A: 4.99 (SE +/- 0.02, N = 3; min 4.5 / max 26.58)
  B: 4.94 (SE +/- 0.03, N = 3; min 4.5 / max 26.23)
  C: 5.00 (SE +/- 0.03, N = 15; min 4.49 / max 33.3)
  D: 5.00 (SE +/- 0.04, N = 15; min 4.5 / max 30.89)
  E: 4.97 (SE +/- 0.03, N = 15; min 4.49 / max 30.82)

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  A: 4.29 (SE +/- 0.01, N = 3; min 3.94 / max 12.29)
  B: 4.33 (SE +/- 0.04, N = 15; min 3.93 / max 33.03)
  C: 4.28 (SE +/- 0.01, N = 3; min 3.95 / max 12.13)
  D: 4.31 (SE +/- 0.02, N = 3; min 3.94 / max 16.66)
  E: 4.30 (SE +/- 0.01, N = 3; min 3.94 / max 12.37)

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better)
  A: 12.17 (SE +/- 0.02, N = 3; min 11.06 / max 20.42)
  B: 12.21 (SE +/- 0.05, N = 15; min 10.82 / max 25.49)
  C: 12.08 (SE +/- 0.04, N = 3; min 10.9 / max 20.66)
  D: 12.16 (SE +/- 0.17, N = 3; min 10.93 / max 21.19)
  E: 12.14 (SE +/- 0.08, N = 3; min 10.93 / max 29.46)


Mobile Neural Network 2.0 - Model: inception-v3 (ms, fewer is better)
  A: 26.07 (SE +/- 0.18, N = 3; min 24.37 / max 39.27)
  B: 25.81 (SE +/- 0.07, N = 3; min 24.39 / max 40.84)
  C: 25.91 (SE +/- 0.05, N = 3; min 24.43 / max 37.78)
  D: 25.92 (SE +/- 0.08, N = 15; min 23.04 / max 39.16)
  E: 25.99 (SE +/- 0.06, N = 15; min 23.91 / max 39.7)


NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better)
  A: 123.55 (SE +/- 0.06, N = 3; min 119.55 / max 191.98)
  B: 122.89 (SE +/- 0.11, N = 15; min 119.06 / max 171.55)
  C: 122.63 (SE +/- 0.17, N = 3; min 119.12 / max 162.72)
  D: 123.87 (SE +/- 0.36, N = 3; min 119.07 / max 187.38)
  E: 123.44 (SE +/- 0.61, N = 3; min 119.37 / max 173.88)

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  A: 4.30 (SE +/- 0.03, N = 2; min 4.03 / max 12.41)
  B: 4.30 (SE +/- 0.01, N = 14; min 3.97 / max 35.81)
  C: 4.30 (SE +/- 0.01, N = 3; min 4.07 / max 12.2)
  D: 4.26 (SE +/- 0.01, N = 3; min 4.01 / max 12.04)
  E: 4.28 (SE +/- 0.01, N = 3; min 4.01 / max 12.08)

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  A: 12.83 (SE +/- 0.04, N = 3; min 12.11 / max 26.51)
  B: 12.81 (SE +/- 0.05, N = 15; min 11.76 / max 25.73)
  C: 12.90 (SE +/- 0.04, N = 3; min 12.14 / max 20.46)
  D: 12.85 (SE +/- 0.03, N = 3; min 12.05 / max 26.05)
  E: 12.79 (SE +/- 0.10, N = 3; min 11.97 / max 21.29)

NCNN 20220729 - Target: Vulkan GPU - Model: vision_transformer (ms, fewer is better)
  A: 220.75 (SE +/- 0.44, N = 3; min 205.2 / max 332.92)
  B: 219.90 (SE +/- 1.44, N = 3; min 204.44 / max 295.75)
  C: 219.57 (SE +/- 0.33, N = 15; min 204.22 / max 967.44)
  D: 220.31 (SE +/- 0.34, N = 15; min 204.86 / max 914.53)
  E: 220.85 (SE +/- 0.36, N = 15; min 204.72 / max 1074.16)

NCNN 20220729 - Target: Vulkan GPU - Model: FastestDet (ms, fewer is better)
  A: 2.50 (SE +/- 0.17, N = 3; min 1.86 / max 8.64)
  B: 2.28 (SE +/- 0.03, N = 3; min 1.85 / max 8.13)
  C: 2.33 (SE +/- 0.01, N = 14; min 1.84 / max 7.89)
  D: 2.38 (SE +/- 0.04, N = 15; min 1.84 / max 9.1)
  E: 2.37 (SE +/- 0.03, N = 15; min 1.85 / max 17.06)

NCNN 20220729 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
  A: 3.11 (SE +/- 0.13, N = 3; min 2.71 / max 12.9)
  B: 3.03 (SE +/- 0.12, N = 3; min 2.67 / max 13.33)
  C: 3.09 (SE +/- 0.03, N = 15; min 2.7 / max 23.46)
  D: 3.08 (SE +/- 0.03, N = 15; min 2.69 / max 25.76)
  E: 3.06 (SE +/- 0.03, N = 15; min 2.69 / max 14.14)

NCNN 20220729 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better)
  A: 5.23 (SE +/- 0.07, N = 3; min 3.64 / max 17.71)
  B: 5.14 (SE +/- 0.10, N = 3; min 3.67 / max 16.01)
  C: 5.47 (SE +/- 0.13, N = 15; min 3.7 / max 27.5)
  D: 5.50 (SE +/- 0.12, N = 15; min 3.7 / max 21.81)
  E: 5.55 (SE +/- 0.14, N = 15; min 3.7 / max 22.31)

NCNN 20220729 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
  A: 14.58 (SE +/- 0.04, N = 3; min 13.18 / max 22.42)
  B: 14.48 (SE +/- 0.09, N = 3; min 13.05 / max 21.86)
  C: 14.56 (SE +/- 0.04, N = 15; min 12.76 / max 28.55)
  D: 14.59 (SE +/- 0.03, N = 15; min 12.89 / max 35.38)
  E: 14.77 (SE +/- 0.25, N = 15; min 12.94 / max 50.63)

NCNN 20220729 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
  A: 2.08 (SE +/- 0.06, N = 3; min 1.68 / max 11.23)
  B: 2.24 (SE +/- 0.08, N = 3; min 1.68 / max 18.33)
  C: 2.09 (SE +/- 0.03, N = 15; min 1.67 / max 18.2)
  D: 2.08 (SE +/- 0.02, N = 13; min 1.67 / max 18.09)
  E: 2.12 (SE +/- 0.03, N = 12; min 1.68 / max 19.9)

NCNN 20220729 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
  A: 2.04 (SE +/- 0.07, N = 3; min 1.65 / max 11.74)
  B: 2.12 (SE +/- 0.02, N = 3; min 1.67 / max 7.91)
  C: 2.09 (SE +/- 0.02, N = 15; min 1.65 / max 9.6)
  D: 2.09 (SE +/- 0.02, N = 15; min 1.65 / max 13.26)
  E: 2.09 (SE +/- 0.01, N = 15; min 1.65 / max 9.44)

NCNN 20220729 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  A: 1.92 (SE +/- 0.04, N = 3; min 1.7 / max 6.92)
  B: 1.95 (SE +/- 0.05, N = 3; min 1.71 / max 6.43)
  C: 1.99 (SE +/- 0.04, N = 15; min 1.69 / max 14.04)
  D: 1.98 (SE +/- 0.01, N = 15; min 1.69 / max 9.54)
  E: 2.00 (SE +/- 0.02, N = 15; min 1.7 / max 12.73)

NCNN 20220729 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
  A: 9.45 (SE +/- 0.13, N = 3; min 5.91 / max 20.24)
  B: 9.34 (SE +/- 0.10, N = 3; min 5.97 / max 18.47)
  C: 9.84 (SE +/- 0.22, N = 15; min 5.61 / max 25.64)
  D: 9.90 (SE +/- 0.17, N = 15; min 4.66 / max 29.74)
  E: 10.05 (SE +/- 0.23, N = 15; min 4.72 / max 26.35)

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better)
  A: 4.98 (SE +/- 0.11, N = 3; min 4.56 / max 12.11)
  B: 4.90 (SE +/- 0.04, N = 15; min 4.19 / max 12.29)
  C: 4.93 (SE +/- 0.08, N = 3; min 4.52 / max 11.93)
  D: 4.86 (SE +/- 0.02, N = 3; min 4.61 / max 12)
  E: 4.84 (SE +/- 0.19, N = 3; min 4.19 / max 12.16)

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better)
  A: 7.79 (SE +/- 0.05, N = 3; min 7.1 / max 16.44)
  B: 7.73 (SE +/- 0.03, N = 15; min 7.11 / max 21.38)
  C: 7.67 (SE +/- 0.09, N = 3; min 7.09 / max 16.27)
  D: 7.72 (SE +/- 0.03, N = 3; min 7.14 / max 16.18)
  E: 9.30 (SE +/- 1.56, N = 3; min 7.13 / max 633.17)

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better)
  A: 11.46 (SE +/- 0.16, N = 3; min 10.61 / max 19.16)
  B: 11.92 (SE +/- 0.35, N = 15; min 10.28 / max 716.44)
  C: 11.71 (SE +/- 0.14, N = 3; min 10.59 / max 60.99)
  D: 11.84 (SE +/- 0.14, N = 3; min 10.94 / max 29.92)
  E: 11.64 (SE +/- 0.05, N = 3; min 10.9 / max 20.48)