nn

AMD Ryzen 9 5900X 12-Core testing with an ASUS ROG CROSSHAIR VIII HERO (3501 BIOS) and AMD Radeon RX 6800/6800 XT / 6900 16GB on Ubuntu 21.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106189-PTS-NN67322042
Test suites represented: HPC - High Performance Computing (3 tests); Machine Learning (3 tests).


Run Management

Result | Date         | Test Duration
1      | June 18 2021 | 44 Minutes
1a     | June 18 2021 | 39 Minutes
2      | June 18 2021 | 1 Hour, 23 Minutes
3      | June 18 2021 | 2 Hours, 5 Minutes

Average test duration: 1 Hour, 13 Minutes
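The 1 Hour, 13 Minutes figure matches the mean of the four per-run durations; a quick check of that arithmetic (the interpretation of the figure as an average is an assumption):

```python
# Assumption: the trailing "1 Hour, 13 Minutes" is the mean of the four
# per-run durations listed above; verify the arithmetic.
durations_min = {"1": 44, "1a": 39, "2": 60 + 23, "3": 2 * 60 + 5}

mean_min = sum(durations_min.values()) / len(durations_min)  # 72.75 minutes
hours, minutes = divmod(round(mean_min), 60)
print(f"{hours} Hour, {minutes} Minutes")  # -> 1 Hour, 13 Minutes
```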



System Details (results 1, 1a, 2, 3)

Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (3501 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 1000GB Sabrent Rocket 4.0 Plus + 2000GB
Graphics: AMD Radeon RX 6800/6800 XT / 6900 16GB (2475/1000MHz)
Audio: AMD Navi 21 HDMI Audio
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 21.04
Kernel: 5.13.0-051300rc6daily20210617-generic (x86_64) 20210616
Desktop: GNOME Shell 3.38.4
Display Server: X Server 1.20.11 + Wayland
OpenGL: 4.6 Mesa 21.2.0-devel (git-849ab4e 2021-06-17 hirsute-oibaf-ppa) (LLVM 12.0.0)
Vulkan: 1.2.180
Compiler: GCC 10.3.0 + CUDA 11.3
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201009
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Results Overview (ms, fewer is better)

Test                            | 1        | 1a    | 2        | 3
mnn: mobilenetV3                | 2.129    | -     | 2.116    | 2.059
mnn: squeezenetv1.1             | 3.743    | -     | 3.736    | 3.619
mnn: resnet-v2-50               | 27.944   | -     | 27.403   | 27.010
mnn: SqueezeNetV1.0             | 5.057    | -     | 5.121    | 5.023
mnn: MobileNetV2_224            | 3.277    | -     | 3.246    | 3.211
mnn: mobilenet-v1-1.0           | 4.384    | -     | 4.352    | 4.325
mnn: inception-v3               | 26.103   | -     | 26.169   | 25.616
tnn: CPU - DenseNet             | 2510.762 | -     | 2504.442 | 2505.204
tnn: CPU - MobileNet v2         | 226.928  | -     | 226.523  | 224.051
tnn: CPU - SqueezeNet v2        | 50.930   | -     | 51.113   | 50.718
tnn: CPU - SqueezeNet v1.1      | 212.476  | -     | 212.738  | 211.086
ncnn: CPU - mobilenet           | -        | 12.14 | 12.12    | 12.05
ncnn: CPU-v2-v2 - mobilenet-v2  | -        | 4.10  | 4.16     | 4.13
ncnn: CPU-v3-v3 - mobilenet-v3  | -        | 3.91  | 4.00     | 3.96
ncnn: CPU - shufflenet-v2       | -        | 4.00  | 4.10     | 4.05
ncnn: CPU - mnasnet             | -        | 3.67  | 3.82     | 3.79
ncnn: CPU - efficientnet-b0     | -        | 5.10  | 5.28     | 5.13
ncnn: CPU - blazeface           | -        | 1.71  | 1.79     | 1.70
ncnn: CPU - googlenet           | -        | 12.38 | 12.45    | 12.51
ncnn: CPU - vgg16               | -        | 55.07 | 55.26    | 55.98
ncnn: CPU - resnet18            | -        | 13.74 | 13.52    | 13.61
ncnn: CPU - alexnet             | -        | 11.07 | 11.20    | 11.13
ncnn: CPU - resnet50            | -        | 22.80 | 22.55    | 22.69
ncnn: CPU - yolov4-tiny         | -        | 21.48 | 22.00    | 21.91
ncnn: CPU - squeezenet_ssd      | -        | 14.80 | 14.78    | 14.75
ncnn: CPU - regnety_400m        | -        | 9.42  | 9.50     | 9.34
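One way to condense the table above is a wins tally: for each test, which run posted the best (lowest) time. A minimal sketch over a few rows (values are copied from the summary table; the selection of rows is illustrative):

```python
# Per-test averages in ms (fewer is better), copied from the summary table above.
results = {
    "mnn: mobilenetV3":    {"1": 2.129, "2": 2.116, "3": 2.059},
    "mnn: inception-v3":   {"1": 26.103, "2": 26.169, "3": 25.616},
    "tnn: CPU - DenseNet": {"1": 2510.762, "2": 2504.442, "3": 2505.204},
    "ncnn: CPU - vgg16":   {"1a": 55.07, "2": 55.26, "3": 55.98},
}

wins = {}
for test, runs in results.items():
    best = min(runs, key=runs.get)  # lowest time wins
    wins[best] = wins.get(best, 0) + 1

print(wins)  # -> {'3': 2, '2': 1, '1a': 1}
```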

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
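Each result below reports an average latency with an SE (standard error) over N trials. A minimal sketch of how that statistic is computed; the sample latencies here are assumptions for illustration, not measurements from this run:

```python
import math

def mean_and_se(samples):
    """Average and standard error (sample std dev / sqrt(N)), as reported per result."""
    n = len(samples)
    avg = sum(samples) / n
    var = sum((x - avg) ** 2 for x in samples) / (n - 1)  # sample variance
    return avg, math.sqrt(var / n)

# Hypothetical per-trial latencies in ms; the real runs below use N = 3 or N = 7 trials.
avg, se = mean_and_se([2.08, 2.15, 2.16])
print(f"Avg: {avg:.3f} ms, SE +/- {se:.3f}")  # -> Avg: 2.130 ms, SE +/- 0.025
```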

Mobile Neural Network 1.2, Model: mobilenetV3 (ms, fewer is better)
  Run 1: 2.129 (SE +/- 0.027, N = 3; trial range 2.08 - 2.16; observed MIN 1.91 / MAX 11.33)
  Run 2: 2.116 (SE +/- 0.009, N = 3; trial range 2.10 - 2.13; observed MIN 2.01 / MAX 10.13)
  Run 3: 2.059 (SE +/- 0.019, N = 7; trial range 1.97 - 2.13; observed MIN 1.88 / MAX 10.15)
  Compiler flags for all Mobile Neural Network results: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: squeezenetv1.1 (ms, fewer is better)
  Run 1: 3.743 (SE +/- 0.068, N = 3; trial range 3.62 - 3.85; observed MIN 3.36 / MAX 12.79)
  Run 2: 3.736 (SE +/- 0.012, N = 3; trial range 3.72 - 3.76; observed MIN 3.56 / MAX 11.48)
  Run 3: 3.619 (SE +/- 0.058, N = 7; trial range 3.39 - 3.8; observed MIN 3.25 / MAX 12.54)

Mobile Neural Network 1.2, Model: resnet-v2-50 (ms, fewer is better)
  Run 1: 27.94 (SE +/- 0.51, N = 3; trial range 27.27 - 28.94; observed MIN 25.6 / MAX 40.37)
  Run 2: 27.40 (SE +/- 0.15, N = 3; trial range 27.13 - 27.62; observed MIN 26.03 / MAX 50.79)
  Run 3: 27.01 (SE +/- 0.04, N = 7; trial range 26.77 - 27.13; observed MIN 25.86 / MAX 50.68)

Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms, fewer is better)
  Run 1: 5.057 (SE +/- 0.054, N = 3; trial range 4.95 - 5.12; observed MIN 4.64 / MAX 33.19)
  Run 2: 5.121 (SE +/- 0.050, N = 3; trial range 5.06 - 5.22; observed MIN 4.85 / MAX 13.16)
  Run 3: 5.023 (SE +/- 0.049, N = 7; trial range 4.76 - 5.18; observed MIN 4.59 / MAX 14.33)

Mobile Neural Network 1.2, Model: MobileNetV2_224 (ms, fewer is better)
  Run 1: 3.277 (SE +/- 0.035, N = 3; trial range 3.22 - 3.34; observed MIN 3.07 / MAX 11.36)
  Run 2: 3.246 (SE +/- 0.041, N = 3; trial range 3.17 - 3.32; observed MIN 3.07 / MAX 10.44)
  Run 3: 3.211 (SE +/- 0.032, N = 7; trial range 3.08 - 3.32; observed MIN 2.97 / MAX 11)

Mobile Neural Network 1.2, Model: mobilenet-v1-1.0 (ms, fewer is better)
  Run 1: 4.384 (SE +/- 0.070, N = 3; trial range 4.27 - 4.51; observed MIN 4.07 / MAX 13.17)
  Run 2: 4.352 (SE +/- 0.051, N = 3; trial range 4.25 - 4.41; observed MIN 4.08 / MAX 11.71)
  Run 3: 4.325 (SE +/- 0.022, N = 7; trial range 4.23 - 4.4; observed MIN 4.09 / MAX 11.49)

Mobile Neural Network 1.2, Model: inception-v3 (ms, fewer is better)
  Run 1: 26.10 (SE +/- 0.23, N = 3; trial range 25.65 - 26.38; observed MIN 24.2 / MAX 62.16)
  Run 2: 26.17 (SE +/- 0.24, N = 3; trial range 25.74 - 26.58; observed MIN 24.88 / MAX 34.4)
  Run 3: 25.62 (SE +/- 0.13, N = 7; trial range 25.3 - 26.37; observed MIN 24.49 / MAX 42.77)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
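Result files like this one are often condensed to a single figure with a geometric mean across tests, which keeps one slow test (DenseNet here) from dominating the summary. A minimal sketch, using run 3's four TNN averages from the tables below:

```python
import math

def geometric_mean(values):
    # nth root of the product, computed via logs for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Run 3 TNN CPU averages (ms): DenseNet, MobileNet v2, SqueezeNet v2, SqueezeNet v1.1
run3 = [2505.20, 224.05, 50.72, 211.09]
overall = geometric_mean(run3)
print(f"geometric mean: {overall:.1f} ms")
```

Unlike the arithmetic mean (about 748 ms here), the geometric mean always falls between the extremes and weights each test's ratio equally.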

TNN 0.3, Target: CPU, Model: DenseNet (ms, fewer is better)
  Run 1: 2510.76 (SE +/- 3.60, N = 3; trial range 2506.22 - 2517.87; observed MIN 2446.86 / MAX 2649.04)
  Run 2: 2504.44 (SE +/- 3.49, N = 3; trial range 2497.72 - 2509.45; observed MIN 2433.14 / MAX 2583.96)
  Run 3: 2505.20 (SE +/- 1.51, N = 3; trial range 2502.82 - 2507.99; observed MIN 2430 / MAX 2579.39)
  Compiler flags for all TNN results: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3, Target: CPU, Model: MobileNet v2 (ms, fewer is better)
  Run 1: 226.93 (SE +/- 0.40, N = 3; trial range 226.23 - 227.63; observed MIN 222.06 / MAX 239.49)
  Run 2: 226.52 (SE +/- 2.57, N = 3; trial range 222.83 - 231.48; observed MIN 218.62 / MAX 239.35)
  Run 3: 224.05 (SE +/- 1.38, N = 3; trial range 221.37 - 225.96; observed MIN 218.93 / MAX 236.84)

TNN 0.3, Target: CPU, Model: SqueezeNet v2 (ms, fewer is better)
  Run 1: 50.93 (SE +/- 0.28, N = 3; trial range 50.45 - 51.43; observed MIN 50.31 / MAX 51.63)
  Run 2: 51.11 (SE +/- 0.11, N = 3; trial range 50.97 - 51.32; observed MIN 50.74 / MAX 51.46)
  Run 3: 50.72 (SE +/- 0.25, N = 3; trial range 50.22 - 51.02; observed MIN 50.05 / MAX 51.13)

TNN 0.3, Target: CPU, Model: SqueezeNet v1.1 (ms, fewer is better)
  Run 1: 212.48 (SE +/- 0.82, N = 3; trial range 211.52 - 214.11; observed MIN 210.54 / MAX 219.75)
  Run 2: 212.74 (SE +/- 3.04, N = 3; trial range 207.17 - 217.64; observed MIN 206.57 / MAX 219.72)
  Run 3: 211.09 (SE +/- 0.10, N = 3; trial range 210.93 - 211.28; observed MIN 210.04 / MAX 211.62)

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
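Run-to-run spread is what separates a meaningful difference from noise in results like these. A sketch of the relative spread, using the NCNN mobilenet averages for runs 1a, 2, and 3 from the tables below:

```python
def relative_spread_pct(values):
    """Percent gap between the slowest and fastest result for one test."""
    lo, hi = min(values), max(values)
    return (hi - lo) / lo * 100

# NCNN CPU mobilenet averages (ms) for runs 1a, 2, 3, from the tables below
mobilenet = [12.14, 12.12, 12.05]
spread = relative_spread_pct(mobilenet)
print(f"spread: {spread:.2f}%")  # under 1%: little change between runs
```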

NCNN 20210525, Target: CPU, Model: mobilenet (ms, fewer is better)
  Run 1a: 12.14 (SE +/- 0.09, N = 3; trial range 12.04 - 12.31; observed MIN 11.26 / MAX 38.92)
  Run 2: 12.12 (SE +/- 0.08, N = 3; trial range 11.98 - 12.25; observed MIN 11.38 / MAX 34.52)
  Run 3: 12.05 (SE +/- 0.02, N = 3; trial range 12.02 - 12.08; observed MIN 11.3 / MAX 29.68)
  Compiler flags for all NCNN results: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU-v2-v2, Model: mobilenet-v2 (ms, fewer is better)
  Run 1a: 4.10 (SE +/- 0.02, N = 3; trial range 4.07 - 4.14; observed MIN 3.86 / MAX 12.56)
  Run 2: 4.16 (SE +/- 0.06, N = 3; trial range 4.07 - 4.28; observed MIN 3.87 / MAX 12.37)
  Run 3: 4.13 (SE +/- 0.02, N = 3; trial range 4.08 - 4.16; observed MIN 3.88 / MAX 12.67)

NCNN 20210525, Target: CPU-v3-v3, Model: mobilenet-v3 (ms, fewer is better)
  Run 1a: 3.91 (SE +/- 0.03, N = 3; trial range 3.86 - 3.95; observed MIN 3.73 / MAX 11.8)
  Run 2: 4.00 (SE +/- 0.04, N = 3; trial range 3.96 - 4.07; observed MIN 3.76 / MAX 12.05)
  Run 3: 3.96 (SE +/- 0.02, N = 3; trial range 3.92 - 3.99; observed MIN 3.75 / MAX 12.14)

NCNN 20210525, Target: CPU, Model: shufflenet-v2 (ms, fewer is better)
  Run 1a: 4.00 (SE +/- 0.04, N = 3; trial range 3.94 - 4.07; observed MIN 3.74 / MAX 11.71)
  Run 2: 4.10 (SE +/- 0.03, N = 3; trial range 4.03 - 4.14; observed MIN 3.89 / MAX 11.86)
  Run 3: 4.05 (SE +/- 0.01, N = 3; trial range 4.03 - 4.06; observed MIN 3.86 / MAX 11.94)

NCNN 20210525, Target: CPU, Model: mnasnet (ms, fewer is better)
  Run 1a: 3.67 (SE +/- 0.02, N = 3; trial range 3.64 - 3.69; observed MIN 3.5 / MAX 11.58)
  Run 2: 3.82 (SE +/- 0.06, N = 3; trial range 3.71 - 3.93; observed MIN 3.57 / MAX 11.88)
  Run 3: 3.79 (SE +/- 0.09, N = 3; trial range 3.68 - 3.96; observed MIN 3.51 / MAX 12.14)

NCNN 20210525, Target: CPU, Model: efficientnet-b0 (ms, fewer is better)
  Run 1a: 5.10 (SE +/- 0.04, N = 3; trial range 5.02 - 5.17; observed MIN 4.83 / MAX 13.32)
  Run 2: 5.28 (SE +/- 0.09, N = 3; trial range 5.11 - 5.42; observed MIN 4.9 / MAX 13.73)
  Run 3: 5.13 (SE +/- 0.04, N = 3; trial range 5.06 - 5.21; observed MIN 4.8 / MAX 38.8)

NCNN 20210525, Target: CPU, Model: blazeface (ms, fewer is better)
  Run 1a: 1.71 (SE +/- 0.00, N = 3; trial range 1.7 - 1.71; observed MIN 1.62 / MAX 8.08)
  Run 2: 1.79 (SE +/- 0.04, N = 3; trial range 1.73 - 1.86; observed MIN 1.65 / MAX 9.61)
  Run 3: 1.70 (SE +/- 0.00, N = 3; trial range 1.7 - 1.71; observed MIN 1.6 / MAX 9.42)

NCNN 20210525, Target: CPU, Model: googlenet (ms, fewer is better)
  Run 1a: 12.38 (SE +/- 0.06, N = 3; trial range 12.25 - 12.45; observed MIN 11.66 / MAX 20.8)
  Run 2: 12.45 (SE +/- 0.01, N = 3; trial range 12.42 - 12.46; observed MIN 11.62 / MAX 30.43)
  Run 3: 12.51 (SE +/- 0.17, N = 3; trial range 12.3 - 12.84; observed MIN 11.61 / MAX 32.56)

NCNN 20210525, Target: CPU, Model: vgg16 (ms, fewer is better)
  Run 1a: 55.07 (SE +/- 0.20, N = 3; trial range 54.82 - 55.47; observed MIN 52.29 / MAX 67.12)
  Run 2: 55.26 (SE +/- 0.21, N = 3; trial range 55.02 - 55.68; observed MIN 52.4 / MAX 85.07)
  Run 3: 55.98 (SE +/- 0.90, N = 3; trial range 55.08 - 57.77; observed MIN 51.31 / MAX 862.27)

NCNN 20210525, Target: CPU, Model: resnet18 (ms, fewer is better)
  Run 1a: 13.74 (SE +/- 0.16, N = 3; trial range 13.56 - 14.05; observed MIN 12.93 / MAX 22.43)
  Run 2: 13.52 (SE +/- 0.04, N = 3; trial range 13.45 - 13.6; observed MIN 12.87 / MAX 22.24)
  Run 3: 13.61 (SE +/- 0.16, N = 3; trial range 13.36 - 13.9; observed MIN 12.65 / MAX 21.63)

NCNN 20210525, Target: CPU, Model: alexnet (ms, fewer is better)
  Run 1a: 11.07 (SE +/- 0.04, N = 3; trial range 11.03 - 11.14; observed MIN 10.16 / MAX 19.67)
  Run 2: 11.20 (SE +/- 0.06, N = 3; trial range 11.09 - 11.28; observed MIN 10.37 / MAX 19.33)
  Run 3: 11.13 (SE +/- 0.08, N = 3; trial range 11.01 - 11.29; observed MIN 10.19 / MAX 19.44)

NCNN 20210525, Target: CPU, Model: resnet50 (ms, fewer is better)
  Run 1a: 22.80 (SE +/- 0.36, N = 3; trial range 22.23 - 23.48; observed MIN 20.6 / MAX 104.7)
  Run 2: 22.55 (SE +/- 0.28, N = 3; trial range 21.98 - 22.87; observed MIN 20.93 / MAX 31.12)
  Run 3: 22.69 (SE +/- 0.18, N = 3; trial range 22.33 - 22.88; observed MIN 21.25 / MAX 31.38)

NCNN 20210525, Target: CPU, Model: yolov4-tiny (ms, fewer is better)
  Run 1a: 21.48 (SE +/- 0.28, N = 3; trial range 20.94 - 21.89; observed MIN 19.65 / MAX 48.26)
  Run 2: 22.00 (SE +/- 0.18, N = 3; trial range 21.67 - 22.27; observed MIN 20.67 / MAX 30.95)
  Run 3: 21.91 (SE +/- 0.11, N = 3; trial range 21.69 - 22.07; observed MIN 19.58 / MAX 57.14)

NCNN 20210525, Target: CPU, Model: squeezenet_ssd (ms, fewer is better)
  Run 1a: 14.80 (SE +/- 0.06, N = 3; trial range 14.7 - 14.9; observed MIN 13.52 / MAX 64.82)
  Run 2: 14.78 (SE +/- 0.04, N = 3; trial range 14.74 - 14.85; observed MIN 13.64 / MAX 45.78)
  Run 3: 14.75 (SE +/- 0.04, N = 3; trial range 14.68 - 14.81; observed MIN 13.81 / MAX 23.13)

NCNN 20210525, Target: CPU, Model: regnety_400m (ms, fewer is better)
  Run 1a: 9.42 (SE +/- 0.08, N = 3; trial range 9.28 - 9.57; observed MIN 9.01 / MAX 26.23)
  Run 2: 9.50 (SE +/- 0.09, N = 3; trial range 9.35 - 9.65; observed MIN 9.03 / MAX 17.41)
  Run 3: 9.34 (SE +/- 0.07, N = 3; trial range 9.23 - 9.47; observed MIN 8.86 / MAX 17.28)