nn

AMD Ryzen 9 5900X 12-Core testing with an ASUS ROG CROSSHAIR VIII HERO (3501 BIOS) and AMD Radeon RX 6800/6800 XT / 6900 16GB on Ubuntu 21.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106189-PTS-NN67322042


Test suites represented: HPC - High Performance Computing (3 tests); Machine Learning (3 tests).


Run Identifiers:

  Run  Date          Test Duration
  1    June 18 2021  44 Minutes
  1a   June 18 2021  39 Minutes
  2    June 18 2021  1 Hour, 23 Minutes
  3    June 18 2021  2 Hours, 5 Minutes



System Configuration (runs 1, 1a, 2, and 3 used identical hardware and software):

  Processor:         AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
  Motherboard:       ASUS ROG CROSSHAIR VIII HERO (3501 BIOS)
  Chipset:           AMD Starship/Matisse
  Memory:            16GB
  Disk:              1000GB Sabrent Rocket 4.0 Plus + 2000GB
  Graphics:          AMD Radeon RX 6800/6800 XT / 6900 16GB (2475/1000MHz)
  Audio:             AMD Navi 21 HDMI Audio
  Monitor:           ASUS VP28U
  Network:           Realtek RTL8125 2.5GbE + Intel I211
  OS:                Ubuntu 21.04
  Kernel:            5.13.0-051300rc6daily20210617-generic (x86_64) 20210616
  Desktop:           GNOME Shell 3.38.4
  Display Server:    X Server 1.20.11 + Wayland
  OpenGL:            4.6 Mesa 21.2.0-devel (git-849ab4e 2021-06-17 hirsute-oibaf-ppa) (LLVM 12.0.0)
  Vulkan:            1.2.180
  Compiler:          GCC 10.3.0 + CUDA 11.3
  File-System:       ext4
  Screen Resolution: 3840x2160

  Kernel Details:    Transparent Huge Pages: madvise
  Compiler Details:  --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201009
  Security Details:  itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Results Overview (all values in ms; fewer is better). Run 1 executed the MNN and TNN tests, run 1a the NCNN tests, and runs 2 and 3 all tests:

  Test                              Run 1     Run 1a   Run 2     Run 3
  MNN: mobilenetV3                  2.129     -        2.116     2.059
  MNN: squeezenetv1.1               3.743     -        3.736     3.619
  MNN: resnet-v2-50                 27.944    -        27.403    27.010
  MNN: SqueezeNetV1.0               5.057     -        5.121     5.023
  MNN: MobileNetV2_224              3.277     -        3.246     3.211
  MNN: mobilenet-v1-1.0             4.384     -        4.352     4.325
  MNN: inception-v3                 26.103    -        26.169    25.616
  TNN: CPU - DenseNet               2510.762  -        2504.442  2505.204
  TNN: CPU - MobileNet v2           226.928   -        226.523   224.051
  TNN: CPU - SqueezeNet v2          50.930    -        51.113    50.718
  TNN: CPU - SqueezeNet v1.1        212.476   -        212.738   211.086
  NCNN: CPU - mobilenet             -         12.14    12.12     12.05
  NCNN: CPU-v2-v2 - mobilenet-v2    -         4.10     4.16      4.13
  NCNN: CPU-v3-v3 - mobilenet-v3    -         3.91     4.00      3.96
  NCNN: CPU - shufflenet-v2         -         4.00     4.10      4.05
  NCNN: CPU - mnasnet               -         3.67     3.82      3.79
  NCNN: CPU - efficientnet-b0       -         5.10     5.28      5.13
  NCNN: CPU - blazeface             -         1.71     1.79      1.70
  NCNN: CPU - googlenet             -         12.38    12.45     12.51
  NCNN: CPU - vgg16                 -         55.07    55.26     55.98
  NCNN: CPU - resnet18              -         13.74    13.52     13.61
  NCNN: CPU - alexnet               -         11.07    11.20     11.13
  NCNN: CPU - resnet50              -         22.80    22.55     22.69
  NCNN: CPU - yolov4-tiny           -         21.48    22.00     21.91
  NCNN: CPU - squeezenet_ssd        -         14.80    14.78     14.75
  NCNN: CPU - regnety_400m          -         9.42     9.50      9.34

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: mobilenetV3 (ms, fewer is better):
  Run 3: 2.059 (SE +/- 0.019, N = 7; trials Min: 1.97 / Avg: 2.06 / Max: 2.13; MIN: 1.88 / MAX: 10.15)
  Run 2: 2.116 (SE +/- 0.009, N = 3; trials Min: 2.10 / Avg: 2.12 / Max: 2.13; MIN: 2.01 / MAX: 10.13)
  Run 1: 2.129 (SE +/- 0.027, N = 3; trials Min: 2.08 / Avg: 2.13 / Max: 2.16; MIN: 1.91 / MAX: 11.33)
  Compiler flags (all MNN results): (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
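Each reported result is the mean of N benchmark trials together with its standard error. A minimal Python sketch of how such a figure is derived (the trial values below are hypothetical, chosen only to land near run 3's 2.059 ms average; the raw per-trial data is not published in this file):

```python
import math
import statistics

# Hypothetical per-trial averages (ms) for one run of N = 7 trials.
trials = [2.13, 2.05, 2.02, 2.04, 2.08, 2.06, 2.03]

mean = statistics.mean(trials)
# Standard error of the mean: sample standard deviation over sqrt(N).
se = statistics.stdev(trials) / math.sqrt(len(trials))
print(f"{mean:.3f} ms, SE +/- {se:.3f}, N = {len(trials)}")
```

A smaller SE relative to the gap between two runs is what justifies calling one run genuinely faster rather than within noise.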

Mobile Neural Network 1.2, Model: squeezenetv1.1 (ms, fewer is better):
  Run 3: 3.619 (SE +/- 0.058, N = 7; trials Min: 3.39 / Avg: 3.62 / Max: 3.80; MIN: 3.25 / MAX: 12.54)
  Run 2: 3.736 (SE +/- 0.012, N = 3; trials Min: 3.72 / Avg: 3.74 / Max: 3.76; MIN: 3.56 / MAX: 11.48)
  Run 1: 3.743 (SE +/- 0.068, N = 3; trials Min: 3.62 / Avg: 3.74 / Max: 3.85; MIN: 3.36 / MAX: 12.79)

Mobile Neural Network 1.2, Model: resnet-v2-50 (ms, fewer is better):
  Run 3: 27.01 (SE +/- 0.04, N = 7; trials Min: 26.77 / Avg: 27.01 / Max: 27.13; MIN: 25.86 / MAX: 50.68)
  Run 2: 27.40 (SE +/- 0.15, N = 3; trials Min: 27.13 / Avg: 27.40 / Max: 27.62; MIN: 26.03 / MAX: 50.79)
  Run 1: 27.94 (SE +/- 0.51, N = 3; trials Min: 27.27 / Avg: 27.94 / Max: 28.94; MIN: 25.60 / MAX: 40.37)

Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms, fewer is better):
  Run 3: 5.023 (SE +/- 0.049, N = 7; trials Min: 4.76 / Avg: 5.02 / Max: 5.18; MIN: 4.59 / MAX: 14.33)
  Run 1: 5.057 (SE +/- 0.054, N = 3; trials Min: 4.95 / Avg: 5.06 / Max: 5.12; MIN: 4.64 / MAX: 33.19)
  Run 2: 5.121 (SE +/- 0.050, N = 3; trials Min: 5.06 / Avg: 5.12 / Max: 5.22; MIN: 4.85 / MAX: 13.16)

Mobile Neural Network 1.2, Model: MobileNetV2_224 (ms, fewer is better):
  Run 3: 3.211 (SE +/- 0.032, N = 7; trials Min: 3.08 / Avg: 3.21 / Max: 3.32; MIN: 2.97 / MAX: 11.00)
  Run 2: 3.246 (SE +/- 0.041, N = 3; trials Min: 3.17 / Avg: 3.25 / Max: 3.32; MIN: 3.07 / MAX: 10.44)
  Run 1: 3.277 (SE +/- 0.035, N = 3; trials Min: 3.22 / Avg: 3.28 / Max: 3.34; MIN: 3.07 / MAX: 11.36)

Mobile Neural Network 1.2, Model: mobilenet-v1-1.0 (ms, fewer is better):
  Run 3: 4.325 (SE +/- 0.022, N = 7; trials Min: 4.23 / Avg: 4.32 / Max: 4.40; MIN: 4.09 / MAX: 11.49)
  Run 2: 4.352 (SE +/- 0.051, N = 3; trials Min: 4.25 / Avg: 4.35 / Max: 4.41; MIN: 4.08 / MAX: 11.71)
  Run 1: 4.384 (SE +/- 0.070, N = 3; trials Min: 4.27 / Avg: 4.38 / Max: 4.51; MIN: 4.07 / MAX: 13.17)

Mobile Neural Network 1.2, Model: inception-v3 (ms, fewer is better):
  Run 3: 25.62 (SE +/- 0.13, N = 7; trials Min: 25.30 / Avg: 25.62 / Max: 26.37; MIN: 24.49 / MAX: 42.77)
  Run 1: 26.10 (SE +/- 0.23, N = 3; trials Min: 25.65 / Avg: 26.10 / Max: 26.38; MIN: 24.20 / MAX: 62.16)
  Run 2: 26.17 (SE +/- 0.24, N = 3; trials Min: 25.74 / Avg: 26.17 / Max: 26.58; MIN: 24.88 / MAX: 34.40)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: DenseNet (ms, fewer is better):
  Run 2: 2504.44 (SE +/- 3.49, N = 3; trials Min: 2497.72 / Avg: 2504.44 / Max: 2509.45; MIN: 2433.14 / MAX: 2583.96)
  Run 3: 2505.20 (SE +/- 1.51, N = 3; trials Min: 2502.82 / Avg: 2505.20 / Max: 2507.99; MIN: 2430.00 / MAX: 2579.39)
  Run 1: 2510.76 (SE +/- 3.60, N = 3; trials Min: 2506.22 / Avg: 2510.76 / Max: 2517.87; MIN: 2446.86 / MAX: 2649.04)
  Compiler flags (all TNN results): (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  Run 3: 224.05 (SE +/- 1.38, N = 3; trials Min: 221.37 / Avg: 224.05 / Max: 225.96; MIN: 218.93 / MAX: 236.84)
  Run 2: 226.52 (SE +/- 2.57, N = 3; trials Min: 222.83 / Avg: 226.52 / Max: 231.48; MIN: 218.62 / MAX: 239.35)
  Run 1: 226.93 (SE +/- 0.40, N = 3; trials Min: 226.23 / Avg: 226.93 / Max: 227.63; MIN: 222.06 / MAX: 239.49)

TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms, fewer is better):
  Run 3: 50.72 (SE +/- 0.25, N = 3; trials Min: 50.22 / Avg: 50.72 / Max: 51.02; MIN: 50.05 / MAX: 51.13)
  Run 1: 50.93 (SE +/- 0.28, N = 3; trials Min: 50.45 / Avg: 50.93 / Max: 51.43; MIN: 50.31 / MAX: 51.63)
  Run 2: 51.11 (SE +/- 0.11, N = 3; trials Min: 50.97 / Avg: 51.11 / Max: 51.32; MIN: 50.74 / MAX: 51.46)

TNN 0.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
  Run 3: 211.09 (SE +/- 0.10, N = 3; trials Min: 210.93 / Avg: 211.09 / Max: 211.28; MIN: 210.04 / MAX: 211.62)
  Run 1: 212.48 (SE +/- 0.82, N = 3; trials Min: 211.52 / Avg: 212.48 / Max: 214.11; MIN: 210.54 / MAX: 219.75)
  Run 2: 212.74 (SE +/- 3.04, N = 3; trials Min: 207.17 / Avg: 212.74 / Max: 217.64; MIN: 206.57 / MAX: 219.72)
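The TNN runs land within a fraction of a percent of one another. A quick Python sketch using the CPU DenseNet averages above makes the spread explicit:

```python
# TNN 0.3, CPU - DenseNet averages (ms), taken from the result above.
times = {"run 2": 2504.44, "run 3": 2505.20, "run 1": 2510.76}

best = min(times.values())
worst = max(times.values())
# Percent gap between the fastest and slowest run.
spread_pct = (worst - best) / best * 100
print(f"best {best} ms, worst {worst} ms, spread {spread_pct:.2f}%")
```

With per-run standard errors of roughly 1.5 to 3.6 ms on averages around 2505 ms, a spread of about a quarter of a percent is within run-to-run noise rather than a meaningful difference.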

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525, Target: CPU - Model: mobilenet (ms, fewer is better):
  Run 3:  12.05 (SE +/- 0.02, N = 3; trials Min: 12.02 / Avg: 12.05 / Max: 12.08; MIN: 11.30 / MAX: 29.68)
  Run 2:  12.12 (SE +/- 0.08, N = 3; trials Min: 11.98 / Avg: 12.12 / Max: 12.25; MIN: 11.38 / MAX: 34.52)
  Run 1a: 12.14 (SE +/- 0.09, N = 3; trials Min: 12.04 / Avg: 12.14 / Max: 12.31; MIN: 11.26 / MAX: 38.92)
  Compiler flags (all NCNN results): (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):
  Run 1a: 4.10 (SE +/- 0.02, N = 3; trials Min: 4.07 / Avg: 4.10 / Max: 4.14; MIN: 3.86 / MAX: 12.56)
  Run 3:  4.13 (SE +/- 0.02, N = 3; trials Min: 4.08 / Avg: 4.13 / Max: 4.16; MIN: 3.88 / MAX: 12.67)
  Run 2:  4.16 (SE +/- 0.06, N = 3; trials Min: 4.07 / Avg: 4.16 / Max: 4.28; MIN: 3.87 / MAX: 12.37)

NCNN 20210525, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  Run 1a: 3.91 (SE +/- 0.03, N = 3; trials Min: 3.86 / Avg: 3.91 / Max: 3.95; MIN: 3.73 / MAX: 11.80)
  Run 3:  3.96 (SE +/- 0.02, N = 3; trials Min: 3.92 / Avg: 3.96 / Max: 3.99; MIN: 3.75 / MAX: 12.14)
  Run 2:  4.00 (SE +/- 0.04, N = 3; trials Min: 3.96 / Avg: 4.00 / Max: 4.07; MIN: 3.76 / MAX: 12.05)

NCNN 20210525, Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  Run 1a: 4.00 (SE +/- 0.04, N = 3; trials Min: 3.94 / Avg: 4.00 / Max: 4.07; MIN: 3.74 / MAX: 11.71)
  Run 3:  4.05 (SE +/- 0.01, N = 3; trials Min: 4.03 / Avg: 4.05 / Max: 4.06; MIN: 3.86 / MAX: 11.94)
  Run 2:  4.10 (SE +/- 0.03, N = 3; trials Min: 4.03 / Avg: 4.10 / Max: 4.14; MIN: 3.89 / MAX: 11.86)

NCNN 20210525, Target: CPU - Model: mnasnet (ms, fewer is better):
  Run 1a: 3.67 (SE +/- 0.02, N = 3; trials Min: 3.64 / Avg: 3.67 / Max: 3.69; MIN: 3.50 / MAX: 11.58)
  Run 3:  3.79 (SE +/- 0.09, N = 3; trials Min: 3.68 / Avg: 3.79 / Max: 3.96; MIN: 3.51 / MAX: 12.14)
  Run 2:  3.82 (SE +/- 0.06, N = 3; trials Min: 3.71 / Avg: 3.82 / Max: 3.93; MIN: 3.57 / MAX: 11.88)

NCNN 20210525, Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
  Run 1a: 5.10 (SE +/- 0.04, N = 3; trials Min: 5.02 / Avg: 5.10 / Max: 5.17; MIN: 4.83 / MAX: 13.32)
  Run 3:  5.13 (SE +/- 0.04, N = 3; trials Min: 5.06 / Avg: 5.13 / Max: 5.21; MIN: 4.80 / MAX: 38.80)
  Run 2:  5.28 (SE +/- 0.09, N = 3; trials Min: 5.11 / Avg: 5.28 / Max: 5.42; MIN: 4.90 / MAX: 13.73)

NCNN 20210525, Target: CPU - Model: blazeface (ms, fewer is better):
  Run 3:  1.70 (SE +/- 0.00, N = 3; trials Min: 1.70 / Avg: 1.70 / Max: 1.71; MIN: 1.60 / MAX: 9.42)
  Run 1a: 1.71 (SE +/- 0.00, N = 3; trials Min: 1.70 / Avg: 1.71 / Max: 1.71; MIN: 1.62 / MAX: 8.08)
  Run 2:  1.79 (SE +/- 0.04, N = 3; trials Min: 1.73 / Avg: 1.79 / Max: 1.86; MIN: 1.65 / MAX: 9.61)

NCNN 20210525, Target: CPU - Model: googlenet (ms, fewer is better):
  Run 1a: 12.38 (SE +/- 0.06, N = 3; trials Min: 12.25 / Avg: 12.38 / Max: 12.45; MIN: 11.66 / MAX: 20.80)
  Run 2:  12.45 (SE +/- 0.01, N = 3; trials Min: 12.42 / Avg: 12.45 / Max: 12.46; MIN: 11.62 / MAX: 30.43)
  Run 3:  12.51 (SE +/- 0.17, N = 3; trials Min: 12.30 / Avg: 12.51 / Max: 12.84; MIN: 11.61 / MAX: 32.56)

NCNN 20210525, Target: CPU - Model: vgg16 (ms, fewer is better):
  Run 1a: 55.07 (SE +/- 0.20, N = 3; trials Min: 54.82 / Avg: 55.07 / Max: 55.47; MIN: 52.29 / MAX: 67.12)
  Run 2:  55.26 (SE +/- 0.21, N = 3; trials Min: 55.02 / Avg: 55.26 / Max: 55.68; MIN: 52.40 / MAX: 85.07)
  Run 3:  55.98 (SE +/- 0.90, N = 3; trials Min: 55.08 / Avg: 55.98 / Max: 57.77; MIN: 51.31 / MAX: 862.27)

NCNN 20210525, Target: CPU - Model: resnet18 (ms, fewer is better):
  Run 2:  13.52 (SE +/- 0.04, N = 3; trials Min: 13.45 / Avg: 13.52 / Max: 13.60; MIN: 12.87 / MAX: 22.24)
  Run 3:  13.61 (SE +/- 0.16, N = 3; trials Min: 13.36 / Avg: 13.61 / Max: 13.90; MIN: 12.65 / MAX: 21.63)
  Run 1a: 13.74 (SE +/- 0.16, N = 3; trials Min: 13.56 / Avg: 13.74 / Max: 14.05; MIN: 12.93 / MAX: 22.43)

NCNN 20210525, Target: CPU - Model: alexnet (ms, fewer is better):
  Run 1a: 11.07 (SE +/- 0.04, N = 3; trials Min: 11.03 / Avg: 11.07 / Max: 11.14; MIN: 10.16 / MAX: 19.67)
  Run 3:  11.13 (SE +/- 0.08, N = 3; trials Min: 11.01 / Avg: 11.13 / Max: 11.29; MIN: 10.19 / MAX: 19.44)
  Run 2:  11.20 (SE +/- 0.06, N = 3; trials Min: 11.09 / Avg: 11.20 / Max: 11.28; MIN: 10.37 / MAX: 19.33)

NCNN 20210525, Target: CPU - Model: resnet50 (ms, fewer is better):
  Run 2:  22.55 (SE +/- 0.28, N = 3; trials Min: 21.98 / Avg: 22.55 / Max: 22.87; MIN: 20.93 / MAX: 31.12)
  Run 3:  22.69 (SE +/- 0.18, N = 3; trials Min: 22.33 / Avg: 22.69 / Max: 22.88; MIN: 21.25 / MAX: 31.38)
  Run 1a: 22.80 (SE +/- 0.36, N = 3; trials Min: 22.23 / Avg: 22.80 / Max: 23.48; MIN: 20.60 / MAX: 104.70)

NCNN 20210525, Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  Run 1a: 21.48 (SE +/- 0.28, N = 3; trials Min: 20.94 / Avg: 21.48 / Max: 21.89; MIN: 19.65 / MAX: 48.26)
  Run 3:  21.91 (SE +/- 0.11, N = 3; trials Min: 21.69 / Avg: 21.91 / Max: 22.07; MIN: 19.58 / MAX: 57.14)
  Run 2:  22.00 (SE +/- 0.18, N = 3; trials Min: 21.67 / Avg: 22.00 / Max: 22.27; MIN: 20.67 / MAX: 30.95)

NCNN 20210525, Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
  Run 3:  14.75 (SE +/- 0.04, N = 3; trials Min: 14.68 / Avg: 14.75 / Max: 14.81; MIN: 13.81 / MAX: 23.13)
  Run 2:  14.78 (SE +/- 0.04, N = 3; trials Min: 14.74 / Avg: 14.78 / Max: 14.85; MIN: 13.64 / MAX: 45.78)
  Run 1a: 14.80 (SE +/- 0.06, N = 3; trials Min: 14.70 / Avg: 14.80 / Max: 14.90; MIN: 13.52 / MAX: 64.82)

NCNN 20210525, Target: CPU - Model: regnety_400m (ms, fewer is better):
  Run 3:  9.34 (SE +/- 0.07, N = 3; trials Min: 9.23 / Avg: 9.34 / Max: 9.47; MIN: 8.86 / MAX: 17.28)
  Run 1a: 9.42 (SE +/- 0.08, N = 3; trials Min: 9.28 / Avg: 9.42 / Max: 9.57; MIN: 9.01 / MAX: 26.23)
  Run 2:  9.50 (SE +/- 0.09, N = 3; trials Min: 9.35 / Avg: 9.50 / Max: 9.65; MIN: 9.03 / MAX: 17.41)