mnn ncnn xeon

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U motherboard (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on CentOS Stream 9 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208133-NE-MNNNCNNXE63
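
For reference, a minimal command sequence for reproducing the comparison with the Phoronix Test Suite is sketched below. The result ID comes from this page; the test-profile names pts/mnn and pts/ncnn are assumptions about how these two workloads are published on OpenBenchmarking.org, so adjust them if your PTS installation lists the profiles differently.

    # Install the two test profiles (downloads the sources and builds them locally).
    # pts/mnn and pts/ncnn are assumed profile names - verify with `phoronix-test-suite list-available-tests`.
    phoronix-test-suite install pts/mnn pts/ncnn

    # Run the same tests and compare interactively against this result file.
    phoronix-test-suite benchmark 2208133-NE-MNNNCNNXE63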

Test Runs

  Identifier   Date             Test Duration
  A            August 13 2022   2 Hours, 54 Minutes
  B            August 13 2022   2 Hours, 53 Minutes
  C            August 13 2022   4 Hours, 30 Minutes
  D            August 13 2022   4 Hours, 39 Minutes
  E            August 13 2022   2 Hours, 55 Minutes
  Average                       3 Hours, 34 Minutes


mnn ncnn xeon - System Details (OpenBenchmarking.org / Phoronix Test Suite)

  Processor:          2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
  Motherboard:        Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
  Chipset:            Intel Device 0998
  Memory:             512GB
  Disk:               7682GB INTEL SSDPF2KX076TZ
  Graphics:           ASPEED
  Monitor:            VE228
  Network:            2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
  OS:                 CentOS Stream 9
  Kernel:             5.14.0-142.el9.x86_64 (x86_64)
  Desktop:            GNOME Shell 40.10
  Display Server:     X Server
  Compiler:           GCC 11.3.1 20220421
  File-System:        xfs
  Screen Resolution:  1920x1080

Mnn Ncnn Xeon Benchmarks - System Logs
  - Transparent Huge Pages: always
  - Compiler configuration: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
  - Scaling Governor: intel_pstate powersave (EPP: balance_performance)
  - CPU Microcode: 0xd000363
  - Security: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
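
The "System Logs" details above (transparent huge pages, frequency scaling governor, CPU microcode revision, and the kernel's CPU-vulnerability mitigations) can be checked on another Linux machine with a few standard sysfs/procfs reads; a minimal sketch follows, assuming an intel_pstate-based system like this one.

    # Transparent Huge Pages policy (this system reports: always)
    cat /sys/kernel/mm/transparent_hugepage/enabled

    # Frequency scaling governor and energy-performance preference (EPP)
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference

    # CPU microcode revision as reported by the kernel
    grep -m1 microcode /proc/cpuinfo

    # Mitigation status, one line per vulnerability
    grep . /sys/devices/system/cpu/vulnerabilities/*

    # GCC build configuration (the long --enable-*/--with-* list above)
    gcc -v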

Result Overview (OpenBenchmarking.org chart): relative per-test comparison of runs A through E across the Mobile Neural Network and NCNN test cases; the detailed results follow below.

mnn ncnn xeon - Results Summary (OpenBenchmarking.org; all values in ms, fewer is better)

  Test                              A         B         C         D         E
  mnn: mobilenetV3                  1.807     1.837     1.823     1.865     1.865
  mnn: squeezenetv1.1               2.317     2.395     2.425     2.447     2.520
  mnn: resnet-v2-50                 8.782     9.081     8.766     8.886     9.199
  mnn: SqueezeNetV1.0               4.121     4.213     4.295     4.219     4.321
  mnn: MobileNetV2_224              3.415     3.151     3.217     3.229     3.136
  mnn: mobilenet-v1-1.0             2.228     2.198     2.167     2.211     2.217
  mnn: inception-v3                 20.690    20.812    20.853    20.829    21.346
  ncnn: CPU - mobilenet             21.91     21.68     22.35     22.85     21.85
  ncnn: CPU-v2-v2 - mobilenet-v2    12.99     12.81     12.79     12.81     13.31
  ncnn: CPU-v3-v3 - mobilenet-v3    12.03     12.21     12.25     12.12     12.21
  ncnn: CPU - shufflenet-v2         13.47     13.72     13.47     13.45     13.63
  ncnn: CPU - mnasnet               11.92     12.05     11.91     11.74     11.79
  ncnn: CPU - efficientnet-b0       16.89     16.67     17.75     16.54     16.50
  ncnn: CPU - blazeface             7.15      7.34      7.28      7.06      7.23
  ncnn: CPU - googlenet             23.74     22.53     23.11     23.52     22.64
  ncnn: CPU - vgg16                 30.91     29.05     29.63     31.20     28.68
  ncnn: CPU - resnet18              13.37     13.18     13.51     13.87     13.20
  ncnn: CPU - alexnet               8.89      8.51      8.53      9.12      8.62
  ncnn: CPU - resnet50              24.75     24.49     25.14     25.27     24.34
  ncnn: CPU - yolov4-tiny           27.80     27.25     27.28     27.38     26.74
  ncnn: CPU - squeezenet_ssd        26.34     26.36     27.17     26.66     25.96
  ncnn: CPU - regnety_400m          57.14     57.76     57.92     56.90     58.68
  ncnn: CPU - vision_transformer    151.78    152.63    152.79    155.31    155.90
  ncnn: CPU - FastestDet            15.33     14.98     15.56     14.92     14.93

Mobile Neural Network

Mobile Neural Network 2.0 - Model: mobilenetV3 (ms, fewer is better)
  A 1.807 (SE +/- 0.026, N = 3), C 1.823 (SE +/- 0.019, N = 14), B 1.837 (SE +/- 0.023, N = 3), D 1.865 (SE +/- 0.018, N = 15), E 1.865 (SE +/- 0.024, N = 3)
  All MNN results built with: (CXX) g++ -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.0 - Model: squeezenetv1.1 (ms, fewer is better)
  A 2.317 (SE +/- 0.101, N = 3), B 2.395 (SE +/- 0.110, N = 3), C 2.425 (SE +/- 0.054, N = 14), D 2.447 (SE +/- 0.027, N = 15), E 2.520 (SE +/- 0.079, N = 3)

Mobile Neural Network 2.0 - Model: resnet-v2-50 (ms, fewer is better)
  C 8.766 (SE +/- 0.057, N = 14), A 8.782 (SE +/- 0.149, N = 3), D 8.886 (SE +/- 0.044, N = 15), B 9.081 (SE +/- 0.142, N = 3), E 9.199 (SE +/- 0.027, N = 3)

Mobile Neural Network 2.0 - Model: SqueezeNetV1.0 (ms, fewer is better)
  A 4.121 (SE +/- 0.164, N = 3), B 4.213 (SE +/- 0.097, N = 3), D 4.219 (SE +/- 0.039, N = 15), C 4.295 (SE +/- 0.058, N = 14), E 4.321 (SE +/- 0.116, N = 3)

Mobile Neural Network 2.0 - Model: MobileNetV2_224 (ms, fewer is better)
  E 3.136 (SE +/- 0.105, N = 3), B 3.151 (SE +/- 0.076, N = 3), C 3.217 (SE +/- 0.049, N = 14), D 3.229 (SE +/- 0.047, N = 15), A 3.415 (SE +/- 0.143, N = 3)

Mobile Neural Network 2.0 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  C 2.167 (SE +/- 0.028, N = 13), B 2.198 (SE +/- 0.007, N = 3), D 2.211 (SE +/- 0.013, N = 15), E 2.217 (SE +/- 0.021, N = 3), A 2.228 (SE +/- 0.021, N = 3)

Mobile Neural Network 2.0 - Model: inception-v3 (ms, fewer is better)
  A 20.69 (SE +/- 0.44, N = 3), B 20.81 (SE +/- 0.18, N = 3), D 20.83 (SE +/- 0.09, N = 15), C 20.85 (SE +/- 0.12, N = 14), E 21.35 (SE +/- 0.10, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
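
The NCNN numbers in this file cover the framework's CPU inference path. For a quick sanity check of another machine outside the Phoronix Test Suite, ncnn ships a benchmark utility (benchncnn) that times the same stock models; the sketch below assumes the upstream repository layout and the benchncnn argument order documented in its benchmark README, so treat the exact paths and flags as assumptions.

    # Build ncnn from source (a CPU-only build is enough for these tests).
    git clone https://github.com/Tencent/ncnn
    cd ncnn && mkdir build && cd build
    cmake .. && make -j$(nproc)

    # Run the bundled benchmark from the benchmark/ directory so the *.param
    # model descriptions are found. Assumed usage:
    #   benchncnn [loop count] [num threads] [powersave] [gpu device] [cooling down]
    cd ../benchmark
    ../build/benchmark/benchncnn 10 $(nproc) 0 -1 0   # -1 = CPU only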

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better)
  B 21.68 (SE +/- 0.02), E 21.85 (SE +/- 0.16), A 21.91 (SE +/- 0.29), C 22.35 (SE +/- 0.16), D 22.85 (SE +/- 0.10); N = 3 per run
  All NCNN results built with: (CXX) g++ -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  C 12.79 (SE +/- 0.05), B 12.81 (SE +/- 0.04), D 12.81 (SE +/- 0.09), A 12.99 (SE +/- 0.14), E 13.31 (SE +/- 0.30); N = 3 per run

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  A 12.03 (SE +/- 0.14), D 12.12 (SE +/- 0.03), B 12.21 (SE +/- 0.20), E 12.21 (SE +/- 0.16), C 12.25 (SE +/- 0.06); N = 3 per run

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  D 13.45 (SE +/- 0.10), A 13.47 (SE +/- 0.10), C 13.47 (SE +/- 0.16), E 13.63 (SE +/- 0.26), B 13.72 (SE +/- 0.31); N = 3 per run

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better)
  D 11.74 (SE +/- 0.13), E 11.79 (SE +/- 0.08), C 11.91 (SE +/- 0.10), A 11.92 (SE +/- 0.10), B 12.05 (SE +/- 0.16); N = 3 per run

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  E 16.50 (SE +/- 0.30), D 16.54 (SE +/- 0.29), B 16.67 (SE +/- 0.14), A 16.89 (SE +/- 0.14), C 17.75 (SE +/- 0.73); N = 3 per run

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better)
  D 7.06 (SE +/- 0.12), A 7.15 (SE +/- 0.08), E 7.23 (SE +/- 0.05), C 7.28 (SE +/- 0.11), B 7.34 (SE +/- 0.20); N = 3 per run

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better)
  B 22.53 (SE +/- 0.21), E 22.64 (SE +/- 0.29), C 23.11 (SE +/- 0.48), D 23.52 (SE +/- 0.45), A 23.74 (SE +/- 0.56); N = 3 per run

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better)
  E 28.68 (SE +/- 0.04), B 29.05 (SE +/- 0.07), C 29.63 (SE +/- 0.23), A 30.91 (SE +/- 1.59), D 31.20 (SE +/- 0.60); N = 3 per run

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better)
  B 13.18 (SE +/- 0.11), E 13.20 (SE +/- 0.12), A 13.37 (SE +/- 0.24), C 13.51 (SE +/- 0.15), D 13.87 (SE +/- 0.61); N = 3 per run

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better)
  B 8.51 (SE +/- 0.16), C 8.53 (SE +/- 0.08), E 8.62 (SE +/- 0.37), A 8.89 (SE +/- 0.36), D 9.12 (SE +/- 0.39); N = 3 per run

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better)
  E 24.34 (SE +/- 0.11), B 24.49 (SE +/- 0.30), A 24.75 (SE +/- 0.66), C 25.14 (SE +/- 0.19), D 25.27 (SE +/- 0.11); N = 3 per run

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  E 26.74 (SE +/- 0.19), B 27.25 (SE +/- 0.25), C 27.28 (SE +/- 0.03), D 27.38 (SE +/- 0.26), A 27.80 (SE +/- 0.43); N = 3 per run

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  E 25.96 (SE +/- 0.13), A 26.34 (SE +/- 0.14), B 26.36 (SE +/- 0.14), D 26.66 (SE +/- 0.27), C 27.17 (SE +/- 0.43); N = 3 per run

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  D 56.90 (SE +/- 1.22), A 57.14 (SE +/- 1.60), B 57.76 (SE +/- 0.73), C 57.92 (SE +/- 0.98), E 58.68 (SE +/- 0.57); N = 3 per run

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better)
  A 151.78 (SE +/- 1.55), B 152.63 (SE +/- 0.18), C 152.79 (SE +/- 0.64), D 155.31 (SE +/- 0.39), E 155.90 (SE +/- 0.33); N = 3 per run

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better)
  D 14.92 (SE +/- 0.22), E 14.93 (SE +/- 0.09), B 14.98 (SE +/- 0.20), A 15.33 (SE +/- 0.14), C 15.56 (SE +/- 0.45); N = 3 per run