mnn ncnn xeon

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U motherboard (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on CentOS Stream 9 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208133-NE-MNNNCNNXE63
Test categories: HPC - High Performance Computing (2 tests), Machine Learning (2 tests).

Test Runs

Run  Date            Test Duration
A    August 13 2022  2 Hours, 54 Minutes
B    August 13 2022  2 Hours, 53 Minutes
C    August 13 2022  4 Hours, 30 Minutes
D    August 13 2022  4 Hours, 39 Minutes
E    August 13 2022  2 Hours, 55 Minutes

Average run duration: 3 Hours, 34 Minutes



System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor:          2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard:        Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset:            Intel Device 0998
Memory:             512GB
Disk:               7682GB INTEL SSDPF2KX076TZ
Graphics:           ASPEED
Monitor:            VE228
Network:            2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS:                 CentOS Stream 9
Kernel:             5.14.0-142.el9.x86_64 (x86_64)
Desktop:            GNOME Shell 40.10
Display Server:     X Server
Compiler:           GCC 11.3.1 20220421
File-System:        xfs
Screen Resolution:  1920x1080

System Notes
- Transparent Huge Pages: always

Compiler Notes
- GCC configured with: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl

Processor Notes
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0xd000363

Security Notes
- SELinux enabled
- itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Mitigation of Clear buffers, SMT vulnerable; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview: relative performance of runs A through E across all Mobile Neural Network and NCNN results, with run-to-run differences spanning roughly 100% to 107% (graph omitted).

mnn ncnn xeon - Results Summary (OpenBenchmarking.org; all values in ms, fewer is better)

Test                              A        B        C        D        E
MNN: mobilenetV3                  1.807    1.837    1.823    1.865    1.865
MNN: squeezenetv1.1               2.317    2.395    2.425    2.447    2.520
MNN: resnet-v2-50                 8.782    9.081    8.766    8.886    9.199
MNN: SqueezeNetV1.0               4.121    4.213    4.295    4.219    4.321
MNN: MobileNetV2_224              3.415    3.151    3.217    3.229    3.136
MNN: mobilenet-v1-1.0             2.228    2.198    2.167    2.211    2.217
MNN: inception-v3                 20.690   20.812   20.853   20.829   21.346
NCNN: CPU - mobilenet             21.91    21.68    22.35    22.85    21.85
NCNN: CPU-v2-v2 - mobilenet-v2    12.99    12.81    12.79    12.81    13.31
NCNN: CPU-v3-v3 - mobilenet-v3    12.03    12.21    12.25    12.12    12.21
NCNN: CPU - shufflenet-v2         13.47    13.72    13.47    13.45    13.63
NCNN: CPU - mnasnet               11.92    12.05    11.91    11.74    11.79
NCNN: CPU - efficientnet-b0       16.89    16.67    17.75    16.54    16.50
NCNN: CPU - blazeface             7.15     7.34     7.28     7.06     7.23
NCNN: CPU - googlenet             23.74    22.53    23.11    23.52    22.64
NCNN: CPU - vgg16                 30.91    29.05    29.63    31.20    28.68
NCNN: CPU - resnet18              13.37    13.18    13.51    13.87    13.20
NCNN: CPU - alexnet               8.89     8.51     8.53     9.12     8.62
NCNN: CPU - resnet50              24.75    24.49    25.14    25.27    24.34
NCNN: CPU - yolov4-tiny           27.80    27.25    27.28    27.38    26.74
NCNN: CPU - squeezenet_ssd        26.34    26.36    27.17    26.66    25.96
NCNN: CPU - regnety_400m          57.14    57.76    57.92    56.90    58.68
NCNN: CPU - vision_transformer    151.78   152.63   152.79   155.31   155.90
NCNN: CPU - FastestDet            15.33    14.98    15.56    14.92    14.93
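
The run-to-run spread in the overview can be reproduced from the table with a per-run geometric mean, the same statistic OpenBenchmarking.org uses to summarize result files. A minimal sketch over the seven MNN results, with the values copied from the table above (runs A and E only, for brevity):

```python
from statistics import geometric_mean

# Mean times (ms) for the seven MNN models, copied from the results table
# above; lower is better.
run_a = [1.807, 2.317, 8.782, 4.121, 3.415, 2.228, 20.690]
run_e = [1.865, 2.520, 9.199, 4.321, 3.136, 2.217, 21.346]

gm_a = geometric_mean(run_a)
gm_e = geometric_mean(run_e)
print(f"geomean A: {gm_a:.3f} ms, geomean E: {gm_e:.3f} ms, "
      f"E is {100 * (gm_e / gm_a - 1):.1f}% slower")
```

The geometric mean is preferred over the arithmetic mean here because the per-model times span two orders of magnitude, so a single slow model would otherwise dominate the summary.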

Mobile Neural Network

Mobile Neural Network 2.0 - Model: mobilenetV3 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max  Run Min/Max
A    1.807   0.026   3   1.76 / 1.84     1.72 / 4.25
B    1.837   0.023   3   1.81 / 1.88     1.78 / 1.98
C    1.823   0.019   14  1.70 / 1.92     1.67 / 2.18
D    1.865   0.018   15  1.80 / 2.08     1.77 / 4.16
E    1.865   0.024   3   1.82 / 1.91     1.79 / 2.12
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
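
Each result reports a standard error (SE) over N sampled averages, i.e. the sample standard deviation divided by the square root of N. A sketch of that calculation; the three sample values below are hypothetical, chosen only to be consistent with the reported min/avg/max for run A:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical N = 3 samples for run A of MNN mobilenetV3; the report lists
# only min 1.76 / avg 1.807 / max 1.84 with SE +/- 0.026, so these exact
# values are assumptions for illustration.
samples = [1.76, 1.82, 1.84]

avg = mean(samples)
se = stdev(samples) / sqrt(len(samples))  # standard error of the mean
print(f"avg {avg:.3f} ms, SE +/- {se:.3f}")
```

With only three samples the SE is a rough estimate, which is why the noisier MNN results were re-run up to N = 15 times.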

Mobile Neural Network 2.0 - Model: squeezenetv1.1 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max  Run Min/Max
A    2.317   0.101   3   2.20 / 2.52     2.17 / 3.61
B    2.395   0.110   3   2.20 / 2.58     2.17 / 4.44
C    2.425   0.054   14  2.14 / 2.66     2.11 / 3.40
D    2.447   0.027   15  2.33 / 2.64     2.30 / 3.94
E    2.520   0.079   3   2.36 / 2.62     2.34 / 6.70
(CXX) g++ options: same as above.

Mobile Neural Network 2.0 - Model: resnet-v2-50 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max  Run Min/Max
A    8.782   0.149   3   8.49 / 8.98     8.31 / 21.48
B    9.081   0.142   3   8.88 / 9.36     8.40 / 24.90
C    8.766   0.057   14  8.51 / 9.22     8.09 / 22.74
D    8.886   0.044   15  8.64 / 9.22     8.15 / 22.20
E    9.199   0.027   3   9.15 / 9.25     8.97 / 9.91
(CXX) g++ options: same as above.

Mobile Neural Network 2.0 - Model: SqueezeNetV1.0 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max  Run Min/Max
A    4.121   0.164   3   3.87 / 4.43     3.72 / 13.26
B    4.213   0.097   3   4.02 / 4.32     3.63 / 8.60
C    4.295   0.058   14  3.95 / 4.55     3.71 / 11.95
D    4.219   0.039   15  3.99 / 4.43     3.69 / 15.20
E    4.321   0.116   3   4.17 / 4.55     3.82 / 9.63
(CXX) g++ options: same as above.

Mobile Neural Network 2.0 - Model: MobileNetV2_224 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max  Run Min/Max
A    3.415   0.143   3   3.21 / 3.69     2.58 / 12.21
B    3.151   0.076   3   3.00 / 3.26     2.57 / 8.86
C    3.217   0.049   14  2.84 / 3.66     2.68 / 10.02
D    3.229   0.047   15  2.79 / 3.70     2.53 / 9.07
E    3.136   0.105   3   2.93 / 3.26     2.79 / 8.34
(CXX) g++ options: same as above.

Mobile Neural Network 2.0 - Model: mobilenet-v1-1.0 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max  Run Min/Max
A    2.228   0.021   3   2.20 / 2.27     2.17 / 2.38
B    2.198   0.007   3   2.18 / 2.21     2.15 / 2.45
C    2.167   0.028   13  1.87 / 2.27     1.83 / 2.37
D    2.211   0.013   15  2.15 / 2.35     2.07 / 5.55
E    2.217   0.021   3   2.19 / 2.26     2.16 / 2.33
(CXX) g++ options: same as above.

Mobile Neural Network 2.0 - Model: inception-v3 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    20.69   0.44    3   20.04 / 21.54    18.20 / 41.46
B    20.81   0.18    3   20.45 / 21.00    19.89 / 32.91
C    20.85   0.12    14  20.09 / 21.69    19.34 / 46.76
D    20.83   0.09    15  20.40 / 21.49    19.44 / 41.20
E    21.35   0.10    3   21.24 / 21.54    18.43 / 39.41
(CXX) g++ options: same as above.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    21.91   0.29    3   21.33 / 22.28    21.02 / 61.03
B    21.68   0.02    3   21.65 / 21.71    21.29 / 45.51
C    22.35   0.16    3   22.11 / 22.66    21.70 / 47.14
D    22.85   0.10    3   22.73 / 23.06    21.59 / 246.40
E    21.85   0.16    3   21.54 / 22.08    21.29 / 94.26
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    12.99   0.14    3   12.85 / 13.26    12.10 / 146.34
B    12.81   0.04    3   12.76 / 12.89    12.31 / 37.43
C    12.79   0.05    3   12.69 / 12.87    12.42 / 18.33
D    12.81   0.09    3   12.65 / 12.95    12.34 / 89.44
E    13.31   0.30    3   12.80 / 13.83    12.14 / 236.38
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    12.03   0.14    3   11.76 / 12.19    11.55 / 35.77
B    12.21   0.20    3   11.84 / 12.51    11.64 / 151.41
C    12.25   0.06    3   12.15 / 12.36    11.87 / 36.44
D    12.12   0.03    3   12.07 / 12.15    11.84 / 34.59
E    12.21   0.16    3   11.98 / 12.53    11.67 / 35.66
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    13.47   0.10    3   13.28 / 13.57    12.89 / 36.76
B    13.72   0.31    3   13.36 / 14.34    13.10 / 82.58
C    13.47   0.16    3   13.15 / 13.68    12.86 / 37.29
D    13.45   0.10    3   13.26 / 13.56    12.63 / 79.26
E    13.63   0.26    3   13.20 / 14.09    12.59 / 105.40
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    11.92   0.10    3   11.81 / 12.11    11.57 / 36.77
B    12.05   0.16    3   11.79 / 12.35    11.50 / 157.89
C    11.91   0.10    3   11.73 / 12.08    11.32 / 35.73
D    11.74   0.13    3   11.49 / 11.91    11.27 / 17.51
E    11.79   0.08    3   11.63 / 11.89    11.19 / 63.29
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    16.89   0.14    3   16.62 / 17.09    16.10 / 73.41
B    16.67   0.14    3   16.40 / 16.85    15.93 / 60.85
C    17.75   0.73    3   16.77 / 19.18    15.75 / 590.87
D    16.54   0.29    3   15.99 / 16.96    15.61 / 86.38
E    16.50   0.30    3   15.98 / 17.01    15.68 / 40.87
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better)
Run  Avg    SE (±)  N   Sample Min/Max   Run Min/Max
A    7.15   0.08    3   7.00 / 7.28      6.83 / 10.24
B    7.34   0.20    3   7.08 / 7.73      6.93 / 121.89
C    7.28   0.11    3   7.05 / 7.39      6.88 / 9.96
D    7.06   0.12    3   6.84 / 7.27      6.68 / 8.27
E    7.23   0.05    3   7.14 / 7.29      6.99 / 10.09
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    23.74   0.56    3   23.10 / 24.85    21.23 / 403.67
B    22.53   0.21    3   22.15 / 22.87    21.73 / 94.83
C    23.11   0.48    3   22.20 / 23.85    21.75 / 82.56
D    23.52   0.45    3   22.67 / 24.20    22.04 / 299.28
E    22.64   0.29    3   22.14 / 23.15    21.47 / 240.52
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    30.91   1.59    3   27.74 / 32.73    26.06 / 119.07
B    29.05   0.07    3   28.92 / 29.15    27.32 / 149.49
C    29.63   0.23    3   29.24 / 30.02    27.23 / 201.62
D    31.20   0.60    3   30.41 / 32.38    28.77 / 277.95
E    28.68   0.04    3   28.63 / 28.76    26.99 / 114.83
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    13.37   0.24    3   12.89 / 13.65    12.55 / 19.07
B    13.18   0.11    3   12.99 / 13.38    12.67 / 65.17
C    13.51   0.15    3   13.25 / 13.77    12.95 / 158.24
D    13.87   0.61    3   13.12 / 15.08    12.86 / 18.89
E    13.20   0.12    3   13.01 / 13.42    12.71 / 82.75
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better)
Run  Avg    SE (±)  N   Sample Min/Max   Run Min/Max
A    8.89   0.36    3   8.26 / 9.49      7.96 / 119.99
B    8.51   0.16    3   8.34 / 8.83      8.08 / 100.34
C    8.53   0.08    3   8.42 / 8.69      8.13 / 49.08
D    9.12   0.39    3   8.37 / 9.65      8.11 / 130.97
E    8.62   0.37    3   8.16 / 9.34      7.91 / 231.70
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    24.75   0.66    3   23.44 / 25.46    23.11 / 123.68
B    24.49   0.30    3   23.89 / 24.82    23.44 / 111.14
C    25.14   0.19    3   24.94 / 25.52    23.76 / 106.76
D    25.27   0.11    3   25.16 / 25.49    23.98 / 190.68
E    24.34   0.11    3   24.12 / 24.47    23.27 / 121.62
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    27.80   0.43    3   26.94 / 28.31    25.29 / 281.03
B    27.25   0.25    3   26.98 / 27.75    25.94 / 352.26
C    27.28   0.03    3   27.23 / 27.34    26.44 / 51.63
D    27.38   0.26    3   26.93 / 27.83    25.63 / 358.69
E    26.74   0.19    3   26.47 / 27.10    25.94 / 113.75
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    26.34   0.14    3   26.14 / 26.60    24.99 / 249.30
B    26.36   0.14    3   26.09 / 26.50    25.39 / 197.69
C    27.17   0.43    3   26.63 / 28.02    25.76 / 481.76
D    26.66   0.27    3   26.21 / 27.14    24.96 / 198.39
E    25.96   0.13    3   25.75 / 26.19    25.30 / 49.75
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    57.14   1.60    3   54.58 / 60.08    53.37 / 430.33
B    57.76   0.73    3   56.77 / 59.19    55.37 / 447.81
C    57.92   0.98    3   56.17 / 59.54    55.21 / 203.40
D    56.90   1.22    3   54.98 / 59.16    53.28 / 385.57
E    58.68   0.57    3   57.54 / 59.29    54.21 / 808.52
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better)
Run  Avg      SE (±)  N   Sample Min/Max     Run Min/Max
A    151.78   1.55    3   149.62 / 154.79    145.50 / 656.75
B    152.63   0.18    3   152.34 / 152.95    147.33 / 369.79
C    152.79   0.64    3   151.73 / 153.94    145.98 / 797.33
D    155.31   0.39    3   154.85 / 156.09    145.23 / 1014.26
E    155.90   0.33    3   155.39 / 156.51    146.82 / 812.77
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better)
Run  Avg     SE (±)  N   Sample Min/Max   Run Min/Max
A    15.33   0.14    3   15.10 / 15.59    14.81 / 39.13
B    14.98   0.20    3   14.75 / 15.38    14.48 / 39.53
C    15.56   0.45    3   14.80 / 16.37    14.29 / 72.49
D    14.92   0.22    3   14.67 / 15.35    14.41 / 20.23
E    14.93   0.09    3   14.74 / 15.05    14.47 / 37.58
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread