1280 nn

Intel Xeon E3-1280 v5 testing with an MSI Z170A SLI PLUS (MS-7998) v1.0 motherboard (2.A0 BIOS) and an ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP graphics card on Ubuntu 20.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106187-IB-1280NN86046
Tests in this result file: HPC - High Performance Computing (3 tests), Machine Learning (3 tests).

Run  Date          Test Duration
1    June 18 2021  2 Hours, 20 Minutes
2    June 18 2021  2 Hours, 20 Minutes
3    June 18 2021  2 Hours, 21 Minutes



1280 nn - System Details (identical across runs 1, 2, and 3)

Processor:          Intel Xeon E3-1280 v5 @ 4.00GHz (4 Cores / 8 Threads)
Motherboard:        MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS)
Chipset:            Intel Xeon E3-1200 v5/E3-1500
Memory:             32GB
Disk:               256GB TOSHIBA RD400
Graphics:           ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP
Audio:              Realtek ALC1150
Monitor:            VA2431
Network:            Intel I219-V
OS:                 Ubuntu 20.04
Kernel:             5.9.0-050900rc2daily20200826-generic (x86_64) 20200825
Desktop:            GNOME Shell 3.36.4
Display Server:     X Server 1.20.9
OpenGL:             4.5 Mesa 20.0.8 (LLVM 10.0.0)
Compiler:           GCC 9.3.0
File-System:        ext4
Screen Resolution:  1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe2 - Thermald 1.9.1

Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite): across the Mobile Neural Network, NCNN, and TNN suites, the three runs track within roughly 1% of each other (100% - 101% relative performance).

1280 nn - Results Summary (OpenBenchmarking.org; all values in ms; runs 1 / 2 / 3)

Mobile Neural Network (mnn):
  mobilenetV3:       2.623 / 2.627 / 2.631
  squeezenetv1.1:    4.558 / 4.607 / 4.610
  resnet-v2-50:      44.087 / 44.286 / 44.274
  SqueezeNetV1.0:    6.715 / 6.735 / 6.743
  MobileNetV2_224:   4.602 / 4.644 / 4.651
  mobilenet-v1-1.0:  4.503 / 4.508 / 4.523
  inception-v3:      51.138 / 51.606 / 51.765

NCNN (CPU):
  mobilenet:         25.96 / 25.74 / 25.87
  mobilenet-v2:      6.82 / 6.82 / 6.89
  mobilenet-v3:      5.74 / 5.74 / 5.79
  shufflenet-v2:     4.85 / 4.85 / 4.86
  mnasnet:           5.65 / 5.64 / 5.68
  efficientnet-b0:   9.41 / 9.43 / 9.49
  blazeface:         1.91 / 1.91 / 1.96
  googlenet:         19.96 / 19.95 / 20.07
  vgg16:             91.02 / 91.15 / 91.45
  resnet18:          21.98 / 22.03 / 22.02
  alexnet:           19.85 / 19.68 / 19.85
  resnet50:          40.64 / 40.60 / 40.64
  yolov4-tiny:       37.62 / 37.54 / 37.70
  squeezenet_ssd:    30.28 / 30.12 / 30.21
  regnety_400m:      11.82 / 11.85 / 11.84

TNN (CPU):
  DenseNet:          4517.977 / 4523.532 / 4527.506
  MobileNet v2:      391.269 / 389.951 / 389.911
  SqueezeNet v2:     80.914 / 80.888 / 80.974
  SqueezeNet v1.1:   341.868 / 341.414 / 340.964
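The result viewer can also report an overall geometric mean per suite. As a minimal sketch of how such an aggregate is typically computed (an illustration of the standard formula, not the Phoronix Test Suite's actual implementation):

```python
import math

def geometric_mean(values):
    """Geometric mean: n-th root of the product, computed via logs for numerical stability."""
    assert all(v > 0 for v in values), "geometric mean requires positive values"
    return math.exp(sum(math.log(v) for v in values) / len(values))

# TNN (CPU) timings from run 1 of the summary above, in ms
tnn_run1 = [4517.977, 391.269, 80.914, 341.868]
print(round(geometric_mean(tnn_run1), 2))
```

Because benchmark timings span several orders of magnitude (DenseNet vs. SqueezeNet v2 here), the geometric mean weights each test's relative change equally instead of letting the slowest test dominate, which is why result viewers prefer it to an arithmetic mean.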

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 (OpenBenchmarking.org; ms, fewer is better)

Model: mobilenetV3
  Run 1: 2.623   (SE +/- 0.012, N = 3, MIN: 2.5 / MAX: 4.03)
  Run 2: 2.627   (SE +/- 0.010, N = 3, MIN: 2.58 / MAX: 7.02)
  Run 3: 2.631   (SE +/- 0.015, N = 3, MIN: 2.58 / MAX: 4.02)

Model: squeezenetv1.1
  Run 1: 4.558   (SE +/- 0.020, N = 3, MIN: 4.47 / MAX: 7.45)
  Run 2: 4.607   (SE +/- 0.022, N = 3, MIN: 4.53 / MAX: 26.9)
  Run 3: 4.610   (SE +/- 0.021, N = 3, MIN: 4.54 / MAX: 5.72)

Model: resnet-v2-50
  Run 1: 44.09   (SE +/- 0.06, N = 3, MIN: 43.62 / MAX: 68.3)
  Run 2: 44.29   (SE +/- 0.27, N = 3, MIN: 43.58 / MAX: 65.39)
  Run 3: 44.27   (SE +/- 0.11, N = 3, MIN: 43.74 / MAX: 66.75)

Model: SqueezeNetV1.0
  Run 1: 6.715   (SE +/- 0.028, N = 3, MIN: 6.62 / MAX: 24.05)
  Run 2: 6.735   (SE +/- 0.030, N = 3, MIN: 6.64 / MAX: 9.63)
  Run 3: 6.743   (SE +/- 0.033, N = 3, MIN: 6.63 / MAX: 11.15)

Model: MobileNetV2_224
  Run 1: 4.602   (SE +/- 0.038, N = 3, MIN: 4.47 / MAX: 7.33)
  Run 2: 4.644   (SE +/- 0.003, N = 3, MIN: 4.58 / MAX: 7.38)
  Run 3: 4.651   (SE +/- 0.020, N = 3, MIN: 4.56 / MAX: 14.74)

Model: mobilenet-v1-1.0
  Run 1: 4.503   (SE +/- 0.015, N = 3, MIN: 4.46 / MAX: 7.27)
  Run 2: 4.508   (SE +/- 0.010, N = 3, MIN: 4.47 / MAX: 10.02)
  Run 3: 4.523   (SE +/- 0.007, N = 3, MIN: 4.49 / MAX: 8.9)

Model: inception-v3
  Run 1: 51.14   (SE +/- 0.34, N = 3, MIN: 50.55 / MAX: 72.62)
  Run 2: 51.61   (SE +/- 0.25, N = 3, MIN: 51.05 / MAX: 74.56)
  Run 3: 51.77   (SE +/- 0.30, N = 3, MIN: 51.05 / MAX: 74.01)

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
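Each result above reports an average with a standard error over N = 3 samples. A minimal sketch of that calculation, using the usual SE = s / sqrt(N) formula (the general statistical definition, not necessarily the exact Phoronix Test Suite code), with hypothetical sample values:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation divided by sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical three timing samples (ms) for one benchmark run
samples = [2.60, 2.62, 2.64]
print(round(statistics.mean(samples), 3))   # the average shown on the graph
print(round(standard_error(samples), 3))    # the SE +/- figure shown alongside it
```

A small SE relative to the mean, as seen throughout these results, indicates the three samples within each run were tightly clustered.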

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 (OpenBenchmarking.org; ms, fewer is better)

Target: CPU - Model: mobilenet
  Run 1: 25.96   (SE +/- 0.20, N = 3, MIN: 25.63 / MAX: 36.2)
  Run 2: 25.74   (SE +/- 0.02, N = 3, MIN: 25.62 / MAX: 28.44)
  Run 3: 25.87   (SE +/- 0.11, N = 3, MIN: 25.6 / MAX: 29.21)

Target: CPU-v2-v2 - Model: mobilenet-v2
  Run 1: 6.82    (SE +/- 0.01, N = 3, MIN: 6.73 / MAX: 9.66)
  Run 2: 6.82    (SE +/- 0.00, N = 3, MIN: 6.74 / MAX: 8.53)
  Run 3: 6.89    (SE +/- 0.06, N = 3, MIN: 6.73 / MAX: 9.5)

Target: CPU-v3-v3 - Model: mobilenet-v3
  Run 1: 5.74    (SE +/- 0.00, N = 3, MIN: 5.68 / MAX: 7.33)
  Run 2: 5.74    (SE +/- 0.00, N = 3, MIN: 5.67 / MAX: 7.22)
  Run 3: 5.79    (SE +/- 0.05, N = 3, MIN: 5.67 / MAX: 17.62)

Target: CPU - Model: shufflenet-v2
  Run 1: 4.85    (SE +/- 0.01, N = 3, MIN: 4.8 / MAX: 6.43)
  Run 2: 4.85    (SE +/- 0.00, N = 3, MIN: 4.81 / MAX: 6.29)
  Run 3: 4.86    (SE +/- 0.01, N = 3, MIN: 4.8 / MAX: 6.37)

Target: CPU - Model: mnasnet
  Run 1: 5.65    (SE +/- 0.01, N = 3, MIN: 5.58 / MAX: 7.37)
  Run 2: 5.64    (SE +/- 0.00, N = 3, MIN: 5.59 / MAX: 7.14)
  Run 3: 5.68    (SE +/- 0.03, N = 3, MIN: 5.58 / MAX: 7.37)

Target: CPU - Model: efficientnet-b0
  Run 1: 9.41    (SE +/- 0.01, N = 3, MIN: 9.35 / MAX: 11.25)
  Run 2: 9.43    (SE +/- 0.02, N = 3, MIN: 9.36 / MAX: 20.24)
  Run 3: 9.49    (SE +/- 0.08, N = 3, MIN: 9.36 / MAX: 9.8)

Target: CPU - Model: blazeface
  Run 1: 1.91    (SE +/- 0.00, N = 3, MIN: 1.88 / MAX: 2.07)
  Run 2: 1.91    (SE +/- 0.00, N = 3, MIN: 1.88 / MAX: 2.07)
  Run 3: 1.96    (SE +/- 0.06, N = 3, MIN: 1.88 / MAX: 2.22)

Target: CPU - Model: googlenet
  Run 1: 19.96   (SE +/- 0.03, N = 3, MIN: 19.87 / MAX: 33.03)
  Run 2: 19.95   (SE +/- 0.01, N = 3, MIN: 19.86 / MAX: 22.71)
  Run 3: 20.07   (SE +/- 0.13, N = 3, MIN: 19.86 / MAX: 21.48)

Target: CPU - Model: vgg16
  Run 1: 91.02   (SE +/- 0.09, N = 3, MIN: 90.68 / MAX: 94)
  Run 2: 91.15   (SE +/- 0.03, N = 3, MIN: 90.86 / MAX: 103.83)
  Run 3: 91.45   (SE +/- 0.31, N = 3, MIN: 90.83 / MAX: 158.8)

Target: CPU - Model: resnet18
  Run 1: 21.98   (SE +/- 0.01, N = 3, MIN: 21.89 / MAX: 22.64)
  Run 2: 22.03   (SE +/- 0.02, N = 3, MIN: 21.92 / MAX: 22.81)
  Run 3: 22.02   (SE +/- 0.01, N = 3, MIN: 21.92 / MAX: 23.68)

Target: CPU - Model: alexnet
  Run 1: 19.85   (SE +/- 0.15, N = 3, MIN: 19.6 / MAX: 32.37)
  Run 2: 19.68   (SE +/- 0.01, N = 3, MIN: 19.61 / MAX: 20.82)
  Run 3: 19.85   (SE +/- 0.14, N = 3, MIN: 19.59 / MAX: 30.77)

Target: CPU - Model: resnet50
  Run 1: 40.64   (SE +/- 0.04, N = 3, MIN: 40.48 / MAX: 65.35)
  Run 2: 40.60   (SE +/- 0.01, N = 3, MIN: 40.48 / MAX: 44.38)
  Run 3: 40.64   (SE +/- 0.02, N = 3, MIN: 40.46 / MAX: 53.89)

Target: CPU - Model: yolov4-tiny
  Run 1: 37.62   (SE +/- 0.15, N = 3, MIN: 37.32 / MAX: 50.74)
  Run 2: 37.54   (SE +/- 0.03, N = 3, MIN: 37.34 / MAX: 39.51)
  Run 3: 37.70   (SE +/- 0.12, N = 3, MIN: 37.42 / MAX: 46.13)

Target: CPU - Model: squeezenet_ssd
  Run 1: 30.28   (SE +/- 0.13, N = 3, MIN: 29.93 / MAX: 33.92)
  Run 2: 30.12   (SE +/- 0.01, N = 3, MIN: 29.94 / MAX: 32.08)
  Run 3: 30.21   (SE +/- 0.09, N = 3, MIN: 29.93 / MAX: 43.99)

Target: CPU - Model: regnety_400m
  Run 1: 11.82   (SE +/- 0.01, N = 3, MIN: 11.78 / MAX: 12.01)
  Run 2: 11.85   (SE +/- 0.01, N = 3, MIN: 11.79 / MAX: 12.72)
  Run 3: 11.84   (SE +/- 0.01, N = 3, MIN: 11.79 / MAX: 11.98)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 (OpenBenchmarking.org; ms, fewer is better)

Target: CPU - Model: DenseNet
  Run 1: 4517.98   (SE +/- 4.13, N = 3, MIN: 4498.13 / MAX: 4539.84)
  Run 2: 4523.53   (SE +/- 7.80, N = 3, MIN: 4491.56 / MAX: 4557.66)
  Run 3: 4527.51   (SE +/- 13.48, N = 3, MIN: 4494.26 / MAX: 4573.95)

Target: CPU - Model: MobileNet v2
  Run 1: 391.27    (SE +/- 0.30, N = 3, MIN: 390.1 / MAX: 402.52)
  Run 2: 389.95    (SE +/- 0.09, N = 3, MIN: 388.73 / MAX: 395.45)
  Run 3: 389.91    (SE +/- 0.14, N = 3, MIN: 388.91 / MAX: 392.91)

Target: CPU - Model: SqueezeNet v2
  Run 1: 80.91     (SE +/- 0.06, N = 3, MIN: 80.39 / MAX: 81.96)
  Run 2: 80.89     (SE +/- 0.06, N = 3, MIN: 80.35 / MAX: 81.76)
  Run 3: 80.97     (SE +/- 0.07, N = 3, MIN: 80.42 / MAX: 82.46)

Target: CPU - Model: SqueezeNet v1.1
  Run 1: 341.87    (SE +/- 0.48, N = 3, MIN: 340.07 / MAX: 354.64)
  Run 2: 341.41    (SE +/- 0.06, N = 3, MIN: 340.29 / MAX: 342.56)
  Run 3: 340.96    (SE +/- 0.60, N = 3, MIN: 338.98 / MAX: 347.76)

1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
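The result overview near the top of this file summarizes run-to-run consistency as relative percentages (100% - 101%). A small sketch of one such spread metric, slowest run as a percentage of the fastest (an assumed formulation for illustration, not the result viewer's exact code):

```python
def spread_percent(values):
    """Relative spread of a set of run averages: slowest as a percentage of fastest."""
    return 100.0 * max(values) / min(values)

# TNN CPU DenseNet averages from the three runs above (ms)
densenet = [4517.98, 4523.53, 4527.51]
print(round(spread_percent(densenet), 2))
```

Applied across these results, every test stays close to 100%, consistent with the overview's conclusion that the three runs are effectively interchangeable.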