1280 nn

Intel Xeon E3-1280 v5 testing with an MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS) and ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106187-IB-1280NN86046
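If you prefer to script the comparison rather than type the command interactively, a minimal Python sketch is shown below. It only wraps the documented "phoronix-test-suite benchmark <result-id>" invocation from this file; it assumes the phoronix-test-suite binary is on PATH and does not attempt to configure batch mode.

    # Minimal sketch: invoke the same comparison run from Python instead of a shell.
    # Assumes the phoronix-test-suite binary is on PATH; the result ID is the one
    # published with this result file.
    import subprocess

    RESULT_ID = "2106187-IB-1280NN86046"

    def run_comparison(result_id: str) -> int:
        """Launch 'phoronix-test-suite benchmark <result-id>' and return its exit code."""
        completed = subprocess.run(
            ["phoronix-test-suite", "benchmark", result_id],
            check=False,  # keep going so the exit code can be inspected by the caller
        )
        return completed.returncode

    if __name__ == "__main__":
        raise SystemExit(run_comparison(RESULT_ID))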

Test Runs

Result Identifier    Date            Test Duration
1                    June 18 2021    2 Hours, 20 Minutes
2                    June 18 2021    2 Hours, 20 Minutes
3                    June 18 2021    2 Hours, 21 Minutes



System Details (identical for runs 1, 2 and 3)

Processor: Intel Xeon E3-1280 v5 @ 4.00GHz (4 Cores / 8 Threads)
Motherboard: MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS)
Chipset: Intel Xeon E3-1200 v5/E3-1500
Memory: 32GB
Disk: 256GB TOSHIBA RD400
Graphics: ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP
Audio: Realtek ALC1150
Monitor: VA2431
Network: Intel I219-V
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc2daily20200826-generic (x86_64) 20200825
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.9
OpenGL: 4.5 Mesa 20.0.8 (LLVM 10.0.0)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe2 - Thermald 1.9.1

Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite): across the Mobile Neural Network, NCNN, and TNN test suites, the three runs land within roughly 100% to 101% of one another.

1280 nn - Per-Test Results Summary (all values in ms; fewer is better)

Test                              Run 1       Run 2       Run 3
MNN: mobilenetV3                  2.623       2.627       2.631
MNN: squeezenetv1.1               4.558       4.607       4.610
MNN: resnet-v2-50                 44.087      44.286      44.274
MNN: SqueezeNetV1.0               6.715       6.735       6.743
MNN: MobileNetV2_224              4.602       4.644       4.651
MNN: mobilenet-v1-1.0             4.503       4.508       4.523
MNN: inception-v3                 51.138      51.606      51.765
NCNN: CPU - mobilenet             25.96       25.74       25.87
NCNN: CPU-v2-v2 - mobilenet-v2    6.82        6.82        6.89
NCNN: CPU-v3-v3 - mobilenet-v3    5.74        5.74        5.79
NCNN: CPU - shufflenet-v2         4.85        4.85        4.86
NCNN: CPU - mnasnet               5.65        5.64        5.68
NCNN: CPU - efficientnet-b0       9.41        9.43        9.49
NCNN: CPU - blazeface             1.91        1.91        1.96
NCNN: CPU - googlenet             19.96       19.95       20.07
NCNN: CPU - vgg16                 91.02       91.15       91.45
NCNN: CPU - resnet18              21.98       22.03       22.02
NCNN: CPU - alexnet               19.85       19.68       19.85
NCNN: CPU - resnet50              40.64       40.60       40.64
NCNN: CPU - yolov4-tiny           37.62       37.54       37.70
NCNN: CPU - squeezenet_ssd        30.28       30.12       30.21
NCNN: CPU - regnety_400m          11.82       11.85       11.84
TNN: CPU - DenseNet               4517.977    4523.532    4527.506
TNN: CPU - MobileNet v2           391.269     389.951     389.911
TNN: CPU - SqueezeNet v2          80.914      80.888      80.974
TNN: CPU - SqueezeNet v1.1        341.868     341.414     340.964
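The overview above reports the three runs within roughly 100% to 101% of each other. As an illustration of how such relative figures can be derived from the summary table, here is a minimal sketch that normalizes each test's time against Run 1 and takes a geometric mean; the exact aggregation used by OpenBenchmarking.org is not spelled out in this file, so treat the method as an assumption. Only a few representative tests from the table are included.

    # Minimal sketch: derive overview-style "relative performance" figures from the
    # summary table above. Assumes a geometric mean over per-test ratios normalized
    # against Run 1; the exact OpenBenchmarking.org aggregation is an assumption here.
    from math import prod

    # (Run 1, Run 2, Run 3) timings in ms for a few representative tests from the table.
    TIMINGS_MS = {
        "MNN: mobilenetV3": (2.623, 2.627, 2.631),
        "NCNN: CPU - vgg16": (91.02, 91.15, 91.45),
        "TNN: CPU - DenseNet": (4517.977, 4523.532, 4527.506),
    }

    def geometric_mean(values):
        return prod(values) ** (1.0 / len(values))

    def relative_time(run_index: int) -> float:
        """Geometric mean of each test's time relative to Run 1 (1.00 == same speed)."""
        ratios = [runs[run_index] / runs[0] for runs in TIMINGS_MS.values()]
        return geometric_mean(ratios)

    for i, label in enumerate(("Run 1", "Run 2", "Run 3")):
        print(f"{label}: {relative_time(i) * 100:.1f}% of Run 1's time")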

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
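Each result below is reported as an average time with an "SE +/-" value over N = 3 trials. As a reference for how that kind of figure is computed, here is a minimal sketch of a standard-error-of-the-mean calculation; the per-trial timings in it are illustrative placeholders, not data taken from this result file.

    # Minimal sketch: compute a mean and standard error of the mean from N = 3
    # repeated timings, the same shape of statistic as the "SE +/-" values below.
    # The sample timings are illustrative placeholders, not from this result file.
    from statistics import mean, stdev
    from math import sqrt

    def mean_and_se(samples):
        """Return (mean, standard error), where SE = sample std dev / sqrt(N)."""
        n = len(samples)
        return mean(samples), stdev(samples) / sqrt(n)

    trial_times_ms = [2.61, 2.63, 2.65]  # hypothetical per-trial timings
    avg, se = mean_and_se(trial_times_ms)
    print(f"{avg:.3f} ms (SE +/- {se:.3f}, N = {len(trial_times_ms)})")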

Common build flags for the Mobile Neural Network 1.2 binaries: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better, N = 3)
  Run 1: 2.623 (SE +/- 0.012, MIN 2.5 / MAX 4.03)
  Run 2: 2.627 (SE +/- 0.010, MIN 2.58 / MAX 7.02)
  Run 3: 2.631 (SE +/- 0.015, MIN 2.58 / MAX 4.02)

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better, N = 3)
  Run 1: 4.558 (SE +/- 0.020, MIN 4.47 / MAX 7.45)
  Run 2: 4.607 (SE +/- 0.022, MIN 4.53 / MAX 26.9)
  Run 3: 4.610 (SE +/- 0.021, MIN 4.54 / MAX 5.72)

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better, N = 3)
  Run 1: 44.09 (SE +/- 0.06, MIN 43.62 / MAX 68.3)
  Run 2: 44.29 (SE +/- 0.27, MIN 43.58 / MAX 65.39)
  Run 3: 44.27 (SE +/- 0.11, MIN 43.74 / MAX 66.75)

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better, N = 3)
  Run 1: 6.715 (SE +/- 0.028, MIN 6.62 / MAX 24.05)
  Run 2: 6.735 (SE +/- 0.030, MIN 6.64 / MAX 9.63)
  Run 3: 6.743 (SE +/- 0.033, MIN 6.63 / MAX 11.15)

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better, N = 3)
  Run 1: 4.602 (SE +/- 0.038, MIN 4.47 / MAX 7.33)
  Run 2: 4.644 (SE +/- 0.003, MIN 4.58 / MAX 7.38)
  Run 3: 4.651 (SE +/- 0.020, MIN 4.56 / MAX 14.74)

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better, N = 3)
  Run 1: 4.503 (SE +/- 0.015, MIN 4.46 / MAX 7.27)
  Run 2: 4.508 (SE +/- 0.010, MIN 4.47 / MAX 10.02)
  Run 3: 4.523 (SE +/- 0.007, MIN 4.49 / MAX 8.9)

Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better, N = 3)
  Run 1: 51.14 (SE +/- 0.34, MIN 50.55 / MAX 72.62)
  Run 2: 51.61 (SE +/- 0.25, MIN 51.05 / MAX 74.56)
  Run 3: 51.77 (SE +/- 0.30, MIN 51.05 / MAX 74.01)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
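The CPU figures in this section come from ncnn's benchmark models. For readers who want to reproduce similar numbers outside the Phoronix Test Suite, here is a minimal sketch that drives ncnn's bundled benchncnn tool from Python. The binary path and the positional arguments (loop count, thread count, power-save mode, GPU device, cooling-down interval) are assumptions based on typical benchncnn usage, not details recorded in this result file.

    # Minimal sketch: run ncnn's bundled benchncnn tool and capture its per-model
    # timings. The binary path and positional arguments are assumptions about
    # typical benchncnn usage, not values recorded in this result file.
    import subprocess

    BENCHNCNN = "./build/benchmark/benchncnn"  # hypothetical build location

    def run_benchncnn(loops=8, threads=8, powersave=0, gpu_device=-1, cooldown=0) -> str:
        """Run benchncnn on the CPU (gpu_device = -1) and return its raw text output."""
        cmd = [BENCHNCNN, str(loops), str(threads), str(powersave),
               str(gpu_device), str(cooldown)]
        completed = subprocess.run(cmd, capture_output=True, text=True, check=True)
        # benchncnn may write its report to stdout or stderr depending on the build,
        # so return both streams.
        return completed.stdout + completed.stderr

    if __name__ == "__main__":
        print(run_benchncnn(threads=8, gpu_device=-1))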

Common build flags for the NCNN 20210525 binaries: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: mobilenet (ms, fewer is better, N = 3)
  Run 1: 25.96 (SE +/- 0.20, MIN 25.63 / MAX 36.2)
  Run 2: 25.74 (SE +/- 0.02, MIN 25.62 / MAX 28.44)
  Run 3: 25.87 (SE +/- 0.11, MIN 25.6 / MAX 29.21)

NCNN 20210525 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better, N = 3)
  Run 1: 6.82 (SE +/- 0.01, MIN 6.73 / MAX 9.66)
  Run 2: 6.82 (SE +/- 0.00, MIN 6.74 / MAX 8.53)
  Run 3: 6.89 (SE +/- 0.06, MIN 6.73 / MAX 9.5)

NCNN 20210525 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better, N = 3)
  Run 1: 5.74 (SE +/- 0.00, MIN 5.68 / MAX 7.33)
  Run 2: 5.74 (SE +/- 0.00, MIN 5.67 / MAX 7.22)
  Run 3: 5.79 (SE +/- 0.05, MIN 5.67 / MAX 17.62)

NCNN 20210525 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better, N = 3)
  Run 1: 4.85 (SE +/- 0.01, MIN 4.8 / MAX 6.43)
  Run 2: 4.85 (SE +/- 0.00, MIN 4.81 / MAX 6.29)
  Run 3: 4.86 (SE +/- 0.01, MIN 4.8 / MAX 6.37)

NCNN 20210525 - Target: CPU - Model: mnasnet (ms, fewer is better, N = 3)
  Run 1: 5.65 (SE +/- 0.01, MIN 5.58 / MAX 7.37)
  Run 2: 5.64 (SE +/- 0.00, MIN 5.59 / MAX 7.14)
  Run 3: 5.68 (SE +/- 0.03, MIN 5.58 / MAX 7.37)

NCNN 20210525 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better, N = 3)
  Run 1: 9.41 (SE +/- 0.01, MIN 9.35 / MAX 11.25)
  Run 2: 9.43 (SE +/- 0.02, MIN 9.36 / MAX 20.24)
  Run 3: 9.49 (SE +/- 0.08, MIN 9.36 / MAX 9.8)

NCNN 20210525 - Target: CPU - Model: blazeface (ms, fewer is better, N = 3)
  Run 1: 1.91 (SE +/- 0.00, MIN 1.88 / MAX 2.07)
  Run 2: 1.91 (SE +/- 0.00, MIN 1.88 / MAX 2.07)
  Run 3: 1.96 (SE +/- 0.06, MIN 1.88 / MAX 2.22)

NCNN 20210525 - Target: CPU - Model: googlenet (ms, fewer is better, N = 3)
  Run 1: 19.96 (SE +/- 0.03, MIN 19.87 / MAX 33.03)
  Run 2: 19.95 (SE +/- 0.01, MIN 19.86 / MAX 22.71)
  Run 3: 20.07 (SE +/- 0.13, MIN 19.86 / MAX 21.48)

NCNN 20210525 - Target: CPU - Model: vgg16 (ms, fewer is better, N = 3)
  Run 1: 91.02 (SE +/- 0.09, MIN 90.68 / MAX 94)
  Run 2: 91.15 (SE +/- 0.03, MIN 90.86 / MAX 103.83)
  Run 3: 91.45 (SE +/- 0.31, MIN 90.83 / MAX 158.8)

NCNN 20210525 - Target: CPU - Model: resnet18 (ms, fewer is better, N = 3)
  Run 1: 21.98 (SE +/- 0.01, MIN 21.89 / MAX 22.64)
  Run 2: 22.03 (SE +/- 0.02, MIN 21.92 / MAX 22.81)
  Run 3: 22.02 (SE +/- 0.01, MIN 21.92 / MAX 23.68)

NCNN 20210525 - Target: CPU - Model: alexnet (ms, fewer is better, N = 3)
  Run 1: 19.85 (SE +/- 0.15, MIN 19.6 / MAX 32.37)
  Run 2: 19.68 (SE +/- 0.01, MIN 19.61 / MAX 20.82)
  Run 3: 19.85 (SE +/- 0.14, MIN 19.59 / MAX 30.77)

NCNN 20210525 - Target: CPU - Model: resnet50 (ms, fewer is better, N = 3)
  Run 1: 40.64 (SE +/- 0.04, MIN 40.48 / MAX 65.35)
  Run 2: 40.60 (SE +/- 0.01, MIN 40.48 / MAX 44.38)
  Run 3: 40.64 (SE +/- 0.02, MIN 40.46 / MAX 53.89)

NCNN 20210525 - Target: CPU - Model: yolov4-tiny (ms, fewer is better, N = 3)
  Run 1: 37.62 (SE +/- 0.15, MIN 37.32 / MAX 50.74)
  Run 2: 37.54 (SE +/- 0.03, MIN 37.34 / MAX 39.51)
  Run 3: 37.70 (SE +/- 0.12, MIN 37.42 / MAX 46.13)

NCNN 20210525 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better, N = 3)
  Run 1: 30.28 (SE +/- 0.13, MIN 29.93 / MAX 33.92)
  Run 2: 30.12 (SE +/- 0.01, MIN 29.94 / MAX 32.08)
  Run 3: 30.21 (SE +/- 0.09, MIN 29.93 / MAX 43.99)

NCNN 20210525 - Target: CPU - Model: regnety_400m (ms, fewer is better, N = 3)
  Run 1: 11.82 (SE +/- 0.01, MIN 11.78 / MAX 12.01)
  Run 2: 11.85 (SE +/- 0.01, MIN 11.79 / MAX 12.72)
  Run 3: 11.84 (SE +/- 0.01, MIN 11.79 / MAX 11.98)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

Common build flags for the TNN 0.3 binaries: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better, N = 3)
  Run 1: 4517.98 (SE +/- 4.13, MIN 4498.13 / MAX 4539.84)
  Run 2: 4523.53 (SE +/- 7.80, MIN 4491.56 / MAX 4557.66)
  Run 3: 4527.51 (SE +/- 13.48, MIN 4494.26 / MAX 4573.95)

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better, N = 3)
  Run 1: 391.27 (SE +/- 0.30, MIN 390.1 / MAX 402.52)
  Run 2: 389.95 (SE +/- 0.09, MIN 388.73 / MAX 395.45)
  Run 3: 389.91 (SE +/- 0.14, MIN 388.91 / MAX 392.91)

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better, N = 3)
  Run 1: 80.91 (SE +/- 0.06, MIN 80.39 / MAX 81.96)
  Run 2: 80.89 (SE +/- 0.06, MIN 80.35 / MAX 81.76)
  Run 3: 80.97 (SE +/- 0.07, MIN 80.42 / MAX 82.46)

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better, N = 3)
  Run 1: 341.87 (SE +/- 0.48, MIN 340.07 / MAX 354.64)
  Run 2: 341.41 (SE +/- 0.06, MIN 340.29 / MAX 342.56)
  Run 3: 340.96 (SE +/- 0.60, MIN 338.98 / MAX 347.76)