1280 nn

Intel Xeon E3-1280 v5 testing with an MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS) motherboard and an ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP graphics card on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running:

  phoronix-test-suite benchmark 2106187-IB-1280NN86046

Run Management

  Result Identifier | Date         | Test Duration
  1                 | June 18 2021 | 2 Hours, 20 Minutes
  2                 | June 18 2021 | 2 Hours, 20 Minutes
  3                 | June 18 2021 | 2 Hours, 21 Minutes


1280 nn System Details (identical for runs 1, 2, and 3)

  Processor:         Intel Xeon E3-1280 v5 @ 4.00GHz (4 Cores / 8 Threads)
  Motherboard:       MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS)
  Chipset:           Intel Xeon E3-1200 v5/E3-1500
  Memory:            32GB
  Disk:              256GB TOSHIBA RD400
  Graphics:          ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP
  Audio:             Realtek ALC1150
  Monitor:           VA2431
  Network:           Intel I219-V
  OS:                Ubuntu 20.04
  Kernel:            5.9.0-050900rc2daily20200826-generic (x86_64) 20200825
  Desktop:           GNOME Shell 3.36.4
  Display Server:    X Server 1.20.9
  OpenGL:            4.5 Mesa 20.0.8 (LLVM 10.0.0)
  Compiler:          GCC 9.3.0
  File-System:       ext4
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe2 - Thermald 1.9.1

Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite): runs 1, 2, and 3 scored within roughly 1% of one another (100% to 101%) across the Mobile Neural Network, NCNN, and TNN test suites.

1280 nn All Results (ms; fewer is better)

  Test                            | Run 1    | Run 2    | Run 3
  ncnn: CPU - blazeface           | 1.91     | 1.91     | 1.96
  mnn: inception-v3               | 51.138   | 51.606   | 51.765
  mnn: squeezenetv1.1             | 4.558    | 4.607    | 4.610
  mnn: MobileNetV2_224            | 4.602    | 4.644    | 4.651
  ncnn: CPU-v2-v2 - mobilenet-v2  | 6.82     | 6.82     | 6.89
  ncnn: CPU-v3-v3 - mobilenet-v3  | 5.74     | 5.74     | 5.79
  ncnn: CPU - alexnet             | 19.85    | 19.68    | 19.85
  ncnn: CPU - mobilenet           | 25.96    | 25.74    | 25.87
  ncnn: CPU - efficientnet-b0     | 9.41     | 9.43     | 9.49
  ncnn: CPU - mnasnet             | 5.65     | 5.64     | 5.68
  ncnn: CPU - googlenet           | 19.96    | 19.95    | 20.07
  ncnn: CPU - squeezenet_ssd      | 30.28    | 30.12    | 30.21
  ncnn: CPU - vgg16               | 91.02    | 91.15    | 91.45
  mnn: resnet-v2-50               | 44.087   | 44.286   | 44.274
  mnn: mobilenet-v1-1.0           | 4.503    | 4.508    | 4.523
  ncnn: CPU - yolov4-tiny         | 37.62    | 37.54    | 37.70
  mnn: SqueezeNetV1.0             | 6.715    | 6.735    | 6.743
  tnn: CPU - MobileNet v2         | 391.269  | 389.951  | 389.911
  mnn: mobilenetV3                | 2.623    | 2.627    | 2.631
  tnn: CPU - SqueezeNet v1.1      | 341.868  | 341.414  | 340.964
  ncnn: CPU - regnety_400m        | 11.82    | 11.85    | 11.84
  ncnn: CPU - resnet18            | 21.98    | 22.03    | 22.02
  tnn: CPU - DenseNet             | 4517.977 | 4523.532 | 4527.506
  ncnn: CPU - shufflenet-v2       | 4.85     | 4.85     | 4.86
  tnn: CPU - SqueezeNet v2        | 80.914   | 80.888   | 80.974
  ncnn: CPU - resnet50            | 40.64    | 40.60    | 40.64

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
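For context on what each NCNN number below measures, here is a minimal sketch of a single CPU forward pass with ncnn's standard C++ API. The model file names and the blob names "data" and "prob" are placeholders for illustration, not details taken from this result file:

    // Minimal single-image CPU inference with ncnn (placeholder model/blob names).
    #include "net.h"   // ncnn's main header

    int main()
    {
        ncnn::Net net;
        net.opt.num_threads = 8;          // e.g. match this Xeon's 8 hardware threads

        // Placeholder file names; the benchmark loads specific converted models.
        if (net.load_param("model.param") != 0 || net.load_model("model.bin") != 0)
            return 1;

        ncnn::Mat in(224, 224, 3);        // width x height x channels
        in.fill(0.5f);                    // dummy input data

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);             // assumed input blob name
        ncnn::Mat out;
        ex.extract("prob", out);          // assumed output blob name; runs the net
        return 0;
    }

Each NCNN result below is, in effect, the average latency of such a forward pass over repeated timed iterations.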

NCNN 20210525, Target: CPU - Model: blazeface (ms, fewer is better)
  Run 1: 1.91  (SE +/- 0.00, N = 3; MIN: 1.88 / MAX: 2.07)
  Run 2: 1.91  (SE +/- 0.00, N = 3; MIN: 1.88 / MAX: 2.07)
  Run 3: 1.96  (SE +/- 0.06, N = 3; MIN: 1.88 / MAX: 2.22)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
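Similarly, a minimal sketch of one CPU inference through MNN's C++ Interpreter/Session API (as in the MNN 1.x demos) may help frame the MNN latencies below; "model.mnn" and the thread count are illustrative assumptions:

    // Minimal CPU inference with MNN's Interpreter/Session API (placeholder model file).
    #include <MNN/Interpreter.hpp>
    #include <memory>

    int main()
    {
        std::shared_ptr<MNN::Interpreter> net(
            MNN::Interpreter::createFromFile("model.mnn"));   // placeholder path
        if (!net) return 1;

        MNN::ScheduleConfig config;                 // defaults to CPU execution
        config.numThread = 8;                       // assumed thread count

        MNN::Session* session = net->createSession(config);
        MNN::Tensor* input = net->getSessionInput(session, nullptr);
        (void)input;  // a real caller copies image data into this tensor first

        net->runSession(session);                   // one timed forward pass
        MNN::Tensor* output = net->getSessionOutput(session, nullptr);
        (void)output;
        return 0;
    }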

Mobile Neural Network 1.2, Model: inception-v3 (ms, fewer is better)
  Run 1: 51.14  (SE +/- 0.34, N = 3; MIN: 50.55 / MAX: 72.62)
  Run 2: 51.61  (SE +/- 0.25, N = 3; MIN: 51.05 / MAX: 74.56)
  Run 3: 51.77  (SE +/- 0.30, N = 3; MIN: 51.05 / MAX: 74.01)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: squeezenetv1.1 (ms, fewer is better)
  Run 1: 4.558  (SE +/- 0.020, N = 3; MIN: 4.47 / MAX: 7.45)
  Run 2: 4.607  (SE +/- 0.022, N = 3; MIN: 4.53 / MAX: 26.9)
  Run 3: 4.610  (SE +/- 0.021, N = 3; MIN: 4.54 / MAX: 5.72)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: MobileNetV2_224 (ms, fewer is better)
  Run 1: 4.602  (SE +/- 0.038, N = 3; MIN: 4.47 / MAX: 7.33)
  Run 2: 4.644  (SE +/- 0.003, N = 3; MIN: 4.58 / MAX: 7.38)
  Run 3: 4.651  (SE +/- 0.020, N = 3; MIN: 4.56 / MAX: 14.74)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN


NCNN 20210525, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Run 1: 6.82  (SE +/- 0.01, N = 3; MIN: 6.73 / MAX: 9.66)
  Run 2: 6.82  (SE +/- 0.00, N = 3; MIN: 6.74 / MAX: 8.53)
  Run 3: 6.89  (SE +/- 0.06, N = 3; MIN: 6.73 / MAX: 9.5)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Run 1: 5.74  (SE +/- 0.00, N = 3; MIN: 5.68 / MAX: 7.33)
  Run 2: 5.74  (SE +/- 0.00, N = 3; MIN: 5.67 / MAX: 7.22)
  Run 3: 5.79  (SE +/- 0.05, N = 3; MIN: 5.67 / MAX: 17.62)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: alexnet (ms, fewer is better)
  Run 1: 19.85  (SE +/- 0.15, N = 3; MIN: 19.6 / MAX: 32.37)
  Run 2: 19.68  (SE +/- 0.01, N = 3; MIN: 19.61 / MAX: 20.82)
  Run 3: 19.85  (SE +/- 0.14, N = 3; MIN: 19.59 / MAX: 30.77)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: mobilenet (ms, fewer is better)
  Run 1: 25.96  (SE +/- 0.20, N = 3; MIN: 25.63 / MAX: 36.2)
  Run 2: 25.74  (SE +/- 0.02, N = 3; MIN: 25.62 / MAX: 28.44)
  Run 3: 25.87  (SE +/- 0.11, N = 3; MIN: 25.6 / MAX: 29.21)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  Run 1: 9.41  (SE +/- 0.01, N = 3; MIN: 9.35 / MAX: 11.25)
  Run 2: 9.43  (SE +/- 0.02, N = 3; MIN: 9.36 / MAX: 20.24)
  Run 3: 9.49  (SE +/- 0.08, N = 3; MIN: 9.36 / MAX: 9.8)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: mnasnet (ms, fewer is better)
  Run 1: 5.65  (SE +/- 0.01, N = 3; MIN: 5.58 / MAX: 7.37)
  Run 2: 5.64  (SE +/- 0.00, N = 3; MIN: 5.59 / MAX: 7.14)
  Run 3: 5.68  (SE +/- 0.03, N = 3; MIN: 5.58 / MAX: 7.37)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: googlenet (ms, fewer is better)
  Run 1: 19.96  (SE +/- 0.03, N = 3; MIN: 19.87 / MAX: 33.03)
  Run 2: 19.95  (SE +/- 0.01, N = 3; MIN: 19.86 / MAX: 22.71)
  Run 3: 20.07  (SE +/- 0.13, N = 3; MIN: 19.86 / MAX: 21.48)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Run 1: 30.28  (SE +/- 0.13, N = 3; MIN: 29.93 / MAX: 33.92)
  Run 2: 30.12  (SE +/- 0.01, N = 3; MIN: 29.94 / MAX: 32.08)
  Run 3: 30.21  (SE +/- 0.09, N = 3; MIN: 29.93 / MAX: 43.99)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: vgg16 (ms, fewer is better)
  Run 1: 91.02  (SE +/- 0.09, N = 3; MIN: 90.68 / MAX: 94)
  Run 2: 91.15  (SE +/- 0.03, N = 3; MIN: 90.86 / MAX: 103.83)
  Run 3: 91.45  (SE +/- 0.31, N = 3; MIN: 90.83 / MAX: 158.8)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network


Mobile Neural Network 1.2, Model: resnet-v2-50 (ms, fewer is better)
  Run 1: 44.09  (SE +/- 0.06, N = 3; MIN: 43.62 / MAX: 68.3)
  Run 2: 44.29  (SE +/- 0.27, N = 3; MIN: 43.58 / MAX: 65.39)
  Run 3: 44.27  (SE +/- 0.11, N = 3; MIN: 43.74 / MAX: 66.75)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: mobilenet-v1-1.0 (ms, fewer is better)
  Run 1: 4.503  (SE +/- 0.015, N = 3; MIN: 4.46 / MAX: 7.27)
  Run 2: 4.508  (SE +/- 0.010, N = 3; MIN: 4.47 / MAX: 10.02)
  Run 3: 4.523  (SE +/- 0.007, N = 3; MIN: 4.49 / MAX: 8.9)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN


NCNN 20210525, Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  Run 1: 37.62  (SE +/- 0.15, N = 3; MIN: 37.32 / MAX: 50.74)
  Run 2: 37.54  (SE +/- 0.03, N = 3; MIN: 37.34 / MAX: 39.51)
  Run 3: 37.70  (SE +/- 0.12, N = 3; MIN: 37.42 / MAX: 46.13)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network


Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms, fewer is better)
  Run 1: 6.715  (SE +/- 0.028, N = 3; MIN: 6.62 / MAX: 24.05)
  Run 2: 6.735  (SE +/- 0.030, N = 3; MIN: 6.64 / MAX: 9.63)
  Run 3: 6.743  (SE +/- 0.033, N = 3; MIN: 6.63 / MAX: 11.15)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
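For completeness, here is a rough sketch of TNN's C++ entry points, recalled from the upstream headers rather than from this result file; the file paths, device enum, and error handling are assumptions and may differ between TNN versions:

    // Rough sketch of CPU inference setup with TNN (names per upstream headers; unverified).
    #include <tnn/core/tnn.h>
    #include <tnn/core/instance.h>
    #include <fstream>
    #include <sstream>
    #include <string>

    static std::string read_file(const char* path)  // load proto/model contents into memory
    {
        std::ifstream f(path);
        std::stringstream ss;
        ss << f.rdbuf();
        return ss.str();
    }

    int main()
    {
        TNN_NS::ModelConfig model_config;
        model_config.model_type = TNN_NS::MODEL_TYPE_TNN;
        model_config.params = { read_file("model.tnnproto"),    // placeholder paths
                                read_file("model.tnnmodel") };

        TNN_NS::TNN net;
        if (net.Init(model_config) != TNN_NS::TNN_OK) return 1;

        TNN_NS::NetworkConfig network_config;
        network_config.device_type = TNN_NS::DEVICE_X86;  // plain CPU execution, as benchmarked

        TNN_NS::Status status;
        auto instance = net.CreateInst(network_config, status);
        if (!instance || status != TNN_NS::TNN_OK) return 1;

        // Inputs would be bound via instance->SetInputMat(...) before timing, then:
        instance->Forward();                              // one forward pass
        return 0;
    }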

TNN 0.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  Run 1: 391.27  (SE +/- 0.30, N = 3; MIN: 390.1 / MAX: 402.52)
  Run 2: 389.95  (SE +/- 0.09, N = 3; MIN: 388.73 / MAX: 395.45)
  Run 3: 389.91  (SE +/- 0.14, N = 3; MIN: 388.91 / MAX: 392.91)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Mobile Neural Network


Mobile Neural Network 1.2, Model: mobilenetV3 (ms, fewer is better)
  Run 1: 2.623  (SE +/- 0.012, N = 3; MIN: 2.5 / MAX: 4.03)
  Run 2: 2.627  (SE +/- 0.010, N = 3; MIN: 2.58 / MAX: 7.02)
  Run 3: 2.631  (SE +/- 0.015, N = 3; MIN: 2.58 / MAX: 4.02)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TNN


TNN 0.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  Run 1: 341.87  (SE +/- 0.48, N = 3; MIN: 340.07 / MAX: 354.64)
  Run 2: 341.41  (SE +/- 0.06, N = 3; MIN: 340.29 / MAX: 342.56)
  Run 3: 340.96  (SE +/- 0.60, N = 3; MIN: 338.98 / MAX: 347.76)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NCNN


NCNN 20210525, Target: CPU - Model: regnety_400m (ms, fewer is better)
  Run 1: 11.82  (SE +/- 0.01, N = 3; MIN: 11.78 / MAX: 12.01)
  Run 2: 11.85  (SE +/- 0.01, N = 3; MIN: 11.79 / MAX: 12.72)
  Run 3: 11.84  (SE +/- 0.01, N = 3; MIN: 11.79 / MAX: 11.98)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525, Target: CPU - Model: resnet18 (ms, fewer is better)
  Run 1: 21.98  (SE +/- 0.01, N = 3; MIN: 21.89 / MAX: 22.64)
  Run 2: 22.03  (SE +/- 0.02, N = 3; MIN: 21.92 / MAX: 22.81)
  Run 3: 22.02  (SE +/- 0.01, N = 3; MIN: 21.92 / MAX: 23.68)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TNN


TNN 0.3, Target: CPU - Model: DenseNet (ms, fewer is better)
  Run 1: 4517.98  (SE +/- 4.13, N = 3; MIN: 4498.13 / MAX: 4539.84)
  Run 2: 4523.53  (SE +/- 7.80, N = 3; MIN: 4491.56 / MAX: 4557.66)
  Run 3: 4527.51  (SE +/- 13.48, N = 3; MIN: 4494.26 / MAX: 4573.95)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NCNN


NCNN 20210525, Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Run 1: 4.85  (SE +/- 0.01, N = 3; MIN: 4.8 / MAX: 6.43)
  Run 2: 4.85  (SE +/- 0.00, N = 3; MIN: 4.81 / MAX: 6.29)
  Run 3: 4.86  (SE +/- 0.01, N = 3; MIN: 4.8 / MAX: 6.37)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TNN


TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
  Run 1: 80.91  (SE +/- 0.06, N = 3; MIN: 80.39 / MAX: 81.96)
  Run 2: 80.89  (SE +/- 0.06, N = 3; MIN: 80.35 / MAX: 81.76)
  Run 3: 80.97  (SE +/- 0.07, N = 3; MIN: 80.42 / MAX: 82.46)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NCNN


NCNN 20210525, Target: CPU - Model: resnet50 (ms, fewer is better)
  Run 1: 40.64  (SE +/- 0.04, N = 3; MIN: 40.48 / MAX: 65.35)
  Run 2: 40.60  (SE +/- 0.01, N = 3; MIN: 40.48 / MAX: 44.38)
  Run 3: 40.64  (SE +/- 0.02, N = 3; MIN: 40.46 / MAX: 53.89)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread