nn

AMD Ryzen 9 5900X 12-Core testing with an ASUS ROG CROSSHAIR VIII HERO (3501 BIOS) motherboard and AMD Radeon RX 6800/6800 XT / 6900 16GB graphics on Ubuntu 21.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106189-PTS-NN67322042
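With the Phoronix Test Suite installed, the comparison is a single command; the install step shown is an assumption for Debian/Ubuntu-style systems and may differ on your distribution:

```shell
# Assumed install step for Debian/Ubuntu (package name may vary by distro):
# sudo apt-get install phoronix-test-suite

# Fetch result file 2106189-PTS-NN67322042 from OpenBenchmarking.org,
# run the same tests locally, and append your system to the comparison:
phoronix-test-suite benchmark 2106189-PTS-NN67322042
```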

Test Runs

Result   Date           Test Duration
1        June 18 2021   44 Minutes
1a       June 18 2021   39 Minutes
2        June 18 2021   1 Hour, 23 Minutes
3        June 18 2021   2 Hours, 5 Minutes


System Details (shared by runs 1, 1a, 2, and 3)

Processor:          AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
Motherboard:        ASUS ROG CROSSHAIR VIII HERO (3501 BIOS)
Chipset:            AMD Starship/Matisse
Memory:             16GB
Disk:               1000GB Sabrent Rocket 4.0 Plus + 2000GB
Graphics:           AMD Radeon RX 6800/6800 XT / 6900 16GB (2475/1000MHz)
Audio:              AMD Navi 21 HDMI Audio
Monitor:            ASUS VP28U
Network:            Realtek RTL8125 2.5GbE + Intel I211
OS:                 Ubuntu 21.04
Kernel:             5.13.0-051300rc6daily20210617-generic (x86_64) 20210616
Desktop:            GNOME Shell 3.38.4
Display Server:     X Server 1.20.11 + Wayland
OpenGL:             4.6 Mesa 21.2.0-devel (git-849ab4e 2021-06-17 hirsute-oibaf-ppa) (LLVM 12.0.0)
Vulkan:             1.2.180
Compiler:           GCC 10.3.0 + CUDA 11.3
File-System:        ext4
Screen Resolution:  3840x2160

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201009

Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline, IBPB: conditional, IBRS_FW, STIBP: always-on, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Results Overview (all values in ms; fewer is better)

Mobile Neural Network 1.2      1         1a      2         3
mobilenetV3                    2.129     -       2.116     2.059
squeezenetv1.1                 3.743     -       3.736     3.619
resnet-v2-50                   27.944    -       27.403    27.010
SqueezeNetV1.0                 5.057     -       5.121     5.023
MobileNetV2_224                3.277     -       3.246     3.211
mobilenet-v1-1.0               4.384     -       4.352     4.325
inception-v3                   26.103    -       26.169    25.616

NCNN 20210525 (Target: CPU)    1         1a      2         3
mobilenet                      -         12.14   12.12     12.05
v2-v2 - mobilenet-v2           -         4.10    4.16      4.13
v3-v3 - mobilenet-v3           -         3.91    4.00      3.96
shufflenet-v2                  -         4.00    4.10      4.05
mnasnet                        -         3.67    3.82      3.79
efficientnet-b0                -         5.10    5.28      5.13
blazeface                      -         1.71    1.79      1.70
googlenet                      -         12.38   12.45     12.51
vgg16                          -         55.07   55.26     55.98
resnet18                       -         13.74   13.52     13.61
alexnet                        -         11.07   11.20     11.13
resnet50                       -         22.80   22.55     22.69
yolov4-tiny                    -         21.48   22.00     21.91
squeezenet_ssd                 -         14.80   14.78     14.75
regnety_400m                   -         9.42    9.50      9.34

TNN 0.3 (Target: CPU)          1         1a      2         3
DenseNet                       2510.762  -       2504.442  2505.204
MobileNet v2                   226.928   -       226.523   224.051
SqueezeNet v2                  50.930    -       51.113    50.718
SqueezeNet v1.1                212.476   -       212.738   211.086
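OpenBenchmarking.org can also summarize a comparison with an overall geometric mean per run. As an illustration only (not the site's actual implementation), here is how a geometric mean over the MNN results that runs 2 and 3 share could be compared, using the values above:

```python
import math

# MNN timings in ms (fewer is better) for runs 2 and 3, taken from the
# results table above, in the same model order.
run2 = [2.116, 3.736, 27.403, 5.121, 3.246, 4.352, 26.169]
run3 = [2.059, 3.619, 27.010, 5.023, 3.211, 4.325, 25.616]

def geomean(values):
    """Geometric mean: exp of the average of the logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# A ratio above 1.0 means run 3 finished these workloads faster overall.
print(f"run 2: {geomean(run2):.3f} ms, run 3: {geomean(run3):.3f} ms, "
      f"ratio: {geomean(run2) / geomean(run3):.4f}")
```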

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
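Each result below is reported as an average over several runs together with a standard error ("SE +/- x, N = y") and the observed MIN/MAX. As a rough sketch (not the Phoronix Test Suite's actual implementation), these statistics can be reproduced from raw per-run timings like so:

```python
import math

def summarize(samples):
    """Return (mean, standard error of the mean, min, max)
    for a list of per-run timings in milliseconds."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction, then SE = s / sqrt(n).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    se = math.sqrt(var / n)
    return mean, se, min(samples), max(samples)

# Hypothetical timings (ms) for three runs of one model:
mean, se, lo, hi = summarize([2.05, 2.12, 2.09])
print(f"{mean:.3f} (SE +/- {se:.3f}, N = 3, MIN: {lo} / MAX: {hi})")
```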

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better)
3:   2.059   (SE +/- 0.019, N = 7; MIN: 1.88 / MAX: 10.15)
2:   2.116   (SE +/- 0.009, N = 3; MIN: 2.01 / MAX: 10.13)
1:   2.129   (SE +/- 0.027, N = 3; MIN: 1.91 / MAX: 11.33)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better)
3:   3.619   (SE +/- 0.058, N = 7; MIN: 3.25 / MAX: 12.54)
2:   3.736   (SE +/- 0.012, N = 3; MIN: 3.56 / MAX: 11.48)
1:   3.743   (SE +/- 0.068, N = 3; MIN: 3.36 / MAX: 12.79)

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better)
3:   27.01   (SE +/- 0.04, N = 7; MIN: 25.86 / MAX: 50.68)
2:   27.40   (SE +/- 0.15, N = 3; MIN: 26.03 / MAX: 50.79)
1:   27.94   (SE +/- 0.51, N = 3; MIN: 25.6 / MAX: 40.37)

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better)
3:   5.023   (SE +/- 0.049, N = 7; MIN: 4.59 / MAX: 14.33)
2:   5.121   (SE +/- 0.050, N = 3; MIN: 4.85 / MAX: 13.16)
1:   5.057   (SE +/- 0.054, N = 3; MIN: 4.64 / MAX: 33.19)

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better)
3:   3.211   (SE +/- 0.032, N = 7; MIN: 2.97 / MAX: 11)
2:   3.246   (SE +/- 0.041, N = 3; MIN: 3.07 / MAX: 10.44)
1:   3.277   (SE +/- 0.035, N = 3; MIN: 3.07 / MAX: 11.36)

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better)
3:   4.325   (SE +/- 0.022, N = 7; MIN: 4.09 / MAX: 11.49)
2:   4.352   (SE +/- 0.051, N = 3; MIN: 4.08 / MAX: 11.71)
1:   4.384   (SE +/- 0.070, N = 3; MIN: 4.07 / MAX: 13.17)

Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better)
3:   25.62   (SE +/- 0.13, N = 7; MIN: 24.49 / MAX: 42.77)
2:   26.17   (SE +/- 0.24, N = 3; MIN: 24.88 / MAX: 34.4)
1:   26.10   (SE +/- 0.23, N = 3; MIN: 24.2 / MAX: 62.16)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: mobilenet (ms, fewer is better)
3:    12.05   (SE +/- 0.02, N = 3; MIN: 11.3 / MAX: 29.68)
2:    12.12   (SE +/- 0.08, N = 3; MIN: 11.38 / MAX: 34.52)
1a:   12.14   (SE +/- 0.09, N = 3; MIN: 11.26 / MAX: 38.92)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
3:    4.13   (SE +/- 0.02, N = 3; MIN: 3.88 / MAX: 12.67)
2:    4.16   (SE +/- 0.06, N = 3; MIN: 3.87 / MAX: 12.37)
1a:   4.10   (SE +/- 0.02, N = 3; MIN: 3.86 / MAX: 12.56)

NCNN 20210525 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
3:    3.96   (SE +/- 0.02, N = 3; MIN: 3.75 / MAX: 12.14)
2:    4.00   (SE +/- 0.04, N = 3; MIN: 3.76 / MAX: 12.05)
1a:   3.91   (SE +/- 0.03, N = 3; MIN: 3.73 / MAX: 11.8)

NCNN 20210525 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
3:    4.05   (SE +/- 0.01, N = 3; MIN: 3.86 / MAX: 11.94)
2:    4.10   (SE +/- 0.03, N = 3; MIN: 3.89 / MAX: 11.86)
1a:   4.00   (SE +/- 0.04, N = 3; MIN: 3.74 / MAX: 11.71)

NCNN 20210525 - Target: CPU - Model: mnasnet (ms, fewer is better)
3:    3.79   (SE +/- 0.09, N = 3; MIN: 3.51 / MAX: 12.14)
2:    3.82   (SE +/- 0.06, N = 3; MIN: 3.57 / MAX: 11.88)
1a:   3.67   (SE +/- 0.02, N = 3; MIN: 3.5 / MAX: 11.58)

NCNN 20210525 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
3:    5.13   (SE +/- 0.04, N = 3; MIN: 4.8 / MAX: 38.8)
2:    5.28   (SE +/- 0.09, N = 3; MIN: 4.9 / MAX: 13.73)
1a:   5.10   (SE +/- 0.04, N = 3; MIN: 4.83 / MAX: 13.32)

NCNN 20210525 - Target: CPU - Model: blazeface (ms, fewer is better)
3:    1.70   (SE +/- 0.00, N = 3; MIN: 1.6 / MAX: 9.42)
2:    1.79   (SE +/- 0.04, N = 3; MIN: 1.65 / MAX: 9.61)
1a:   1.71   (SE +/- 0.00, N = 3; MIN: 1.62 / MAX: 8.08)

NCNN 20210525 - Target: CPU - Model: googlenet (ms, fewer is better)
3:    12.51   (SE +/- 0.17, N = 3; MIN: 11.61 / MAX: 32.56)
2:    12.45   (SE +/- 0.01, N = 3; MIN: 11.62 / MAX: 30.43)
1a:   12.38   (SE +/- 0.06, N = 3; MIN: 11.66 / MAX: 20.8)

NCNN 20210525 - Target: CPU - Model: vgg16 (ms, fewer is better)
3:    55.98   (SE +/- 0.90, N = 3; MIN: 51.31 / MAX: 862.27)
2:    55.26   (SE +/- 0.21, N = 3; MIN: 52.4 / MAX: 85.07)
1a:   55.07   (SE +/- 0.20, N = 3; MIN: 52.29 / MAX: 67.12)

NCNN 20210525 - Target: CPU - Model: resnet18 (ms, fewer is better)
3:    13.61   (SE +/- 0.16, N = 3; MIN: 12.65 / MAX: 21.63)
2:    13.52   (SE +/- 0.04, N = 3; MIN: 12.87 / MAX: 22.24)
1a:   13.74   (SE +/- 0.16, N = 3; MIN: 12.93 / MAX: 22.43)

NCNN 20210525 - Target: CPU - Model: alexnet (ms, fewer is better)
3:    11.13   (SE +/- 0.08, N = 3; MIN: 10.19 / MAX: 19.44)
2:    11.20   (SE +/- 0.06, N = 3; MIN: 10.37 / MAX: 19.33)
1a:   11.07   (SE +/- 0.04, N = 3; MIN: 10.16 / MAX: 19.67)

NCNN 20210525 - Target: CPU - Model: resnet50 (ms, fewer is better)
3:    22.69   (SE +/- 0.18, N = 3; MIN: 21.25 / MAX: 31.38)
2:    22.55   (SE +/- 0.28, N = 3; MIN: 20.93 / MAX: 31.12)
1a:   22.80   (SE +/- 0.36, N = 3; MIN: 20.6 / MAX: 104.7)

NCNN 20210525 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
3:    21.91   (SE +/- 0.11, N = 3; MIN: 19.58 / MAX: 57.14)
2:    22.00   (SE +/- 0.18, N = 3; MIN: 20.67 / MAX: 30.95)
1a:   21.48   (SE +/- 0.28, N = 3; MIN: 19.65 / MAX: 48.26)

NCNN 20210525 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
3:    14.75   (SE +/- 0.04, N = 3; MIN: 13.81 / MAX: 23.13)
2:    14.78   (SE +/- 0.04, N = 3; MIN: 13.64 / MAX: 45.78)
1a:   14.80   (SE +/- 0.06, N = 3; MIN: 13.52 / MAX: 64.82)

NCNN 20210525 - Target: CPU - Model: regnety_400m (ms, fewer is better)
3:    9.34   (SE +/- 0.07, N = 3; MIN: 8.86 / MAX: 17.28)
2:    9.50   (SE +/- 0.09, N = 3; MIN: 9.03 / MAX: 17.41)
1a:   9.42   (SE +/- 0.08, N = 3; MIN: 9.01 / MAX: 26.23)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
3:   2505.20   (SE +/- 1.51, N = 3; MIN: 2430 / MAX: 2579.39)
2:   2504.44   (SE +/- 3.49, N = 3; MIN: 2433.14 / MAX: 2583.96)
1:   2510.76   (SE +/- 3.60, N = 3; MIN: 2446.86 / MAX: 2649.04)
(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
3:   224.05   (SE +/- 1.38, N = 3; MIN: 218.93 / MAX: 236.84)
2:   226.52   (SE +/- 2.57, N = 3; MIN: 218.62 / MAX: 239.35)
1:   226.93   (SE +/- 0.40, N = 3; MIN: 222.06 / MAX: 239.49)

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
3:   50.72   (SE +/- 0.25, N = 3; MIN: 50.05 / MAX: 51.13)
2:   51.11   (SE +/- 0.11, N = 3; MIN: 50.74 / MAX: 51.46)
1:   50.93   (SE +/- 0.28, N = 3; MIN: 50.31 / MAX: 51.63)

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
3:   211.09   (SE +/- 0.10, N = 3; MIN: 210.04 / MAX: 211.62)
2:   212.74   (SE +/- 3.04, N = 3; MIN: 206.57 / MAX: 219.72)
1:   212.48   (SE +/- 0.82, N = 3; MIN: 210.54 / MAX: 219.75)