3900XT NN

AMD Ryzen 9 3900XT 12-Core testing with an MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.94 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009250-PTS-3900XTNN87

This result file includes tests from the following categories:

HPC - High Performance Computing (2 tests)
Machine Learning (2 tests)
NVIDIA GPU Compute (2 tests)
Vulkan Compute (2 tests)

Run Management

Result Identifier    Date                 Test Duration
1                    September 25 2020    1 Hour, 8 Minutes
2                    September 25 2020    1 Hour, 7 Minutes
3                    September 25 2020    1 Hour, 7 Minutes


3900XT NN - System Configuration (identical for result identifiers 1, 2, and 3)

Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.94 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002
Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz)
Audio: AMD Vega 10 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek Device 2600 + Realtek Device 3000 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6daily20200922-generic (x86_64) 20200921
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: amdgpu 19.1.0
OpenGL: 4.6 Mesa 20.3.0-devel (git-31f75aa 2020-08-28 focal-oibaf-ppa) (LLVM 10.0.1)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 3840x2160

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8701021

Graphics Details: GLAMOR

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of result identifiers 1, 2, and 3 for TNN, RealSR-NCNN, and NCNN; the three runs fall within roughly 100% to 101% of one another.

3900XT NN - Per-Test Results (result identifiers 1, 2, 3; ncnn and tnn values in ms, realsr-ncnn in seconds; lower is better)

Test                                        1          2          3
ncnn: Vulkan GPU - vgg16                    11.95      11.46      11.69
ncnn: CPU - mnasnet                         4.76       4.80       4.88
ncnn: Vulkan GPU - googlenet                5.78       5.66       5.71
realsr-ncnn: 4x - Yes                       63.327     64.551     63.569
ncnn: Vulkan GPU-v3-v3 - mobilenet-v3       3.55       3.50       3.49
tnn: CPU - SqueezeNet v1.1                  230.022    228.479    226.902
ncnn: CPU - efficientnet-b0                 6.54       6.58       6.62
ncnn: Vulkan GPU-v2-v2 - mobilenet-v2       2.52       2.49       2.49
ncnn: CPU - mobilenet                       16.33      16.17      16.36
ncnn: Vulkan GPU - mobilenet                7.77       7.72       7.68
tnn: CPU - MobileNet v2                     245.154    246.961    244.143
ncnn: Vulkan GPU - squeezenet               4.66       4.68       4.71
ncnn: CPU - resnet18                        16.20      16.36      16.27
ncnn: Vulkan GPU - shufflenet-v2            2.28       2.28       2.26
ncnn: CPU - shufflenet-v2                   4.82       4.78       4.80
ncnn: Vulkan GPU - alexnet                  3.92       3.94       3.91
ncnn: Vulkan GPU - mnasnet                  2.69       2.67       2.68
ncnn: CPU - squeezenet                      15.96      15.87      15.97
ncnn: CPU - vgg16                           67.41      67.82      67.67
ncnn: Vulkan GPU - efficientnet-b0          10.41      10.47      10.44
ncnn: CPU-v2-v2 - mobilenet-v2              5.36       5.33       5.35
realsr-ncnn: 4x - No                        9.845      9.797      9.838
ncnn: CPU - alexnet                         16.13      16.18      16.18
ncnn: CPU - yolov4-tiny                     28.15      28.07      28.08
ncnn: Vulkan GPU - yolov4-tiny              10.70      10.68      10.71
ncnn: CPU - googlenet                       17.17      17.18      17.14
ncnn: CPU - resnet50                        27.55      27.56      27.50
ncnn: CPU-v3-v3 - mobilenet-v3              4.71       4.71       4.70
ncnn: Vulkan GPU - resnet18                 2.10       2.10       2.10
ncnn: CPU - blazeface                       1.91       1.91       1.91
ncnn: Vulkan GPU - resnet50                 6.02       6.38       6.13
ncnn: Vulkan GPU - blazeface                0.90       0.87       0.85

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
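
To make concrete what each NCNN number below measures, here is a minimal, hedged sketch of loading a model and timing one forward pass with the NCNN C++ API. The param/bin file names, input size, and blob names ("data"/"prob") are placeholders rather than anything taken from this result file, and the actual test profile averages many iterations per model.

    // Minimal timing sketch with the NCNN C++ API (not the test profile's own harness).
    // Build roughly as: g++ -O3 ncnn_bench.cpp -lncnn -fopenmp
    #include <chrono>
    #include <cstdio>
    #include "net.h"  // ncnn

    int main()
    {
        ncnn::Net net;
        net.opt.use_vulkan_compute = false;             // set true for the "Vulkan GPU" targets
        if (net.load_param("squeezenet_v1.1.param") ||  // placeholder model files
            net.load_model("squeezenet_v1.1.bin"))
            return -1;

        ncnn::Mat in(227, 227, 3);                      // dummy input; real runs feed image pixels
        in.fill(0.5f);

        const auto t0 = std::chrono::high_resolution_clock::now();
        ncnn::Extractor ex = net.create_extractor();
        ncnn::Mat out;
        ex.input("data", in);                           // blob names depend on the model
        ex.extract("prob", out);
        const auto t1 = std::chrono::high_resolution_clock::now();

        std::printf("inference: %.2f ms\n",
                    std::chrono::duration<double, std::milli>(t1 - t0).count());
        return 0;
    }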

NCNN 20200916 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  1: 11.95 (SE +/- 0.36, N = 3) | Min/Avg/Max: 11.32 / 11.95 / 12.58 | MIN/MAX: 9.6 / 31.99
  3: 11.69 (SE +/- 0.30, N = 3) | Min/Avg/Max: 11.2 / 11.69 / 12.23 | MIN/MAX: 9.56 / 36.67
  2: 11.46 (SE +/- 0.27, N = 3) | Min/Avg/Max: 10.96 / 11.46 / 11.88 | MIN/MAX: 9.59 / 31.3
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, fewer is better)
  3: 4.88 (SE +/- 0.11, N = 3) | Min/Avg/Max: 4.76 / 4.88 / 5.09 | MIN/MAX: 4.69 / 47.76
  2: 4.80 (SE +/- 0.06, N = 3) | Min/Avg/Max: 4.72 / 4.8 / 4.91 | MIN/MAX: 4.67 / 6.45
  1: 4.76 (SE +/- 0.02, N = 3) | Min/Avg/Max: 4.73 / 4.76 / 4.78 | MIN/MAX: 4.68 / 6.21
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  1: 5.78 (SE +/- 0.12, N = 3) | Min/Avg/Max: 5.6 / 5.78 / 6.01 | MIN/MAX: 5.53 / 23.31
  3: 5.71 (SE +/- 0.06, N = 3) | Min/Avg/Max: 5.59 / 5.71 / 5.81 | MIN/MAX: 5.52 / 13.35
  2: 5.66 (SE +/- 0.06, N = 3) | Min/Avg/Max: 5.56 / 5.66 / 5.76 | MIN/MAX: 5.53 / 17.32
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

RealSR-NCNN

RealSR-NCNN is an NCNN implementation of the RealSR project (Real-World Super-Resolution via Kernel Estimation and Noise Injection), accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x using Vulkan. Learn more via the OpenBenchmarking.org test page.
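
As a rough illustration of the measurement (wall-clock seconds for one Vulkan-accelerated 4x upscale), the sketch below enables ncnn's Vulkan compute path and times a single extraction. The model files, blob names, and input size are invented placeholders; the real RealSR-NCNN tool additionally tiles large images and writes the upscaled output.

    // Rough sketch of timing a 4x super-resolution pass on the Vulkan GPU with ncnn.
    #include <chrono>
    #include <cstdio>
    #include <vector>
    #include "net.h"  // ncnn

    int main()
    {
        ncnn::Net net;
        net.opt.use_vulkan_compute = true;              // this test profile runs on the Vulkan GPU
        if (net.load_param("realsr_x4.param") ||        // placeholder model files
            net.load_model("realsr_x4.bin"))
            return -1;

        const int w = 640, h = 480;                     // stand-in for the sample image
        std::vector<unsigned char> rgb(static_cast<size_t>(w) * h * 3, 128);
        ncnn::Mat in = ncnn::Mat::from_pixels(rgb.data(), ncnn::Mat::PIXEL_RGB, w, h);

        const auto t0 = std::chrono::high_resolution_clock::now();
        ncnn::Extractor ex = net.create_extractor();
        ncnn::Mat out;                                  // expected back at 4x the input resolution
        ex.input("input", in);                          // blob names depend on the model
        ex.extract("output", out);
        const auto t1 = std::chrono::high_resolution_clock::now();

        std::printf("4x upscale: %.3f s\n",
                    std::chrono::duration<double>(t1 - t0).count());
        return 0;
    }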

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, fewer is better)
  2: 64.55 (SE +/- 0.91, N = 3) | Min/Avg/Max: 62.86 / 64.55 / 66
  3: 63.57 (SE +/- 0.78, N = 3) | Min/Avg/Max: 62.53 / 63.57 / 65.1
  1: 63.33 (SE +/- 0.49, N = 3) | Min/Avg/Max: 62.83 / 63.33 / 64.31

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  1: 3.55 (SE +/- 0.04, N = 3) | Min/Avg/Max: 3.5 / 3.55 / 3.62 | MIN/MAX: 3.47 / 18.04
  2: 3.50 (SE +/- 0.00, N = 3) | Min/Avg/Max: 3.5 / 3.5 / 3.51 | MIN/MAX: 3.47 / 4.03
  3: 3.49 (SE +/- 0.01, N = 3) | Min/Avg/Max: 3.47 / 3.49 / 3.5 | MIN/MAX: 3.46 / 3.99
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  1: 230.02 (SE +/- 3.76, N = 3) | Min/Avg/Max: 222.5 / 230.02 / 233.89 | MIN/MAX: 222.17 / 240.13
  2: 228.48 (SE +/- 0.04, N = 3) | Min/Avg/Max: 228.42 / 228.48 / 228.55 | MIN/MAX: 225.95 / 229.21
  3: 226.90 (SE +/- 0.19, N = 3) | Min/Avg/Max: 226.62 / 226.9 / 227.27 | MIN/MAX: 224.24 / 228.35
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  3: 6.62 (SE +/- 0.09, N = 3) | Min/Avg/Max: 6.48 / 6.62 / 6.8 | MIN/MAX: 6.41 / 8.49
  2: 6.58 (SE +/- 0.05, N = 3) | Min/Avg/Max: 6.51 / 6.58 / 6.67 | MIN/MAX: 6.45 / 11.26
  1: 6.54 (SE +/- 0.04, N = 3) | Min/Avg/Max: 6.46 / 6.54 / 6.6 | MIN/MAX: 6.42 / 13.95
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  1: 2.52 (SE +/- 0.02, N = 3) | Min/Avg/Max: 2.49 / 2.52 / 2.55 | MIN/MAX: 2.47 / 3.46
  3: 2.49 (SE +/- 0.01, N = 3) | Min/Avg/Max: 2.48 / 2.49 / 2.5 | MIN/MAX: 2.46 / 2.84
  2: 2.49 (SE +/- 0.00, N = 3) | Min/Avg/Max: 2.49 / 2.49 / 2.5 | MIN/MAX: 2.47 / 2.85
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, fewer is better)
  3: 16.36 (SE +/- 0.12, N = 3) | Min/Avg/Max: 16.22 / 16.36 / 16.59 | MIN/MAX: 16.1 / 16.93
  1: 16.33 (SE +/- 0.16, N = 3) | Min/Avg/Max: 16.01 / 16.33 / 16.52 | MIN/MAX: 15.8 / 17.16
  2: 16.17 (SE +/- 0.13, N = 3) | Min/Avg/Max: 15.96 / 16.17 / 16.42 | MIN/MAX: 15.85 / 16.8
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
  1: 7.77 (SE +/- 0.05, N = 3) | Min/Avg/Max: 7.69 / 7.77 / 7.85 | MIN/MAX: 5.52 / 11.8
  2: 7.72 (SE +/- 0.05, N = 3) | Min/Avg/Max: 7.67 / 7.72 / 7.82 | MIN/MAX: 5.6 / 8.83
  3: 7.68 (SE +/- 0.03, N = 3) | Min/Avg/Max: 7.63 / 7.68 / 7.71 | MIN/MAX: 5.58 / 11.76
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  2: 246.96 (SE +/- 1.34, N = 3) | Min/Avg/Max: 244.99 / 246.96 / 249.53 | MIN/MAX: 233.69 / 262.21
  1: 245.15 (SE +/- 1.13, N = 3) | Min/Avg/Max: 243.7 / 245.15 / 247.39 | MIN/MAX: 233.67 / 275.36
  3: 244.14 (SE +/- 1.92, N = 3) | Min/Avg/Max: 240.54 / 244.14 / 247.09 | MIN/MAX: 232.48 / 272.94
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: squeezenet (ms, fewer is better)
  3: 4.71 (SE +/- 0.03, N = 3) | Min/Avg/Max: 4.68 / 4.71 / 4.76 | MIN/MAX: 4.51 / 6.36
  2: 4.68 (SE +/- 0.01, N = 3) | Min/Avg/Max: 4.66 / 4.68 / 4.7 | MIN/MAX: 4.51 / 6.25
  1: 4.66 (SE +/- 0.02, N = 3) | Min/Avg/Max: 4.61 / 4.66 / 4.68 | MIN/MAX: 4.48 / 6.06
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, fewer is better)
  2: 16.36 (SE +/- 0.02, N = 3) | Min/Avg/Max: 16.32 / 16.36 / 16.4 | MIN/MAX: 16.04 / 27.68
  3: 16.27 (SE +/- 0.13, N = 3) | Min/Avg/Max: 16.14 / 16.27 / 16.52 | MIN/MAX: 16.02 / 16.66
  1: 16.20 (SE +/- 0.02, N = 3) | Min/Avg/Max: 16.17 / 16.2 / 16.24 | MIN/MAX: 16.02 / 21.18
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
  2: 2.28 (SE +/- 0.02, N = 3) | Min/Avg/Max: 2.25 / 2.28 / 2.31 | MIN/MAX: 2.24 / 6.97
  1: 2.28 (SE +/- 0.02, N = 3) | Min/Avg/Max: 2.26 / 2.28 / 2.33 | MIN/MAX: 2.25 / 3.05
  3: 2.26 (SE +/- 0.00, N = 3) | Min/Avg/Max: 2.25 / 2.26 / 2.26 | MIN/MAX: 2.24 / 2.82
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  1: 4.82 (SE +/- 0.01, N = 3) | Min/Avg/Max: 4.81 / 4.82 / 4.84 | MIN/MAX: 4.77 / 5.98
  3: 4.80 (SE +/- 0.02, N = 3) | Min/Avg/Max: 4.76 / 4.8 / 4.82 | MIN/MAX: 4.72 / 5.75
  2: 4.78 (SE +/- 0.06, N = 3) | Min/Avg/Max: 4.65 / 4.78 / 4.86 | MIN/MAX: 4.61 / 6.32
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
  2: 3.94 (SE +/- 0.01, N = 3) | Min/Avg/Max: 3.93 / 3.94 / 3.95 | MIN/MAX: 3.8 / 5.12
  1: 3.92 (SE +/- 0.01, N = 3) | Min/Avg/Max: 3.9 / 3.92 / 3.93 | MIN/MAX: 3.79 / 4.97
  3: 3.91 (SE +/- 0.00, N = 3) | Min/Avg/Max: 3.91 / 3.91 / 3.92 | MIN/MAX: 3.8 / 4.91
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
  1: 2.69 (SE +/- 0.01, N = 3) | Min/Avg/Max: 2.68 / 2.69 / 2.71 | MIN/MAX: 2.64 / 3.72
  3: 2.68 (SE +/- 0.01, N = 3) | Min/Avg/Max: 2.67 / 2.68 / 2.71 | MIN/MAX: 2.64 / 3.71
  2: 2.67 (SE +/- 0.01, N = 3) | Min/Avg/Max: 2.66 / 2.67 / 2.68 | MIN/MAX: 2.64 / 3.26
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, fewer is better)
  3: 15.97 (SE +/- 0.05, N = 3) | Min/Avg/Max: 15.89 / 15.97 / 16.07 | MIN/MAX: 15.61 / 16.77
  1: 15.96 (SE +/- 0.11, N = 3) | Min/Avg/Max: 15.74 / 15.96 / 16.11 | MIN/MAX: 15.54 / 18.64
  2: 15.87 (SE +/- 0.11, N = 3) | Min/Avg/Max: 15.69 / 15.87 / 16.07 | MIN/MAX: 15.46 / 16.57
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, fewer is better)
  2: 67.82 (SE +/- 0.29, N = 3) | Min/Avg/Max: 67.31 / 67.82 / 68.3 | MIN/MAX: 66.53 / 109.38
  3: 67.67 (SE +/- 0.16, N = 3) | Min/Avg/Max: 67.36 / 67.67 / 67.87 | MIN/MAX: 66.42 / 140.75
  1: 67.41 (SE +/- 0.12, N = 3) | Min/Avg/Max: 67.26 / 67.41 / 67.64 | MIN/MAX: 66.51 / 110.23
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
  2: 10.47 (SE +/- 0.13, N = 3) | Min/Avg/Max: 10.22 / 10.47 / 10.66 | MIN/MAX: 8.76 / 24.1
  3: 10.44 (SE +/- 0.08, N = 3) | Min/Avg/Max: 10.32 / 10.44 / 10.59 | MIN/MAX: 8.74 / 23.62
  1: 10.41 (SE +/- 0.17, N = 3) | Min/Avg/Max: 10.2 / 10.41 / 10.74 | MIN/MAX: 8.75 / 26.47
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  1: 5.36 (SE +/- 0.02, N = 3) | Min/Avg/Max: 5.33 / 5.36 / 5.4 | MIN/MAX: 5.24 / 7.13
  3: 5.35 (SE +/- 0.02, N = 3) | Min/Avg/Max: 5.32 / 5.35 / 5.39 | MIN/MAX: 5.21 / 7.22
  2: 5.33 (SE +/- 0.07, N = 3) | Min/Avg/Max: 5.2 / 5.33 / 5.43 | MIN/MAX: 5.13 / 6.51
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

RealSR-NCNN

RealSR-NCNN is an NCNN implementation of the RealSR project (Real-World Super-Resolution via Kernel Estimation and Noise Injection), accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x using Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, fewer is better)
  1: 9.845 (SE +/- 0.103, N = 3) | Min/Avg/Max: 9.71 / 9.85 / 10.05
  3: 9.838 (SE +/- 0.038, N = 3) | Min/Avg/Max: 9.78 / 9.84 / 9.91
  2: 9.797 (SE +/- 0.042, N = 3) | Min/Avg/Max: 9.75 / 9.8 / 9.88

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: alexnet (ms, fewer is better)
  3: 16.18 (SE +/- 0.02, N = 3) | Min/Avg/Max: 16.13 / 16.18 / 16.21 | MIN/MAX: 16.03 / 16.91
  2: 16.18 (SE +/- 0.02, N = 3) | Min/Avg/Max: 16.14 / 16.18 / 16.2 | MIN/MAX: 16.03 / 16.44
  1: 16.13 (SE +/- 0.04, N = 3) | Min/Avg/Max: 16.09 / 16.13 / 16.21 | MIN/MAX: 15.99 / 18.49
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  1: 28.15 (SE +/- 0.09, N = 3) | Min/Avg/Max: 27.99 / 28.15 / 28.29 | MIN/MAX: 27.83 / 28.94
  3: 28.08 (SE +/- 0.14, N = 3) | Min/Avg/Max: 27.92 / 28.08 / 28.37 | MIN/MAX: 27.74 / 30.21
  2: 28.07 (SE +/- 0.19, N = 3) | Min/Avg/Max: 27.77 / 28.07 / 28.43 | MIN/MAX: 27.62 / 37.12
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
  3: 10.71 (SE +/- 0.02, N = 3) | Min/Avg/Max: 10.67 / 10.71 / 10.74 | MIN/MAX: 10.45 / 11.19
  1: 10.70 (SE +/- 0.01, N = 3) | Min/Avg/Max: 10.67 / 10.7 / 10.72 | MIN/MAX: 10.47 / 11.01
  2: 10.68 (SE +/- 0.00, N = 2) | Min/Avg/Max: 10.68 / 10.68 / 10.68 | MIN/MAX: 10.49 / 10.95
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: googlenet (ms, fewer is better)
  2: 17.18 (SE +/- 0.16, N = 3) | Min/Avg/Max: 16.89 / 17.18 / 17.46 | MIN/MAX: 16.59 / 19.11
  1: 17.17 (SE +/- 0.20, N = 3) | Min/Avg/Max: 16.96 / 17.17 / 17.57 | MIN/MAX: 16.63 / 121.77
  3: 17.14 (SE +/- 0.15, N = 3) | Min/Avg/Max: 16.9 / 17.14 / 17.43 | MIN/MAX: 16.62 / 22.64
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, fewer is better)
  2: 27.56 (SE +/- 0.27, N = 3) | Min/Avg/Max: 27.17 / 27.56 / 28.07 | MIN/MAX: 27.02 / 33.89
  1: 27.55 (SE +/- 0.08, N = 3) | Min/Avg/Max: 27.42 / 27.55 / 27.71 | MIN/MAX: 27.15 / 40.33
  3: 27.50 (SE +/- 0.26, N = 3) | Min/Avg/Max: 27.19 / 27.5 / 28.03 | MIN/MAX: 27.04 / 38.34
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  2: 4.71 (SE +/- 0.07, N = 3) | Min/Avg/Max: 4.59 / 4.71 / 4.82 | MIN/MAX: 4.56 / 7
  1: 4.71 (SE +/- 0.01, N = 3) | Min/Avg/Max: 4.69 / 4.71 / 4.72 | MIN/MAX: 4.64 / 6.36
  3: 4.70 (SE +/- 0.01, N = 3) | Min/Avg/Max: 4.69 / 4.7 / 4.71 | MIN/MAX: 4.65 / 6.27
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
  3: 2.10 (SE +/- 0.00, N = 3) | Min/Avg/Max: 2.09 / 2.1 / 2.1 | MIN/MAX: 2.01 / 2.88
  2: 2.10 (SE +/- 0.00, N = 3) | Min/Avg/Max: 2.1 / 2.1 / 2.1 | MIN/MAX: 2.01 / 2.87
  1: 2.10 (SE +/- 0.01, N = 3) | Min/Avg/Max: 2.09 / 2.1 / 2.11 | MIN/MAX: 2.01 / 2.81
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: blazeface (ms, fewer is better)
  3: 1.91 (SE +/- 0.02, N = 3) | Min/Avg/Max: 1.89 / 1.91 / 1.96 | MIN/MAX: 1.85 / 2.6
  2: 1.91 (SE +/- 0.01, N = 3) | Min/Avg/Max: 1.89 / 1.91 / 1.93 | MIN/MAX: 1.86 / 2.01
  1: 1.91 (SE +/- 0.01, N = 3) | Min/Avg/Max: 1.89 / 1.91 / 1.92 | MIN/MAX: 1.87 / 2.43
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
  2: 6.38 (SE +/- 0.32, N = 3) | Min/Avg/Max: 6.03 / 6.38 / 7.01 | MIN/MAX: 5.85 / 22.05
  3: 6.13 (SE +/- 0.14, N = 3) | Min/Avg/Max: 5.89 / 6.13 / 6.39 | MIN/MAX: 5.85 / 23.94
  1: 6.02 (SE +/- 0.03, N = 3) | Min/Avg/Max: 5.96 / 6.02 / 6.06 | MIN/MAX: 5.83 / 13.11
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
  1: 0.90 (SE +/- 0.05, N = 3) | Min/Avg/Max: 0.85 / 0.9 / 1 | MIN/MAX: 0.84 / 1.56
  2: 0.87 (SE +/- 0.00, N = 3) | Min/Avg/Max: 0.86 / 0.87 / 0.87 | MIN/MAX: 0.84 / 1.67
  3: 0.85 (SE +/- 0.00, N = 3) | Min/Avg/Max: 0.85 / 0.85 / 0.86 | MIN/MAX: 0.83 / 1.36
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread