TR 3960X WK

AMD Ryzen Threadripper 3960X 24-Core testing with an MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) and Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009286-PTS-TR3960XW65
The tests in this result file fall within the following categories:

Bioinformatics: 2 Tests
C/C++ Compiler Tests: 3 Tests
CPU Massive: 4 Tests
Fortran Tests: 2 Tests
HPC - High Performance Computing: 8 Tests
Machine Learning: 3 Tests
Molecular Dynamics: 2 Tests
NVIDIA GPU Compute: 3 Tests
Python Tests: 2 Tests
Scientific Computing: 5 Tests
Single-Threaded: 2 Tests

Run Management

Result Identifier   Date                Test Duration
1                   September 27 2020   4 Hours, 37 Minutes
2                   September 27 2020   4 Hours, 32 Minutes
3                   September 28 2020   4 Hours, 32 Minutes



TR 3960X WK system details (identical for runs 1, 2 and 3):

Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB (1900/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS MG28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.04
Kernel: 5.9.0-rc5-14sep-patch (x86_64) 20200914
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.6 Mesa 20.0.8 (LLVM 10.0.0)
Vulkan: 1.2.128
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 3840x2160

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8301025
Python Details: Python 3.8.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (relative performance of runs 1, 2 and 3, spanning roughly 100% to 102%): Timed MAFFT Alignment, Dolfyn, FFTE, Mlpack Benchmark, BYTE Unix Benchmark, Apache CouchDB, Timed HMMer Search, Hierarchical INTegration, NCNN, Caffe, GROMACS.
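Overview percentages like those above compare runs test-by-test; the OpenBenchmarking viewer can also collapse many tests into one overall figure via a harmonic or geometric mean. As a rough illustration of how a geometric mean combines per-test relative scores (the numbers below are made up for the example, not taken from this result file):

```python
import math

def geometric_mean(values):
    """Geometric mean: exp of the average of the logs of n positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical relative scores of one run across four tests (1.0 = baseline).
relative_scores = [1.02, 0.99, 1.01, 1.00]
print(geometric_mean(relative_scores))
```

The geometric mean is the usual choice for ratios because a test that is 2x faster and one that is 2x slower cancel out exactly, which an arithmetic mean does not guarantee.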


Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_ica (Seconds, Fewer Is Better)
  Run 2: 52.58  (SE +/- 0.64, N = 3; Min: 51.84 / Max: 53.85)
  Run 1: 53.53  (SE +/- 0.37, N = 3; Min: 52.86 / Max: 54.14)
  Run 3: 54.33  (SE +/- 0.31, N = 3; Min: 53.82 / Max: 54.88)
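Each result in this file reports "SE +/- x, N = 3": the standard error of the mean over the trials of that run. A minimal sketch of that computation, assuming SE here is the usual standard error of the mean (the trial values below are illustrative, not from this file):

```python
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / len(samples) ** 0.5

# Three hypothetical trial times (seconds) for one run of one test.
trials = [52.9, 51.8, 53.9]
print(standard_error(trials))
```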

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Run 3: 21.05  (SE +/- 0.29, N = 3; Min: 20.48 / Max: 21.43; MIN: 11.08 / MAX: 40.17)
  Run 2: 21.32  (SE +/- 0.10, N = 3; Min: 21.14 / Max: 21.47; MIN: 13.93 / MAX: 46.1)
  Run 1: 21.70  (SE +/- 0.02, N = 3; Min: 21.67 / Max: 21.73; MIN: 12.05 / MAX: 42.59)

NCNN 20200916, Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  Run 1: 41.10  (SE +/- 0.19, N = 3; Min: 40.88 / Max: 41.48; MIN: 40.58 / MAX: 42.83)
  Run 3: 41.33  (SE +/- 0.34, N = 3; Min: 40.69 / Max: 41.84; MIN: 40.29 / MAX: 125.07)
  Run 2: 42.30  (SE +/- 0.52, N = 3; Min: 41.45 / Max: 43.24; MIN: 40.29 / MAX: 44.44)

NCNN 20200916, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Run 2: 8.23  (SE +/- 0.24, N = 3; Min: 7.93 / Max: 8.71; MIN: 7 / MAX: 39.95)
  Run 1: 8.35  (SE +/- 0.06, N = 3; Min: 8.23 / Max: 8.44; MIN: 7 / MAX: 36)
  Run 3: 8.46  (SE +/- 0.18, N = 3; Min: 8.13 / Max: 8.75; MIN: 7 / MAX: 32.09)

NCNN 20200916, Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Run 1: 11.93  (SE +/- 0.25, N = 3; Min: 11.57 / Max: 12.42; MIN: 10.02 / MAX: 43.66)
  Run 2: 12.20  (SE +/- 0.27, N = 3; Min: 11.67 / Max: 12.58; MIN: 10.04 / MAX: 36.91)
  Run 3: 12.25  (SE +/- 0.07, N = 3; Min: 12.16 / Max: 12.38; MIN: 9.98 / MAX: 38.67)

NCNN 20200916, Target: Vulkan GPU - Model: squeezenet (ms, Fewer Is Better)
  Run 3: 6.13  (SE +/- 0.02, N = 3; Min: 6.09 / Max: 6.17; MIN: 5.93 / MAX: 9.53)
  Run 1: 6.22  (SE +/- 0.08, N = 3; Min: 6.1 / Max: 6.38; MIN: 5.94 / MAX: 30.39)
  Run 2: 6.27  (SE +/- 0.09, N = 3; Min: 6.09 / Max: 6.4; MIN: 5.92 / MAX: 16.22)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mlpack Benchmark


Mlpack Benchmark, Benchmark: scikit_qda (Seconds, Fewer Is Better)
  Run 1: 44.98  (SE +/- 0.32, N = 3; Min: 44.33 / Max: 45.31)
  Run 3: 45.89  (SE +/- 0.04, N = 3; Min: 45.81 / Max: 45.93)
  Run 2: 45.91  (SE +/- 0.12, N = 3; Min: 45.68 / Max: 46.03)

NCNN


NCNN 20200916, Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better)
  Run 1: 28.12  (SE +/- 0.27, N = 3; Min: 27.68 / Max: 28.62; MIN: 25.02 / MAX: 62.51)
  Run 3: 28.36  (SE +/- 0.13, N = 3; Min: 28.15 / Max: 28.6; MIN: 24.67 / MAX: 55.99)
  Run 2: 28.66  (SE +/- 0.29, N = 3; Min: 28.2 / Max: 29.19; MIN: 24.96 / MAX: 55.15)

NCNN 20200916, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Run 2: 8.64  (SE +/- 0.04, N = 3; Min: 8.6 / Max: 8.71; MIN: 8.42 / MAX: 13.4)
  Run 3: 8.69  (SE +/- 0.05, N = 3; Min: 8.62 / Max: 8.78; MIN: 8.49 / MAX: 9.24)
  Run 1: 8.79  (SE +/- 0.05, N = 3; Min: 8.69 / Max: 8.85; MIN: 8.52 / MAX: 9.56)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better)
  Run 1: 8.119  (SE +/- 0.058, N = 3; Min: 8.03 / Max: 8.23)
  Run 3: 8.187  (SE +/- 0.029, N = 3; Min: 8.16 / Max: 8.25)
  Run 2: 8.243  (SE +/- 0.032, N = 3; Min: 8.2 / Max: 8.3)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

NCNN


NCNN 20200916, Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Run 3: 17.22  (SE +/- 0.08, N = 3; Min: 17.08 / Max: 17.34; MIN: 16.94 / MAX: 18.34)
  Run 2: 17.24  (SE +/- 0.19, N = 3; Min: 16.96 / Max: 17.6; MIN: 16.79 / MAX: 18.18)
  Run 1: 17.46  (SE +/- 0.15, N = 3; Min: 17.25 / Max: 17.75; MIN: 16.88 / MAX: 98.56)

NCNN 20200916, Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Run 2: 2.93  (SE +/- 0.01, N = 3; Min: 2.92 / Max: 2.96; MIN: 2.78 / MAX: 4.11)
  Run 3: 2.94  (SE +/- 0.01, N = 3; Min: 2.93 / Max: 2.96; MIN: 2.79 / MAX: 3.29)
  Run 1: 2.97  (SE +/- 0.02, N = 3; Min: 2.94 / Max: 3.02; MIN: 2.8 / MAX: 3.43)

NCNN 20200916, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Run 2: 7.24  (SE +/- 0.04, N = 3; Min: 7.19 / Max: 7.32; MIN: 6.86 / MAX: 8.46)
  Run 1: 7.28  (SE +/- 0.01, N = 3; Min: 7.25 / Max: 7.3; MIN: 7.03 / MAX: 9.71)
  Run 3: 7.33  (SE +/- 0.03, N = 3; Min: 7.27 / Max: 7.38; MIN: 7.03 / MAX: 12.26)

NCNN 20200916, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Run 2: 6.78  (SE +/- 0.03, N = 3; Min: 6.74 / Max: 6.85; MIN: 6.64 / MAX: 12.14)
  Run 1: 6.82  (SE +/- 0.02, N = 3; Min: 6.78 / Max: 6.86; MIN: 6.6 / MAX: 8.32)
  Run 3: 6.86  (SE +/- 0.04, N = 3; Min: 6.79 / Max: 6.94; MIN: 6.67 / MAX: 9.17)

NCNN 20200916, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Run 1: 27.88  (SE +/- 0.04, N = 3; Min: 27.84 / Max: 27.96; MIN: 27.63 / MAX: 32.81)
  Run 3: 28.07  (SE +/- 0.10, N = 3; Min: 27.86 / Max: 28.19; MIN: 27.72 / MAX: 29.15)
  Run 2: 28.20  (SE +/- 0.14, N = 3; Min: 28 / Max: 28.47; MIN: 27.77 / MAX: 40.69)

NCNN 20200916, Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better)
  Run 2: 0.89  (SE +/- 0.00, N = 3; Min: 0.89 / Max: 0.9; MIN: 0.88 / MAX: 1.08)
  Run 3: 0.89  (SE +/- 0.01, N = 3; Min: 0.88 / Max: 0.9; MIN: 0.87 / MAX: 1.05)
  Run 1: 0.90  (SE +/- 0.00, N = 3; Min: 0.89 / Max: 0.9; MIN: 0.88 / MAX: 1.78)

NCNN 20200916, Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better)
  Run 1: 11.79  (SE +/- 0.24, N = 3; Min: 11.34 / Max: 12.16; MIN: 10.07 / MAX: 37.47)
  Run 2: 11.85  (SE +/- 0.39, N = 3; Min: 11.13 / Max: 12.47; MIN: 10.05 / MAX: 35.67)
  Run 3: 11.92  (SE +/- 0.32, N = 3; Min: 11.41 / Max: 12.51; MIN: 10.07 / MAX: 40.48)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code implementing modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the CFD demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527, Computational Fluid Dynamics (Seconds, Fewer Is Better)
  Run 3: 15.71  (SE +/- 0.06, N = 3; Min: 15.58 / Max: 15.78)
  Run 1: 15.82  (SE +/- 0.01, N = 3; Min: 15.8 / Max: 15.82)
  Run 2: 15.86  (SE +/- 0.01, N = 3; Min: 15.83 / Max: 15.88)

NCNN


NCNN 20200916, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Run 2: 7.28  (SE +/- 0.01, N = 3; Min: 7.27 / Max: 7.3; MIN: 7.02 / MAX: 8.28)
  Run 1: 7.31  (SE +/- 0.04, N = 3; Min: 7.24 / Max: 7.36; MIN: 7.1 / MAX: 9.34)
  Run 3: 7.34  (SE +/- 0.02, N = 3; Min: 7.31 / Max: 7.37; MIN: 7.06 / MAX: 8.58)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

FFTE

FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS, More Is Better)
  Run 2: 83979.88  (SE +/- 111.80, N = 3; Min: 83858.06 / Max: 84203.18)
  Run 3: 83465.84  (SE +/- 308.33, N = 3; Min: 82850.35 / Max: 83806.37)
  Run 1: 83303.76  (SE +/- 411.46, N = 3; Min: 82553.58 / Max: 83971.81)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

NCNN


NCNN 20200916, Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Run 1: 23.37  (SE +/- 0.11, N = 3; Min: 23.17 / Max: 23.55; MIN: 23.04 / MAX: 24.13)
  Run 3: 23.53  (SE +/- 0.12, N = 3; Min: 23.29 / Max: 23.7; MIN: 23.13 / MAX: 25.63)
  Run 2: 23.55  (SE +/- 0.02, N = 3; Min: 23.52 / Max: 23.58; MIN: 23.36 / MAX: 28.08)

NCNN 20200916, Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  Run 2: 6.53  (SE +/- 0.02, N = 3; Min: 6.48 / Max: 6.56; MIN: 6.37 / MAX: 7.64)
  Run 3: 6.57  (SE +/- 0.02, N = 3; Min: 6.55 / Max: 6.6; MIN: 6.42 / MAX: 7.23)
  Run 1: 6.58  (SE +/- 0.03, N = 3; Min: 6.54 / Max: 6.64; MIN: 6.37 / MAX: 7.69)

NCNN 20200916, Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
  Run 3: 80.10  (SE +/- 0.31, N = 3; Min: 79.56 / Max: 80.62; MIN: 70.02 / MAX: 120.26)
  Run 1: 80.33  (SE +/- 0.26, N = 3; Min: 79.96 / Max: 80.84; MIN: 70.05 / MAX: 121.13)
  Run 2: 80.68  (SE +/- 0.21, N = 3; Min: 80.37 / Max: 81.09; MIN: 70.8 / MAX: 121.57)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mlpack Benchmark


Mlpack Benchmark, Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better)
  Run 2: 1.43  (SE +/- 0.01, N = 3; Min: 1.41 / Max: 1.45)
  Run 3: 1.43  (SE +/- 0.01, N = 3; Min: 1.43 / Max: 1.44)
  Run 1: 1.44  (SE +/- 0.01, N = 3; Min: 1.42 / Max: 1.45)

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6, Computational Test: Dhrystone 2 (LPS, More Is Better)
  Run 3: 46091339.8  (SE +/- 715922.77, N = 3; Min: 44678040.1 / Max: 46996922.3)
  Run 1: 46013391.3  (SE +/- 536306.21, N = 6; Min: 43623340.3 / Max: 47039177.3)
  Run 2: 45773092.9  (SE +/- 211256.10, N = 3; Min: 45364860.1 / Max: 46071532.7)

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better)
  Run 1: 634925  (SE +/- 490.79, N = 3; Min: 633946 / Max: 635476)
  Run 2: 636843  (SE +/- 1432.41, N = 3; Min: 635074 / Max: 639679)
  Run 3: 638848  (SE +/- 563.79, N = 3; Min: 638196 / Max: 639971)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

NCNN


NCNN 20200916, Target: CPU - Model: alexnet (ms, Fewer Is Better)
  Run 1: 11.49  (SE +/- 0.07, N = 3; Min: 11.37 / Max: 11.6; MIN: 11.33 / MAX: 12)
  Run 3: 11.53  (SE +/- 0.13, N = 3; Min: 11.29 / Max: 11.74; MIN: 11.25 / MAX: 15.96)
  Run 2: 11.56  (SE +/- 0.12, N = 3; Min: 11.32 / Max: 11.71; MIN: 11.26 / MAX: 12.58)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.1.1, Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, Fewer Is Better)
  Run 2: 107.56  (SE +/- 0.44, N = 3; Min: 107.04 / Max: 108.43)
  Run 3: 108.11  (SE +/- 0.15, N = 3; Min: 107.91 / Max: 108.41)
  Run 1: 108.15  (SE +/- 0.55, N = 3; Min: 107.27 / Max: 109.16)
  1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

Caffe


Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better)
  Run 3: 317413  (SE +/- 800.55, N = 3; Min: 316118 / Max: 318876)
  Run 1: 318121  (SE +/- 136.00, N = 3; Min: 317974 / Max: 318393)
  Run 2: 319129  (SE +/- 472.70, N = 3; Min: 318322 / Max: 319959)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

NCNN


NCNN 20200916, Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better)
  Run 3: 3.81  (SE +/- 0.00, N = 3; Min: 3.8 / Max: 3.81; MIN: 3.74 / MAX: 4.33)
  Run 2: 3.82  (SE +/- 0.00, N = 3; Min: 3.82 / Max: 3.82; MIN: 3.75 / MAX: 4.28)
  Run 1: 3.83  (SE +/- 0.01, N = 3; Min: 3.81 / Max: 3.84; MIN: 3.75 / MAX: 4.35)

NCNN 20200916, Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  Run 1: 13.89  (SE +/- 0.05, N = 3; Min: 13.81 / Max: 13.97; MIN: 13.68 / MAX: 15.18)
  Run 2: 13.94  (SE +/- 0.15, N = 3; Min: 13.66 / Max: 14.15; MIN: 13.53 / MAX: 14.99)
  Run 3: 13.96  (SE +/- 0.16, N = 3; Min: 13.65 / Max: 14.13; MIN: 13.5 / MAX: 15.28)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Caffe


Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  Run 2: 63525  (SE +/- 97.00, N = 3; Min: 63350 / Max: 63685)
  Run 3: 63700  (SE +/- 154.26, N = 3; Min: 63536 / Max: 64008)
  Run 1: 63797  (SE +/- 143.57, N = 3; Min: 63520 / Max: 64001)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Mlpack Benchmark


Mlpack Benchmark, Benchmark: scikit_svm (Seconds, Fewer Is Better)
  Run 3: 19.67  (SE +/- 0.05, N = 3; Min: 19.59 / Max: 19.75)
  Run 2: 19.69  (SE +/- 0.07, N = 3; Min: 19.55 / Max: 19.78)
  Run 1: 19.75  (SE +/- 0.05, N = 3; Min: 19.67 / Max: 19.86)

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, More Is Better)
  Run 3: 388597973.07  (SE +/- 333985.70, N = 3; Min: 388221605.35 / Max: 389264068.9)
  Run 1: 387229070.98  (SE +/- 224641.82, N = 3; Min: 386872797.88 / Max: 387644260.17)
  Run 2: 387185459.43  (SE +/- 145428.42, N = 3; Min: 387014133.96 / Max: 387474675.5)
  1. (CC) gcc options: -O3 -march=native -lm

Caffe


Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  Run 1: 158702  (SE +/- 235.22, N = 3; Min: 158233 / Max: 158965)
  Run 2: 158891  (SE +/- 34.07, N = 3; Min: 158832 / Max: 158950)
  Run 3: 159254  (SE +/- 63.49, N = 3; Min: 159132 / Max: 159345)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, Fewer Is Better)
  Run 1: 131.08  (SE +/- 0.15, N = 3; Min: 130.78 / Max: 131.28)
  Run 2: 131.43  (SE +/- 0.06, N = 3; Min: 131.3 / Max: 131.51)
  Run 3: 131.51  (SE +/- 0.20, N = 3; Min: 131.11 / Max: 131.77)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Caffe


Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better)
  Run 1: 1585367  (SE +/- 339.92, N = 3; Min: 1584820 / Max: 1585990)
  Run 3: 1586757  (SE +/- 2136.95, N = 3; Min: 1582580 / Max: 1589630)
  Run 2: 1589890  (SE +/- 2111.28, N = 3; Min: 1585740 / Max: 1592640)

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better)
  Run 1: 127209  (SE +/- 174.78, N = 3; Min: 126913 / Max: 127518)
  Run 2: 127488  (SE +/- 239.20, N = 3; Min: 127056 / Max: 127882)
  Run 3: 127527  (SE +/- 241.08, N = 3; Min: 127128 / Max: 127961)

1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

NCNN


NCNN 20200916, Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  Run 2: 16.39  (SE +/- 0.13, N = 3; Min: 16.17 / Max: 16.63; MIN: 15.99 / MAX: 17.08)
  Run 1: 16.40  (SE +/- 0.07, N = 3; Min: 16.26 / Max: 16.5; MIN: 16.11 / MAX: 17.71)
  Run 3: 16.43  (SE +/- 0.13, N = 3; Min: 16.21 / Max: 16.66; MIN: 16.01 / MAX: 17.7)

NCNN 20200916, Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Run 2: 17.75  (SE +/- 0.33, N = 3; Min: 17.08 / Max: 18.08; MIN: 16.84 / MAX: 18.95)
  Run 1: 17.76  (SE +/- 0.22, N = 3; Min: 17.33 / Max: 17.99; MIN: 17.14 / MAX: 19.41)
  Run 3: 17.78  (SE +/- 0.34, N = 3; Min: 17.22 / Max: 18.4; MIN: 17.02 / MAX: 54.98)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day, More Is Better)
  Run 1: 2.529  (SE +/- 0.003, N = 3; Min: 2.53 / Max: 2.53)
  Run 2: 2.528  (SE +/- 0.002, N = 3; Min: 2.53 / Max: 2.53)
  Run 3: 2.527  (SE +/- 0.002, N = 3; Min: 2.52 / Max: 2.53)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

NCNN


NCNN 20200916, Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better)
  Run 1: 4.44  (SE +/- 0.00, N = 3; Min: 4.43 / Max: 4.44; MIN: 4.29 / MAX: 5.12)
  Run 2: 4.44  (SE +/- 0.01, N = 3; Min: 4.43 / Max: 4.45; MIN: 4.29 / MAX: 4.8)
  Run 3: 4.44  (SE +/- 0.01, N = 3; Min: 4.43 / Max: 4.46; MIN: 4.29 / MAX: 9.71)

NCNN 20200916, Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Run 1: 3.20  (SE +/- 0.00, N = 3; Min: 3.19 / Max: 3.2; MIN: 3.14 / MAX: 3.51)
  Run 2: 3.20  (SE +/- 0.00, N = 3; Min: 3.19 / Max: 3.2; MIN: 3.14 / MAX: 4.02)
  Run 3: 3.20  (SE +/- 0.00, N = 3; Min: 3.19 / Max: 3.2; MIN: 3.15 / MAX: 4.67)

NCNN 20200916, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Run 1: 4.35  (SE +/- 0.01, N = 3; Min: 4.34 / Max: 4.36; MIN: 4.18 / MAX: 4.71)
  Run 2: 4.35  (SE +/- 0.00, N = 3; Min: 4.35 / Max: 4.36; MIN: 4.16 / MAX: 4.71)
  Run 3: 4.35  (SE +/- 0.01, N = 3; Min: 4.34 / Max: 4.37; MIN: 4.18 / MAX: 4.92)

NCNN 20200916, Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)
  Run 2: 8.25  (SE +/- 0.38, N = 3; Min: 7.66 / Max: 8.97; MIN: 6.93 / MAX: 44.59)
  Run 3: 8.41  (SE +/- 0.41, N = 3; Min: 7.61 / Max: 8.94; MIN: 6.92 / MAX: 34.2)
  Run 1: 8.63  (SE +/- 0.10, N = 3; Min: 8.47 / Max: 8.82; MIN: 6.94 / MAX: 36.76)

NCNN 20200916, Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)
  Run 1: 9.60  (SE +/- 0.66, N = 3; Min: 8.83 / Max: 10.91; MIN: 7.7 / MAX: 35.6)
  Run 2: 9.79  (SE +/- 0.43, N = 3; Min: 9.05 / Max: 10.53; MIN: 7.28 / MAX: 27.36)
  Run 3: 10.22  (SE +/- 0.43, N = 3; Min: 9.36 / Max: 10.74; MIN: 6.48 / MAX: 44.78)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread