res.txt

2 x Intel Xeon Gold 6244 tested on a Dell 060K5C (2.4.1 BIOS) with an NVIDIA Quadro GV100 32GB, running Ubuntu 20.04.6 LTS, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402069-NE-RESTXT08482

Test suites represented: HPC - High Performance Computing (2 tests), Machine Learning (2 tests), NVIDIA GPU Compute (2 tests).

Test Runs

  Result Identifier   Date Run      Test Duration
  2024-02-05 13:53    February 05
  2024-02-05 13:58    February 05   15 Hours, 3 Minutes
  2024-02-05 18:03    February 05
  2024-02-05 18:36    February 05
  2024-02-06 08:56    February 06   16 Hours, 53 Minutes

  Accumulated Test Duration: 1 Day, 16 Hours


OpenBenchmarking.org - Phoronix Test Suite

  Processor:         2 x Intel Xeon Gold 6244 @ 4.40GHz (16 Cores / 32 Threads)
  Motherboard:       Dell 060K5C (2.4.1 BIOS)
  Memory:            128GB
  Disk:              PM981a NVMe SAMSUNG 2048GB + 4 x 8002GB TOSHIBA MG06ACA8
  Graphics:          NVIDIA Quadro GV100 32GB
  OS:                Ubuntu 20.04.6 LTS
  Kernel:            3.10.0-1160.95.1.el7.x86_64 (x86_64)
  Display Driver:    NVIDIA
  Vulkan:            1.1.182
  Compiler:          GCC 9.4.0 + CUDA 12.0
  File-System:       xfs
  Screen Resolution: 800x600

System Notes:
  - Transparent Huge Pages: always
  - GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-9QDOt0/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - 2024-02-05 13:53: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003604
  - 2024-02-05 13:58: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003604
  - 2024-02-05 18:03: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x5003604
  - 2024-02-05 18:36: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x5003604
  - 2024-02-06 08:56: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x5003604
  - Python 3.8.10

Results Overview: this file contains NCNN 20230517 results (CNN models on Vulkan GPU and CPU targets) and PlaidML results (ResNet 50 inference and training on CPU) across the run identifiers 2024-02-05 13:53, 2024-02-05 13:58, 2024-02-05 18:03, 2024-02-05 18:36, and 2024-02-06 08:56. Per-test result details follow.
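When condensing many per-test results like these into a single figure of merit, benchmark tooling conventionally uses the geometric mean, since it treats relative gains and losses symmetrically. A minimal sketch (the speedup numbers below are hypothetical, not taken from this result file):

```python
import math

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n positive values,
    computed in log space to avoid overflow."""
    assert all(v > 0 for v in values), "geometric mean requires positive values"
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test speedups of one run relative to another:
# a 2x win and a 2x loss cancel under the geometric mean.
speedups = [2.0, 0.5, 1.0]
print(geometric_mean(speedups))  # -> 1.0
```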

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
  2024-02-05 13:58: 7.52 (SE +/- 0.06, N = 15; MIN: 6.49 / MAX: 70.78)
  2024-02-05 18:36: 7.29 (SE +/- 0.11, N = 8; MIN: 6.09 / MAX: 9.72)
  2024-02-06 08:56: 7.21 (SE +/- 0.03, N = 9; MIN: 6.9 / MAX: 9.44)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
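The SE figures reported with each result are standard errors of the mean over N runs; a common definition (assumed here) is the sample standard deviation divided by the square root of N. A minimal sketch with made-up timings, not values from this result file:

```python
import statistics

def mean_and_se(samples):
    """Return (mean, standard error of the mean).

    Assumes SE = sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    return mean, se

# Hypothetical per-run latencies in ms
timings = [7.4, 7.6, 7.5, 7.3, 7.7]
m, se = mean_and_se(timings)
print(f"{m:.2f} (SE +/- {se:.2f}, N = {len(timings)})")  # -> 7.50 (SE +/- 0.07, N = 5)
```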

NCNN 20230517 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  2024-02-05 13:58: 16.68 (SE +/- 0.13, N = 15; MIN: 14.68 / MAX: 41.63)
  2024-02-05 18:36: 16.22 (SE +/- 0.28, N = 8; MIN: 13.66 / MAX: 19.49)
  2024-02-06 08:56: 16.08 (SE +/- 0.06, N = 9; MIN: 15.44 / MAX: 19.02)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
  2024-02-05 13:58: 10.01 (SE +/- 0.11, N = 15; MIN: 8.29 / MAX: 66.34)
  2024-02-05 18:36: 9.85 (SE +/- 0.18, N = 8; MIN: 8.54 / MAX: 12.61)
  2024-02-06 08:56: 9.69 (SE +/- 0.04, N = 9; MIN: 9.24 / MAX: 12.4)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
  2024-02-05 18:36: 6.83 (SE +/- 0.05, N = 8; MIN: 6.38 / MAX: 9.35)
  2024-02-05 13:58: 6.65 (SE +/- 0.05, N = 15; MIN: 5.83 / MAX: 17.95)
  2024-02-06 08:56: 6.64 (SE +/- 0.02, N = 8; MIN: 6.38 / MAX: 44.69)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  2024-02-05 18:36: 8.53 (SE +/- 0.14, N = 9; MIN: 7.16 / MAX: 13.3)
  2024-02-05 13:58: 8.47 (SE +/- 0.10, N = 15; MIN: 7.07 / MAX: 42.9)
  2024-02-06 08:56: 8.37 (SE +/- 0.07, N = 11; MIN: 8.01 / MAX: 11.4)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, more is better)
  2024-02-06 08:56: 5.03 (SE +/- 0.02, N = 3)

PlaidML - FP16: No - Mode: Training - Network: ResNet 50 - Device: CPU (FPS, more is better)
  2024-02-06 08:56: 0.37 (SE +/- 0.00, N = 3)
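To compare these throughput numbers against the latency-based NCNN results, FPS can be converted to milliseconds per frame; a quick sketch using the two values reported above:

```python
def fps_to_ms(fps):
    """Convert throughput in frames per second to milliseconds per frame."""
    return 1000.0 / fps

# The PlaidML ResNet 50 CPU results reported above
print(f"{fps_to_ms(5.03):.1f} ms/frame")  # inference: -> 198.8 ms/frame
print(f"{fps_to_ms(0.37):.0f} ms/step")   # training:  -> 2703 ms/step
```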

The following PlaidML configurations did not produce a result in the 2024-02-06 08:56 run:

FP16: Yes - Mode: Inference - Network: NASNet Large - Device: CPU
FP16: Yes - Mode: Inference - Network: Inception V3 - Device: CPU
FP16: Yes - Mode: Inference - Network: DenseNet 201 - Device: CPU
FP16: Yes - Mode: Training - Network: NASNet Large - Device: CPU
FP16: Yes - Mode: Training - Network: Inception V3 - Device: CPU
FP16: Yes - Mode: Training - Network: DenseNet 201 - Device: CPU
FP16: No - Mode: Inference - Network: NASNet Large - Device: CPU
FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU
FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU
FP16: No - Mode: Training - Network: NASNet Large - Device: CPU
FP16: No - Mode: Training - Network: Inception V3 - Device: CPU
FP16: No - Mode: Training - Network: DenseNet 201 - Device: CPU
FP16: Yes - Mode: Inference - Network: ResNet 50 - Device: CPU
FP16: Yes - Mode: Inference - Network: Mobilenet - Device: CPU
FP16: Yes - Mode: Inference - Network: IMDB LSTM - Device: CPU
FP16: Yes - Mode: Training - Network: ResNet 50 - Device: CPU
FP16: Yes - Mode: Training - Network: Mobilenet - Device: CPU
FP16: Yes - Mode: Training - Network: IMDB LSTM - Device: CPU
FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU
FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU

The following PlaidML configurations did not produce a result in the 2024-02-05 13:58, 2024-02-05 18:36, or 2024-02-06 08:56 runs:

FP16: No - Mode: Training - Network: Mobilenet - Device: CPU
FP16: No - Mode: Training - Network: IMDB LSTM - Device: CPU
FP16: Yes - Mode: Inference - Network: VGG19 - Device: CPU
FP16: Yes - Mode: Inference - Network: VGG16 - Device: CPU
FP16: Yes - Mode: Training - Network: VGG19 - Device: CPU
FP16: Yes - Mode: Training - Network: VGG16 - Device: CPU
FP16: No - Mode: Inference - Network: VGG19 - Device: CPU
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
FP16: No - Mode: Training - Network: VGG19 - Device: CPU
FP16: No - Mode: Training - Network: VGG16 - Device: CPU

NCNN


NCNN 20230517 - Target: Vulkan GPU - Model: FastestDet (ms, fewer is better)
  2024-02-05 18:36: 654.73 (SE +/- 284.19, N = 8)
  2024-02-05 13:58: 8.16 (SE +/- 0.29, N = 15)
  2024-02-06 08:56: 7.32 (SE +/- 0.31, N = 9)
  MIN: 5.88 / MAX: 1709.7
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: vision_transformer (ms, fewer is better)
  2024-02-05 18:36: 1263.09 (SE +/- 261.92, N = 8)
  2024-02-06 08:56: 84.98 (SE +/- 7.30, N = 9)
  2024-02-05 13:58: 68.48 (SE +/- 0.79, N = 15)
  MIN: 59.44 / MAX: 1769.56
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
  2024-02-05 18:36: 112.44 (SE +/- 91.36, N = 8; MIN: 18.72 / MAX: 8799.8)
  2024-02-05 13:58: 21.47 (SE +/- 0.21, N = 15; MIN: 18.78 / MAX: 94.51)
  2024-02-06 08:56: 20.71 (SE +/- 0.04, N = 9; MIN: 20.11 / MAX: 57.59)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better)
  2024-02-05 18:36: 244.36 (SE +/- 228.09, N = 8)
  2024-02-05 13:58: 15.46 (SE +/- 0.25, N = 15)
  2024-02-06 08:56: 14.93 (SE +/- 0.41, N = 9)
  MIN: 13.25 / MAX: 1854.12
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
  2024-02-05 18:36: 421.96 (SE +/- 120.86, N = 8)
  2024-02-06 08:56: 38.80 (SE +/- 3.53, N = 9)
  2024-02-05 13:58: 31.54 (SE +/- 0.99, N = 15)
  MIN: 26.32 / MAX: 796.07
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
  2024-02-05 18:36: 565.39 (SE +/- 236.61, N = 8)
  2024-02-05 13:58: 18.15 (SE +/- 0.16, N = 15)
  2024-02-06 08:56: 18.03 (SE +/- 0.37, N = 9)
  MIN: 16.76 / MAX: 1675.59
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  2024-02-05 18:36: 501.67 (SE +/- 94.33, N = 8)
  2024-02-06 08:56: 85.16 (SE +/- 8.40, N = 9)
  2024-02-05 13:58: 43.35 (SE +/- 0.55, N = 15)
  MIN: 38.99 / MAX: 680.92; MIN: 38.78 / MAX: 657.39
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
  2024-02-05 13:58: 3.01 (SE +/- 0.04, N = 15; MIN: 2.56 / MAX: 5.33)
  2024-02-06 08:56: 2.99 (SE +/- 0.01, N = 9; MIN: 2.86 / MAX: 5.34)
  2024-02-05 18:36: 2.94 (SE +/- 0.10, N = 8; MIN: 2.2 / MAX: 5.14)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
  2024-02-05 13:58: 8.69 (SE +/- 0.12, N = 15; MIN: 7.5 / MAX: 90.56)
  2024-02-05 18:36: 8.46 (SE +/- 0.18, N = 8; MIN: 7.17 / MAX: 12.2)
  2024-02-06 08:56: 8.21 (SE +/- 0.03, N = 9; MIN: 7.9 / MAX: 14.11)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
  2024-02-05 13:58: 6.46 (SE +/- 0.06, N = 15; MIN: 5.63 / MAX: 121.66)
  2024-02-05 18:36: 6.35 (SE +/- 0.16, N = 8; MIN: 5.01 / MAX: 10.28)
  2024-02-06 08:56: 6.28 (SE +/- 0.03, N = 9; MIN: 6.07 / MAX: 8.66)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  2024-02-05 18:36: 81.53 (SE +/- 74.95, N = 8)
  2024-02-05 13:58: 6.67 (SE +/- 0.05, N = 15)
  2024-02-06 08:56: 6.53 (SE +/- 0.03, N = 9)
  MIN: 5.52 / MAX: 1367.48
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  2024-02-05 18:36: 84.63 (SE +/- 77.74, N = 8)
  2024-02-05 13:58: 7.06 (SE +/- 0.08, N = 15)
  2024-02-06 08:56: 6.98 (SE +/- 0.02, N = 9)
  MIN: 5.23 / MAX: 1181.69
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
  2024-02-05 18:36: 181.76 (SE +/- 164.65, N = 8)
  2024-02-06 08:56: 18.16 (SE +/- 1.91, N = 9)
  2024-02-05 13:58: 16.73 (SE +/- 0.31, N = 15)
  MIN: 14.72 / MAX: 1345.89
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: FastestDet (ms, fewer is better)
  2024-02-05 18:36: 1112.16 (SE +/- 194.42, N = 9)
  2024-02-05 13:58: 8.27 (SE +/- 0.33, N = 15)
  2024-02-06 08:56: 8.09 (SE +/- 0.36, N = 11)
  MIN: 5.89 / MAX: 1712.78
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: vision_transformer (ms, fewer is better)
  2024-02-05 18:36: 1647.77 (SE +/- 36.79, N = 9)
  2024-02-06 08:56: 87.12 (SE +/- 7.53, N = 11)
  2024-02-05 13:58: 67.77 (SE +/- 0.47, N = 15)
  MIN: 60.22 / MAX: 1773.37
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  2024-02-05 18:36: 25.33 (SE +/- 4.36, N = 9; MIN: 18.49 / MAX: 8765.4)
  2024-02-05 13:58: 21.42 (SE +/- 0.17, N = 15; MIN: 18.61 / MAX: 50.31)
  2024-02-06 08:56: 20.72 (SE +/- 0.08, N = 11; MIN: 20.11 / MAX: 89.15)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  2024-02-05 18:36: 1029.95 (SE +/- 320.29, N = 9)
  2024-02-06 08:56: 16.01 (SE +/- 0.62, N = 11)
  2024-02-05 13:58: 15.34 (SE +/- 0.25, N = 15)
  MIN: 13 / MAX: 1860.66
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  2024-02-05 18:36: 254.60 (SE +/- 110.09, N = 9; MIN: 26.15 / MAX: 793.43)
  2024-02-06 08:56: 42.97 (SE +/- 5.42, N = 11; MIN: 25.46 / MAX: 767.17)
  2024-02-05 13:58: 30.89 (SE +/- 0.37, N = 15; MIN: 25.55 / MAX: 393.68)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: resnet50 (ms, fewer is better)
  2024-02-05 18:36: 1279.57 (SE +/- 237.42, N = 9)
  2024-02-05 13:58: 18.46 (SE +/- 0.18, N = 15)
  2024-02-06 08:56: 17.98 (SE +/- 0.20, N = 11)
  MIN: 17.05 / MAX: 1680.15
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: alexnet (ms, fewer is better)
  2024-02-05 18:36: 74.46 (SE +/- 46.29, N = 9)
  2024-02-05 13:58: 6.80 (SE +/- 0.08, N = 15)
  2024-02-06 08:56: 6.72 (SE +/- 0.05, N = 11)
  MIN: 6.37 / MAX: 397.08
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: resnet18 (ms, fewer is better)
  2024-02-05 18:36: 104.14 (SE +/- 94.16, N = 9; MIN: 8.47 / MAX: 867.98)
  2024-02-05 13:58: 9.98 (SE +/- 0.16, N = 15; MIN: 8.07 / MAX: 48.75)
  2024-02-06 08:56: 9.95 (SE +/- 0.14, N = 11; MIN: 9.3 / MAX: 13.2)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: vgg16 (ms, fewer is better)
  2024-02-05 18:36: 594.96 (SE +/- 34.48, N = 9)
  2024-02-06 08:56: 78.54 (SE +/- 4.72, N = 11)
  2024-02-05 13:58: 43.20 (SE +/- 0.31, N = 15)
  MIN: 39.51 / MAX: 679.59; MIN: 39.02 / MAX: 663.86
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: googlenet (ms, fewer is better)
  2024-02-05 18:36: 120.62 (SE +/- 104.57, N = 9; MIN: 13.63 / MAX: 1852.64)
  2024-02-06 08:56: 19.81 (SE +/- 3.51, N = 11; MIN: 15.64 / MAX: 1809.29)
  2024-02-05 13:58: 16.21 (SE +/- 0.28, N = 15; MIN: 12.89 / MAX: 38.38)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: blazeface (ms, fewer is better)
  2024-02-06 08:56: 3.02 (SE +/- 0.02, N = 11; MIN: 2.84 / MAX: 22.6)
  2024-02-05 13:58: 2.91 (SE +/- 0.08, N = 15; MIN: 2.16 / MAX: 5.98)
  2024-02-05 18:36: 2.83 (SE +/- 0.12, N = 9; MIN: 2.13 / MAX: 5.08)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: mnasnet (ms, fewer is better)
  2024-02-05 18:36: 131.20 (SE +/- 124.99, N = 9)
  2024-02-05 13:58: 6.41 (SE +/- 0.09, N = 15)
  2024-02-06 08:56: 6.38 (SE +/- 0.10, N = 11)
  MIN: 4.8 / MAX: 1145.53
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  2024-02-05 18:36: 181.58 (SE +/- 174.29, N = 9)
  2024-02-05 13:58: 7.38 (SE +/- 0.04, N = 15)
  2024-02-06 08:56: 7.22 (SE +/- 0.03, N = 11)
  MIN: 6.15 / MAX: 1598.4
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  2024-02-05 18:36: 459.15 (SE +/- 224.10, N = 9)
  2024-02-05 13:58: 6.72 (SE +/- 0.08, N = 15)
  2024-02-06 08:56: 6.57 (SE +/- 0.04, N = 11)
  MIN: 5.6 / MAX: 1369.7
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  2024-02-05 18:36: 373.65 (SE +/- 175.05, N = 9)
  2024-02-05 13:58: 7.18 (SE +/- 0.12, N = 15)
  2024-02-06 08:56: 7.03 (SE +/- 0.05, N = 11)
  MIN: 5.54 / MAX: 1167
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20230517 - Target: CPU - Model: mobilenet (ms, fewer is better)
  2024-02-05 18:36: 29.71 (SE +/- 12.39, N = 9; MIN: 15.53 / MAX: 1342.94)
  2024-02-06 08:56: 17.13 (SE +/- 0.29, N = 11; MIN: 14.45 / MAX: 20.43)
  2024-02-05 13:58: 16.92 (SE +/- 0.30, N = 15; MIN: 13.75 / MAX: 53.31)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread