ml14-mkl-dnn

2 x Intel Xeon Platinum 8260L testing with an Intel S2600WFT (SE5C620.86B.02.01.0008.031920191559 BIOS) and ASPEED graphics on CentOS 7.7.1908 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1910163-AS-ML14MKLDN46

Result Identifier: ml14-mkl-dnn-1
Date: October 15 2019
Test Duration: 4 Hours, 13 Minutes


System Information (ml14-mkl-dnn, via the Phoronix Test Suite / OpenBenchmarking.org):

Processor: 2 x Intel Xeon Platinum 8260L @ 3.90GHz (48 Cores / 96 Threads)
Motherboard: Intel S2600WFT (SE5C620.86B.02.01.0008.031920191559 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 3047424MB
Disk: 2 x 2000GB INTEL SSDPE2KX020T8 + 2 x 480GB INTEL SSDSCKKB48 + 6 x 480GB INTEL SSDSC2KB48
Graphics: ASPEED
Monitor: SyncMaster
Network: 2 x Intel XXV710 for 25GbE SFP28 + 2 x Intel X722 for 10GBASE-T + 2 x Intel X722 for 10GbE SFP+
OS: CentOS 7.7.1908
Kernel: 3.10.0-1062.1.2.el7.x86_64 (x86_64)
Compiler: GCC 4.8.5 20150623
File-System: xfs
Screen Resolution: 2560x1440

System Logs Notes:
- Compiler configuration: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic
- Scaling Governor: intel_pstate performance

Result Summary, ml14-mkl-dnn-1 (ms, fewer is better):

IP Batch 1D - f32: 3.40
IP Batch All - f32: 11.15
IP Batch 1D - u8s8f32: 5.84
IP Batch All - u8s8f32: 3.11
IP Batch 1D - bf16bf16bf16: 4.94
IP Batch All - bf16bf16bf16: 36.41
Convolution Batch conv_3d - f32: 6.74
Convolution Batch conv_all - f32: 780.95
Convolution Batch conv_3d - u8s8f32: 6479.97
Deconvolution Batch deconv_1d - f32: 1.56
Deconvolution Batch deconv_3d - f32: 1.51
Convolution Batch conv_alexnet - f32: 69.75
Convolution Batch conv_all - u8s8f32: 3290.95
Deconvolution Batch deconv_all - f32: 815.49
Deconvolution Batch deconv_1d - u8s8f32: 0.54
Deconvolution Batch deconv_3d - u8s8f32: 3452.72
Recurrent Neural Network Training - f32: 333.63
Convolution Batch conv_3d - bf16bf16bf16: 10.35
Convolution Batch conv_alexnet - u8s8f32: 28.79
Convolution Batch conv_all - bf16bf16bf16: 2350.02
Convolution Batch conv_googlenet_v3 - f32: 32.89
Deconvolution Batch deconv_1d - bf16bf16bf16: 5.41
Deconvolution Batch deconv_3d - bf16bf16bf16: 5.35
Convolution Batch conv_alexnet - bf16bf16bf16: 422.99
Convolution Batch conv_googlenet_v3 - u8s8f32: 11.03
Deconvolution Batch deconv_all - bf16bf16bf16: 2126.08
Convolution Batch conv_googlenet_v3 - bf16bf16bf16: 116.97
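One way to read this summary is to compare data types on the same harness. A minimal sketch using the Convolution Batch conv_alexnet values from the table above (the dictionary layout is purely illustrative):

```python
# conv_alexnet results from the summary table, in ms (fewer is better)
conv_alexnet = {"f32": 69.75, "u8s8f32": 28.79, "bf16bf16bf16": 422.99}

baseline = conv_alexnet["f32"]
for dtype, ms in conv_alexnet.items():
    # ratio > 1 means faster than the f32 baseline
    print(f"{dtype}: {baseline / ms:.2f}x vs f32")
```

On this run the int8 (u8s8f32) path completes the conv_alexnet harness roughly 2.4x faster than f32, while the bf16bf16bf16 path is slower than f32.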

MKL-DNN DNNL

This is a test of Intel MKL-DNN (DNNL / Deep Neural Network Library), an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

Individual Results, MKL-DNN DNNL 1.1, ml14-mkl-dnn-1 (ms, fewer is better):

Harness - Data Type: Result (SE, N, MIN)
IP Batch 1D - f32: 3.40 (SE +/- 0.02, N = 3, MIN: 2.83)
IP Batch All - f32: 11.15 (SE +/- 0.11, N = 3, MIN: 10.08)
IP Batch 1D - u8s8f32: 5.84 (SE +/- 0.14, N = 15, MIN: 2.9)
IP Batch All - u8s8f32: 3.11 (SE +/- 0.02, N = 3, MIN: 2.6)
IP Batch 1D - bf16bf16bf16: 4.94 (SE +/- 0.07, N = 4, MIN: 4.47)
IP Batch All - bf16bf16bf16: 36.41 (SE +/- 0.30, N = 3, MIN: 26.39)
Convolution Batch conv_3d - f32: 6.74 (SE +/- 0.30, N = 15, MIN: 4.91)
Convolution Batch conv_all - f32: 780.95 (SE +/- 1.32, N = 3, MIN: 722.37)
Convolution Batch conv_3d - u8s8f32: 6479.97 (SE +/- 13.23, N = 3, MIN: 6398.15)
Deconvolution Batch deconv_1d - f32: 1.56 (SE +/- 0.01, N = 3, MIN: 1.33)
Deconvolution Batch deconv_3d - f32: 1.51 (SE +/- 0.03, N = 3, MIN: 1.31)
Convolution Batch conv_alexnet - f32: 69.75 (SE +/- 0.08, N = 3, MIN: 59.75)
Convolution Batch conv_all - u8s8f32: 3290.95 (SE +/- 34.20, N = 8, MIN: 3128.43)
Deconvolution Batch deconv_all - f32: 815.49 (SE +/- 4.50, N = 3, MIN: 751.61)
Deconvolution Batch deconv_1d - u8s8f32: 0.54 (SE +/- 0.00, N = 3, MIN: 0.42)
Deconvolution Batch deconv_3d - u8s8f32: 3452.72 (SE +/- 3.32, N = 3, MIN: 3435.37)
Recurrent Neural Network Training - f32: 333.63 (SE +/- 2.34, N = 3, MIN: 296.61)
Convolution Batch conv_3d - bf16bf16bf16: 10.35 (SE +/- 0.09, N = 3, MIN: 9.51)
Convolution Batch conv_alexnet - u8s8f32: 28.79 (SE +/- 0.09, N = 3, MIN: 23.9)
Convolution Batch conv_all - bf16bf16bf16: 2350.02 (SE +/- 4.10, N = 3, MIN: 2301.17)
Convolution Batch conv_googlenet_v3 - f32: 32.89 (SE +/- 0.07, N = 3, MIN: 28.29)
Deconvolution Batch deconv_1d - bf16bf16bf16: 5.41 (SE +/- 0.01, N = 3, MIN: 4.84)
Deconvolution Batch deconv_3d - bf16bf16bf16: 5.35 (SE +/- 0.05, N = 3, MIN: 5.08)
Convolution Batch conv_alexnet - bf16bf16bf16: 422.99 (SE +/- 0.18, N = 3, MIN: 418.12)
Convolution Batch conv_googlenet_v3 - u8s8f32: 11.03 (SE +/- 0.07, N = 3, MIN: 8.9)
Deconvolution Batch deconv_all - bf16bf16bf16: 2126.08 (SE +/- 5.70, N = 3, MIN: 2036.34)
Convolution Batch conv_googlenet_v3 - bf16bf16bf16: 116.97 (SE +/- 0.05, N = 3, MIN: 111.19)
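Each result is reported as a mean over N runs together with a standard error (SE). A minimal sketch of how such a figure is derived, assuming SE here denotes the standard error of the mean; the per-run times below are illustrative, not taken from this result file:

```python
import math
import statistics

# Hypothetical per-run times in ms for one harness (N = 3 runs)
runs = [3.38, 3.40, 3.42]

n = len(runs)
mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation / sqrt(N)
se = statistics.stdev(runs) / math.sqrt(n)
print(f"{mean:.2f} ms, SE +/- {se:.2f}, N = {n}")  # -> 3.40 ms, SE +/- 0.01, N = 3
```

A small SE relative to the mean (as in most rows above) indicates the runs were consistent; the few harnesses with N = 8 or N = 15 were re-run more times, which the Phoronix Test Suite does when run-to-run variance is high.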