plaid laptop

Intel Xeon E-2286M testing with an HP 860C (R92 Ver. 01.03.04 BIOS) and an NVIDIA Quadro RTX 5000 16GB on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2003235-NI-PLAIDLAPT18.

Intel Xeon E-2286M system configuration:

  Processor: Intel Xeon E-2286M @ 5.00GHz (8 Cores / 16 Threads)
  Motherboard: HP 860C (R92 Ver. 01.03.04 BIOS)
  Chipset: Intel Cannon Lake PCH
  Memory: 32GB
  Disk: 1024GB Western Digital PC SN720 SDAPNTW-1T00-1006
  Graphics: NVIDIA Quadro RTX 5000 16GB (300/405MHz)
  Audio: Intel Cannon Lake PCH cAVS
  Network: Intel I219-LM + Intel Wi-Fi 6 AX200
  OS: Ubuntu 20.04
  Kernel: 5.4.0-18-generic (x86_64)
  Desktop: GNOME Shell 3.35.91
  Display Server: X Server 1.20.7
  Display Driver: NVIDIA 440.64
  OpenGL: 4.6.0
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 3840x2160

  Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xca
  Graphics Notes: GPU Compute Cores: 3072
  Python Notes: Python 3.8.2
  Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Mitigation of TSX disabled

Result overview - Intel Xeon E-2286M:

  neat: Nebular Empirical Analysis Tool: 24.97 Seconds
  plaidml: No - Inference - VGG16 - CPU: 8.21 FPS
  plaidml: No - Inference - VGG19 - CPU: 6.42 FPS
  plaidml: No - Inference - VGG16 - OpenCL: 8.01 FPS
  plaidml: No - Inference - VGG19 - OpenCL: 6.45 FPS
  plaidml: No - Inference - IMDB LSTM - CPU: 1622.69 FPS
  plaidml: No - Inference - Mobilenet - CPU: 13.83 FPS
  plaidml: No - Inference - ResNet 50 - CPU: 5.90 FPS
  plaidml: No - Inference - DenseNet 201 - CPU: 2.83 FPS
  plaidml: No - Inference - IMDB LSTM - OpenCL: 1616.79 FPS
  plaidml: No - Inference - Inception V3 - CPU: 6.71 FPS
  plaidml: No - Inference - Mobilenet - OpenCL: 13.83 FPS
  plaidml: No - Inference - ResNet 50 - OpenCL: 5.94 FPS
  plaidml: No - Inference - DenseNet 201 - OpenCL: 2.84 FPS
  plaidml: No - Inference - Inception V3 - OpenCL: 6.69 FPS

Nebular Empirical Analysis Tool

Nebular Empirical Analysis Tool 2020-02-29 - Seconds, Fewer Is Better
Intel Xeon E-2286M: 24.98 (SE +/- 0.33, N = 12)
1. (F9X) gfortran options: -cpp -ffree-line-length-0 -Jsource/ -fopenmp -O3 -fno-backtrace

PlaidML

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU

FPS, More Is Better
Intel Xeon E-2286M: 8.21 (SE +/- 0.07, N = 3)
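The PlaidML figures in this report are Keras inference throughput numbers, so at 8.21 FPS one VGG16 forward pass on this CPU takes roughly 1/8.21, or about 0.12 seconds. The Phoronix Test Suite drives these runs through PlaidML's own benchmarking tooling; the following is only a minimal hand-rolled sketch of the same kind of measurement, assuming the plaidml-keras package and standalone Keras are installed. It is not the test profile's actual script, and the batch size and iteration count are arbitrary.

import time
import numpy as np

# Route Keras through the PlaidML backend; this must happen before importing keras.
import plaidml.keras
plaidml.keras.install_backend()

from keras.applications.vgg16 import VGG16

# Random weights are enough for a throughput measurement (no weight download needed).
model = VGG16(weights=None)
batch = np.random.rand(1, 224, 224, 3).astype("float32")

model.predict(batch)  # warm-up pass so one-time compilation is not timed

iterations = 32  # arbitrary choice for this sketch
start = time.time()
for _ in range(iterations):
    model.predict(batch)
elapsed = time.time() - start
print("approx. %.2f FPS" % (iterations / elapsed))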

PlaidML

FP16: No - Mode: Inference - Network: VGG19 - Device: CPU

FPS, More Is Better
Intel Xeon E-2286M: 6.42 (SE +/- 0.01, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: VGG16 - Device: OpenCL

FPS, More Is Better
Intel Xeon E-2286M: 8.01 (SE +/- 0.03, N = 3)
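For the OpenCL results, PlaidML runs the same networks on whichever compute device was chosen with plaidml-setup, which stores the selection in ~/.plaidml; on this system that could be the Quadro RTX 5000 or an OpenCL CPU driver. Below is a minimal sketch of overriding that choice through PlaidML's environment variables, assuming they are set before the backend is imported; the device ID string is a hypothetical placeholder, since the real IDs are listed by plaidml-setup and differ per machine.

import os

# plaidml-setup normally records the device choice in ~/.plaidml. These
# environment variables override it and must be set before plaidml.keras
# is imported. PLAIDML_EXPERIMENTAL enables non-default device configurations.
os.environ["PLAIDML_EXPERIMENTAL"] = "1"
os.environ["PLAIDML_DEVICE_IDS"] = "opencl_nvidia_quadro_rtx_5000.0"  # hypothetical ID

import plaidml.keras
plaidml.keras.install_backend()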

PlaidML

FP16: No - Mode: Inference - Network: VGG19 - Device: OpenCL

FPS, More Is Better
Intel Xeon E-2286M: 6.45 (SE +/- 0.04, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU

FPS, More Is Better
Intel Xeon E-2286M: 1622.69 (SE +/- 26.51, N = 15)

PlaidML

FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU

FPS, More Is Better
Intel Xeon E-2286M: 13.83 (SE +/- 0.03, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU

FPS, More Is Better
Intel Xeon E-2286M: 5.90 (SE +/- 0.02, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU

FPS, More Is Better
Intel Xeon E-2286M: 2.83 (SE +/- 0.00, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL

FPS, More Is Better
Intel Xeon E-2286M: 1616.79 (SE +/- 29.63, N = 15)

PlaidML

FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU

FPS, More Is Better
Intel Xeon E-2286M: 6.71 (SE +/- 0.00, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL

FPS, More Is Better
Intel Xeon E-2286M: 13.83 (SE +/- 0.04, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: ResNet 50 - Device: OpenCL

FPS, More Is Better
Intel Xeon E-2286M: 5.94 (SE +/- 0.01, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL

FPS, More Is Better
Intel Xeon E-2286M: 2.84 (SE +/- 0.01, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: Inception V3 - Device: OpenCL

FPS, More Is Better
Intel Xeon E-2286M: 6.69 (SE +/- 0.01, N = 3)


Phoronix Test Suite v10.8.4