Xeon_W2295_plaidml_FP16_NO

Intel Xeon W-2295 testing with a GIGABYTE MW51-HP0-00 (5.15 BIOS) and NVIDIA Quadro RTX 8000 45GB on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2005137-NI-XEONW229512
Result Identifier: plaidml_test2
Date Run: May 12 2020
Test Duration: 16 Minutes


Xeon_W2295_plaidml_FP16_NO - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Xeon W-2295 @ 4.80GHz (18 Cores / 36 Threads)
Motherboard: GIGABYTE MW51-HP0-00 (5.15 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 4 x 32 GB DDR4-2934MT/s Micron 36ASF4G72PZ-2G9E2
Disk: 2000GB Samsung SSD 970 EVO Plus 2TB
Graphics: NVIDIA Quadro RTX 8000 45GB (1185/6500MHz)
Audio: Realtek ALC1150
Monitor: HP Z38c
Network: 2 x Intel I210 + 2 x Intel 10-Gigabit X540-AT2
OS: Ubuntu 18.04
Kernel: 5.3.0-51-generic (x86_64)
Desktop: GNOME Shell 3.28.4
Display Server: X Server 1.20.5
Display Driver: NVIDIA 440.82
OpenGL: 4.6.0
OpenCL: OpenCL 1.2 CUDA 10.2.159
Compiler: GCC 7.5.0 + CUDA 10.0
File-System: ext4
Screen Resolution: 3840x1600

System Logs:
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x500002c
- GPU Compute Cores: 4608
- Python 2.7.17 + Python 3.6.9
- Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Mitigation of TSX disabled

Xeon_W2295_plaidml_FP16_NO - Result Summary (plaidml_test2, FPS, More Is Better)

plaidml: No - Inference - VGG16 - OpenCL: 240.51
plaidml: No - Inference - VGG19 - OpenCL: 190.93
plaidml: No - Training - ResNet 50 - OpenCL: 66.83
plaidml: No - Inference - IMDB LSTM - OpenCL: 762.13
plaidml: No - Inference - Mobilenet - OpenCL: 2040.17
plaidml: No - Inference - ResNet 50 - OpenCL: 623.35
plaidml: No - Inference - DenseNet 201 - OpenCL: 205.44
plaidml: No - Inference - Inception V3 - OpenCL: 339.20
plaidml: No - Inference - NASNet Large - OpenCL: 56.99

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.
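
For context, PlaidML is typically used as a drop-in Keras backend that runs the compute through OpenCL. Below is a minimal sketch of how an inference throughput measurement like the ones in this result file might be driven by hand, assuming the plaidml and keras Python packages are installed; the batch size, iteration count, and use of random weights are illustrative choices, not the exact settings of this test profile.

# Minimal sketch: VGG16 inference throughput via the PlaidML Keras backend.
# Assumes the "plaidml" and "keras" packages are installed; batch size and
# run count are illustrative, not the exact PlaidML test-profile settings.
import time

import numpy as np
import plaidml.keras
plaidml.keras.install_backend()  # route Keras ops through PlaidML (OpenCL)

from keras.applications.vgg16 import VGG16

model = VGG16(weights=None)  # random weights are enough for a throughput test
batch = np.random.rand(16, 224, 224, 3).astype("float32")

model.predict(batch)  # warm-up run so kernel compilation is not timed

runs = 10
start = time.time()
for _ in range(runs):
    model.predict(batch)
elapsed = time.time() - start

print("Approx. throughput: %.2f images/sec" % (runs * batch.shape[0] / elapsed))

The actual test profile reports its own FPS figures per network and device, as listed in the graphs below.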

PlaidML FP16: No - Mode: Inference - Network: VGG16 - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 240.51 (SE +/- 0.27, N = 3)

PlaidML FP16: No - Mode: Inference - Network: VGG19 - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 190.93 (SE +/- 0.35, N = 3)

PlaidML FP16: No - Mode: Training - Network: ResNet 50 - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 66.83 (SE +/- 0.03, N = 2)

PlaidML FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 762.13 (SE +/- 1.95, N = 3)

PlaidML FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 2040.17 (SE +/- 4.85, N = 3)

PlaidML FP16: No - Mode: Inference - Network: ResNet 50 - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 623.35 (SE +/- 1.14, N = 3)

PlaidML FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 205.44 (SE +/- 0.22, N = 3)

PlaidML FP16: No - Mode: Inference - Network: Inception V3 - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 339.20 (SE +/- 0.10, N = 3)

PlaidML FP16: No - Mode: Inference - Network: NASNet Large - Device: OpenCL (FPS, More Is Better)
plaidml_test2: 56.99 (SE +/- 0.08, N = 3)