CPU PlaidML Linux Benchmarks

Intel Core i9-7980XE testing with an ASUS PRIME X299-A (1602 BIOS) and NVIDIA NV120 12GB on Ubuntu 18.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1901113-SP-CPUPLAIDM86
Result Identifier: Ubuntu 18.10
Run Date: January 10 2019
Test Duration: 8 Hours, 57 Minutes


CPU PlaidML Linux Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

  Processor: Intel Core i9-7980XE @ 4.20GHz (18 Cores / 36 Threads)
  Motherboard: ASUS PRIME X299-A (1602 BIOS)
  Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 16384MB
  Disk: Samsung SSD 970 EVO 500GB
  Graphics: NVIDIA NV120 12GB
  Audio: Realtek ALC1220
  Monitor: ASUS PB278
  Network: Intel I219-V
  OS: Ubuntu 18.10
  Kernel: 4.18.0-13-generic (x86_64)
  Desktop: GNOME Shell 3.30.1
  Display Server: X Server 1.20.1
  Display Driver: modesetting 1.20.1
  OpenGL: 4.3 Mesa 18.2.2
  Compiler: GCC 8.2.0
  File-System: ext4
  Screen Resolution: 2560x1440

System Logs Notes:
  - Scaling Governor: intel_pstate powersave
  - Python 2.7.15+ + Python 3.6.7
  - Security: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp + PTE Inversion; VMX: conditional cache flushes, SMT vulnerable

CPU PlaidML Linux Benchmarks - Results Summary (Examples Per Second, Ubuntu 18.10)

  FP16  Mode       Network        Device   Result
  No    Training   VGG16          CPU        1.23
  No    Training   VGG19          CPU        1.00
  No    Inference  VGG16          CPU        5.73
  No    Inference  VGG19          CPU        4.86
  Yes   Inference  VGG16          CPU        3.93
  Yes   Inference  VGG19          CPU        3.49
  No    Training   IMDB LSTM      CPU        2.17
  No    Inference  IMDB LSTM      CPU        2.52
  No    Inference  Mobilenet      CPU       26.04
  No    Inference  ResNet 50      CPU        8.24
  No    Inference  DenseNet 201   CPU        2.71
  No    Inference  NASNet Large   CPU        1.16
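The VGG results above include inference runs both with and without FP16, and on this CPU the FP16 path is actually slower. A minimal sketch that derives the relative throughput from the examples-per-second figures reported above (plain Python, no PlaidML required; the dictionary simply restates the table's numbers):

```python
# Examples-per-second figures copied from the result table above.
vgg_inference = {
    "VGG16": {"fp32_eps": 5.73, "fp16_eps": 3.93},
    "VGG19": {"fp32_eps": 4.86, "fp16_eps": 3.49},
}

for network, eps in vgg_inference.items():
    # Ratio < 1.0 means the FP16 run was slower than the FP32 run.
    ratio = eps["fp16_eps"] / eps["fp32_eps"]
    print(f"{network}: FP16 inference reaches {ratio:.0%} of FP32 throughput")
```

The FP16 runs reach only roughly 69-72% of FP32 throughput here, plausibly because this CPU converts FP16 values to FP32 for arithmetic rather than executing FP16 natively, so the half-precision flag adds conversion overhead without saving compute. That interpretation is an inference from the numbers, not something the result file itself states.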

PlaidML
Examples Per Second, More Is Better (OpenBenchmarking.org)

FP16: No - Mode: Training - Network: VGG16 - Device: CPU
  Ubuntu 18.10: 1.23 (SE +/- 0.00, N = 3)

FP16: No - Mode: Training - Network: VGG19 - Device: CPU
  Ubuntu 18.10: 1.00 (SE +/- 0.00, N = 3)

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
  Ubuntu 18.10: 5.73 (SE +/- 0.00, N = 3)

FP16: No - Mode: Inference - Network: VGG19 - Device: CPU
  Ubuntu 18.10: 4.86 (SE +/- 0.00, N = 3)

FP16: Yes - Mode: Inference - Network: VGG16 - Device: CPU
  Ubuntu 18.10: 3.93 (SE +/- 0.01, N = 3)

FP16: Yes - Mode: Inference - Network: VGG19 - Device: CPU
  Ubuntu 18.10: 3.49 (SE +/- 0.00, N = 3)

FP16: No - Mode: Training - Network: IMDB LSTM - Device: CPU
  Ubuntu 18.10: 2.17 (SE +/- 0.00, N = 3)

FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU
  Ubuntu 18.10: 2.52 (SE +/- 0.00, N = 3)

FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU
  Ubuntu 18.10: 26.04 (SE +/- 0.10, N = 3)

FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
  Ubuntu 18.10: 8.24 (SE +/- 0.00, N = 2)

FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU
  Ubuntu 18.10: 2.71 (SE +/- 0.00, N = 3)

FP16: No - Mode: Inference - Network: NASNet Large - Device: CPU
  Ubuntu 18.10: 1.16 (SE +/- 0.00, N = 3)
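Since every result above is reported in examples per second, the amortized time per example follows directly as the reciprocal. A quick sketch using three of the inference figures above (plain Python; note this gives amortized throughput, not single-example latency, since the benchmark's batch size is not reported here):

```python
# Examples/second -> amortized milliseconds per example (1000 / eps),
# using inference figures copied from the results above.
eps = {"Mobilenet": 26.04, "ResNet 50": 8.24, "NASNet Large": 1.16}

for network, examples_per_sec in eps.items():
    ms_per_example = 1000.0 / examples_per_sec
    print(f"{network}: {ms_per_example:.1f} ms per example")
```

This makes the spread between the networks concrete: Mobilenet processes an example in roughly 38 ms on this 18-core CPU, while NASNet Large needs close to a second.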