CPU PlaidML Linux Benchmarks

Intel Core i9-7980XE testing with an ASUS PRIME X299-A (1602 BIOS) and NVIDIA NV120 12GB on Ubuntu 18.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1901113-SP-CPUPLAIDM86.

CPU PlaidML Linux Benchmarks - system details (Ubuntu 18.10):

  Processor:          Intel Core i9-7980XE @ 4.20GHz (18 Cores / 36 Threads)
  Motherboard:        ASUS PRIME X299-A (1602 BIOS)
  Chipset:            Intel Sky Lake-E DMI3 Registers
  Memory:             16384MB
  Disk:               Samsung SSD 970 EVO 500GB
  Graphics:           NVIDIA NV120 12GB
  Audio:              Realtek ALC1220
  Monitor:            ASUS PB278
  Network:            Intel I219-V
  OS:                 Ubuntu 18.10
  Kernel:             4.18.0-13-generic (x86_64)
  Desktop:            GNOME Shell 3.30.1
  Display Server:     X Server 1.20.1
  Display Driver:     modesetting 1.20.1
  OpenGL:             4.3 Mesa 18.2.2
  Compiler:           GCC 8.2.0
  File-System:        ext4
  Screen Resolution:  2560x1440

OpenBenchmarking.org notes:
  - Scaling Governor: intel_pstate powersave
  - Python 2.7.15+ + Python 3.6.7
  - Security: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp + PTE Inversion; VMX: conditional cache flushes, SMT vulnerable

CPU PlaidML Linux Benchmarks - result summary (Ubuntu 18.10; Examples Per Second, More Is Better):

  FP16: No  - Mode: Training  - Network: VGG16        - Device: CPU     1.23
  FP16: No  - Mode: Training  - Network: VGG19        - Device: CPU     1.00
  FP16: No  - Mode: Inference - Network: VGG16        - Device: CPU     5.73
  FP16: No  - Mode: Inference - Network: VGG19        - Device: CPU     4.86
  FP16: Yes - Mode: Inference - Network: VGG16        - Device: CPU     3.93
  FP16: Yes - Mode: Inference - Network: VGG19        - Device: CPU     3.49
  FP16: No  - Mode: Training  - Network: IMDB LSTM    - Device: CPU     2.17
  FP16: No  - Mode: Inference - Network: IMDB LSTM    - Device: CPU     2.52
  FP16: No  - Mode: Inference - Network: Mobilenet    - Device: CPU    26.04
  FP16: No  - Mode: Inference - Network: ResNet 50    - Device: CPU     8.24
  FP16: No  - Mode: Inference - Network: DenseNet 201 - Device: CPU     2.71
  FP16: No  - Mode: Inference - Network: NASNet Large - Device: CPU     1.16

PlaidML

FP16: No - Mode: Training - Network: VGG16 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 1.23 (SE +/- 0.00, N = 3)

PlaidML

FP16: No - Mode: Training - Network: VGG19 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 1.00 (SE +/- 0.00, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 5.73 (SE +/- 0.00, N = 3)
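A minimal sketch, assuming the PlaidML Keras backend and random input data, of how an "Examples Per Second" inference figure like the VGG16 number above can be measured; the batch size and batch count are illustrative assumptions rather than the test profile's exact settings, and plaidml-setup is assumed to have already selected the CPU device.

import time

import numpy as np
import plaidml.keras
plaidml.keras.install_backend()  # route Keras calls through PlaidML

from keras.applications.vgg16 import VGG16

batch_size = 1    # assumed; the result page does not state the batch size
num_batches = 64  # assumed number of timed batches

model = VGG16(weights=None)  # pretrained weights are irrelevant for throughput
data = np.random.rand(batch_size, 224, 224, 3).astype("float32")

model.predict(data)  # warm-up run so graph compilation is not timed

start = time.time()
for _ in range(num_batches):
    model.predict(data)
elapsed = time.time() - start

print("Examples per second: %.2f" % (batch_size * num_batches / elapsed))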

PlaidML

FP16: No - Mode: Inference - Network: VGG19 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 4.86 (SE +/- 0.00, N = 3)

PlaidML

FP16: Yes - Mode: Inference - Network: VGG16 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 3.93 (SE +/- 0.01, N = 3)

PlaidML

FP16: Yes - Mode: Inference - Network: VGG19 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 3.49 (SE +/- 0.00, N = 3)

PlaidML

FP16: No - Mode: Training - Network: IMDB LSTM - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 2.17 (SE +/- 0.00, N = 3)
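A minimal sketch, assuming a small IMDB-style LSTM built in Keras on the PlaidML backend, of how a training-throughput figure like the "Training - IMDB LSTM" number above can be measured; the vocabulary size, sequence length, layer widths, and batch size are illustrative assumptions, not the test profile's configuration.

import time

import numpy as np
import plaidml.keras
plaidml.keras.install_backend()  # route Keras calls through PlaidML

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocab_size = 20000  # assumed vocabulary size
seq_len = 80        # assumed sequence length
batch_size = 32     # assumed
num_batches = 16    # assumed number of timed batches

model = Sequential([
    Embedding(vocab_size, 128, input_length=seq_len),
    LSTM(128),
    Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam")

# Random stand-in data shaped like tokenized IMDB reviews with binary labels.
x = np.random.randint(0, vocab_size, size=(batch_size * num_batches, seq_len))
y = np.random.randint(0, 2, size=(batch_size * num_batches, 1))

model.train_on_batch(x[:batch_size], y[:batch_size])  # warm-up / compile step

start = time.time()
for i in range(num_batches):
    rows = slice(i * batch_size, (i + 1) * batch_size)
    model.train_on_batch(x[rows], y[rows])
elapsed = time.time() - start

print("Examples per second: %.2f" % (batch_size * num_batches / elapsed))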

PlaidML

FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 2.52 (SE +/- 0.00, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 26.04 (SE +/- 0.10, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 8.24 (SE +/- 0.00, N = 2)

PlaidML

FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 2.71 (SE +/- 0.00, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: NASNet Large - Device: CPU

OpenBenchmarking.org - Examples Per Second, More Is Better
Ubuntu 18.10: 1.16 (SE +/- 0.00, N = 3)


Phoronix Test Suite v10.8.4