RX 590 PlaidML

Intel Core i9-9900K testing with an ASUS PRIME Z390-A (0602 BIOS) and Sapphire AMD Radeon RX 470/480/570/570X/580/580X 8GB on Ubuntu 18.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1901306-SP-RX590PLAI91.

RX 590 system configuration:

Processor: Intel Core i9-9900K @ 5.00GHz (8 Cores / 16 Threads)
Motherboard: ASUS PRIME Z390-A (0602 BIOS)
Chipset: Intel Cannon Lake PCH Shared SRAM
Memory: 16384MB
Disk: Samsung SSD 970 EVO 250GB + 2000GB SABRENT
Graphics: Sapphire AMD Radeon RX 470/480/570/570X/580/580X 8GB (1560/2100MHz)
Audio: Realtek ALC1220
Monitor: Acer B286HK
Network: Intel I219-V
OS: Ubuntu 18.10
Kernel: 5.0.0-050000rc4-generic (x86_64) 20190127
Desktop: GNOME Shell 3.30.1
Display Server: X Server 1.20.1
OpenGL: 4.5 Mesa 19.0.0-devel padoka PPA (LLVM 9.0.0)
OpenCL: OpenCL 2.1 AMD-APP (2783.0)
Vulkan: 1.1.90
Compiler: GCC 8.2.0
File-System: ext4
Screen Resolution: 3840x2160

Notes:
- Scaling Governor: intel_pstate performance
- Python 2.7.15+ + Python 3.6.7
- Security: __user pointer sanitization + Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + SSB disabled via prctl and seccomp

RX 590 PlaidML results (Examples Per Second, more is better):

FP16: No - Training - VGG16 - OpenCL: 9.50
FP16: No - Training - VGG19 - OpenCL: 8.95
FP16: No - Inference - VGG16 - OpenCL: 68.98
FP16: No - Inference - VGG19 - OpenCL: 59.18
FP16: Yes - Inference - VGG16 - OpenCL: 64.21
FP16: Yes - Inference - VGG19 - OpenCL: 56.08
FP16: No - Training - IMDB LSTM - OpenCL: 145
FP16: No - Training - Mobilenet - OpenCL: 45.74
FP16: No - Training - ResNet 50 - OpenCL: 17.78
FP16: No - Inference - IMDB LSTM - OpenCL: 220
FP16: No - Inference - Mobilenet - OpenCL: 455
FP16: No - Inference - ResNet 50 - OpenCL: 142.77
FP16: Yes - Inference - Mobilenet - OpenCL: 603.17
FP16: Yes - Inference - ResNet 50 - OpenCL: 137
FP16: No - Training - Inception V3 - OpenCL: 14.90
FP16: No - Inference - DenseNet 201 - OpenCL: 61.89
FP16: No - Inference - Inception V3 - OpenCL: 75.70
FP16: No - Inference - NASNet Large - OpenCL: 20.02
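One pattern worth pulling out of the numbers above: enabling FP16 helps Mobilenet inference substantially but slightly hurts the VGG and ResNet runs on this card. A minimal Python sketch (the throughput figures are taken from the results listed above; the script itself is purely illustrative):

```python
# Inference throughput (Examples Per Second) for the RX 590,
# taken from the PlaidML results above.
fp32 = {"VGG16": 68.98, "VGG19": 59.18, "Mobilenet": 455.0, "ResNet 50": 137.0 and 142.77}
fp32 = {"VGG16": 68.98, "VGG19": 59.18, "Mobilenet": 455.0, "ResNet 50": 142.77}
fp16 = {"VGG16": 64.21, "VGG19": 56.08, "Mobilenet": 603.17, "ResNet 50": 137.0}

# Ratio > 1.0 means FP16 was faster than FP32 for that network.
for net in fp32:
    ratio = fp16[net] / fp32[net]
    print(f"{net}: FP16 delivers {ratio:.2f}x the FP32 throughput")
```

Only the Mobilenet ratio comes out above 1.0 (about 1.33x); the heavier convolutional networks see a small FP16 penalty here.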

PlaidML

FP16: No - Mode: Training - Network: VGG16 - Device: OpenCL

RX 590: 9.50 Examples Per Second (SE +/- 0.01, N = 3; more is better)

PlaidML

FP16: No - Mode: Training - Network: VGG19 - Device: OpenCL

RX 590: 8.95 Examples Per Second (SE +/- 0.00, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: VGG16 - Device: OpenCL

RX 590: 68.98 Examples Per Second (SE +/- 0.02, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: VGG19 - Device: OpenCL

RX 590: 59.18 Examples Per Second (SE +/- 0.01, N = 3; more is better)

PlaidML

FP16: Yes - Mode: Inference - Network: VGG16 - Device: OpenCL

RX 590: 64.21 Examples Per Second (SE +/- 0.01, N = 3; more is better)

PlaidML

FP16: Yes - Mode: Inference - Network: VGG19 - Device: OpenCL

RX 590: 56.08 Examples Per Second (SE +/- 0.01, N = 3; more is better)

PlaidML

FP16: No - Mode: Training - Network: IMDB LSTM - Device: OpenCL

RX 590: 145 Examples Per Second (SE +/- 0.02, N = 3; more is better)

PlaidML

FP16: No - Mode: Training - Network: Mobilenet - Device: OpenCL

RX 590: 45.74 Examples Per Second (SE +/- 0.01, N = 3; more is better)

PlaidML

FP16: No - Mode: Training - Network: ResNet 50 - Device: OpenCL

RX 590: 17.78 Examples Per Second (SE +/- 0.00, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL

RX 590: 220 Examples Per Second (SE +/- 0.88, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL

RX 590: 455 Examples Per Second (SE +/- 0.37, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: ResNet 50 - Device: OpenCL

RX 590: 142.77 Examples Per Second (SE +/- 0.06, N = 3; more is better)

PlaidML

FP16: Yes - Mode: Inference - Network: Mobilenet - Device: OpenCL

RX 590: 603.17 Examples Per Second (SE +/- 1.63, N = 3; more is better)

PlaidML

FP16: Yes - Mode: Inference - Network: ResNet 50 - Device: OpenCL

RX 590: 137 Examples Per Second (SE +/- 0.03, N = 3; more is better)

PlaidML

FP16: No - Mode: Training - Network: Inception V3 - Device: OpenCL

RX 590: 14.90 Examples Per Second (SE +/- 0.00, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL

RX 590: 61.89 Examples Per Second (SE +/- 0.08, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: Inception V3 - Device: OpenCL

RX 590: 75.70 Examples Per Second (SE +/- 0.01, N = 3; more is better)

PlaidML

FP16: No - Mode: Inference - Network: NASNet Large - Device: OpenCL

RX 590: 20.02 Examples Per Second (SE +/- 0.01, N = 3; more is better)


Phoronix Test Suite v10.8.4