RX 590 PlaidML

Intel Core i9-9900K testing with an ASUS PRIME Z390-A (0602 BIOS) motherboard and a Sapphire AMD Radeon RX 470/480/570/570X/580/580X 8GB graphics card on Ubuntu 18.10, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1901306-SP-RX590PLAI91
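A minimal command-line session sketch for reproducing this comparison, assuming the Phoronix Test Suite is already installed on your system (the result file ID is the one quoted above):

```shell
# Run the same tests locally and merge your numbers against this result file.
phoronix-test-suite benchmark 1901306-SP-RX590PLAI91

# List the saved result files on this machine afterwards.
phoronix-test-suite list-saved-results
```

The `benchmark` subcommand downloads the referenced result file, runs the same test profiles, and prompts for an identifier under which your run is saved alongside the RX 590 numbers.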
Run Details

Result Identifier: RX 590
Run Date: January 30, 2019
Test Duration: 1 Hour, 20 Minutes


RX 590 PlaidML Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i9-9900K @ 5.00GHz (8 Cores / 16 Threads)
Motherboard: ASUS PRIME Z390-A (0602 BIOS)
Chipset: Intel Cannon Lake PCH Shared SRAM
Memory: 16384MB
Disk: Samsung SSD 970 EVO 250GB + 2000GB SABRENT
Graphics: Sapphire AMD Radeon RX 470/480/570/570X/580/580X 8GB (1560/2100MHz)
Audio: Realtek ALC1220
Monitor: Acer B286HK
Network: Intel I219-V
OS: Ubuntu 18.10
Kernel: 5.0.0-050000rc4-generic (x86_64) 20190127
Desktop: GNOME Shell 3.30.1
Display Server: X Server 1.20.1
OpenGL: 4.5 Mesa 19.0.0-devel padoka PPA (LLVM 9.0.0)
OpenCL: OpenCL 2.1 AMD-APP (2783.0)
Vulkan: 1.1.90
Compiler: GCC 8.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Scaling Governor: intel_pstate performance
- Python 2.7.15+ + Python 3.6.7
- Security: __user pointer sanitization + Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + SSB disabled via prctl and seccomp

RX 590 PlaidML Results Summary (Examples Per Second, more is better)

FP16   Mode        Network        Device    RX 590
No     Training    VGG16          OpenCL    9.50
No     Training    VGG19          OpenCL    8.95
No     Inference   VGG16          OpenCL    68.98
No     Inference   VGG19          OpenCL    59.18
Yes    Inference   VGG16          OpenCL    64.21
Yes    Inference   VGG19          OpenCL    56.08
No     Training    IMDB LSTM      OpenCL    145
No     Training    Mobilenet      OpenCL    45.74
No     Training    ResNet 50      OpenCL    17.78
No     Inference   IMDB LSTM      OpenCL    220
No     Inference   Mobilenet      OpenCL    455
No     Inference   ResNet 50      OpenCL    142.77
Yes    Inference   Mobilenet      OpenCL    603.17
Yes    Inference   ResNet 50      OpenCL    137
No     Training    Inception V3   OpenCL    14.90
No     Inference   DenseNet 201   OpenCL    61.89
No     Inference   Inception V3   OpenCL    75.70
No     Inference   NASNet Large   OpenCL    20.02
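Only the inference tests were run in both FP16 and FP32 variants, so the effect of half precision on this card can be read straight off the throughput figures. A small post-processing sketch (network names and numbers are taken from the results above; the script itself is not part of the test suite):

```python
# Compare FP16 vs FP32 inference throughput (Examples Per Second)
# using the RX 590 figures from the results table above.
fp32 = {"VGG16": 68.98, "VGG19": 59.18, "Mobilenet": 455.0, "ResNet 50": 142.77}
fp16 = {"VGG16": 64.21, "VGG19": 56.08, "Mobilenet": 603.17, "ResNet 50": 137.0}

for net in fp32:
    ratio = fp16[net] / fp32[net]
    print(f"{net}: FP16 delivers {ratio:.2f}x the FP32 throughput")
```

On this run, Mobilenet is the only network that clearly benefits from FP16 (about 1.33x), while VGG16, VGG19, and ResNet 50 are all marginally slower in half precision.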

PlaidML

This test profile uses the PlaidML deep learning framework to run a variety of neural network training and inference benchmarks. Learn more via the OpenBenchmarking.org test page.

All results below are from OpenBenchmarking.org; units are Examples Per Second, more is better, and each figure is the average of N = 3 runs on the RX 590.

FP16: No - Mode: Training - Network: VGG16 - Device: OpenCL: 9.50 (SE +/- 0.01)
FP16: No - Mode: Training - Network: VGG19 - Device: OpenCL: 8.95 (SE +/- 0.00)
FP16: No - Mode: Inference - Network: VGG16 - Device: OpenCL: 68.98 (SE +/- 0.02)
FP16: No - Mode: Inference - Network: VGG19 - Device: OpenCL: 59.18 (SE +/- 0.01)
FP16: Yes - Mode: Inference - Network: VGG16 - Device: OpenCL: 64.21 (SE +/- 0.01)
FP16: Yes - Mode: Inference - Network: VGG19 - Device: OpenCL: 56.08 (SE +/- 0.01)
FP16: No - Mode: Training - Network: IMDB LSTM - Device: OpenCL: 145 (SE +/- 0.02)
FP16: No - Mode: Training - Network: Mobilenet - Device: OpenCL: 45.74 (SE +/- 0.01)
FP16: No - Mode: Training - Network: ResNet 50 - Device: OpenCL: 17.78 (SE +/- 0.00)
FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL: 220 (SE +/- 0.88)
FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL: 455 (SE +/- 0.37)
FP16: No - Mode: Inference - Network: ResNet 50 - Device: OpenCL: 142.77 (SE +/- 0.06)
FP16: Yes - Mode: Inference - Network: Mobilenet - Device: OpenCL: 603.17 (SE +/- 1.63)
FP16: Yes - Mode: Inference - Network: ResNet 50 - Device: OpenCL: 137 (SE +/- 0.03)
FP16: No - Mode: Training - Network: Inception V3 - Device: OpenCL: 14.90 (SE +/- 0.00)
FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL: 61.89 (SE +/- 0.08)
FP16: No - Mode: Inference - Network: Inception V3 - Device: OpenCL: 75.70 (SE +/- 0.01)
FP16: No - Mode: Inference - Network: NASNet Large - Device: OpenCL: 20.02 (SE +/- 0.01)
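The standard errors quoted with each result give a quick sense of run-to-run stability. A small sketch that turns them into relative percentages for a few of the results above (the mean and SE values are taken from the results; the script is illustrative only):

```python
# Relative standard error (SE as a percentage of the mean) for
# selected results above, each averaged over N = 3 runs.
results = {
    "Yes - Inference - Mobilenet": (603.17, 1.63),
    "No - Inference - IMDB LSTM": (220.0, 0.88),
    "No - Inference - Mobilenet": (455.0, 0.37),
}

for name, (mean, se) in results.items():
    print(f"{name}: {100 * se / mean:.2f}% relative SE")
```

Even the noisiest results here sit well under half a percent relative error, so the differences between networks and precisions above are far larger than the measurement noise.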