PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to offer up various benchmarks.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark plaidml
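Beyond the basic command, the test can also be installed ahead of time or run unattended. A minimal sketch, assuming the Phoronix Test Suite is installed; the option identifier passed to PRESET_OPTIONS is an illustrative assumption, not a verified name:

```shell
# Install the test profile and its dependencies up front (the benchmark
# command below would otherwise trigger the install automatically).
phoronix-test-suite install pts/plaidml

# Interactive run: PTS prompts for the FP16, Mode, Network, and Device options.
phoronix-test-suite benchmark pts/plaidml

# Unattended run: pre-seed the option prompts via PRESET_OPTIONS.
# "plaidml.device=OpenCL" is a hypothetical example; check
# "phoronix-test-suite info pts/plaidml" for the real option names.
PRESET_OPTIONS="plaidml.device=OpenCL" phoronix-test-suite batch-benchmark pts/plaidml
```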
Test Created: 10 January 2019
Last Updated: 26 September 2019
Test Maintainer: Michael Larabel
Test Type: Graphics
Average Install Time: 22 Seconds
Average Run Time: 8 Minutes, 22 Seconds
Test Dependencies: Python + OpenCL
Accolades: 20k+ Downloads
[Chart: PlaidML popularity statistics for pts/plaidml on OpenBenchmarking.org — public result uploads, reported installs, and test completions per month, January 2019 through April 2021.]
* Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. Data current as of Mon, 19 Apr 2021 00:43:43 GMT.
FP16 Option Popularity: No 81.5%, Yes 18.5%
Network Option Popularity: Mobilenet 39.5%, IMDB LSTM 20.0%, DenseNet 201 18.1%, VGG16 8.0%, ResNet 50 7.2%, VGG19 7.1%
Device Option Popularity: OpenCL 76.5%, CPU 23.5%
Revision History

pts/plaidml-1.0.4 [View Source] Thu, 26 Sep 2019 14:22:09 GMT Fixes for the latest upstream PlaidML, working around configuration file and library issues.
pts/plaidml-1.0.3 [View Source ] Sun, 27 Jan 2019 16:21:22 GMT Set RequiresDisplay = FALSE
pts/plaidml-1.0.2 [View Source ] Fri, 11 Jan 2019 12:06:07 GMT Always set --user for pip3 to avoid issues on some distros.
pts/plaidml-1.0.1 [View Source ] Thu, 10 Jan 2019 14:30:27 GMT Add --train option which works in some configurations.
pts/plaidml-1.0.0 [View Source ] Thu, 10 Jan 2019 10:51:47 GMT Initial commit of PlaidML deep learning framework benchmark, plaidbench.
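The plaidbench tool named in the initial commit can also be invoked directly, outside of the Phoronix Test Suite. A minimal sketch, assuming a pip-based install; the exact package names and flag spellings are worth double-checking against plaidbench --help:

```shell
# Install PlaidML's Keras backend and its benchmark front-end (the test
# profile does the equivalent with pip3 --user, per the 1.0.2 entry above).
pip3 install --user plaidml-keras plaidbench

# Pick the compute device interactively (OpenCL GPU vs. CPU).
plaidml-setup

# Inference benchmark of MobileNet on the configured device.
plaidbench keras mobilenet

# Training mode, as wired up by the --train option added in 1.0.1.
plaidbench keras --train mobilenet
```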
Performance Metrics

Analyze Test Configuration:

pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: Yes - Mode: Inference - Network: Mobilenet - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Training - Network: Mobilenet - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: VGG19 - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: VGG16 - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: Inception V3 - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: ResNet 50 - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: NASNet Large - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: ResNet 50 - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: Inception V3 - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: Yes - Mode: Inference - Network: ResNet 50 - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: Yes - Mode: Inference - Network: Mobilenet - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: VGG16 - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: Yes - Mode: Inference - Network: VGG16 - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: No - Mode: Training - Network: IMDB LSTM - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: No - Mode: Training - Network: Mobilenet - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: Yes - Mode: Inference - Network: Inception V3 - Device: OpenCL (Examples Per Second)
pts/plaidml-1.0.x - FP16: No - Mode: Training - Network: VGG16 - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Training - Network: VGG19 - Device: OpenCL (FPS)
pts/plaidml-1.0.x - FP16: No - Mode: Inference - Network: VGG19 - Device: OpenCL (Examples Per Second)

PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL

OpenBenchmarking.org metrics for this test profile configuration are based on 42 public results since 10 January 2019, with the latest data as of 30 January 2019.
Additional benchmark metrics will come after OpenBenchmarking.org has collected a sufficient data-set.
[Chart: OpenBenchmarking.org distribution of public results — FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL. 42 results ranging from 270 to 1336 Examples Per Second.]
Based on OpenBenchmarking.org data, the selected test / test configuration (PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL) has an average run-time of 2 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds predefined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
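That dynamic run-count policy can be sketched as a small script. Here run_once is a placeholder that emits a fixed FPS figure so the sketch always stops at the 3-run minimum, and the 3.5% cut-off is an illustrative threshold, not the actual PTS default:

```shell
#!/bin/sh
# Placeholder for one benchmark execution; a real runner would print the
# measured FPS of a single PlaidML run.
run_once() {
    echo "413.0"
}

results=""
count=0
deviation=100
# Repeat while fewer than 3 runs have completed, or while the relative
# standard deviation of the runs so far exceeds the (illustrative) 3.5% cut-off.
while [ "$count" -lt 3 ] || awk -v d="$deviation" 'BEGIN { exit !(d > 3.5) }'; do
    results="$results $(run_once)"
    count=$((count + 1))
    deviation=$(echo "$results" | awk '{
        for (i = 1; i <= NF; i++) sum += $i
        mean = sum / NF
        for (i = 1; i <= NF; i++) ss += ($i - mean) ^ 2
        printf "%.4f", (sqrt(ss / NF) / mean) * 100
    }')
    [ "$count" -ge 10 ] && break   # safety cap for a noisy placeholder
done
echo "runs=$count deviation=${deviation}%"
```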
[Chart: Time required to complete benchmark, in minutes — FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL. Min: 1 / Avg: 1 / Max: 1.]
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.2% .
[Chart: Average deviation between runs, in percent (fewer is better) — FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL. Min: 0 / Avg: 0.22 / Max: 2.]
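The deviation figure above is a standard deviation expressed as a percentage of the mean. How such a percentage can be derived from individual run results is sketched below, using three made-up FPS values in place of real PlaidML runs (population standard deviation is assumed; PTS's exact formula may differ):

```shell
#!/bin/sh
# Three illustrative per-run FPS results (not real PlaidML data).
runs="412.3 414.1 413.0"

deviation=$(echo "$runs" | awk '{
    for (i = 1; i <= NF; i++) sum += $i
    mean = sum / NF                       # arithmetic mean of the runs
    for (i = 1; i <= NF; i++) ss += ($i - mean) ^ 2
    stddev = sqrt(ss / NF)                # population standard deviation
    printf "%.2f", (stddev / mean) * 100  # expressed as a percent of the mean
}')
echo "deviation between runs: ${deviation}%"
```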
Recent Test Results
3 Systems - 31 Benchmark Results
AMD Ryzen Threadripper 2990WX 32-Core - ASUS ROG ZENITH EXTREME - AMD Family 17h
Ubuntu 18.04 - 4.15.0-43-generic - GNOME Shell 3.28.3
18 Systems - 34 Benchmark Results
Intel Core i9-9900K - ASUS PRIME Z390-A - Intel Cannon Lake PCH Shared SRAM
Ubuntu 18.10 - 4.20.3-042003-generic - GNOME Shell 3.30.1
12 Systems - 121 Benchmark Results
Intel Core i9-9900K - ASUS PRIME Z390-A - Intel Cannon Lake PCH Shared SRAM
Ubuntu 18.10 - 4.20.3-042003-generic - GNOME Shell 3.30.1
11 Systems - 121 Benchmark Results
Intel Core i9-9900K - ASUS PRIME Z390-A - Intel Cannon Lake PCH Shared SRAM
Ubuntu 18.10 - 4.20.3-042003-generic - GNOME Shell 3.30.1
1 System - 18 Benchmark Results
Intel Core i9-9900K - ASUS PRIME Z390-A - Intel Cannon Lake PCH Shared SRAM
Ubuntu 18.10 - 5.0.0-050000rc4-generic - GNOME Shell 3.30.1
15 Systems - 128 Benchmark Results
Intel Core i9-9900K - ASUS PRIME Z390-A - Intel Cannon Lake PCH Shared SRAM
Ubuntu 18.10 - 4.20.3-042003-generic - GNOME Shell 3.30.1
1 System - 23 Benchmark Results
Intel Core i9-9900K - ASUS PRIME Z390-A - Intel Cannon Lake PCH Shared SRAM
Ubuntu 18.10 - 4.20.3-042003-generic - GNOME Shell 3.30.1
Featured Graphics Comparison
1 System - 34 Benchmark Results
AMD Ryzen Threadripper 2950X 16-Core - MSI MEG X399 CREATION - AMD Family 17h
Ubuntu 18.10 - 4.18.0-13-generic - GNOME Shell 3.30.1
1 System - 16 Benchmark Results
Intel Core i9-9900K - ASUS PRIME Z390-A - Intel Cannon Lake PCH Shared SRAM
Ubuntu 18.04 - 4.20.0-042000-generic - Xfce 4.12