ml-run1
AMD Ryzen Threadripper 2920X 12-Core testing with a MSI X399 SLI PLUS (MS-7B09) v2.0 (A.70 BIOS) and ASUS NVIDIA GeForce RTX 2080 Ti 11GB on Ubuntu 18.04 via the Phoronix Test Suite.

ml-run1:
  Processor: AMD Ryzen Threadripper 2920X 12-Core (12 Cores / 24 Threads), Motherboard: MSI X399 SLI PLUS (MS-7B09) v2.0 (A.70 BIOS), Chipset: AMD 17h, Memory: 64GB, Disk: 1000GB Samsung SSD 970 EVO 1TB, Graphics: ASUS NVIDIA GeForce RTX 2080 Ti 11GB (1350/7000MHz), Audio: Realtek ALC1220, Monitor: E24, Network: Intel I211
  OS: Ubuntu 18.04, Kernel: 5.4.0-42-generic (x86_64), Desktop: GNOME Shell 3.28.4, Display Server: X Server 1.20.8, Display Driver: NVIDIA 440.100, OpenGL: 4.6.0, OpenCL: OpenCL 1.2 CUDA 10.2.185, Compiler: GCC 7.5.0, File-System: ext4, Screen Resolution: 1920x1080

oneDNN 1.5
Harness: IP Batch 1D - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 5.39728 |============================================================

oneDNN 1.5
Harness: IP Batch All - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 71.47 |==============================================================

oneDNN 1.5
Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 4.44911 |============================================================

oneDNN 1.5
Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 48.05 |==============================================================

oneDNN 1.5
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 10.53 |==============================================================

oneDNN 1.5
Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 5.84482 |============================================================

oneDNN 1.5
Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 9.17598 |============================================================

oneDNN 1.5
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 13.04 |==============================================================

oneDNN 1.5
Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 7.83018 |============================================================

oneDNN 1.5
Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 7.07491 |============================================================

oneDNN 1.5
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 457.18 |=============================================================

oneDNN 1.5
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 93.78 |==============================================================

oneDNN 1.5
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 2.99577 |============================================================

oneDNN 1.5
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
ml-run1 . 2.80790 |============================================================

Numpy Benchmark
Score > Higher Is Better
ml-run1 . 287.74 |=============================================================

DeepSpeech 0.6
Seconds < Lower Is Better
ml-run1 . 88.64 |==============================================================

R Benchmark
Seconds < Lower Is Better
ml-run1 . 0.2495 |=============================================================

Tensorflow
Build: Cifar10
Seconds < Lower Is Better
ml-run1 . 81.02 |==============================================================

PlaidML
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
FPS > Higher Is Better
ml-run1 . 11.62 |==============================================================

PlaidML
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
FPS > Higher Is Better
ml-run1 . 4.90 |===============================================================

Numenta Anomaly Benchmark 1.1
Detector: EXPoSE
Seconds < Lower Is Better
ml-run1 . 941.26 |=============================================================

Numenta Anomaly Benchmark 1.1
Detector: Relative Entropy
Seconds < Lower Is Better
ml-run1 . 20.48 |==============================================================

Numenta Anomaly Benchmark 1.1
Detector: Windowed Gaussian
Seconds < Lower Is Better
ml-run1 . 9.601 |==============================================================

Numenta Anomaly Benchmark 1.1
Detector: Earthgecko Skyline
Seconds < Lower Is Better
ml-run1 . 113.03 |=============================================================

Numenta Anomaly Benchmark 1.1
Detector: Bayesian Changepoint
Seconds < Lower Is Better
ml-run1 . 50.12 |==============================================================

Mlpack Benchmark
Benchmark: scikit_ica
Seconds < Lower Is Better
ml-run1 . 62.19 |==============================================================

Mlpack Benchmark
Benchmark: scikit_qda
Seconds < Lower Is Better
ml-run1 . 175.63 |=============================================================

Mlpack Benchmark
Benchmark: scikit_svm
Seconds < Lower Is Better
ml-run1 . 14.19 |==============================================================

Mlpack Benchmark
Benchmark: scikit_linearridgeregression
Seconds < Lower Is Better
ml-run1 . 6.09 |===============================================================

Scikit-Learn 0.22.1
Seconds < Lower Is Better
ml-run1 . 14.73 |==============================================================