ml-run1

AMD Ryzen Threadripper 2920X 12-Core testing with an MSI X399 SLI PLUS (MS-7B09) v2.0 (A.70 BIOS) and ASUS NVIDIA GeForce RTX 2080 Ti 11GB on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2008010-NE-MLRUN145899

Result Identifier: ml-run1
Date: August 01 2020
Run Test Duration: 2 Hours, 28 Minutes

Processor: AMD Ryzen Threadripper 2920X 12-Core (12 Cores / 24 Threads)
Motherboard: MSI X399 SLI PLUS (MS-7B09) v2.0 (A.70 BIOS)
Chipset: AMD 17h
Memory: 64GB
Disk: 1000GB Samsung SSD 970 EVO 1TB
Graphics: ASUS NVIDIA GeForce RTX 2080 Ti 11GB (1350/7000MHz)
Audio: Realtek ALC1220
Monitor: E24
Network: Intel I211
OS: Ubuntu 18.04
Kernel: 5.4.0-42-generic (x86_64)
Desktop: GNOME Shell 3.28.4
Display Server: X Server 1.20.8
Display Driver: NVIDIA 440.100
OpenGL: 4.6.0
OpenCL: OpenCL 1.2 CUDA 10.2.185
Compiler: GCC 7.5.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- CPU Microcode: 0x800820b
- GPU Compute Cores: 4352
- Python 2.7.17 + Python 3.6.9
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (ml-run1)

onednn: IP Batch 1D - f32 - CPU: 5.39728 ms
onednn: IP Batch All - f32 - CPU: 71.4652 ms
onednn: IP Batch 1D - u8s8f32 - CPU: 4.44911 ms
onednn: IP Batch All - u8s8f32 - CPU: 48.0483 ms
onednn: Convolution Batch Shapes Auto - f32 - CPU: 10.5290 ms
onednn: Deconvolution Batch deconv_1d - f32 - CPU: 5.84482 ms
onednn: Deconvolution Batch deconv_3d - f32 - CPU: 9.17598 ms
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU: 13.0359 ms
onednn: Deconvolution Batch deconv_1d - u8s8f32 - CPU: 7.83018 ms
onednn: Deconvolution Batch deconv_3d - u8s8f32 - CPU: 7.07491 ms
onednn: Recurrent Neural Network Training - f32 - CPU: 457.178 ms
onednn: Recurrent Neural Network Inference - f32 - CPU: 93.7784 ms
onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU: 2.99577 ms
onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU: 2.80790 ms
numpy: 287.74 (Score)
deepspeech: 88.64066 sec
rbenchmark: 0.2495 sec
tensorflow: Cifar10: 81.02 sec
plaidml: No - Inference - VGG16 - CPU: 11.62 FPS
plaidml: No - Inference - ResNet 50 - CPU: 4.90 FPS
numenta-nab: EXPoSE: 941.262 sec
numenta-nab: Relative Entropy: 20.475 sec
numenta-nab: Windowed Gaussian: 9.601 sec
numenta-nab: Earthgecko Skyline: 113.030 sec
numenta-nab: Bayesian Changepoint: 50.119 sec
mlpack: scikit_ica: 62.19 sec
mlpack: scikit_qda: 175.63 sec
mlpack: scikit_svm: 14.19 sec
mlpack: scikit_linearridgeregression: 6.09 sec
scikit-learn: 14.729 sec

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
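
The "IP" harnesses exercise oneDNN's inner-product primitive, i.e. a fully-connected layer, in both f32 and quantized u8s8f32 precision. As a conceptual sketch of what that primitive computes (not the benchdnn code itself, and with made-up shapes), the forward pass reduces to a matrix multiply plus a bias add:

```python
import numpy as np

# Conceptual sketch of an "inner product" (fully-connected) forward pass,
# the operation behind the oneDNN IP harnesses. The shapes are illustrative
# assumptions, not the problem sizes benchdnn actually runs.
batch, in_features, out_features = 256, 1024, 1024

x = np.random.rand(batch, in_features).astype(np.float32)         # activations
w = np.random.rand(out_features, in_features).astype(np.float32)  # weights
b = np.random.rand(out_features).astype(np.float32)               # bias

y = x @ w.T + b   # one fully-connected forward pass over the batch
print(y.shape)    # (256, 1024)
```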

oneDNN 1.5 (ms, fewer is better; Engine: CPU)

IP Batch 1D - f32: 5.39728 (SE +/- 0.06856, N = 3; MIN: 4.9)
IP Batch All - f32: 71.47 (SE +/- 0.36, N = 3; MIN: 66.97)
IP Batch 1D - u8s8f32: 4.44911 (SE +/- 0.01155, N = 3; MIN: 4.24)
IP Batch All - u8s8f32: 48.05 (SE +/- 0.15, N = 3; MIN: 46.2)
Convolution Batch Shapes Auto - f32: 10.53 (SE +/- 0.01, N = 3; MIN: 10.19)
Deconvolution Batch deconv_1d - f32: 5.84482 (SE +/- 0.02020, N = 3; MIN: 5.37)
Deconvolution Batch deconv_3d - f32: 9.17598 (SE +/- 0.07358, N = 3; MIN: 8.71)
Convolution Batch Shapes Auto - u8s8f32: 13.04 (SE +/- 0.02, N = 3; MIN: 11.44)
Deconvolution Batch deconv_1d - u8s8f32: 7.83018 (SE +/- 0.04665, N = 3; MIN: 7.13)
Deconvolution Batch deconv_3d - u8s8f32: 7.07491 (SE +/- 0.00995, N = 3; MIN: 6.88)
Recurrent Neural Network Training - f32: 457.18 (SE +/- 1.26, N = 3; MIN: 434.86)
Recurrent Neural Network Inference - f32: 93.78 (SE +/- 0.14, N = 3; MIN: 89.36)
Matrix Multiply Batch Shapes Transformer - f32: 2.99577 (SE +/- 0.00327, N = 3; MIN: 2.84)
Matrix Multiply Batch Shapes Transformer - u8s8f32: 2.80790 (SE +/- 0.00980, N = 3; MIN: 2.58)

1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.
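
As a rough illustration of the sort of array kernels such a score aggregates (this is not the actual OpenBenchmarking numpy test script; the kernels, sizes, and repeat counts are assumptions), a simple timing harness could look like:

```python
import time
import numpy as np

# Illustrative NumPy micro-benchmark; sizes and kernels are assumptions,
# not those of the OpenBenchmarking numpy test profile.
def time_kernel(fn, repeats=10):
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)
v = np.random.rand(n * n)

print("matmul:", time_kernel(lambda: a @ b), "s")
print("svd   :", time_kernel(lambda: np.linalg.svd(a[:512, :512]), repeats=3), "s")
print("sort  :", time_kernel(lambda: np.sort(v)), "s")
print("fft   :", time_kernel(lambda: np.fft.fft(v)), "s")
```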

Numpy Benchmark (Score, more is better)
ml-run1: 287.74 (SE +/- 0.10, N = 3)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.
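
The exact Python API differs between DeepSpeech releases, so the following is only a hedged sketch of how the timed speech-to-text step might be driven from the deepspeech package. The file paths, beam width, and constructor signature are assumptions for the 0.6-era API, not details taken from this test profile:

```python
import time
import wave
import numpy as np
from deepspeech import Model  # pip install deepspeech

# Assumed paths and parameters -- not from the test profile.
MODEL_PATH = "deepspeech-0.6.0-models/output_graph.pbmm"
AUDIO_PATH = "recording.wav"   # ~3 minute, 16 kHz, 16-bit mono WAV
BEAM_WIDTH = 500

# DeepSpeech 0.6.x took the beam width in the constructor; later releases
# changed this signature, so adjust for your installed version.
model = Model(MODEL_PATH, BEAM_WIDTH)

with wave.open(AUDIO_PATH, "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), np.int16)

start = time.perf_counter()
text = model.stt(audio)        # the speech-to-text step the benchmark times
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f} s: {text[:80]}...")
```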

DeepSpeech 0.6 (Seconds, fewer is better)
ml-run1: 88.64 (SE +/- 0.60, N = 3)

R Benchmark

This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.

R Benchmark (Seconds, fewer is better)
ml-run1: 0.2495 (SE +/- 0.0008, N = 3)
1. R scripting front-end version 3.4.4 (2018-03-15)

Tensorflow

This is a benchmark of the TensorFlow deep learning framework using the CIFAR-10 data set. Learn more via the OpenBenchmarking.org test page.
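
As a hedged sketch of the kind of workload this profile times (the model, epoch count, and batch size below are illustrative assumptions, not the test profile's actual script), a small tf.keras CIFAR-10 training run can be timed like this:

```python
import time
import tensorflow as tf

# Illustrative CIFAR-10 training timing with tf.keras; hyperparameters are
# assumptions, not those of the OpenBenchmarking tensorflow test profile.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

start = time.perf_counter()
model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=2)
print(f"Training time: {time.perf_counter() - start:.2f} s")
```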

Tensorflow - Build: Cifar10 (Seconds, fewer is better)
ml-run1: 81.02 (SE +/- 0.12, N = 3)

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.
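
As a hedged sketch of CPU inference through PlaidML's Keras backend (the batch size, iteration count, and network setup are assumptions rather than the test profile's own harness settings), throughput can be estimated roughly like this:

```python
import time
import numpy as np

# Route Keras through the PlaidML backend before importing keras.
import plaidml.keras
plaidml.keras.install_backend()

from keras.applications.vgg16 import VGG16

# Illustrative inference timing; batch size and iterations are assumptions,
# not the settings used by the OpenBenchmarking plaidml test profile.
model = VGG16(weights=None)          # random weights are enough for timing
batch = np.random.rand(8, 224, 224, 3).astype("float32")

model.predict(batch)                 # warm-up / compile step
iters = 10
start = time.perf_counter()
for _ in range(iters):
    model.predict(batch)
elapsed = time.perf_counter() - start
print(f"{iters * batch.shape[0] / elapsed:.2f} FPS")
```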

PlaidML (FPS, more is better)

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU: 11.62 (SE +/- 0.16, N = 3)
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU: 4.90 (SE +/- 0.01, N = 3)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
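
One of the detectors timed here, the windowed Gaussian, is conceptually simple: score each new point by how far it falls from the mean of a sliding window, measured in standard deviations. The sketch below is an illustrative reimplementation of that idea (the window size and scoring transform are assumptions), not NAB's actual detector code:

```python
from collections import deque
import math

def windowed_gaussian_scores(values, window=100):
    """Anomaly score per point, based on a Gaussian fitted to the previous
    `window` values. Illustrative only; not NAB's implementation."""
    history = deque(maxlen=window)
    scores = []
    for x in values:
        if len(history) >= 2:
            mean = sum(history) / len(history)
            var = sum((v - mean) ** 2 for v in history) / len(history)
            std = math.sqrt(var) or 1e-9
            z = abs(x - mean) / std
            # Map the z-score into (0, 1): larger deviation -> higher score.
            scores.append(1.0 - math.erfc(z / math.sqrt(2)))
        else:
            scores.append(0.0)
        history.append(x)
    return scores

# Example: a flat series with one spike near the end.
series = [1.0] * 200 + [10.0] + [1.0] * 20
print(max(windowed_gaussian_scores(series)))  # the spike scores near 1.0
```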

Numenta Anomaly Benchmark 1.1 (Seconds, fewer is better)

Detector: EXPoSE: 941.26 (SE +/- 9.98, N = 3)
Detector: Relative Entropy: 20.48 (SE +/- 0.35, N = 3)
Detector: Windowed Gaussian: 9.601 (SE +/- 0.068, N = 3)
Detector: Earthgecko Skyline: 113.03 (SE +/- 1.22, N = 3)
Detector: Bayesian Changepoint: 50.12 (SE +/- 0.77, N = 3)

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
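
Judging by the scikit_* benchmark names, these sub-tests time scikit-learn implementations of ICA, QDA, SVM, and linear ridge regression. A hedged sketch of timing those estimators on synthetic data (the dataset shape and estimator parameters are assumptions, not the benchmark scripts' settings) could look like:

```python
import time
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.linear_model import Ridge

# Synthetic data with assumed sizes; not the datasets the mlpack scripts use.
rng = np.random.RandomState(0)
X = rng.rand(5000, 50)
y = rng.randint(0, 2, size=5000)

estimators = {
    "scikit_ica": FastICA(n_components=10, random_state=0),
    "scikit_qda": QuadraticDiscriminantAnalysis(),
    "scikit_svm": SVC(kernel="rbf"),
    "scikit_linearridgeregression": Ridge(alpha=1.0),
}

for name, est in estimators.items():
    start = time.perf_counter()
    if name == "scikit_ica":
        est.fit(X)          # unsupervised: no labels needed
    else:
        est.fit(X, y)
    print(f"{name}: {time.perf_counter() - start:.3f} s")
```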

Mlpack Benchmark (Seconds, fewer is better)

Benchmark: scikit_ica: 62.19 (SE +/- 0.14, N = 3)
Benchmark: scikit_qda: 175.63 (SE +/- 0.31, N = 3)
Benchmark: scikit_svm: 14.19 (SE +/- 0.17, N = 6)
Benchmark: scikit_linearridgeregression: 6.09 (SE +/- 0.05, N = 3)

Scikit-Learn

Scikit-learn is a Python module for machine learning. Learn more via the OpenBenchmarking.org test page.

Scikit-Learn 0.22.1 (Seconds, fewer is better)
ml-run1: 14.73 (SE +/- 0.03, N = 3)