ubu20-wk1-ML-05sep2020

VMware testing on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2009061-NE-UBU20WK1M26&gru.

System Details (ubu20-wk1-ML-05sep2020)

Processor: 16 x AMD Ryzen Threadripper 3960X 24-Core (31 Cores)
Motherboard: Intel 440BX (6.00 BIOS)
Chipset: Intel 440BX/ZX/DX
Memory: 16GB
Disk: 193GB VMware Virtual S
Graphics: SVGA3D; build: RELEASE; LLVM;
Audio: Ensoniq ES1371/ES1373
Network: Intel 82545EM + 4 x AMD 79c970
OS: Ubuntu 20.04
Kernel: 5.4.0-45-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 2.1 Mesa 20.0.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1680x968
System Layer: VMware

Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor notes: CPU Microcode: 0x8301039
Graphics notes: Gallium3D XA
Python notes: Python 3.8.2
Security notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
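For comparing another guest against this configuration, the small standard-library sketch below collects a few of the equivalent fields from inside a VM. The key names simply mirror the table above and are illustrative, not part of the Phoronix Test Suite.

    # Minimal sketch: report a few of the system fields shown above from inside the guest.
    # Uses only the Python standard library; key names are chosen to mirror the table.
    import os
    import platform

    info = {
        "Processor": platform.processor() or "unknown",
        "CPU threads": os.cpu_count(),
        "Kernel": platform.release(),
        "Architecture": platform.machine(),
        "Python": platform.python_version(),
    }

    for key, value in info.items():
        print(f"{key}: {value}")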

Result Overview (ubu20-wk1-ML-05sep2020)

PlaidML - FP16: No - Inference - VGG16 - CPU: 16.65 FPS
PlaidML - FP16: No - Inference - ResNet 50 - CPU: 6.10 FPS
Numpy Benchmark: 345.45 (Score)
TensorFlow Lite - SqueezeNet: 155523 Microseconds
TensorFlow Lite - Inception V4: 2127763 Microseconds
TensorFlow Lite - NASNet Mobile: 161302 Microseconds
TensorFlow Lite - Mobilenet Float: 105987 Microseconds
TensorFlow Lite - Mobilenet Quant: 112000 Microseconds
TensorFlow Lite - Inception ResNet V2: 1909437 Microseconds
oneDNN - IP Batch 1D - f32 - CPU: 6.68402 ms
oneDNN - IP Batch All - f32 - CPU: 78.44 ms
oneDNN - IP Batch 1D - u8s8f32 - CPU: 3.80789 ms
oneDNN - IP Batch All - u8s8f32 - CPU: 43.87 ms
oneDNN - Convolution Batch Shapes Auto - f32 - CPU: 15.75 ms
oneDNN - Deconvolution Batch deconv_1d - f32 - CPU: 6.02825 ms
oneDNN - Deconvolution Batch deconv_3d - f32 - CPU: 9.92993 ms
oneDNN - Convolution Batch Shapes Auto - u8s8f32 - CPU: 16.58 ms
oneDNN - Deconvolution Batch deconv_1d - u8s8f32 - CPU: 7.91315 ms
oneDNN - Deconvolution Batch deconv_3d - u8s8f32 - CPU: 6.16964 ms
oneDNN - Recurrent Neural Network Training - f32 - CPU: 381.48 ms
oneDNN - Recurrent Neural Network Inference - f32 - CPU: 97.91 ms
oneDNN - Matrix Multiply Batch Shapes Transformer - f32 - CPU: 1.84022 ms
oneDNN - Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU: 2.90286 ms
DeepSpeech: 60.80 Seconds
Numenta Anomaly Benchmark - EXPoSE: 800.85 Seconds
Numenta Anomaly Benchmark - Relative Entropy: 16.98 Seconds
Numenta Anomaly Benchmark - Windowed Gaussian: 9.223 Seconds
Numenta Anomaly Benchmark - Earthgecko Skyline: 97.09 Seconds
Numenta Anomaly Benchmark - Bayesian Changepoint: 29.97 Seconds
Mlpack Benchmark - scikit_ica: 54.84 Seconds
Mlpack Benchmark - scikit_qda: 60.99 Seconds
Mlpack Benchmark - scikit_svm: 21.29 Seconds
Mlpack Benchmark - scikit_linearridgeregression: 2.71 Seconds
Scikit-Learn: 9.024 Seconds
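Each detailed result below reports a mean over N runs together with a standard error (SE +/- x, N = y). As a reference for how such figures are conventionally aggregated (the sample values are hypothetical, and this is not the Phoronix Test Suite's own code), a minimal sketch:

    # Aggregate repeated benchmark runs into a mean and standard error,
    # matching the "SE +/- x, N = y" style used in the results below.
    import math
    import statistics

    runs = [16.57, 16.70, 16.68]  # hypothetical per-run FPS values, N = 3

    n = len(runs)
    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / math.sqrt(n)  # standard error of the mean

    print(f"{mean:.2f} (SE +/- {se:.2f}, N = {n})")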

PlaidML

FP16: No - Mode: Inference - Network: VGG16 - Device: CPU

PlaidML - FPS, More Is Better: 16.65 (SE +/- 0.08, N = 3)

PlaidML

FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU

PlaidML - FPS, More Is Better: 6.10 (SE +/- 0.03, N = 3)
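For context, the sketch below shows how PlaidML-backed Keras inference throughput can be measured in FPS on the CPU device. It assumes the plaidml-keras and keras packages are installed, and it is a rough illustration, not the actual test profile used for the numbers above.

    # Rough, hypothetical analogue of the PlaidML inference results above:
    # time VGG16 forward passes and report images/second (FPS).
    import time
    import numpy as np

    import plaidml.keras
    plaidml.keras.install_backend()  # route Keras through PlaidML

    from keras.applications.vgg16 import VGG16

    model = VGG16(weights=None)      # architecture only; random weights are fine for timing
    batch = np.random.rand(8, 224, 224, 3).astype("float32")

    model.predict(batch)             # warm-up / compilation step

    iters = 10
    start = time.time()
    for _ in range(iters):
        model.predict(batch)
    elapsed = time.time() - start

    print(f"{iters * batch.shape[0] / elapsed:.2f} FPS")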

Numpy Benchmark

Numpy Benchmark - Score, More Is Better: 345.45 (SE +/- 1.82, N = 3)
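The Numpy Benchmark score above summarizes a set of timed NumPy kernels. A minimal, illustrative sketch of timing a few common operations follows; the operations and sizes are arbitrary and not the exact workload behind the 345.45 score.

    # Time a handful of NumPy kernels as a rough CPU throughput check.
    import time
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((2048, 2048))
    b = rng.standard_normal((2048, 2048))

    for name, fn in [
        ("matmul", lambda: a @ b),
        ("svd",    lambda: np.linalg.svd(a[:512, :512], compute_uv=False)),
        ("fft",    lambda: np.fft.fft2(a)),
    ]:
        start = time.time()
        fn()
        print(f"{name}: {time.time() - start:.3f} s")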

TensorFlow Lite

Model: SqueezeNet

TensorFlow Lite 2020-08-23 - Microseconds, Fewer Is Better: 155523 (SE +/- 1248.21, N = 3)

TensorFlow Lite

Model: Inception V4

TensorFlow Lite 2020-08-23 - Microseconds, Fewer Is Better: 2127763 (SE +/- 15033.65, N = 3)

TensorFlow Lite

Model: NASNet Mobile

TensorFlow Lite 2020-08-23 - Microseconds, Fewer Is Better: 161302 (SE +/- 356.93, N = 3)

TensorFlow Lite

Model: Mobilenet Float

TensorFlow Lite 2020-08-23 - Microseconds, Fewer Is Better: 105987 (SE +/- 73.39, N = 3)

TensorFlow Lite

Model: Mobilenet Quant

TensorFlow Lite 2020-08-23 - Microseconds, Fewer Is Better: 112000 (SE +/- 217.73, N = 3)

TensorFlow Lite

Model: Inception ResNet V2

TensorFlow Lite 2020-08-23 - Microseconds, Fewer Is Better: 1909437 (SE +/- 7346.01, N = 3)
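The TensorFlow Lite results above are reported in microseconds per inference. The sketch below times a single invocation with the tf.lite Python interpreter; the model file name is a placeholder, and this is not the harness used by the test profile, only an illustration of the measurement.

    # Hypothetical sketch: time one TensorFlow Lite inference in microseconds.
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_1.0_224.tflite")  # placeholder model file
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]

    data = np.random.rand(*inp["shape"]).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], data)

    interpreter.invoke()  # warm-up

    start = time.perf_counter()
    interpreter.invoke()
    print(f"{(time.perf_counter() - start) * 1e6:.0f} microseconds")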

oneDNN

Harness: IP Batch 1D - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 6.68402 (SE +/- 0.31662, N = 15; MIN: 3.4)
(CXX) g++ options for all oneDNN results in this report: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch All - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 78.44 (SE +/- 1.53, N = 12; MIN: 54.01)

oneDNN

Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 3.80789 (SE +/- 0.09458, N = 15; MIN: 1.68)

oneDNN

Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 43.87 (SE +/- 0.86, N = 15; MIN: 26.19)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 15.75 (SE +/- 0.34, N = 15; MIN: 10.34)

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 6.02825 (SE +/- 0.25985, N = 12; MIN: 2.79)

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 9.92993 (SE +/- 0.41189, N = 15; MIN: 4.48)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 16.58 (SE +/- 0.33, N = 15; MIN: 10.6)

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 7.91315 (SE +/- 0.18171, N = 15; MIN: 4.83)

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 6.16964 (SE +/- 0.05780, N = 3; MIN: 3.37)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 381.48 (SE +/- 7.75, N = 15; MIN: 289.36)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 97.91 (SE +/- 0.31, N = 3; MIN: 77.57)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 1.84022 (SE +/- 0.07927, N = 12; MIN: 1.14)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - ms, Fewer Is Better: 2.90286 (SE +/- 0.03513, N = 3; MIN: 2.43)
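The oneDNN results above come from its C++ benchmark harness and report the mean time in ms alongside the minimum observed run (MIN). Purely as an illustration of that reporting style, the sketch below times a NumPy matrix multiply as a stand-in primitive; it does not exercise oneDNN itself.

    # Loose illustration of the oneDNN reporting style above:
    # mean time in ms over repeated runs, plus the minimum observed run.
    import time
    import numpy as np

    a = np.random.rand(1024, 1024).astype(np.float32)
    b = np.random.rand(1024, 1024).astype(np.float32)

    times_ms = []
    for _ in range(15):                      # N = 15, mirroring several entries above
        start = time.perf_counter()
        a @ b
        times_ms.append((time.perf_counter() - start) * 1000.0)

    print(f"mean: {sum(times_ms) / len(times_ms):.2f} ms, MIN: {min(times_ms):.2f} ms")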

DeepSpeech

DeepSpeech 0.6 - Seconds, Fewer Is Better: 60.80 (SE +/- 0.29, N = 3)

Numenta Anomaly Benchmark

Detector: EXPoSE

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better: 800.85 (SE +/- 23.61, N = 9)

Numenta Anomaly Benchmark

Detector: Relative Entropy

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better: 16.98 (SE +/- 0.23, N = 3)

Numenta Anomaly Benchmark

Detector: Windowed Gaussian

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better: 9.223 (SE +/- 0.063, N = 3)

Numenta Anomaly Benchmark

Detector: Earthgecko Skyline

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better: 97.09 (SE +/- 0.99, N = 3)

Numenta Anomaly Benchmark

Detector: Bayesian Changepoint

Numenta Anomaly Benchmark 1.1 - Seconds, Fewer Is Better: 29.97 (SE +/- 0.28, N = 3)
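Of the detectors timed above, Windowed Gaussian is conceptually the simplest: each point is scored by how far it falls in the tail of a Gaussian fitted to a sliding window of recent values. The sketch below illustrates that idea only; it is not NAB's implementation or its scoring pipeline.

    # Minimal sketch of a windowed-Gaussian anomaly score:
    # fit mean/std over a sliding window, score each point by its two-sided tail probability.
    import math
    from collections import deque

    def windowed_gaussian_scores(values, window_size=64):
        window = deque(maxlen=window_size)
        scores = []
        for x in values:
            if len(window) >= 2:
                mean = sum(window) / len(window)
                var = sum((v - mean) ** 2 for v in window) / len(window)
                std = math.sqrt(var) or 1e-9
                z = abs(x - mean) / std
                # anomaly score in [0, 1): probability mass inside +/- z standard deviations
                scores.append(1.0 - math.erfc(z / math.sqrt(2.0)))
            else:
                scores.append(0.0)
            window.append(x)
        return scores

    print(windowed_gaussian_scores([1.0, 1.1, 0.9, 1.0, 5.0, 1.0], window_size=4))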

Mlpack Benchmark

Benchmark: scikit_ica

Mlpack Benchmark - Seconds, Fewer Is Better: 54.84 (SE +/- 0.09, N = 3)

Mlpack Benchmark

Benchmark: scikit_qda

Mlpack Benchmark - Seconds, Fewer Is Better: 60.99 (SE +/- 0.31, N = 3)

Mlpack Benchmark

Benchmark: scikit_svm

Mlpack Benchmark - Seconds, Fewer Is Better: 21.29 (SE +/- 0.04, N = 3)

Mlpack Benchmark

Benchmark: scikit_linearridgeregression

Mlpack Benchmark - Seconds, Fewer Is Better: 2.71 (SE +/- 0.02, N = 3)
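The scikit_* entries in the Mlpack Benchmark suite appear to time scikit-learn implementations of the corresponding algorithms. As a rough, self-contained analogue (synthetic data with arbitrary sizes, so the wall-clock times will not match the suite's own datasets):

    # Time a few scikit-learn estimators on synthetic data, loosely mirroring
    # the scikit_ica / scikit_qda / scikit_svm / scikit_linearridgeregression entries above.
    import time
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5000, 20))
    y = (X[:, 0] > 0).astype(int)

    for name, estimator in [
        ("ica", FastICA(n_components=10, random_state=0)),
        ("qda", QuadraticDiscriminantAnalysis()),
        ("svm", SVC()),
        ("linear ridge regression", Ridge()),
    ]:
        start = time.time()
        if name == "ica":
            estimator.fit(X)          # unsupervised
        else:
            estimator.fit(X, y)
        print(f"{name}: {time.time() - start:.3f} s")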

Scikit-Learn

Scikit-Learn 0.22.1 - Seconds, Fewer Is Better: 9.024 (SE +/- 0.027, N = 3)


Phoronix Test Suite v10.8.5