Coder Radio XPS 13 ML Ubuntu Benchmark: Intel Core i5-1135G7 testing with a Dell 0THX8P (1.1.1 BIOS) and Intel Xe 3GB on Ubuntu 20.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2101015-FI-CODERRADI28

XPS 13 Tiger Lake Ubuntu 20.04

Processor: Intel Core i5-1135G7 @ 4.20GHz (4 Cores / 8 Threads), Motherboard: Dell 0THX8P (1.1.1 BIOS), Chipset: Intel Device a0ef, Memory: 16GB, Disk: Micron 2300 NVMe 512GB, Graphics: Intel Xe 3GB (1300MHz), Audio: Realtek ALC289, Network: Intel Device a0f0
OS: Ubuntu 20.04, Kernel: 5.6.0-1036-oem (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.6 Mesa 20.0.8, Vulkan: 1.2.131, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1200
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 1.9.1
Python Notes: Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Coder Radio XPS 13 ML Ubuntu Benchmark - Result Summary (XPS 13 Tiger Lake Ubuntu 20.04)

opencv: DNN - Deep Neural Network = 5351
scikit-learn = 17.900
mlpack: scikit_linearridgeregression = 13.50
mlpack: scikit_svm = 34.66
mlpack: scikit_qda = 138.24
mlpack: scikit_ica = 123.23
ai-benchmark: Device AI Score = 1186
ai-benchmark: Device Training Score = 630
ai-benchmark: Device Inference Score = 556
numenta-nab: Bayesian Changepoint = 81.062
numenta-nab: Earthgecko Skyline = 333.639
numenta-nab: Windowed Gaussian = 27.702
numenta-nab: Relative Entropy = 48.609
numenta-nab: EXPoSE = 1139.241
plaidml: No - Inference - ResNet 50 - CPU = 3.27
plaidml: No - Inference - VGG16 - CPU = 6.47
ncnn: Vulkan GPU - squeezenet_ssd = 39.53
ncnn: Vulkan GPU - yolov4-tiny = 44.68
ncnn: Vulkan GPU - resnet50 = 51.13
ncnn: Vulkan GPU - alexnet = 19.23
ncnn: Vulkan GPU - resnet18 = 22.25
ncnn: Vulkan GPU - vgg16 = 69.16
ncnn: Vulkan GPU - googlenet = 24.58
ncnn: Vulkan GPU - blazeface = 2.80
ncnn: Vulkan GPU-v3-v3 - mobilenet-v3 = 6.70
ncnn: Vulkan GPU-v2-v2 - mobilenet-v2 = 7.71
ncnn: Vulkan GPU - mobilenet = 35.11
ncnn: CPU - regnety_400m = 21.11
ncnn: CPU - squeezenet_ssd = 39.61
ncnn: CPU - yolov4-tiny = 44.65
ncnn: CPU - resnet50 = 51.01
ncnn: CPU - alexnet = 19.06
ncnn: CPU - resnet18 = 22.07
ncnn: CPU - vgg16 = 68.50
ncnn: CPU - googlenet = 25.01
ncnn: CPU - blazeface = 2.85
ncnn: CPU - efficientnet-b0 = 12.53
ncnn: CPU-v3-v3 - mobilenet-v3 = 6.71
ncnn: CPU-v2-v2 - mobilenet-v2 = 7.78
ncnn: CPU - mobilenet = 35.03
mnn: inception-v3 = 68.523
mnn: MobileNetV2_224 = 6.238
mnn: resnet-v2-50 = 54.691
tensorflow-lite: Inception ResNet V2 = 8329360
tensorflow-lite: Mobilenet Quant = 419506
tensorflow-lite: Mobilenet Float = 424309
tensorflow-lite: NASNet Mobile = 455525
tensorflow-lite: Inception V4 = 9220230
tensorflow-lite: SqueezeNet = 627487
rnnoise = 31.947
numpy = 293.18
onednn: Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU = 11.8072
onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU = 2.12307
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU = 4516.60
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU = 8859.05
onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU = 3.93298
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU = 4528.11
onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU = 52.6824
onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU = 57.0546
onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU = 52.4284
onednn: Recurrent Neural Network Training - u8s8f32 - CPU = 8875.33
onednn: Recurrent Neural Network Training - f32 - CPU = 8862.65
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU = 3.13684
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU = 2.94262
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU = 7.93450
onednn: Deconvolution Batch shapes_3d - f32 - CPU = 13.4125
onednn: Deconvolution Batch shapes_1d - f32 - CPU = 14.6326
onednn: Convolution Batch Shapes Auto - f32 - CPU = 11.5632
onednn: IP Shapes 3D - bf16bf16bf16 - CPU = 6.39979
onednn: IP Shapes 1D - bf16bf16bf16 - CPU = 25.8282
onednn: IP Shapes 3D - u8s8f32 - CPU = 2.62981
onednn: IP Shapes 1D - u8s8f32 - CPU = 2.43051
onednn: IP Shapes 3D - f32 - CPU = 6.59350
onednn: IP Shapes 1D - f32 - CPU = 10.2773
ncnn: Vulkan GPU - regnety_400m = 21.01
ncnn: Vulkan GPU - efficientnet-b0 = 11.83
ncnn: Vulkan GPU - mnasnet = 8.13
ncnn: Vulkan GPU - shufflenet-v2 = 10.30
ncnn: CPU - mnasnet = 8.10
ncnn: CPU - shufflenet-v2 = 9.99
mnn: mobilenet-v1-1.0 = 8.320
mnn: SqueezeNetV1.0 = 11.256
onednn: Recurrent Neural Network Inference - f32 - CPU = 5736.43
OpenCV This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
OpenCV 4.4 - Test: DNN - Deep Neural Network - ms, Fewer Is Better: 5351 (SE +/- 87.51, N = 3)
1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt
Numenta Anomaly Benchmark Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
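Among the detectors timed in this result file is the Windowed Gaussian detector (27.70 seconds in this run). As a rough illustration of the underlying idea only, not NAB's actual implementation or scoring, a sliding-window z-score detector can be sketched in Python:

```python
from collections import deque
import math

def windowed_gaussian_scores(values, window=50, threshold=3.0):
    """Flag indices whose z-score against a trailing window exceeds threshold.

    Illustrative sketch only; NAB's real detector and its window-based
    scoring mechanism differ.
    """
    history = deque(maxlen=window)  # trailing window of recent samples
    anomalies = []
    for i, x in enumerate(values):
        if len(history) >= 2:
            mean = sum(history) / len(history)
            var = sum((v - mean) ** 2 for v in history) / (len(history) - 1)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append(i)
        history.append(x)
    return anomalies

# A gently oscillating series with one large spike at index 40:
series = [10.0 + 0.1 * ((i % 5) - 2) for i in range(60)]
series[40] = 100.0
print(windowed_gaussian_scores(series))  # [40]
```

NAB additionally applies probationary periods and a windowed scoring scheme, so this sketch conveys only the detector's core computation, not how the benchmark grades it.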
Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint - Seconds, Fewer Is Better: 81.06 (SE +/- 1.06, N = 4)
PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU - FPS, More Is Better: 6.47 (SE +/- 0.02, N = 3)
NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny - ms, Fewer Is Better: 44.68 (SE +/- 0.57, N = 3; MIN: 43.68 / MAX: 64.97)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: Vulkan GPU - Model: resnet50 - ms, Fewer Is Better: 51.13 (SE +/- 0.13, N = 3; MIN: 50.6 / MAX: 85.33)
NCNN 20201218 - Target: Vulkan GPU - Model: alexnet - ms, Fewer Is Better: 19.23 (SE +/- 0.20, N = 3; MIN: 18.73 / MAX: 22.46)
NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 - ms, Fewer Is Better: 22.25 (SE +/- 0.17, N = 3; MIN: 21.22 / MAX: 66.35)
NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 - ms, Fewer Is Better: 69.16 (SE +/- 0.54, N = 3; MIN: 67.22 / MAX: 89.29)
NCNN 20201218 - Target: Vulkan GPU - Model: googlenet - ms, Fewer Is Better: 24.58 (SE +/- 0.60, N = 3; MIN: 20.79 / MAX: 125.42)
NCNN 20201218 - Target: Vulkan GPU - Model: blazeface - ms, Fewer Is Better: 2.80 (SE +/- 0.06, N = 3; MIN: 2.58 / MAX: 5.23)
NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 - ms, Fewer Is Better: 6.70 (SE +/- 0.01, N = 3; MIN: 6.52 / MAX: 9.63)
NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 - ms, Fewer Is Better: 7.71 (SE +/- 0.03, N = 3; MIN: 7.52 / MAX: 10.75)
NCNN 20201218 - Target: Vulkan GPU - Model: mobilenet - ms, Fewer Is Better: 35.11 (SE +/- 0.48, N = 3; MIN: 34.07 / MAX: 62.97)
NCNN 20201218 - Target: CPU - Model: regnety_400m - ms, Fewer Is Better: 21.11 (SE +/- 0.17, N = 3; MIN: 18.89 / MAX: 41.4)
NCNN 20201218 - Target: CPU - Model: squeezenet_ssd - ms, Fewer Is Better: 39.61 (SE +/- 0.23, N = 3; MIN: 39.09 / MAX: 57.3)
NCNN 20201218 - Target: CPU - Model: yolov4-tiny - ms, Fewer Is Better: 44.65 (SE +/- 0.55, N = 3; MIN: 43.73 / MAX: 74)
NCNN 20201218 - Target: CPU - Model: resnet50 - ms, Fewer Is Better: 51.01 (SE +/- 0.12, N = 3; MIN: 50.55 / MAX: 116.37)
NCNN 20201218 - Target: CPU - Model: alexnet - ms, Fewer Is Better: 19.06 (SE +/- 0.02, N = 3; MIN: 18.7 / MAX: 27.76)
NCNN 20201218 - Target: CPU - Model: resnet18 - ms, Fewer Is Better: 22.07 (SE +/- 0.02, N = 3; MIN: 21.29 / MAX: 25.98)
NCNN 20201218 - Target: CPU - Model: vgg16 - ms, Fewer Is Better: 68.50 (SE +/- 0.25, N = 3; MIN: 67.14 / MAX: 86.96)
NCNN 20201218 - Target: CPU - Model: googlenet - ms, Fewer Is Better: 25.01 (SE +/- 0.11, N = 3; MIN: 23.82 / MAX: 51.99)
NCNN 20201218 - Target: CPU - Model: blazeface - ms, Fewer Is Better: 2.85 (SE +/- 0.02, N = 3; MIN: 2.56 / MAX: 3.12)
NCNN 20201218 - Target: CPU - Model: efficientnet-b0 - ms, Fewer Is Better: 12.53 (SE +/- 0.04, N = 3; MIN: 12.32 / MAX: 31.52)
NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 - ms, Fewer Is Better: 6.71 (SE +/- 0.02, N = 3; MIN: 6.52 / MAX: 10.19)
NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 - ms, Fewer Is Better: 7.78 (SE +/- 0.05, N = 3; MIN: 7.55 / MAX: 25.73)
NCNN 20201218 - Target: CPU - Model: mobilenet - ms, Fewer Is Better: 35.03 (SE +/- 0.56, N = 3; MIN: 34.13 / MAX: 54.59)
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 - Model: inception-v3 - ms, Fewer Is Better: 68.52 (SE +/- 0.25, N = 11; MIN: 66.62 / MAX: 223.45)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 - ms, Fewer Is Better: 6.238 (SE +/- 0.012, N = 11; MIN: 6.15 / MAX: 28.03)
Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 - ms, Fewer Is Better: 54.69 (SE +/- 0.17, N = 11; MIN: 53.32 / MAX: 132.87)
RNNoise RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
RNNoise 2020-06-28 - Seconds, Fewer Is Better: 31.95 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
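Since the test profile denoises a 26-minute sample and this run took 31.95 seconds of wall time, the denoiser ran well faster than real time. A quick back-of-the-envelope check:

```python
# RNNoise test: a 26-minute (1560 s) audio sample denoised in 31.95 s.
sample_seconds = 26 * 60        # length of the RAW audio sample
denoise_seconds = 31.95         # measured result from this run
realtime_factor = sample_seconds / denoise_seconds
print(f"~{realtime_factor:.1f}x faster than real time")  # ~48.8x
```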
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
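The data-type triples in the harness names below denote the source, weight, and destination types: f32 is 32-bit float throughout, bf16bf16bf16 is bfloat16 throughout, and u8s8f32 pairs uint8 activations with int8 weights and a 32-bit float result. A minimal sketch of a u8s8f32-style dot product (illustrative only; the values and per-tensor scales here are hypothetical, and this is not oneDNN's implementation):

```python
def quantized_dot(activations_u8, weights_s8, act_scale, wt_scale):
    """u8s8f32-style dot product: uint8 activations x int8 weights,
    integer accumulation, dequantized to a float result."""
    acc = 0  # real kernels accumulate in int32
    for a, w in zip(activations_u8, weights_s8):
        assert 0 <= a <= 255 and -128 <= w <= 127  # stay in u8 / s8 range
        acc += a * w
    return acc * act_scale * wt_scale  # dequantize to float output

# Hypothetical example: real values ~ [0.5, 1.0] dot [0.25, -0.5]
act_scale, wt_scale = 1 / 255, 1 / 127   # assumed per-tensor scales
a_u8 = [128, 255]   # quantized activations (~0.502, 1.0)
w_s8 = [32, -64]    # quantized weights (~0.252, -0.504)
print(quantized_dot(a_u8, w_s8, act_scale, wt_scale))  # ~ -0.3774
```

The small gap versus the full-precision answer (-0.375) is the quantization error that the low-precision configurations trade for speed.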
oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 11.81 (SE +/- 0.12, N = 12; MIN: 10.58)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 2.12307 (SE +/- 0.00736, N = 3; MIN: 1.9)
oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 4516.60 (SE +/- 5.17, N = 3; MIN: 4500.62)
oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 8859.05 (SE +/- 1.56, N = 3; MIN: 8815.8)
oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 3.93298 (SE +/- 0.01624, N = 3; MIN: 3.57)
oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 4528.11 (SE +/- 10.91, N = 3; MIN: 4494.22)
oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 52.68 (SE +/- 0.07, N = 3; MIN: 52.49)
oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 57.05 (SE +/- 0.64, N = 6; MIN: 54.12)
oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 52.43 (SE +/- 0.02, N = 3; MIN: 52.31)
oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 8875.33 (SE +/- 18.79, N = 3; MIN: 8833.23)
oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 8862.65 (SE +/- 5.47, N = 3; MIN: 8818.95)
oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 3.13684 (SE +/- 0.00316, N = 3; MIN: 3.11)
oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 2.94262 (SE +/- 0.02862, N = 12; MIN: 2.6)
oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 7.93450 (SE +/- 0.02049, N = 3; MIN: 7.85)
oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 13.41 (SE +/- 0.05, N = 3; MIN: 13.24)
oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 14.63 (SE +/- 0.12, N = 13; MIN: 13.05)
oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 11.56 (SE +/- 0.13, N = 15; MIN: 10.07)
oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 6.39979 (SE +/- 0.06267, N = 3; MIN: 5.73)
oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better: 25.83 (SE +/- 0.10, N = 3; MIN: 25.21)
oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 2.62981 (SE +/- 0.00152, N = 3; MIN: 2.56)
oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better: 2.43051 (SE +/- 0.02334, N = 14; MIN: 1.9)
oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 6.59350 (SE +/- 0.01444, N = 3; MIN: 6.09)
oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 10.28 (SE +/- 0.06, N = 3; MIN: 9.23)
NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 - ms, Fewer Is Better: 11.83 (SE +/- 0.66, N = 3; MIN: 10.35 / MAX: 14.92)
NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet - ms, Fewer Is Better: 8.13 (SE +/- 0.54, N = 3; MIN: 6.99 / MAX: 27.68)
NCNN 20201218 - Target: Vulkan GPU - Model: shufflenet-v2 - ms, Fewer Is Better: 10.30 (SE +/- 0.67, N = 3; MIN: 8.76 / MAX: 26.22)
NCNN 20201218 - Target: CPU - Model: mnasnet - ms, Fewer Is Better: 8.10 (SE +/- 0.50, N = 3; MIN: 7 / MAX: 12.21)
NCNN 20201218 - Target: CPU - Model: shufflenet-v2 - ms, Fewer Is Better: 9.99 (SE +/- 0.59, N = 3; MIN: 8.78 / MAX: 13.35)
Mobile Neural Network
Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 - ms, Fewer Is Better: 8.320 (SE +/- 0.171, N = 11; MIN: 6.48 / MAX: 30.71)
Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 - ms, Fewer Is Better: 11.26 (SE +/- 0.24, N = 11; MIN: 8.69 / MAX: 35.37)
oneDNN
oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms, Fewer Is Better: 5736.43 (SE +/- 1216.13, N = 12; MIN: 4504.31)
Testing initiated at 31 December 2020 13:39 by user studio.