xps13-hpc-baseline-x11-20210121-1
Intel Core i7-1165G7 testing with a Dell 08607K (1.0.3 BIOS) and Intel Xe 3GB on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2101218-HA-XPS13HPCB96
High Performance Conjugate Gradient
LeelaChessZero (Backend: BLAS)
Parboil (Test: OpenMP LBM)
Parboil (Test: OpenMP CUTCP)
Parboil (Test: OpenMP Stencil)
Parboil (Test: OpenMP MRI Gridding)
miniFE (Problem Size: Small)
CP2K Molecular Dynamics (Fayalite-FIST Data)
NAMD (ATPase Simulation - 327,506 Atoms)
Dolfyn (Computational Fluid Dynamics)
Nebular Empirical Analysis Tool
Algebraic Multi-Grid Benchmark
FFTE (Test: N=256, 1D Complex FFT Routine)
Timed MrBayes Analysis (Primate Phylogeny Analysis)
Timed HMMer Search (Pfam Database Search)
Timed MAFFT Alignment (Multiple Sequence Alignment - LSU RNA)
Monte Carlo Simulations of Ionised Nebulae (Input: Dust 2D tau100.0)
LULESH
ArrayFire (Test: BLAS CPU)
Himeno Benchmark (Poisson Pressure Solver)
Numpy Benchmark
DeepSpeech (Acceleration: CPU)
R Benchmark
RNNoise
ASKAP (Test: tConvolve MT - Gridding)
ASKAP (Test: tConvolve MT - Degridding)
ASKAP (Test: tConvolve MPI - Gridding)
ASKAP (Test: tConvolve MPI - Degridding)
ASKAP (Test: tConvolve OpenMP - Gridding)
ASKAP (Test: tConvolve OpenMP - Degridding)
Intel MPI Benchmarks (Test: IMB-P2P PingPong)
Intel MPI Benchmarks (Test: IMB-MPI1 Exchange)
Intel MPI Benchmarks (Test: IMB-MPI1 Exchange)
Intel MPI Benchmarks (Test: IMB-MPI1 PingPong)
Intel MPI Benchmarks (Test: IMB-MPI1 Sendrecv)
Intel MPI Benchmarks (Test: IMB-MPI1 Sendrecv)
GROMACS (Water Benchmark)
TensorFlow Lite (Model: SqueezeNet)
TensorFlow Lite (Model: Inception V4)
TensorFlow Lite (Model: NASNet Mobile)
TensorFlow Lite (Model: Mobilenet Float)
TensorFlow Lite (Model: Mobilenet Quant)
TensorFlow Lite (Model: Inception ResNet V2)
GNU Octave Benchmark
GPAW (Input: Carbon Nanotube)
Mobile Neural Network (Model: SqueezeNetV1.0)
Mobile Neural Network (Model: resnet-v2-50)
Mobile Neural Network (Model: MobileNetV2_224)
Mobile Neural Network (Model: mobilenet-v1-1.0)
Mobile Neural Network (Model: inception-v3)
NCNN (Target: CPU - Model: mobilenet)
NCNN (Target: CPU-v2-v2 - Model: mobilenet-v2)
NCNN (Target: CPU-v3-v3 - Model: mobilenet-v3)
NCNN (Target: CPU - Model: shufflenet-v2)
NCNN (Target: CPU - Model: mnasnet)
NCNN (Target: CPU - Model: efficientnet-b0)
NCNN (Target: CPU - Model: blazeface)
NCNN (Target: CPU - Model: googlenet)
NCNN (Target: CPU - Model: vgg16)
NCNN (Target: CPU - Model: resnet18)
NCNN (Target: CPU - Model: alexnet)
NCNN (Target: CPU - Model: resnet50)
NCNN (Target: CPU - Model: yolov4-tiny)
NCNN (Target: CPU - Model: squeezenet_ssd)
NCNN (Target: CPU - Model: regnety_400m)
NCNN (Target: Vulkan GPU - Model: mobilenet)
NCNN (Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2)
NCNN (Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3)
NCNN (Target: Vulkan GPU - Model: shufflenet-v2)
NCNN (Target: Vulkan GPU - Model: mnasnet)
NCNN (Target: Vulkan GPU - Model: efficientnet-b0)
NCNN (Target: Vulkan GPU - Model: blazeface)
NCNN (Target: Vulkan GPU - Model: googlenet)
NCNN (Target: Vulkan GPU - Model: vgg16)
NCNN (Target: Vulkan GPU - Model: resnet18)
NCNN (Target: Vulkan GPU - Model: alexnet)
NCNN (Target: Vulkan GPU - Model: resnet50)
NCNN (Target: Vulkan GPU - Model: yolov4-tiny)
NCNN (Target: Vulkan GPU - Model: squeezenet_ssd)
NCNN (Target: Vulkan GPU - Model: regnety_400m)
TNN (Target: CPU - Model: MobileNet v2)
TNN (Target: CPU - Model: SqueezeNet v1.1)
PlaidML (FP16: No - Mode: Inference - Network: VGG16 - Device: CPU)
PlaidML (FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU)
AI Benchmark Alpha (Device Inference Score)
AI Benchmark Alpha (Device Training Score)
AI Benchmark Alpha (Device AI Score)
Mlpack Benchmark (Benchmark: scikit_ica)
Mlpack Benchmark (Benchmark: scikit_qda)
Mlpack Benchmark (Benchmark: scikit_svm)
Mlpack Benchmark (Benchmark: scikit_linearridgeregression)
Scikit-Learn
Kripke
OpenCV (Test: DNN - Deep Neural Network)
Phoronix Test Suite v10.8.4