ldld: Tests for a future article. Intel Core Ultra 7 155H testing with an MTL Swift SFG14-72T Coral_MTH (V1.01 BIOS) and Intel Arc MTL 8GB graphics on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2403270-NE-LDLD0728551&sor&grr.
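To reproduce or extend this comparison locally, the published result can be pulled down with the Phoronix Test Suite client. The Python sketch below is only an illustration: it shells out to the phoronix-test-suite command (assumed to be installed and on PATH), using the result ID from the URL above; exact subcommand behavior may vary between PTS versions.

    # Illustrative sketch: fetch and re-run this OpenBenchmarking.org comparison locally.
    # Assumes the phoronix-test-suite client is installed and on PATH.
    import subprocess

    RESULT_ID = "2403270-NE-LDLD0728551"  # taken from the result URL above

    # Save a local copy of the published result file.
    subprocess.run(["phoronix-test-suite", "clone", RESULT_ID], check=True)

    # Run the same test selection on this machine for a side-by-side comparison.
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)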
ldld - System Details (configuration identical for runs a and b):

  Processor: Intel Core Ultra 7 155H @ 4.80GHz (16 Cores / 22 Threads)
  Motherboard: MTL Swift SFG14-72T Coral_MTH (V1.01 BIOS)
  Chipset: Intel Device 7e7f
  Memory: 8 x 2GB DRAM-6400MT/s Micron MT62F1G32D2DS-026
  Disk: 1024GB Micron_2550_MTFDKBA1T0TGE
  Graphics: Intel Arc MTL 8GB (2250MHz)
  Audio: Intel Meteor Lake-P HD Audio
  Network: Intel Device 7e40
  OS: Ubuntu 23.10
  Kernel: 6.8.0-060800rc1daily20240126-generic (x86_64)
  Desktop: GNOME Shell 45.2
  Display Server: X Server 1.21.1.7 + Wayland
  OpenGL: 4.6 Mesa 24.1~git2401200600.ebcab1~oibaf~m (git-ebcab14 2024-01-20 mantic-oibaf-ppa)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

  Kernel Details: Transparent Huge Pages: madvise
  Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x13; Thermald 2.5.4
  Python Details: Python 3.11.6
  Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
ldld - Result Overview (Blender and build-mesa results are in seconds, fewer is better; PyTorch results are in batches/sec and TensorFlow results are in images/sec, more is better):

  Test | a | b
  blender: Barbershop - CPU-Only | 1770.95 | 1757.80
  blender: Pabellon Barcelona - CPU-Only | 572.70 | 578.85
  pytorch: CPU - 16 - Efficientnet_v2_l | 3.07 | 3.68
  pytorch: CPU - 32 - Efficientnet_v2_l | 3.49 | 3.59
  pytorch: CPU - 256 - Efficientnet_v2_l | 3.85 | 3.69
  pytorch: CPU - 64 - Efficientnet_v2_l | 3.78 | 3.77
  tensorflow: CPU - 64 - ResNet-50 | 15.35 | 15.37
  blender: Classroom - CPU-Only | 405.38 | 440.58
  pytorch: CPU - 16 - ResNet-152 | 4.67 | 5.42
  pytorch: CPU - 32 - ResNet-152 | 5.93 | 6.12
  pytorch: CPU - 64 - ResNet-152 | 5.86 | 5.59
  pytorch: CPU - 256 - ResNet-152 | 6.02 | 6.15
  blender: Junkshop - CPU-Only | 249.71 | 249.42
  tensorflow: CPU - 32 - ResNet-50 | 14.77 | 14.75
  blender: Fishy Cat - CPU-Only | 234.47 | 230.59
  pytorch: CPU - 1 - Efficientnet_v2_l | 6.60 | 6.50
  blender: BMW27 - CPU-Only | 163.16 | 161.09
  tensorflow: CPU - 64 - GoogLeNet | 46.51 | 46.41
  tensorflow: CPU - 16 - ResNet-50 | 14.33 | 14.36
  pytorch: CPU - 256 - ResNet-50 | 15.86 | 13.72
  pytorch: CPU - 1 - ResNet-152 | 10.34 | 9.21
  pytorch: CPU - 64 - ResNet-50 | 15.93 | 15.12
  pytorch: CPU - 16 - ResNet-50 | 16.03 | 15.93
  pytorch: CPU - 32 - ResNet-50 | 16.13 | 15.98
  tensorflow: CPU - 32 - GoogLeNet | 47.25 | 47.45
  tensorflow: CPU - 64 - AlexNet | 104.18 | 106.50
  pytorch: CPU - 1 - ResNet-50 | 29.49 | 28.30
  tensorflow: CPU - 16 - GoogLeNet | 48.05 | 47.93
  tensorflow: CPU - 32 - AlexNet | 101.75 | 101.38
  build-mesa: Time To Compile | 34.889 | 32.622
  tensorflow: CPU - 16 - AlexNet | 92.70 | 91.49
  tensorflow: CPU - 1 - ResNet-50 | 7.91 | 9.41
  tensorflow: CPU - 1 - AlexNet | 15.71 | 15.71
  tensorflow: CPU - 1 - GoogLeNet | 30.89 | 29.65
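The PyTorch figures above are reported in batches per second. As a rough illustration of what that number represents (this is not the actual PTS pytorch test profile, just a hypothetical sketch), the Python snippet below times CPU-only ResNet-50 inference at batch size 16 with torchvision and derives a batches/sec rate.

    # Hypothetical sketch of a CPU batches/sec measurement, loosely mirroring the
    # "pytorch: CPU - 16 - ResNet-50" configuration above. Not the PTS test profile.
    import time
    import torch
    import torchvision.models as models

    BATCH_SIZE = 16
    WARMUP, ITERS = 3, 10

    model = models.resnet50(weights=None).eval()   # random weights are fine for throughput
    inputs = torch.randn(BATCH_SIZE, 3, 224, 224)  # standard ImageNet-sized input

    with torch.no_grad():
        for _ in range(WARMUP):                    # warm-up passes are not timed
            model(inputs)
        start = time.perf_counter()
        for _ in range(ITERS):
            model(inputs)
        elapsed = time.perf_counter() - start

    print(f"{ITERS / elapsed:.2f} batches/sec at batch size {BATCH_SIZE}")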
Blender 4.1 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better):
  b: 1757.80
  a: 1770.95
Blender 4.1 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better):
  a: 572.70
  b: 578.85
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, More Is Better):
  b: 3.68 (MIN: 1.84 / MAX: 4.81)
  a: 3.07 (MIN: 1.97 / MAX: 4.84)
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec, More Is Better):
  b: 3.59 (MIN: 2.08 / MAX: 4.37)
  a: 3.49 (MIN: 2.11 / MAX: 4.99)
PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec, More Is Better):
  a: 3.85 (MIN: 2.11 / MAX: 4.99)
  b: 3.69 (MIN: 2.02 / MAX: 4.78)
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec, More Is Better):
  a: 3.78 (MIN: 3.03 / MAX: 4.36)
  b: 3.77 (MIN: 2.09 / MAX: 4.51)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better):
  b: 15.37
  a: 15.35
Blender 4.1 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better):
  a: 405.38
  b: 440.58
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, More Is Better):
  b: 5.42 (MIN: 3.09 / MAX: 7.11)
  a: 4.67 (MIN: 3.7 / MAX: 6.87)
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, More Is Better):
  b: 6.12 (MIN: 4.5 / MAX: 6.7)
  a: 5.93 (MIN: 4.07 / MAX: 7.46)
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec, More Is Better):
  a: 5.86 (MIN: 4.18 / MAX: 6.58)
  b: 5.59 (MIN: 3.08 / MAX: 7.82)
PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec, More Is Better):
  b: 6.15 (MIN: 3.67 / MAX: 6.86)
  a: 6.02 (MIN: 5.41 / MAX: 6.78)
Blender 4.1 - Blend File: Junkshop - Compute: CPU-Only (Seconds, Fewer Is Better):
  b: 249.42
  a: 249.71
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better):
  a: 14.77
  b: 14.75
Blender 4.1 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better):
  b: 230.59
  a: 234.47
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, More Is Better):
  a: 6.60 (MIN: 2.38 / MAX: 8.69)
  b: 6.50 (MIN: 3.03 / MAX: 9.45)
Blender 4.1 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better):
  b: 161.09
  a: 163.16
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better):
  a: 46.51
  b: 46.41
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better):
  b: 14.36
  a: 14.33
PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, More Is Better):
  a: 15.86 (MIN: 13.05 / MAX: 19.69)
  b: 13.72 (MIN: 5.4 / MAX: 20.19)
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better):
  a: 10.34 (MIN: 8.48 / MAX: 12.31)
  b: 9.21 (MIN: 5.28 / MAX: 12.19)
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec, More Is Better):
  a: 15.93 (MIN: 14.58 / MAX: 17.56)
  b: 15.12 (MIN: 11.4 / MAX: 18.4)
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, More Is Better):
  a: 16.03 (MIN: 10.94 / MAX: 18.23)
  b: 15.93 (MIN: 8.32 / MAX: 18.45)
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, More Is Better):
  a: 16.13 (MIN: 14.64 / MAX: 18.27)
  b: 15.98 (MIN: 14.51 / MAX: 17.51)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better):
  b: 47.45
  a: 47.25
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better):
  b: 106.50
  a: 104.18
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better):
  a: 29.49 (MIN: 21.36 / MAX: 33.16)
  b: 28.30 (MIN: 19.22 / MAX: 33.62)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better):
  a: 48.05
  b: 47.93
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, More Is Better):
  a: 101.75
  b: 101.38
Timed Mesa Compilation 24.0 - Time To Compile (Seconds, Fewer Is Better):
  b: 32.62
  a: 34.89
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better):
  a: 92.70
  b: 91.49
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better):
  b: 9.41
  a: 7.91
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec, More Is Better):
  b: 15.71
  a: 15.71
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better):
  a: 30.89
  b: 29.65
Phoronix Test Suite v10.8.5