fhh: tests for a future article. Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) motherboard and MSI Intel ADL GT2 15GB graphics on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2401086-NE-FHH73775740&rdt&grs.
fhh System Details (configurations a and b are identical):
  Processor: Intel Core i7-1280P @ 4.70GHz (14 Cores / 20 Threads)
  Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
  Chipset: Intel Alder Lake PCH
  Memory: 16GB
  Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
  Graphics: MSI Intel ADL GT2 15GB (1450MHz)
  Audio: Realtek ALC274
  Network: Intel Alder Lake-P PCH CNVi WiFi
  OS: Ubuntu 23.10
  Kernel: 6.7.0-060700rc5-generic (x86_64)
  Desktop: GNOME Shell 45.1
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.0~git2312190600.51bf1b~oibaf~m (git-51bf1b2 2023-12-19 mantic-oibaf-ppa)
  OpenCL: OpenCL 3.0
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x430; Thermald 2.5.4
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected
fhh Results Summary (higher is better unless noted otherwise):

  Benchmark                                                          a          b
  pytorch: CPU - 1 - ResNet-50 (batches/sec)                     26.15      23.66
  quicksilver: CORAL2 P2 (Figure Of Merit)                     9076000    9247000
  pytorch: CPU - 1 - Efficientnet_v2_l (batches/sec)              6.08       6.00
  tensorflow: CPU - 1 - VGG-16 (images/sec)                       2.69       2.67
  opencl-benchmark: Memory Bandwidth Coalesced Read (GB/s)       64.85      64.42
  pytorch: CPU - 1 - ResNet-152 (batches/sec)                    10.10      10.15
  y-cruncher: 1B (Seconds, fewer is better)                      67.887     68.116
  tensorflow: CPU - 1 - AlexNet (images/sec)                     15.13      15.18
  pytorch: CPU - 16 - Efficientnet_v2_l (batches/sec)             3.68       3.67
  tensorflow: CPU - 1 - GoogLeNet (images/sec)                   33.91      33.82
  opencl-benchmark: Memory Bandwidth Coalesced Write (GB/s)      59.99      60.11
  pytorch: CPU - 16 - ResNet-152 (batches/sec)                    6.04       6.05
  y-cruncher: 500M (Seconds, fewer is better)                    28.618     28.583
  quicksilver: CTS2 (Figure Of Merit)                          7066000    7074000
  opencl-benchmark: INT16 Compute (TIOPs/s)                       7.549      7.547
  tensorflow: CPU - 1 - ResNet-50 (images/sec)                   10.23      10.23
  pytorch: CPU - 16 - ResNet-50 (batches/sec)                    15.35      15.35
  quicksilver: CORAL2 P1 (Figure Of Merit)                     7951000    7951000
  opencl-benchmark: INT8 Compute (TIOPs/s)                        1.386      1.386
  opencl-benchmark: INT32 Compute (TIOPs/s)                       0.683      0.683
  opencl-benchmark: INT64 Compute (TIOPs/s)                       0.161      0.161
  opencl-benchmark: FP16 Compute (TFLOPs/s)                       3.492      3.492
  opencl-benchmark: FP32 Compute (TFLOPs/s)                       1.891      1.891
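As a quick sanity check on run-to-run variance, the relative deltas between runs a and b can be computed from the summary table above. A minimal Python sketch, using three values taken from this result file (the `lower_is_better` flag marks Fewer-Is-Better tests such as y-cruncher, so a positive delta always means run b did better):

```python
# Percent change of run b relative to run a for selected results above.
# Values are copied from the fhh summary table; this is an illustration,
# not part of the Phoronix Test Suite itself.
results = {
    # name: (a, b, lower_is_better)
    "pytorch: CPU - 1 - ResNet-50": (26.15, 23.66, False),
    "quicksilver: CORAL2 P2":       (9076000, 9247000, False),
    "y-cruncher: 1B":               (67.887, 68.116, True),
}

def delta_pct(a, b, lower_is_better):
    """Percent change of b vs a, sign-normalized so positive = b is better."""
    d = (b - a) / a * 100.0
    return -d if lower_is_better else d

for name, (a, b, low) in results.items():
    print(f"{name}: {delta_pct(a, b, low):+.2f}%")
# Prints roughly -9.52%, +1.88%, -0.34% for the three entries above.
```

The roughly 9.5% swing on the batch-size-1 ResNet-50 result dwarfs every other delta in the table, which is consistent with that test's wide MIN/MAX spread in the detailed chart below; the remaining differences sit well under 2%.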
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better)
  a: 26.15 (MIN: 21.1 / MAX: 35.31)
  b: 23.66 (MIN: 20.33 / MAX: 34.33)
Quicksilver 20230818, Input: CORAL2 P2 (Figure Of Merit, More Is Better)
  a: 9076000
  b: 9247000
  1. (CXX) g++ options: -fopenmp -O3 -march=native
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
  a: 6.08 (MIN: 3.73 / MAX: 7.6)
  b: 6.00 (MIN: 3.76 / MAX: 7.29)
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec, More Is Better)
  a: 2.69
  b: 2.67
ProjectPhysX OpenCL-Benchmark 1.2, Operation: Memory Bandwidth Coalesced Read (GB/s, More Is Better)
  a: 64.85
  b: 64.42
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better)
  a: 10.10 (MIN: 9.49 / MAX: 13.36)
  b: 10.15 (MIN: 9.58 / MAX: 12.74)
Y-Cruncher 0.8.3, Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
  a: 67.89
  b: 68.12
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec, More Is Better)
  a: 15.13
  b: 15.18
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
  a: 3.68 (MIN: 3.58 / MAX: 4.55)
  b: 3.67 (MIN: 3.57 / MAX: 4.5)
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better)
  a: 33.91
  b: 33.82
ProjectPhysX OpenCL-Benchmark 1.2, Operation: Memory Bandwidth Coalesced Write (GB/s, More Is Better)
  a: 59.99
  b: 60.11
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, More Is Better)
  a: 6.04 (MIN: 5.84 / MAX: 7.5)
  b: 6.05 (MIN: 5.88 / MAX: 7.52)
Y-Cruncher 0.8.3, Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
  a: 28.62
  b: 28.58
Quicksilver 20230818, Input: CTS2 (Figure Of Merit, More Is Better)
  a: 7066000
  b: 7074000
  1. (CXX) g++ options: -fopenmp -O3 -march=native
ProjectPhysX OpenCL-Benchmark 1.2, Operation: INT16 Compute (TIOPs/s, More Is Better)
  a: 7.549
  b: 7.547
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
TensorFlow 2.12, Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better)
  a: 10.23
  b: 10.23
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, More Is Better)
  a: 15.35 (MIN: 14.66 / MAX: 19.07)
  b: 15.35 (MIN: 14.49 / MAX: 19.4)
Quicksilver 20230818, Input: CORAL2 P1 (Figure Of Merit, More Is Better)
  a: 7951000
  b: 7951000
  1. (CXX) g++ options: -fopenmp -O3 -march=native
ProjectPhysX OpenCL-Benchmark 1.2, Operation: INT8 Compute (TIOPs/s, More Is Better)
  a: 1.386
  b: 1.386
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
ProjectPhysX OpenCL-Benchmark 1.2, Operation: INT32 Compute (TIOPs/s, More Is Better)
  a: 0.683
  b: 0.683
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
ProjectPhysX OpenCL-Benchmark 1.2, Operation: INT64 Compute (TIOPs/s, More Is Better)
  a: 0.161
  b: 0.161
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
ProjectPhysX OpenCL-Benchmark 1.2, Operation: FP16 Compute (TFLOPs/s, More Is Better)
  a: 3.492
  b: 3.492
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
ProjectPhysX OpenCL-Benchmark 1.2, Operation: FP32 Compute (TFLOPs/s, More Is Better)
  a: 1.891
  b: 1.891
  1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
Phoronix Test Suite v10.8.5