dldl: Intel Core i7-1280P testing with an MSI Prestige 14Evo A12M MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 8GB graphics on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2403295-NE-DLDL9539171&rdt&grs.
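A run like this can in principle be scripted against the Phoronix Test Suite CLI. The sketch below is illustrative only: the pts/ test-profile names are assumptions (they are not listed in this export), and batch mode must be configured once via batch-setup before batch-benchmark will run unattended.

    # Hypothetical reproduction script; profile names are assumptions, not taken from this result file.
    import subprocess

    PROFILES = [
        "pts/pytorch",
        "pts/tensorflow",
        "pts/rocksdb",
        "pts/stockfish",
        "pts/blender",
        "pts/build-mesa",
        "pts/brl-cad",
    ]

    # batch-setup records non-interactive defaults; batch-benchmark then runs
    # each profile without prompting for per-test options.
    subprocess.run(["phoronix-test-suite", "batch-setup"], check=True)
    for profile in PROFILES:
        subprocess.run(["phoronix-test-suite", "batch-benchmark", profile], check=True)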
dldl System Details (runs a and b used the identical configuration)

Processor: Intel Core i7-1280P @ 4.70GHz (14 Cores / 20 Threads)
Motherboard: MSI Prestige 14Evo A12M MS-14C6 (E14C6IMS.115 BIOS)
Chipset: Intel Alder Lake PCH
Memory: 8 x 2GB LPDDR4-4267MT/s SK Hynix H9HCNNNCPMMLXR-
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: MSI Intel ADL GT2 8GB (1450MHz)
Audio: Realtek ALC274
Network: Intel Alder Lake-P PCH CNVi WiFi
OS: Ubuntu 23.10
Kernel: 6.7.0-060700-generic (x86_64)
Desktop: GNOME Shell 45.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 24.1~git2401210600.c3a64f~oibaf~m (git-c3a64f8 2024-01-21 mantic-oibaf-ppa)
OpenCL: OpenCL 3.0
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x430; Thermald 2.5.4
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected
dldl Result Summary (runs a and b)

Test                                          a           b
pytorch: CPU - 1 - ResNet-50                  27.59       23.28
pytorch: CPU - 1 - ResNet-152                 8.60        10.12
pytorch: CPU - 64 - ResNet-152                5.41        6.01
rocksdb: Read While Writing                   1726017     1562855
pytorch: CPU - 32 - ResNet-152                5.46        6.03
pytorch: CPU - 64 - Efficientnet_v2_l         3.68        4.03
rocksdb: Seq Fill                             945889      899566
rocksdb: Rand Fill Sync                       10748       10370
pytorch: CPU - 16 - Efficientnet_v2_l         4.20        4.07
pytorch: CPU - 1 - Efficientnet_v2_l          5.92        6.10
stockfish: Chess Benchmark                    9022315     9265428
blender: Classroom - CPU-Only                 573.49      561.44
rocksdb: Update Rand                          328045      322860
build-mesa: Time To Compile                   44.071      43.435
pytorch: CPU - 16 - ResNet-50                 13.89       14.06
blender: Junkshop - CPU-Only                  281.82      279.49
rocksdb: Rand Read                            42403226    42180062
blender: Pabellon Barcelona - CPU-Only        692.47      689.24
rocksdb: Read Rand Write Rand                 1210989     1205546
tensorflow: CPU - 32 - ResNet-50              13.87       13.93
tensorflow: CPU - 64 - AlexNet                94.2        93.82
tensorflow: CPU - 64 - GoogLeNet              42.89       43.05
brl-cad: VGR Performance Metric               122838      123265
rocksdb: Overwrite                            638938      641031
tensorflow: CPU - 1 - AlexNet                 15.39       15.44
blender: Fishy Cat - CPU-Only                 283.45      282.54
tensorflow: CPU - 16 - ResNet-50              13.35       13.39
blender: Barbershop - CPU-Only                2138.39     2132.05
rocksdb: Rand Fill                            640601      642443
pytorch: CPU - 32 - Efficientnet_v2_l         4.21        4.20
tensorflow: CPU - 32 - GoogLeNet              43.23       43.33
tensorflow: CPU - 64 - ResNet-50              14.15       14.18
blender: BMW27 - CPU-Only                     201.86      201.47
tensorflow: CPU - 16 - AlexNet                83.8        83.64
pytorch: CPU - 16 - ResNet-152                5.99        6.00
tensorflow: CPU - 1 - ResNet-50               10.47       10.48
pytorch: CPU - 32 - ResNet-50                 13.97       13.96
pytorch: CPU - 64 - ResNet-50                 15.38       15.37
tensorflow: CPU - 1 - GoogLeNet               33.99       34
tensorflow: CPU - 16 - GoogLeNet              45.74       45.75
tensorflow: CPU - 32 - AlexNet                90.7        90.71
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec; more is better): a: 27.59 (min 22.81 / max 35.81), b: 23.28 (min 18.17 / max 32.19)
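For context on what a "batches/sec" figure like the one above measures, here is a minimal PyTorch sketch timing ResNet-50 CPU inference at batch size 1. It is an illustration only, not the benchmark harness behind these results; the weight initialization, warm-up length, and iteration count are arbitrary choices.

    # Minimal sketch: ResNet-50 CPU inference throughput at batch size 1.
    # Not the actual test profile used for the results above.
    import time
    import torch
    import torchvision.models as models

    model = models.resnet50(weights=None).eval()   # untrained weights are fine for timing
    x = torch.randn(1, 3, 224, 224)                # batch size 1, ImageNet-sized input

    with torch.inference_mode():
        for _ in range(10):                        # warm-up iterations
            model(x)
        iters = 100
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        elapsed = time.perf_counter() - start

    print(f"{iters / elapsed:.2f} batches/sec")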
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec; more is better): a: 8.60 (min 8.12 / max 11.58), b: 10.12 (min 9.39 / max 14.02)
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec; more is better): a: 5.41 (min 5.24 / max 6.64), b: 6.01 (min 5.84 / max 7.67)
RocksDB 9.0 - Test: Read While Writing (Op/s; more is better): a: 1726017, b: 1562855. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
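RocksDB numbers like this are typically produced with the db_bench tool. The sketch below drives its readwhilewriting workload from Python; the flag values are placeholders and do not reproduce the configuration used for this result.

    # Sketch: run RocksDB's db_bench "readwhilewriting" workload.
    # Flag values are illustrative placeholders only.
    import subprocess

    subprocess.run([
        "db_bench",
        "--benchmarks=readwhilewriting",
        "--num=1000000",     # keys per run (placeholder)
        "--threads=8",       # worker threads (placeholder)
    ], check=True)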
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec; more is better): a: 5.46 (min 5.26 / max 7.47), b: 6.03 (min 5.85 / max 7.41)
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec; more is better): a: 3.68 (min 3.51 / max 4.45), b: 4.03 (min 3.52 / max 5.25)
RocksDB 9.0 - Test: Sequential Fill (Op/s; more is better): a: 945889, b: 899566. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
RocksDB 9.0 - Test: Random Fill Sync (Op/s; more is better): a: 10748, b: 10370. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec; more is better): a: 4.20 (min 4.05 / max 5.36), b: 4.07 (min 3.49 / max 5.33)
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec; more is better): a: 5.92 (min 5.02 / max 7.41), b: 6.10 (min 4.02 / max 7.54)
Stockfish 16.1 - Chess Benchmark (Nodes Per Second; more is better): a: 9022315, b: 9265428. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver
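The Stockfish figure is nodes searched per second from the engine's built-in bench command. A minimal sketch of invoking it follows; it assumes a stockfish binary on PATH, and the output parsing is best-effort rather than tied to a fixed format.

    # Sketch: run Stockfish's built-in "bench" and print the nodes/second line.
    # Assumes a `stockfish` binary on PATH.
    import subprocess

    out = subprocess.run(["stockfish", "bench"], capture_output=True, text=True)
    for line in (out.stdout + out.stderr).splitlines():
        if "Nodes/second" in line:
            print(line.strip())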
Blender 4.1 - Blend File: Classroom - Compute: CPU-Only (Seconds; fewer is better): a: 573.49, b: 561.44
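The Blender times above are wall-clock CPU-only renders of standard demo scenes. A hedged sketch of the idea is below; the .blend path is a placeholder, and the exact frame and device flags used by the test profile are not shown in this export.

    # Sketch: time a single-frame CPU render of a .blend file from the CLI.
    # The scene path is a placeholder, not the file used by the benchmark.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run([
        "blender", "-b", "classroom.blend",   # background mode, placeholder scene
        "-E", "CYCLES", "-f", "1",            # Cycles engine, render frame 1
        "--", "--cycles-device", "CPU",
    ], check=True)
    print(f"{time.perf_counter() - start:.2f} seconds")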
RocksDB 9.0 - Test: Update Random (Op/s; more is better): a: 328045, b: 322860. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Timed Mesa Compilation 24.0 - Time To Compile (Seconds; fewer is better): a: 44.07, b: 43.44
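The Mesa compile time is the wall-clock time of a full build. Mesa builds with Meson and Ninja; the sketch below shows the measurement idea, with the source directory and build options left as placeholders rather than the test profile's actual configuration.

    # Sketch: time a from-scratch Meson/Ninja build of a Mesa source tree.
    # The "mesa" source directory and default options are placeholders.
    import subprocess, time

    subprocess.run(["meson", "setup", "build"], cwd="mesa", check=True)
    start = time.perf_counter()
    subprocess.run(["ninja", "-C", "build"], cwd="mesa", check=True)
    print(f"{time.perf_counter() - start:.2f} seconds")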
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec; more is better): a: 13.89 (min 13.19 / max 17.7), b: 14.06 (min 13.29 / max 19.35)
Blender 4.1 - Blend File: Junkshop - Compute: CPU-Only (Seconds; fewer is better): a: 281.82, b: 279.49
RocksDB 9.0 - Test: Random Read (Op/s; more is better): a: 42403226, b: 42180062. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Blender 4.1 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; fewer is better): a: 692.47, b: 689.24
RocksDB 9.0 - Test: Read Random Write Random (Op/s; more is better): a: 1210989, b: 1205546. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec; more is better): a: 13.87, b: 13.93
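The TensorFlow figures are images per second. A minimal Keras sketch of the measurement idea at batch size 32 follows; it is illustrative only, not the benchmark harness used here, and it relies on random data with an arbitrary iteration count.

    # Sketch: ResNet-50 CPU inference throughput in images/sec at batch size 32.
    # Illustration only; not the harness behind the results above.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights=None)
    x = np.random.rand(32, 224, 224, 3).astype("float32")

    model.predict(x, verbose=0)                 # warm-up
    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        model.predict(x, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{iters * 32 / elapsed:.2f} images/sec")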
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec; more is better): a: 94.20, b: 93.82
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec; more is better): a: 42.89, b: 43.05
BRL-CAD 7.38.2 - VGR Performance Metric (more is better): a: 122838, b: 123265. (CXX) g++ options: -std=c++17 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lnetpbm -lregex_brl -lz_brl -lassimp -ldl -lm -ltk8.6
RocksDB 9.0 - Test: Overwrite (Op/s; more is better): a: 638938, b: 641031. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec; more is better): a: 15.39, b: 15.44
Blender 4.1 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds; fewer is better): a: 283.45, b: 282.54
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec; more is better): a: 13.35, b: 13.39
Blender 4.1 - Blend File: Barbershop - Compute: CPU-Only (Seconds; fewer is better): a: 2138.39, b: 2132.05
RocksDB 9.0 - Test: Random Fill (Op/s; more is better): a: 640601, b: 642443. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec; more is better): a: 4.21 (min 4.11 / max 5.34), b: 4.20 (min 4.09 / max 5.12)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec; more is better): a: 43.23, b: 43.33
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec; more is better): a: 14.15, b: 14.18
Blender 4.1 - Blend File: BMW27 - Compute: CPU-Only (Seconds; fewer is better): a: 201.86, b: 201.47
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec; more is better): a: 83.80, b: 83.64
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec; more is better): a: 5.99 (min 5.79 / max 7.61), b: 6.00 (min 5.81 / max 7.51)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec; more is better): a: 10.47, b: 10.48
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec; more is better): a: 13.97 (min 13.3 / max 18.49), b: 13.96 (min 13.38 / max 17.6)
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec; more is better): a: 15.38 (min 14.62 / max 19.8), b: 15.37 (min 14.3 / max 19.34)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec; more is better): a: 33.99, b: 34.00
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec; more is better): a: 45.74, b: 45.75
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec; more is better): a: 90.70, b: 90.71
Phoronix Test Suite v10.8.5