lg

AMD Ryzen 7 7840U testing with a Framework FRANMDCP07 (03.03 BIOS) and AMD Phoenix1 512MB graphics on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401082-NE-LG897017407&grr.

System configuration (identical across runs a, b, and c):

  Processor: AMD Ryzen 7 7840U @ 5.13GHz (8 Cores / 16 Threads)
  Motherboard: Framework FRANMDCP07 (03.03 BIOS)
  Chipset: AMD Device 14e8
  Memory: 16GB
  Disk: 512GB Western Digital WD PC SN740 SDDPNQD-512G
  Graphics: AMD Phoenix1 512MB (2700/2800MHz)
  Audio: AMD Rembrandt Radeon HD Audio
  Network: MEDIATEK MT7922 802.11ax PCI
  OS: Ubuntu 23.10
  Kernel: 6.7.0-060700rc5-generic (x86_64)
  Desktop: GNOME Shell 45.1
  Display Server: X Server 1.21.1.7 + Wayland
  OpenGL: 4.6 Mesa 24.0~git2312160600.5d937f~oibaf~m (git-5d937f0 2023-12-16 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.56)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 2256x1504

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: performance) - Platform Profile: balanced - CPU Microcode: 0xa704103 - ACPI Profile: balanced

Python Details: Python 3.11.6

Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result summary (runs a / b / c):

  quicksilver: CTS2                        10580000 / 10910000 / 10510000
  tensorflow: CPU - 16 - VGG-16            7.46 / 7.46 / 7.47
  pytorch: CPU - 16 - Efficientnet_v2_l    8.52 / 8.48 / 8.40
  quicksilver: CORAL2 P2                   21860000 / 21860000 / 20910000
  pytorch: CPU - 16 - ResNet-152           12.50 / 12.37 / 12.50
  pytorch: CPU - 1 - Efficientnet_v2_l     11.95 / 11.88 / 11.93
  quicksilver: CORAL2 P1                   11580000 / 11610000 / 11500000
  tensorflow: CPU - 16 - ResNet-50         21.85 / 21.84 / 21.84
  pytorch: CPU - 16 - ResNet-50            28.64 / 27.86 / 29.46
  pytorch: CPU - 1 - ResNet-152            20.10 / 19.93 / 19.75
  y-cruncher: 1B                           37.131 / 37.21 / 37.507
  tensorflow: CPU - 1 - VGG-16             3.42 / 3.42 / 3.41
  tensorflow: CPU - 16 - GoogLeNet         65.04 / 65.09 / 65.56
  pytorch: CPU - 1 - ResNet-50             49.42 / 47.70 / 50.00
  tensorflow: CPU - 16 - AlexNet           89.66 / 88.85 / 89.39
  y-cruncher: 500M                         16.745 / 16.702 / 16.693
  tensorflow: CPU - 1 - ResNet-50          11.35 / 11.38 / 11.46
  tensorflow: CPU - 1 - AlexNet            11.09 / 11.11 / 11.09
  tensorflow: CPU - 1 - GoogLeNet          42.37 / 41.59 / 42.3
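The three runs differ by no more than about 6% on any test. As a quick way to quantify that run-to-run variation, here is a minimal Python sketch; the values are copied from the summary table above, while the dictionary layout and names are illustrative and not part of this result export. It computes the spread between the slowest and fastest of the three runs:

  # Minimal sketch: run-to-run spread for a few of the tests above.
  # Values are copied from this result; the structure is illustrative only.
  results = {
      "quicksilver: CTS2 (Figure Of Merit)": (10580000, 10910000, 10510000),
      "pytorch: CPU - 16 - ResNet-50 (batches/sec)": (28.64, 27.86, 29.46),
      "y-cruncher: 1B (seconds)": (37.131, 37.21, 37.507),
  }

  for test, (a, b, c) in results.items():
      spread_pct = (max(a, b, c) - min(a, b, c)) / min(a, b, c) * 100
      print(f"{test}: a={a} b={b} c={c} spread={spread_pct:.1f}%")

For this result the spread works out to roughly 3.8% for Quicksilver CTS2, about 5.7% for PyTorch ResNet-50 at batch size 16, and about 1% for the 1B y-cruncher time.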

Quicksilver

Input: CTS2

Quicksilver 20230818, Figure Of Merit, More Is Better (OpenBenchmarking.org)
a: 10580000, b: 10910000, c: 10510000
1. (CXX) g++ options: -fopenmp -O3 -march=native

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 7.46, b: 7.46, c: 7.47

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, More Is Better (OpenBenchmarking.org)
a: 8.52 (MIN: 7.31 / MAX: 8.87), b: 8.48 (MIN: 7.03 / MAX: 8.84), c: 8.40 (MIN: 6.7 / MAX: 8.85)

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818, Figure Of Merit, More Is Better (OpenBenchmarking.org)
a: 21860000, b: 21860000, c: 20910000
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1, batches/sec, More Is Better (OpenBenchmarking.org)
a: 12.50 (MIN: 12.18 / MAX: 13.17), b: 12.37 (MIN: 12 / MAX: 13.09), c: 12.50 (MIN: 12.06 / MAX: 13.18)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, More Is Better (OpenBenchmarking.org)
a: 11.95 (MIN: 11.49 / MAX: 12.13), b: 11.88 (MIN: 11.46 / MAX: 12.01), c: 11.93 (MIN: 11.4 / MAX: 12.16)

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818, Figure Of Merit, More Is Better (OpenBenchmarking.org)
a: 11580000, b: 11610000, c: 11500000
1. (CXX) g++ options: -fopenmp -O3 -march=native

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 21.85, b: 21.84, c: 21.84

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1, batches/sec, More Is Better (OpenBenchmarking.org)
a: 28.64 (MIN: 27.24 / MAX: 29.27), b: 27.86 (MIN: 25.37 / MAX: 28.2), c: 29.46 (MIN: 26.85 / MAX: 29.73)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1, batches/sec, More Is Better (OpenBenchmarking.org)
a: 20.10 (MIN: 19.01 / MAX: 20.85), b: 19.93 (MIN: 18.59 / MAX: 20.22), c: 19.75 (MIN: 18.47 / MAX: 20.69)

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3, Seconds, Fewer Is Better (OpenBenchmarking.org)
a: 37.13, b: 37.21, c: 37.51

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 3.42, b: 3.42, c: 3.41

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 65.04, b: 65.09, c: 65.56

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1, batches/sec, More Is Better (OpenBenchmarking.org)
a: 49.42 (MIN: 44.8 / MAX: 51.23), b: 47.70 (MIN: 40.33 / MAX: 49.08), c: 50.00 (MIN: 42.81 / MAX: 51.34)

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 89.66, b: 88.85, c: 89.39

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3, Seconds, Fewer Is Better (OpenBenchmarking.org)
a: 16.75, b: 16.70, c: 16.69

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 11.35, b: 11.38, c: 11.46

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 11.09, b: 11.11, c: 11.09

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12, images/sec, More Is Better (OpenBenchmarking.org)
a: 42.37, b: 41.59, c: 42.30


Phoronix Test Suite v10.8.5