ssss: AMD Ryzen 7 7840HS testing with a Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS) and AMD Radeon 780M 512MB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408129-NE-SSSS7399543&grs&sro.
ssss - System Details (configurations a, b, c, d all ran on the same system)

Processor: AMD Ryzen 7 7840HS @ 5.14GHz (8 Cores / 16 Threads)
Motherboard: Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS)
Chipset: AMD Device 14e8
Memory: 2 x 8GB DDR5-5600MT/s A-DATA AD5S56008G-B
Disk: 512GB Western Digital PC SN810 SDCPNRY-512G
Graphics: AMD Radeon 780M 512MB
Audio: AMD Navi 31 HDMI/DP
Network: MEDIATEK MT7922 802.11ax PCI
OS: Ubuntu 24.04
Kernel: 6.10.0-061000rc4daily20240621-generic (x86_64)
Desktop: GNOME Shell 46.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 24.2~git2406200600.0ac0fb~oibaf~n (git-0ac0fbc 2024-06-20 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 2560x1600

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); Platform Profile: balanced; CPU Microcode: 0xa704103; ACPI Profile: balanced
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Vulnerable: Safe RET no microcode; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected
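The Processor Details and Security Details entries above are the kernel's own self-reported strings. For readers who want to check the same fields on another machine, the Python sketch below reads the standard Linux sysfs interfaces those values come from; it mirrors the reported strings but is only an approximation, not the Phoronix Test Suite's actual detection code, and the paths assume a Linux install like this one.

    #!/usr/bin/env python3
    # Read the kernel interfaces behind the "Processor Details" and
    # "Security Details" fields above (sketch, not PTS code).
    from pathlib import Path

    def read(path: Path) -> str:
        try:
            return path.read_text().strip()
        except OSError:
            return "unavailable"

    # CPU vulnerability/mitigation status (the "Security Details" lines)
    for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").glob("*")):
        print(f"{entry.name}: {read(entry)}")

    # Frequency scaling and platform profile (the "Processor Details" lines)
    cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")
    print("Scaling Governor:", read(cpufreq / "scaling_driver"), read(cpufreq / "scaling_governor"))
    print("EPP:", read(cpufreq / "energy_performance_preference"))
    print("Platform Profile:", read(Path("/sys/firmware/acpi/platform_profile")))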
ssss - Result Overview (configurations a, b, c, d)

Test                               |       a |       b |       c |       d
mnn: resnet-v2-50 (ms)             |  11.059 |  11.172 |  15.359 |  15.861
mnn: SqueezeNetV1.0 (ms)           |   3.141 |   3.127 |   4.207 |   4.347
xnnpack: FP32MobileNetV3Small (us) |     547 |     545 |     738 |     746
mnn: mobilenet-v1-1.0 (ms)         |   2.119 |   2.156 |   2.824 |   2.896
xnnpack: FP16MobileNetV3Large (us) |    1499 |    1477 |    1950 |    2012
mnn: MobileNetV2_224 (ms)          |   2.228 |   2.268 |   2.900 |   3.017
xnnpack: FP16MobileNetV2 (us)      |    1414 |    1412 |    1849 |    1905
mnn: inception-v3 (ms)             |  21.215 |  22.212 |  27.866 |  28.441
mnn: squeezenetv1.1 (ms)           |   2.174 |   2.144 |   2.707 |   2.808
xnnpack: FP32MobileNetV2 (us)      |    2001 |    2018 |    2580 |    2579
xnnpack: FP32MobileNetV3Large (us) |    2263 |    2261 |    2885 |    2882
mnn: nasnet (ms)                   |   8.923 |   9.339 |  11.035 |  11.299
xnnpack: QU8MobileNetV3Large (us)  |     988 |     993 |    1234 |    1224
xnnpack: QU8MobileNetV2 (us)       |    1011 |    1010 |    1202 |    1234
mnn: mobilenetV3 (ms)              |   1.182 |   1.136 |   1.327 |   1.386
xnnpack: FP16MobileNetV3Small (us) |     548 |     550 |     656 |     663
xnnpack: QU8MobileNetV3Small (us)  |     449 |     449 |     510 |     516
lczero: Eigen (Nodes/s)            |      74 |      75 |      70 |      70
lczero: BLAS (Nodes/s)             |      93 |      91 |      87 |      87
y-cruncher: 1B (s)                 |   33.38 |  33.206 |  35.174 |  35.073
y-cruncher: 500M (s)               |  15.419 |  15.519 |  16.226 |  16.139
simdjson: DistinctUserID (GB/s)    |    9.02 |    8.98 |    8.89 |    8.58
simdjson: PartialTweets (GB/s)     |    8.22 |    8.04 |    8.22 |    8.08
simdjson: Kostya (GB/s)            |    5.21 |    5.26 |    5.20 |    5.20
simdjson: TopTweet (GB/s)          |    8.86 |    8.79 |    8.83 |    8.85
simdjson: LargeRandom (GB/s)       |    1.55 |    1.56 |    1.55 |    1.55

(ms, us, s: fewer is better; Nodes/s, GB/s: more is better)
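To compare the four runs at a glance, the overview can be collapsed into one relative-performance figure per configuration. The sketch below is not part of this export: it uses a hand-copied subset of the values above, normalizes each test against configuration a, and inverts the lower-is-better metrics so that a ratio above 1.0 always means faster.

    # Collapse the overview table into one relative-performance figure per
    # configuration (sketch; subset of the values above, baseline = "a").
    from math import prod

    # (test, unit, higher_is_better, {config: value}) -- subset of the overview
    RESULTS = [
        ("mnn: resnet-v2-50", "ms", False, {"a": 11.059, "b": 11.172, "c": 15.359, "d": 15.861}),
        ("xnnpack: FP32MobileNetV2", "us", False, {"a": 2001, "b": 2018, "c": 2580, "d": 2579}),
        ("lczero: BLAS", "Nodes/s", True, {"a": 93, "b": 91, "c": 87, "d": 87}),
        ("y-cruncher: 1B", "s", False, {"a": 33.38, "b": 33.206, "c": 35.174, "d": 35.073}),
        ("simdjson: DistinctUserID", "GB/s", True, {"a": 9.02, "b": 8.98, "c": 8.89, "d": 8.58}),
    ]

    def speedup(value, baseline, higher_is_better):
        # Ratio versus the baseline; >1.0 means this configuration is faster.
        return value / baseline if higher_is_better else baseline / value

    for config in ("a", "b", "c", "d"):
        ratios = [speedup(vals[config], vals["a"], hib) for _, _, hib, vals in RESULTS]
        geomean = prod(ratios) ** (1 / len(ratios))
        print(f"{config}: geometric mean {geomean:.3f}x relative to 'a' ({len(ratios)} tests)")

As the table itself shows, configurations a and b track each other closely, while c and d are consistently slower on the MNN and XNNPACK workloads and only slightly behind, or on par, elsewhere.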
Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 (ms, fewer is better)
a: 11.06 | b: 11.17 | c: 15.36 | d: 15.86 (SE +/- 0.05, N = 3)
MIN / MAX: a 10.38 / 21.91, b 10.36 / 22.22, c 12.74 / 27.79, d 13.14 / 30.47
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
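Each per-test entry reports a standard error such as "SE +/- 0.05, N = 3", i.e. the spread of the mean across the three recorded runs. The export only contains aggregated numbers, so the per-run times in the sketch below are hypothetical, chosen only to illustrate how such a value is derived.

    # How a reported "SE +/- 0.05, N = 3" relates to the underlying runs:
    # standard error of the mean = sample standard deviation / sqrt(N).
    # The three run times below are hypothetical; the export does not
    # include individual run times.
    from statistics import mean, stdev
    from math import sqrt

    runs = [10.97, 11.06, 11.15]          # hypothetical per-run times in ms
    se = stdev(runs) / sqrt(len(runs))
    print(f"mean = {mean(runs):.2f} ms, SE +/- {se:.2f}, N = {len(runs)}")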
Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 (ms, fewer is better)
a: 3.141 | b: 3.127 | c: 4.207 | d: 4.347 (SE +/- 0.001, N = 3)
MIN / MAX: a 2.95 / 8.05, b 2.98 / 5.44, c 3.14 / 16.16, d 3.15 / 16.74
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP32MobileNetV3Small (us, fewer is better)
a: 547 | b: 545 | c: 738 | d: 746 (SE +/- 6.89, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 (ms, fewer is better)
a: 2.119 | b: 2.156 | c: 2.824 | d: 2.896 (SE +/- 0.012, N = 3)
MIN / MAX: a 2.06 / 3.49, b 2.05 / 18.92, c 2.08 / 16.25, d 2.07 / 5.64
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP16MobileNetV3Large (us, fewer is better)
a: 1499 | b: 1477 | c: 1950 | d: 2012 (SE +/- 5.78, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 (ms, fewer is better)
a: 2.228 | b: 2.268 | c: 2.900 | d: 3.017 (SE +/- 0.018, N = 3)
MIN / MAX: a 2.13 / 6.79, b 2.13 / 7.52, c 2.21 / 16.4, d 2.2 / 23.33
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP16MobileNetV2 (us, fewer is better)
a: 1414 | b: 1412 | c: 1849 | d: 1905 (SE +/- 8.74, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 (ms, fewer is better)
a: 21.22 | b: 22.21 | c: 27.87 | d: 28.44 (SE +/- 0.08, N = 3)
MIN / MAX: a 19.98 / 36.97, b 20.24 / 37.96, c 23.24 / 54.76, d 23.27 / 44.42
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 (ms, fewer is better)
a: 2.174 | b: 2.144 | c: 2.707 | d: 2.808 (SE +/- 0.022, N = 3)
MIN / MAX: a 2.06 / 7.45, b 2.07 / 4.89, c 2 / 15.51, d 2.03 / 5.14
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP32MobileNetV2 (us, fewer is better)
a: 2001 | b: 2018 | c: 2580 | d: 2579 (SE +/- 12.91, N = 3)
(CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b - Model: FP32MobileNetV3Large (us, fewer is better)
a: 2263 | b: 2261 | c: 2885 | d: 2882 (SE +/- 12.12, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: nasnet (ms, fewer is better)
a: 8.923 | b: 9.339 | c: 11.035 | d: 11.299 (SE +/- 0.030, N = 3)
MIN / MAX: a 8.41 / 36.07, b 8.52 / 20.37, c 8.95 / 23.97, d 9.02 / 23.3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: QU8MobileNetV3Large (us, fewer is better)
a: 988 | b: 993 | c: 1234 | d: 1224 (SE +/- 3.21, N = 3)
(CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b - Model: QU8MobileNetV2 (us, fewer is better)
a: 1011 | b: 1010 | c: 1202 | d: 1234 (SE +/- 5.55, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 (ms, fewer is better)
a: 1.182 | b: 1.136 | c: 1.327 | d: 1.386 (SE +/- 0.009, N = 3)
MIN / MAX: a 1.14 / 11.45, b 1.11 / 8.06, c 1.07 / 4.99, d 1.07 / 3.57
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP16MobileNetV3Small (us, fewer is better)
a: 548 | b: 550 | c: 656 | d: 663 (SE +/- 2.73, N = 3)
(CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b - Model: QU8MobileNetV3Small (us, fewer is better)
a: 449 | b: 449 | c: 510 | d: 516 (SE +/- 1.67, N = 3)
(CXX) g++ options: -O3 -lrt -lm
LeelaChessZero 0.31.1 - Backend: Eigen (Nodes Per Second, more is better)
a: 74 | b: 75 | c: 70 | d: 70 (SE +/- 0.58, N = 3)
(CXX) g++ options: -flto -pthread
LeelaChessZero 0.31.1 - Backend: BLAS (Nodes Per Second, more is better)
a: 93 | b: 91 | c: 87 | d: 87 (SE +/- 0.58, N = 3)
(CXX) g++ options: -flto -pthread
Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, fewer is better)
a: 33.38 | b: 33.21 | c: 35.17 | d: 35.07 (SE +/- 0.06, N = 3)
Y-Cruncher 0.8.5 - Pi Digits To Calculate: 500M (Seconds, fewer is better)
a: 15.42 | b: 15.52 | c: 16.23 | d: 16.14 (SE +/- 0.02, N = 3)
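The Y-Cruncher results are wall-clock seconds for two different digit counts. Dividing digits by time puts the 500M and 1B runs on a common throughput scale; the sketch below uses the times from the two entries above, and the Mdigits/s figures are derived here rather than part of the export.

    # Per-digit throughput from the Y-Cruncher wall-clock times above.
    TIMES = {  # config: (seconds for 500M digits, seconds for 1B digits)
        "a": (15.419, 33.38),
        "b": (15.519, 33.206),
        "c": (16.226, 35.174),
        "d": (16.139, 35.073),
    }
    for config, (t_500m, t_1b) in TIMES.items():
        print(f"{config}: {500e6 / t_500m / 1e6:.1f} Mdigits/s at 500M, "
              f"{1e9 / t_1b / 1e6:.1f} Mdigits/s at 1B")

On every configuration the per-digit throughput is a little lower for the 1B run than for the 500M run, as expected for the larger computation.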
simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, more is better)
a: 9.02 | b: 8.98 | c: 8.89 | d: 8.58 (SE +/- 0.07, N = 9)
(CXX) g++ options: -O3 -lrt
simdjson 3.10 - Throughput Test: PartialTweets (GB/s, more is better)
a: 8.22 | b: 8.04 | c: 8.22 | d: 8.08 (SE +/- 0.05, N = 3)
(CXX) g++ options: -O3 -lrt
simdjson 3.10 - Throughput Test: Kostya (GB/s, more is better)
a: 5.21 | b: 5.26 | c: 5.20 | d: 5.20 (SE +/- 0.01, N = 3)
(CXX) g++ options: -O3 -lrt
simdjson 3.10 - Throughput Test: TopTweet (GB/s, more is better)
a: 8.86 | b: 8.79 | c: 8.83 | d: 8.85 (SE +/- 0.04, N = 3)
(CXX) g++ options: -O3 -lrt
simdjson 3.10 - Throughput Test: LargeRandom (GB/s, more is better)
a: 1.55 | b: 1.56 | c: 1.55 | d: 1.55 (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -lrt
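The simdjson results are parse throughput in GB/s rather than a time. To relate them to the latency-style numbers above, the sketch below converts throughput into an approximate per-document parse time; the 631 KB document size is a placeholder (the exported view does not record the input sizes), and decimal gigabytes are assumed.

    # Convert simdjson throughput (GB/s) into an approximate per-document
    # parse time. DOC_BYTES is a placeholder size, not from this export.
    DOC_BYTES = 631 * 1024
    GB = 1_000_000_000

    # DistinctUserID results from the entry above
    for config, gbps in {"a": 9.02, "b": 8.98, "c": 8.89, "d": 8.58}.items():
        micros = DOC_BYTES / (gbps * GB) * 1e6
        print(f"{config}: {gbps} GB/s  ->  ~{micros:.1f} us per parse")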
Phoronix Test Suite v10.8.5