fghj

AMD Ryzen 9 5900HX testing with an ASUS ROG Strix G513QY_G513QY G513QY v1.0 (G513QY.318 BIOS) and ASUS AMD Cezanne 512MB graphics on Ubuntu 22.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401107-PTS-FGHJ244998.

System Details (shared by result identifiers a, b, c):

Processor: AMD Ryzen 9 5900HX @ 3.30GHz (8 Cores / 16 Threads)
Motherboard: ASUS ROG Strix G513QY_G513QY G513QY v1.0 (G513QY.318 BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 2 x 8 GB DDR4-3200MT/s Micron 4ATF1G64HZ-3G2E2
Disk: 512GB SAMSUNG MZVLQ512HBLU-00B00
Graphics: ASUS AMD Cezanne 512MB (2500/1000MHz)
Audio: AMD Navi 21/23
Monitor: LQ156M1JW25
Network: Realtek RTL8111/8168/8411 + MEDIATEK MT7921 802.11ax PCI
OS: Ubuntu 22.10
Kernel: 5.19.0-46-generic (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.4 + Wayland
OpenGL: 4.6 Mesa 22.2.5 (LLVM 15.0.2 DRM 3.47)
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - Platform Profile: balanced - CPU Microcode: 0xa50000c - ACPI Profile: balanced

Python Details: Python 3.10.7

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (values for a / b / c):

quicksilver - CTS2: 11390000 / 11406667 / 11416667
quicksilver - CORAL2 P1: 11990000 / 11966667 / 11953333
quicksilver - CORAL2 P2: 22560000 / 22713333 / 22536667
y-cruncher - 1B: 50.423 / 49.828 / 49.804
y-cruncher - 500M: 23.263 / 23.077 / 22.894
pytorch - CPU - 1 - ResNet-50: 34.04 / 34.44 / 34.13
pytorch - CPU - 1 - ResNet-152: 15.15 / 15.27 / 15.21
pytorch - CPU - 16 - ResNet-50: 20.22 / 18.95 / 19.76
pytorch - CPU - 16 - ResNet-152: 9.09 / 9.07 / 9.14
pytorch - CPU - 1 - Efficientnet_v2_l: 9.58 / 9.46 / 9.48
pytorch - CPU - 16 - Efficientnet_v2_l: 6.22 / 6.28 / 6.30
tensorflow - CPU - 1 - VGG-16: 1.43 / 1.45 / 1.46
tensorflow - CPU - 1 - AlexNet: 4.61 / 4.69 / 4.71
tensorflow - CPU - 16 - VGG-16: 3.50 / 3.52 / 3.54
tensorflow - CPU - 16 - AlexNet: 40.05 / 40.34 / 40.40
tensorflow - CPU - 1 - GoogLeNet: 12.17 / 12.22 / 12.05
tensorflow - CPU - 1 - ResNet-50: 5.09 / 5.13 / 5.15
tensorflow - CPU - 16 - GoogLeNet: 21.12 / 21.20 / 21.25
tensorflow - CPU - 16 - ResNet-50: 7.59 / 7.65 / 7.70
speedb - Random Fill: 819366 / 821234 / 818673
speedb - Random Read: 51459190 / 51062544 / 51108888
speedb - Update Random: 458594 / 472342 / 474392
speedb - Sequential Fill: 933961 / 935621 / 944900
speedb - Random Fill Sync: 11900 / 6186 / 5060
speedb - Read While Writing: 3035137 / 2842258 / 2960839
speedb - Read Random Write Random: 1769691 / 1776164 / 1778569

Quicksilver

Input: CTS2

Quicksilver 20230818 - Input: CTS2 - Figure Of Merit, More Is Better (OpenBenchmarking.org)
a: 11390000 | b: 11406667 | c: 11416667
SE +/- 23333.33, N = 3; SE +/- 13333.33, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Input: CORAL2 P1 - Figure Of Merit, More Is Better (OpenBenchmarking.org)
a: 11990000 | b: 11966667 | c: 11953333
SE +/- 17638.34, N = 3; SE +/- 12018.50, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Input: CORAL2 P2 - Figure Of Merit, More Is Better (OpenBenchmarking.org)
a: 22560000 | b: 22713333 | c: 22536667
SE +/- 102034.85, N = 3; SE +/- 39299.42, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 1B - Seconds, Fewer Is Better (OpenBenchmarking.org)
a: 50.42 | b: 49.83 | c: 49.80
SE +/- 0.01, N = 3; SE +/- 0.04, N = 3

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M - Seconds, Fewer Is Better (OpenBenchmarking.org)
a: 23.26 | b: 23.08 | c: 22.89
SE +/- 0.04, N = 3; SE +/- 0.02, N = 3

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 - batches/sec, More Is Better (OpenBenchmarking.org)
a: 34.04 (MIN: 28.91 / MAX: 36.18) | b: 34.44 (MIN: 26.93 / MAX: 39.45) | c: 34.13 (MIN: 26.23 / MAX: 39.87)
SE +/- 0.48, N = 12; SE +/- 0.34, N = 15
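
The PyTorch figures here and in the following charts are CPU inference throughput in batches per second. Purely as an illustration of how such a number can be measured (this is a minimal sketch, not the Phoronix Test Suite's PyTorch test script; the weights, warm-up count, and iteration count are arbitrary choices), timing ResNet-50 forward passes with torch and torchvision looks roughly like this:

import time
import torch
import torchvision.models as models

def batches_per_second(batch_size, iterations=30, warmup=5):
    # Untrained weights are fine here; only forward-pass speed is of interest.
    model = models.resnet50(weights=None).eval()
    x = torch.randn(batch_size, 3, 224, 224)  # random ImageNet-sized input
    with torch.no_grad():
        for _ in range(warmup):                # warm-up passes are not timed
            model(x)
        start = time.perf_counter()
        for _ in range(iterations):
            model(x)
        elapsed = time.perf_counter() - start
    return iterations / elapsed                # batches/sec, the unit used in these charts

if __name__ == "__main__":
    for bs in (1, 16):
        print(f"batch size {bs}: {batches_per_second(bs):.2f} batches/sec")

Numbers from a sketch like this will not match the charts exactly; they depend on thread count, the oneDNN/BLAS backend, and how the PyTorch wheel was built, but the relative behaviour across batch sizes should be broadly similar.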

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 - batches/sec, More Is Better (OpenBenchmarking.org)
a: 15.15 (MIN: 13.55 / MAX: 16.17) | b: 15.27 (MIN: 12.92 / MAX: 16.47) | c: 15.21 (MIN: 13.44 / MAX: 16.4)
SE +/- 0.12, N = 3; SE +/- 0.08, N = 3

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 - batches/sec, More Is Better (OpenBenchmarking.org)
a: 20.22 (MIN: 18.6 / MAX: 20.71) | b: 18.95 (MIN: 14.69 / MAX: 21.46) | c: 19.76 (MIN: 16.58 / MAX: 21.09)
SE +/- 0.25, N = 15; SE +/- 0.21, N = 3

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 - batches/sec, More Is Better (OpenBenchmarking.org)
a: 9.09 (MIN: 8.47 / MAX: 9.73) | b: 9.07 (MIN: 7.85 / MAX: 10) | c: 9.14 (MIN: 6.86 / MAX: 9.99)
SE +/- 0.07, N = 12; SE +/- 0.08, N = 12

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l - batches/sec, More Is Better (OpenBenchmarking.org)
a: 9.58 (MIN: 8.52 / MAX: 9.8) | b: 9.46 (MIN: 8.4 / MAX: 9.86) | c: 9.48 (MIN: 8.48 / MAX: 9.81)
SE +/- 0.04, N = 3; SE +/- 0.02, N = 3

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l - batches/sec, More Is Better (OpenBenchmarking.org)
a: 6.22 (MIN: 5.85 / MAX: 6.47) | b: 6.28 (MIN: 5.5 / MAX: 6.59) | c: 6.30 (MIN: 5.7 / MAX: 6.6)
SE +/- 0.03, N = 3; SE +/- 0.07, N = 3

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: VGG-16 - images/sec, More Is Better (OpenBenchmarking.org)
a: 1.43 | b: 1.45 | c: 1.46
SE +/- 0.00, N = 3; SE +/- 0.00, N = 3
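
The TensorFlow results are reported the same way, as CPU images per second. A minimal sketch of that kind of measurement, assuming tensorflow is installed and using the Keras VGG16 application model purely as a stand-in (again, this is not the Phoronix Test Suite's TensorFlow test profile; warm-up and iteration counts are arbitrary):

import time
import numpy as np
import tensorflow as tf

def images_per_second(batch_size, iterations=20, warmup=3):
    # Random weights; only inference throughput is of interest.
    model = tf.keras.applications.VGG16(weights=None)
    x = np.random.rand(batch_size, 224, 224, 3).astype("float32")
    for _ in range(warmup):                    # warm-up runs build the graph and fill caches
        model.predict(x, verbose=0)
    start = time.perf_counter()
    for _ in range(iterations):
        model.predict(x, verbose=0)
    elapsed = time.perf_counter() - start
    return iterations * batch_size / elapsed   # images/sec, as reported in these charts

if __name__ == "__main__":
    for bs in (1, 16):
        print(f"batch size {bs}: {images_per_second(bs):.2f} images/sec")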

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: AlexNet - images/sec, More Is Better (OpenBenchmarking.org)
a: 4.61 | b: 4.69 | c: 4.71
SE +/- 0.00, N = 3; SE +/- 0.01, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: VGG-16 - images/sec, More Is Better (OpenBenchmarking.org)
a: 3.50 | b: 3.52 | c: 3.54
SE +/- 0.00, N = 3; SE +/- 0.00, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet - images/sec, More Is Better (OpenBenchmarking.org)
a: 40.05 | b: 40.34 | c: 40.40
SE +/- 0.02, N = 3; SE +/- 0.02, N = 3

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: GoogLeNet - images/sec, More Is Better (OpenBenchmarking.org)
a: 12.17 | b: 12.22 | c: 12.05
SE +/- 0.04, N = 3; SE +/- 0.17, N = 3

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: ResNet-50 - images/sec, More Is Better (OpenBenchmarking.org)
a: 5.09 | b: 5.13 | c: 5.15
SE +/- 0.00, N = 3; SE +/- 0.00, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet - images/sec, More Is Better (OpenBenchmarking.org)
a: 21.12 | b: 21.20 | c: 21.25
SE +/- 0.02, N = 3; SE +/- 0.04, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 - images/sec, More Is Better (OpenBenchmarking.org)
a: 7.59 | b: 7.65 | c: 7.70
SE +/- 0.00, N = 3; SE +/- 0.01, N = 3

Speedb

Test: Random Fill

Speedb 2.7 - Test: Random Fill - Op/s, More Is Better (OpenBenchmarking.org)
a: 819366 | b: 821234 | c: 818673
SE +/- 1525.32, N = 3; SE +/- 2403.56, N = 3
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

Speedb 2.7 - Test: Random Read - Op/s, More Is Better (OpenBenchmarking.org)
a: 51459190 | b: 51062544 | c: 51108888
SE +/- 51114.25, N = 3; SE +/- 78995.12, N = 3
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
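
Speedb is a RocksDB-compatible key-value store, and the Op/s figures in these charts come from db_bench-style workloads. Purely to illustrate the random-fill and random-read access patterns being measured, here is a rough sketch using the python-rocksdb bindings as a stand-in; that library choice is an assumption for illustration only, not Speedb's own API and not what the test profile runs, and the key count and value size are arbitrary.

import os
import random
import time
import rocksdb

def run(num_keys=100_000, value_size=100):
    opts = rocksdb.Options(create_if_missing=True)
    db = rocksdb.DB("bench.db", opts)
    keys = [f"{i:016d}".encode() for i in range(num_keys)]
    value = os.urandom(value_size)

    random.shuffle(keys)                      # random fill: keys written in random order
    start = time.perf_counter()
    for k in keys:
        db.put(k, value)
    fill_ops = num_keys / (time.perf_counter() - start)

    random.shuffle(keys)                      # random read: keys fetched in random order
    start = time.perf_counter()
    for k in keys:
        db.get(k)
    read_ops = num_keys / (time.perf_counter() - start)

    print(f"random fill: {fill_ops:,.0f} op/s, random read: {read_ops:,.0f} op/s")

if __name__ == "__main__":
    run()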

Speedb

Test: Update Random

Speedb 2.7 - Test: Update Random - Op/s, More Is Better (OpenBenchmarking.org)
a: 458594 | b: 472342 | c: 474392
SE +/- 842.48, N = 3; SE +/- 1807.73, N = 3
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Sequential Fill

Speedb 2.7 - Test: Sequential Fill - Op/s, More Is Better (OpenBenchmarking.org)
a: 933961 | b: 935621 | c: 944900
SE +/- 2992.09, N = 3; SE +/- 3484.82, N = 3
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Fill Sync

Speedb 2.7 - Test: Random Fill Sync - Op/s, More Is Better (OpenBenchmarking.org)
a: 11900 | b: 6186 | c: 5060
SE +/- 436.52, N = 15; SE +/- 513.98, N = 15
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read While Writing

Speedb 2.7 - Test: Read While Writing - Op/s, More Is Better (OpenBenchmarking.org)
a: 3035137 | b: 2842258 | c: 2960839
SE +/- 24784.71, N = 3; SE +/- 41229.39, N = 3
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

Speedb 2.7 - Test: Read Random Write Random - Op/s, More Is Better (OpenBenchmarking.org)
a: 1769691 | b: 1776164 | c: 1778569
SE +/- 3532.02, N = 3; SE +/- 579.88, N = 3
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread


Phoronix Test Suite v10.8.4