dghhg

AMD Ryzen Threadripper 3990X 64-Core testing with a Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401086-PTS-DGHHG38612&sor&grr.

Configurations a, b, c and d (identical system):

Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
Motherboard: Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS)
Chipset: AMD Starship/Matisse
Memory: 128GB
Disk: Samsung SSD 970 EVO Plus 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: DELL P2415Q
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.10
Kernel: 6.5.0-14-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.2.1-1ubuntu3 (LLVM 15.0.7 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x830107a
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview — configurations a, b, c and d were benchmarked across: PyTorch 2.1 (Efficientnet_v2_l, ResNet-152 and ResNet-50 at batch sizes 1 through 512), TensorFlow 2.12 (VGG-16, GoogLeNet, AlexNet and ResNet-50), Y-Cruncher 0.8.3 (500M to 10B Pi digits), Quicksilver 20230818 (CTS2, CORAL2 P1 and P2) and Speedb 2.7 (random read/write workloads). Per-test results follow.
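Since a, b, c and d are repeated runs on the same hardware, the interesting quantity is run-to-run variation rather than a winner. A minimal Python sketch (not part of the Phoronix Test Suite; the values are transcribed from the PyTorch ResNet-50 batch-512 results in this file) for quantifying the best-to-worst spread of any one test:

```python
def spread_pct(values):
    """Percent difference between the best and worst run of the same test."""
    return (max(values) - min(values)) / min(values) * 100.0

# batches/sec for configs a, d, c, b from the ResNet-50 / batch size 512 section
resnet50_bs512 = [17.72, 17.07, 16.90, 16.47]
print(f"{spread_pct(resnet50_bs512):.1f}%")  # ~7.6% spread across identical runs
```

The same helper applied to the Y-Cruncher or Quicksilver figures shows spreads well under 1%, which is why the PyTorch results are the ones worth treating with caution.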

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better):
  b: 2.92  (MIN: 2.81 / MAX: 3.04)
  a: 2.92  (MIN: 2.82 / MAX: 3.06)
  d: 2.88  (MIN: 2.78 / MAX: 2.98)
  c: 2.87  (MIN: 2.74 / MAX: 2.99)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better):
  b: 3.00  (MIN: 2.91 / MAX: 3.1)
  d: 2.91  (MIN: 2.8 / MAX: 3.01)
  a: 2.91  (MIN: 2.81 / MAX: 3.01)
  c: 2.90  (MIN: 2.77 / MAX: 3.01)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better):
  d: 2.99  (MIN: 2.88 / MAX: 3.07)
  a: 2.95  (MIN: 2.77 / MAX: 3.07)
  c: 2.89  (MIN: 2.75 / MAX: 3.02)
  b: 2.89  (MIN: 2.8 / MAX: 3)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better):
  c: 2.96  (MIN: 2.81 / MAX: 3.07)
  a: 2.94  (MIN: 2.82 / MAX: 3.09)
  b: 2.93  (MIN: 2.79 / MAX: 3.06)
  d: 2.91  (MIN: 2.73 / MAX: 3.13)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better):
  b: 3.03  (MIN: 2.92 / MAX: 3.15)
  d: 2.95  (MIN: 2.85 / MAX: 3.06)
  c: 2.93  (MIN: 2.83 / MAX: 3.05)
  a: 2.90  (MIN: 2.8 / MAX: 3.01)

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12 (images/sec, More Is Better):
  d: 3.64
  c: 3.64
  b: 3.64
  a: 3.64

Y-Cruncher

Pi Digits To Calculate: 10B

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better; SE +/- 0.08, N = 3):
  b: 238.23
  c: 238.47
  a: 238.56
  d: 238.57

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better):
  d: 6.99  (MIN: 6.8 / MAX: 7.16)
  c: 6.96  (MIN: 6.83 / MAX: 7.08)
  a: 6.88  (MIN: 6.61 / MAX: 7.02)
  b: 6.82  (MIN: 6.68 / MAX: 7.01)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better):
  c: 7.06  (MIN: 6.92 / MAX: 7.17)
  a: 6.98  (MIN: 6.83 / MAX: 7.16)
  b: 6.88  (MIN: 6.75 / MAX: 7.02)
  d: 6.82  (MIN: 6.57 / MAX: 7)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better):
  a: 7.09  (MIN: 6.94 / MAX: 7.22)
  d: 6.99  (MIN: 6.81 / MAX: 7.11)
  c: 6.91  (MIN: 6.75 / MAX: 7.03)
  b: 6.84  (MIN: 6.7 / MAX: 6.97)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better):
  c: 7.04  (MIN: 6.57 / MAX: 7.18)
  b: 7.04  (MIN: 6.9 / MAX: 7.17)
  a: 6.96  (MIN: 6.82 / MAX: 7.1)
  d: 6.87  (MIN: 6.73 / MAX: 7)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better):
  c: 7.11  (MIN: 6.98 / MAX: 7.26)
  a: 7.03  (MIN: 6.9 / MAX: 7.16)
  d: 7.01  (MIN: 6.82 / MAX: 7.14)
  b: 6.92  (MIN: 6.79 / MAX: 7.07)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better):
  c: 4.36  (MIN: 4.13 / MAX: 4.51)
  d: 4.34  (MIN: 4.03 / MAX: 4.53)
  a: 4.34  (MIN: 4.15 / MAX: 4.53)
  b: 4.31  (MIN: 3.99 / MAX: 4.46)

Quicksilver

Input: CTS2

Quicksilver 20230818 (Figure Of Merit, More Is Better):
  b: 20800000
  d: 20770000
  a: 20740000
  c: 20680000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 (Figure Of Merit, More Is Better):
  a: 24610000
  b: 24600000
  d: 24550000
  c: 24540000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Y-Cruncher

Pi Digits To Calculate: 5B

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better; SE +/- 0.11, N = 3):
  d: 110.62
  b: 110.66
  c: 110.86
  a: 111.06

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 (images/sec, More Is Better):
  c: 9.73
  a: 9.71
  d: 9.66
  b: 9.54

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better):
  a: 8.58  (MIN: 8.38 / MAX: 8.76)
  c: 8.48  (MIN: 8.26 / MAX: 8.67)
  d: 8.38  (MIN: 8.19 / MAX: 8.58)
  b: 8.36  (MIN: 8.21 / MAX: 8.57)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better):
  a: 17.72  (MIN: 17.17 / MAX: 18.5)
  d: 17.07  (MIN: 16.08 / MAX: 17.92)
  c: 16.90  (MIN: 16.19 / MAX: 17.59)
  b: 16.47  (MIN: 15.78 / MAX: 17.3)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better):
  a: 17.64  (MIN: 16.6 / MAX: 18.41)
  d: 17.38  (MIN: 16.76 / MAX: 18)
  b: 17.27  (MIN: 16.5 / MAX: 17.89)
  c: 16.53  (MIN: 15.99 / MAX: 17.32)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better):
  b: 17.73  (MIN: 17.04 / MAX: 18.52)
  d: 17.48  (MIN: 16.71 / MAX: 18.01)
  a: 17.39  (MIN: 16.36 / MAX: 18.06)
  c: 17.03  (MIN: 16.02 / MAX: 17.78)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better):
  a: 17.69  (MIN: 16.96 / MAX: 18.46)
  c: 17.44  (MIN: 15.61 / MAX: 18.15)
  b: 17.26  (MIN: 16.19 / MAX: 17.95)
  d: 16.94  (MIN: 16.24 / MAX: 17.77)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better):
  d: 17.90  (MIN: 17.25 / MAX: 18.79)
  b: 17.85  (MIN: 17.23 / MAX: 18.44)
  a: 17.37  (MIN: 16.47 / MAX: 17.93)
  c: 17.24  (MIN: 16.41 / MAX: 17.86)

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 (images/sec, More Is Better):
  d: 1.84
  a: 1.84
  c: 1.83
  b: 1.83

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 (images/sec, More Is Better):
  d: 30.67
  b: 30.53
  a: 30.30
  c: 30.28

Speedb

Test: Update Random

Speedb 2.7 (Op/s, More Is Better):
  a: 259962
  b: 258494
  c: 258420
  d: 258168
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

Speedb 2.7 (Op/s, More Is Better):
  d: 2202068
  c: 2194844
  a: 2193933
  b: 2182749
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read While Writing

Speedb 2.7 (Op/s, More Is Better):
  b: 13256747
  c: 13042085
  d: 12917858
  a: 12812292
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

Speedb 2.7 (Op/s, More Is Better):
  b: 184129582
  c: 183899682
  a: 183616073
  d: 183061936
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better):
  a: 21.51  (MIN: 20.69 / MAX: 22.56)
  c: 21.42  (MIN: 20.69 / MAX: 22.59)
  d: 20.79  (MIN: 19.68 / MAX: 21.6)
  b: 20.54  (MIN: 19.65 / MAX: 21.6)

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 (Figure Of Merit, More Is Better):
  b: 25800000
  d: 25740000
  a: 25650000
  c: 25540000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better; SE +/- 0.01, N = 3):
  a: 19.01
  d: 19.08
  b: 19.11
  c: 19.13

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 (images/sec, More Is Better):
  b: 54.83
  d: 54.29
  c: 53.82
  a: 53.79

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 (images/sec, More Is Better):
  b: 5.03
  c: 5.02
  a: 5.02
  d: 4.99

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 (images/sec, More Is Better):
  c: 4.93
  a: 4.91
  d: 4.89
  b: 4.89

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 (images/sec, More Is Better):
  c: 8.72
  b: 8.70
  d: 8.69
  a: 8.69

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better):
  a: 9.529
  c: 9.532
  d: 9.542
  b: 9.549
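One way to collapse the four Y-Cruncher digit counts into a single per-configuration score is a geometric mean, a common aggregation for benchmark composites. A minimal Python sketch (not a Phoronix Test Suite feature; times transcribed from the four Y-Cruncher sections above):

```python
from math import prod

# Y-Cruncher wall times in seconds (fewer is better) per configuration,
# for the 500M, 1B, 5B and 10B digit counts, taken from the results above.
times = {
    "a": [9.529, 19.01, 111.06, 238.56],
    "b": [9.549, 19.11, 110.66, 238.23],
    "c": [9.532, 19.13, 110.86, 238.47],
    "d": [9.542, 19.08, 110.62, 238.57],
}

def geomean(xs):
    # nth root of the product; less dominated by the long 10B run than an
    # arithmetic mean would be
    return prod(xs) ** (1.0 / len(xs))

# print configurations fastest-first by composite score
for cfg in sorted(times, key=lambda c: geomean(times[c])):
    print(f"{cfg}: {geomean(times[cfg]):.2f} s")
```

With spreads this small the composite scores land within a fraction of a percent of each other, consistent with four runs of the same machine.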


Phoronix Test Suite v10.8.5