dghhg: AMD Ryzen Threadripper 3990X 64-Core testing with a Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2401086-PTS-DGHHG38612&sro&grs
dghhg: System Configuration (identical across runs a, b, c, d)

Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
Motherboard: Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS)
Chipset: AMD Starship/Matisse
Memory: 128GB
Disk: Samsung SSD 970 EVO Plus 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: DELL P2415Q
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.10
Kernel: 6.5.0-14-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.2.1-1ubuntu3 (LLVM 15.0.7 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x830107a
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT enabled with STIBP protection; spec_rstack_overflow: Mitigation of safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
dghhg: Result Overview (runs a, b, c, d)

Test | a | b | c | d
pytorch: CPU - 512 - ResNet-50 | 17.72 | 16.47 | 16.90 | 17.07
pytorch: CPU - 256 - ResNet-50 | 17.64 | 17.27 | 16.53 | 17.38
pytorch: CPU - 1 - ResNet-50 | 21.51 | 20.54 | 21.42 | 20.79
pytorch: CPU - 16 - Efficientnet_v2_l | 2.90 | 3.03 | 2.93 | 2.95
pytorch: CPU - 64 - ResNet-50 | 17.69 | 17.26 | 17.44 | 16.94
pytorch: CPU - 16 - ResNet-50 | 17.39 | 17.73 | 17.03 | 17.48
pytorch: CPU - 32 - ResNet-50 | 17.37 | 17.85 | 17.24 | 17.90
pytorch: CPU - 64 - ResNet-152 | 7.09 | 6.84 | 6.91 | 6.99
pytorch: CPU - 512 - ResNet-152 | 6.98 | 6.88 | 7.06 | 6.82
speedb: Read While Writing | 12812292 | 13256747 | 13042085 | 12917858
pytorch: CPU - 32 - Efficientnet_v2_l | 2.95 | 2.89 | 2.89 | 2.99
pytorch: CPU - 512 - Efficientnet_v2_l | 2.91 | 3.00 | 2.90 | 2.91
pytorch: CPU - 32 - ResNet-152 | 7.03 | 6.92 | 7.11 | 7.01
pytorch: CPU - 1 - ResNet-152 | 8.58 | 8.36 | 8.48 | 8.38
pytorch: CPU - 256 - ResNet-152 | 6.88 | 6.82 | 6.96 | 6.99
pytorch: CPU - 16 - ResNet-152 | 6.96 | 7.04 | 7.04 | 6.87
tensorflow: CPU - 16 - ResNet-50 | 9.71 | 9.54 | 9.73 | 9.66
tensorflow: CPU - 16 - AlexNet | 53.79 | 54.83 | 53.82 | 54.29
pytorch: CPU - 256 - Efficientnet_v2_l | 2.92 | 2.92 | 2.87 | 2.88
pytorch: CPU - 64 - Efficientnet_v2_l | 2.94 | 2.93 | 2.96 | 2.91
tensorflow: CPU - 16 - GoogLeNet | 30.3 | 30.53 | 30.28 | 30.67
pytorch: CPU - 1 - Efficientnet_v2_l | 4.34 | 4.31 | 4.36 | 4.34
quicksilver: CORAL2 P1 | 25650000 | 25800000 | 25540000 | 25740000
speedb: Read Rand Write Rand | 2193933 | 2182749 | 2194844 | 2202068
tensorflow: CPU - 1 - AlexNet | 4.91 | 4.89 | 4.93 | 4.89
tensorflow: CPU - 1 - ResNet-50 | 5.02 | 5.03 | 5.02 | 4.99
speedb: Update Rand | 259962 | 258494 | 258420 | 258168
y-cruncher: 1B | 19.007 | 19.11 | 19.126 | 19.08
speedb: Rand Read | 183616073 | 184129582 | 183899682 | 183061936
quicksilver: CTS2 | 20740000 | 20800000 | 20680000 | 20770000
tensorflow: CPU - 1 - VGG-16 | 1.84 | 1.83 | 1.83 | 1.84
y-cruncher: 5B | 111.064 | 110.664 | 110.858 | 110.617
tensorflow: CPU - 1 - GoogLeNet | 8.69 | 8.7 | 8.72 | 8.69
quicksilver: CORAL2 P2 | 24610000 | 24600000 | 24540000 | 24550000
y-cruncher: 500M | 9.529 | 9.549 | 9.532 | 9.542
y-cruncher: 10B | 238.560 | 238.226 | 238.468 | 238.568
tensorflow: CPU - 16 - VGG-16 | 3.64 | 3.64 | 3.64 | 3.64
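The run-to-run spread across a, b, c, and d is small for most tests. As a quick illustration, here is a minimal Python sketch; the `spread_pct` helper is ours (not part of the Phoronix Test Suite), and the values are copied from the columns above:

```python
# Quantify run-to-run variance for a few results from the overview table.
# spread_pct is a hypothetical helper name, not a Phoronix Test Suite API.
from statistics import mean

def spread_pct(values):
    """(max - min) / mean, expressed as a percentage."""
    return (max(values) - min(values)) / mean(values) * 100.0

results = {
    "pytorch: CPU - 512 - ResNet-50": [17.72, 16.47, 16.90, 17.07],
    "speedb: Read While Writing": [12812292, 13256747, 13042085, 12917858],
    "y-cruncher: 10B": [238.560, 238.226, 238.468, 238.568],
}

for name, runs in results.items():
    print(f"{name}: {spread_pct(runs):.2f}% spread")
```

The PyTorch ResNet-50 result shows the widest spread of the three (roughly 7%), while the y-cruncher times agree to well under 1%.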
PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (batches/sec, More Is Better)
a: 17.72 (MIN: 17.17 / MAX: 18.5); b: 16.47 (MIN: 15.78 / MAX: 17.3); c: 16.90 (MIN: 16.19 / MAX: 17.59); d: 17.07 (MIN: 16.08 / MAX: 17.92)
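The PyTorch figures here are reported in batches/sec. Assuming the figure counts whole batches (a conversion the report itself does not state), the implied per-image throughput is batch size times batches/sec; a small sketch with the values copied from the chart above:

```python
# Implied images/sec from batches/sec, assuming one reported unit = one
# full batch of 512 images. This conversion is our assumption, not a
# figure the Phoronix Test Suite reports.
batch_size = 512
batches_per_sec = {"a": 17.72, "b": 16.47, "c": 16.90, "d": 17.07}
images_per_sec = {run: v * batch_size for run, v in batches_per_sec.items()}
```

Under that assumption, run a works out to roughly 9,000 images/sec at this batch size.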
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, More Is Better)
a: 17.64 (MIN: 16.6 / MAX: 18.41); b: 17.27 (MIN: 16.5 / MAX: 17.89); c: 16.53 (MIN: 15.99 / MAX: 17.32); d: 17.38 (MIN: 16.76 / MAX: 18)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better)
a: 21.51 (MIN: 20.69 / MAX: 22.56); b: 20.54 (MIN: 19.65 / MAX: 21.6); c: 21.42 (MIN: 20.69 / MAX: 22.59); d: 20.79 (MIN: 19.68 / MAX: 21.6)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 2.90 (MIN: 2.8 / MAX: 3.01); b: 3.03 (MIN: 2.92 / MAX: 3.15); c: 2.93 (MIN: 2.83 / MAX: 3.05); d: 2.95 (MIN: 2.85 / MAX: 3.06)

PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec, More Is Better)
a: 17.69 (MIN: 16.96 / MAX: 18.46); b: 17.26 (MIN: 16.19 / MAX: 17.95); c: 17.44 (MIN: 15.61 / MAX: 18.15); d: 16.94 (MIN: 16.24 / MAX: 17.77)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, More Is Better)
a: 17.39 (MIN: 16.36 / MAX: 18.06); b: 17.73 (MIN: 17.04 / MAX: 18.52); c: 17.03 (MIN: 16.02 / MAX: 17.78); d: 17.48 (MIN: 16.71 / MAX: 18.01)

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, More Is Better)
a: 17.37 (MIN: 16.47 / MAX: 17.93); b: 17.85 (MIN: 17.23 / MAX: 18.44); c: 17.24 (MIN: 16.41 / MAX: 17.86); d: 17.90 (MIN: 17.25 / MAX: 18.79)

PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec, More Is Better)
a: 7.09 (MIN: 6.94 / MAX: 7.22); b: 6.84 (MIN: 6.7 / MAX: 6.97); c: 6.91 (MIN: 6.75 / MAX: 7.03); d: 6.99 (MIN: 6.81 / MAX: 7.11)

PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-152 (batches/sec, More Is Better)
a: 6.98 (MIN: 6.83 / MAX: 7.16); b: 6.88 (MIN: 6.75 / MAX: 7.02); c: 7.06 (MIN: 6.92 / MAX: 7.17); d: 6.82 (MIN: 6.57 / MAX: 7)

Speedb 2.7 - Test: Read While Writing (Op/s, More Is Better)
a: 12812292; b: 13256747; c: 13042085; d: 12917858
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
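One way to read the Read While Writing result: the fastest run (b) outpaces the slowest (a) by roughly 3.5%. A minimal sketch with the Op/s values copied from above (the variable names are ours):

```python
# Best-vs-worst relative difference for Speedb Read While Writing.
# Values are copied from the chart above; this is our derivation,
# not a statistic the report itself computes.
ops = {"a": 12_812_292, "b": 13_256_747, "c": 13_042_085, "d": 12_917_858}
best, worst = max(ops.values()), min(ops.values())
gain_pct = (best - worst) / worst * 100.0
```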
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 2.95 (MIN: 2.77 / MAX: 3.07); b: 2.89 (MIN: 2.8 / MAX: 3); c: 2.89 (MIN: 2.75 / MAX: 3.02); d: 2.99 (MIN: 2.88 / MAX: 3.07)

PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 2.91 (MIN: 2.81 / MAX: 3.01); b: 3.00 (MIN: 2.91 / MAX: 3.1); c: 2.90 (MIN: 2.77 / MAX: 3.01); d: 2.91 (MIN: 2.8 / MAX: 3.01)

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, More Is Better)
a: 7.03 (MIN: 6.9 / MAX: 7.16); b: 6.92 (MIN: 6.79 / MAX: 7.07); c: 7.11 (MIN: 6.98 / MAX: 7.26); d: 7.01 (MIN: 6.82 / MAX: 7.14)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better)
a: 8.58 (MIN: 8.38 / MAX: 8.76); b: 8.36 (MIN: 8.21 / MAX: 8.57); c: 8.48 (MIN: 8.26 / MAX: 8.67); d: 8.38 (MIN: 8.19 / MAX: 8.58)

PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec, More Is Better)
a: 6.88 (MIN: 6.61 / MAX: 7.02); b: 6.82 (MIN: 6.68 / MAX: 7.01); c: 6.96 (MIN: 6.83 / MAX: 7.08); d: 6.99 (MIN: 6.8 / MAX: 7.16)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, More Is Better)
a: 6.96 (MIN: 6.82 / MAX: 7.1); b: 7.04 (MIN: 6.9 / MAX: 7.17); c: 7.04 (MIN: 6.57 / MAX: 7.18); d: 6.87 (MIN: 6.73 / MAX: 7)

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
a: 9.71; b: 9.54; c: 9.73; d: 9.66

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better)
a: 53.79; b: 54.83; c: 53.82; d: 54.29
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 2.92 (MIN: 2.82 / MAX: 3.06); b: 2.92 (MIN: 2.81 / MAX: 3.04); c: 2.87 (MIN: 2.74 / MAX: 2.99); d: 2.88 (MIN: 2.78 / MAX: 2.98)

PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 2.94 (MIN: 2.82 / MAX: 3.09); b: 2.93 (MIN: 2.79 / MAX: 3.06); c: 2.96 (MIN: 2.81 / MAX: 3.07); d: 2.91 (MIN: 2.73 / MAX: 3.13)

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better)
a: 30.30; b: 30.53; c: 30.28; d: 30.67

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 4.34 (MIN: 4.15 / MAX: 4.53); b: 4.31 (MIN: 3.99 / MAX: 4.46); c: 4.36 (MIN: 4.13 / MAX: 4.51); d: 4.34 (MIN: 4.03 / MAX: 4.53)

Quicksilver 20230818 - Input: CORAL2 P1 (Figure Of Merit, More Is Better)
a: 25650000; b: 25800000; c: 25540000; d: 25740000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Speedb 2.7 - Test: Read Random Write Random (Op/s, More Is Better)
a: 2193933; b: 2182749; c: 2194844; d: 2202068
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec, More Is Better)
a: 4.91; b: 4.89; c: 4.93; d: 4.89

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better)
a: 5.02; b: 5.03; c: 5.02; d: 4.99

Speedb 2.7 - Test: Update Random (Op/s, More Is Better)
a: 259962; b: 258494; c: 258420; d: 258168
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Y-Cruncher 0.8.3 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better; SE +/- 0.01, N = 3)
a: 19.01; b: 19.11; c: 19.13; d: 19.08

Speedb 2.7 - Test: Random Read (Op/s, More Is Better)
a: 183616073; b: 184129582; c: 183899682; d: 183061936
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Quicksilver 20230818 - Input: CTS2 (Figure Of Merit, More Is Better)
a: 20740000; b: 20800000; c: 20680000; d: 20770000
1. (CXX) g++ options: -fopenmp -O3 -march=native

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec, More Is Better)
a: 1.84; b: 1.83; c: 1.83; d: 1.84

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 5B (Seconds, Fewer Is Better; SE +/- 0.11, N = 3)
a: 111.06; b: 110.66; c: 110.86; d: 110.62

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better)
a: 8.69; b: 8.70; c: 8.72; d: 8.69

Quicksilver 20230818 - Input: CORAL2 P2 (Figure Of Merit, More Is Better)
a: 24610000; b: 24600000; c: 24540000; d: 24550000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
a: 9.529; b: 9.549; c: 9.532; d: 9.542
Y-Cruncher 0.8.3 - Pi Digits To Calculate: 10B (Seconds, Fewer Is Better; SE +/- 0.08, N = 3)
a: 238.56; b: 238.23; c: 238.47; d: 238.57
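The 10B-digit wall times above can be restated as digits per second; a small sketch, assuming simple division of digit count by elapsed time (our derivation, not a figure y-cruncher itself reports):

```python
# Convert y-cruncher 10B Pi-digit wall times into digits/second.
# Times are copied from the result above; the rate is our own derived metric.
digits = 10_000_000_000  # 10B digits computed
times = {"a": 238.56, "b": 238.23, "c": 238.47, "d": 238.57}
rates = {run: digits / t for run, t in times.items()}
```

All four runs land near 42 million digits per second on this 64-core part.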
TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: VGG-16 (images/sec, More Is Better)
a: 3.64; b: 3.64; c: 3.64; d: 3.64
Phoronix Test Suite v10.8.5