big bench

AMD Ryzen Threadripper 7980X 64-Cores testing with an ASUS Pro WS TRX50-SAGE WIFI (0217 BIOS) and AMD Radeon RX 7900 XT 20GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401079-PTS-BIGBENCH30.

System configuration (identical for runs a, b, and c):

Processor: AMD Ryzen Threadripper 7980X 64-Cores @ 8.21GHz (64 Cores / 128 Threads)
Motherboard: ASUS Pro WS TRX50-SAGE WIFI (0217 BIOS)
Chipset: AMD Device 14a4
Memory: 128GB
Disk: 2000GB Corsair MP700 PRO + 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: AMD Radeon RX 7900 XT 20GB (2025/1249MHz)
Audio: Realtek ALC1220
Monitor: DELL U2723QE
Network: Aquantia Device 04c0 + Intel I226-LM + MEDIATEK MT7922 802.11ax PCI
OS: Ubuntu 23.10
Kernel: 6.7.0-060700rc2daily20231126-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7 + Wayland
OpenGL: 4.6 Mesa 23.2.1-1ubuntu3 (LLVM 15.0.7 DRM 3.56)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa108105
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result overview: Quicksilver 20230818 (Inputs: CTS2, CORAL2 P1, CORAL2 P2), PyTorch 2.1 (ResNet-50, ResNet-152, Efficientnet_v2_l at batch sizes 1 through 512), and TensorFlow 2.12 (VGG-16, AlexNet, GoogLeNet, ResNet-50 at batch sizes 1 through 512), all on the CPU, for runs a, b, and c. The individual results follow below.
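
Each chart below reports an averaged result together with a standard error (SE) and the number of samples (N). As a minimal sketch, assuming SE here follows the usual convention of sample standard deviation divided by the square root of N (the exact Phoronix Test Suite formula is not shown in this export), such a value can be computed like this:

    import math

    def standard_error(samples):
        # Bessel-corrected sample standard deviation divided by sqrt(N).
        n = len(samples)
        mean = sum(samples) / n
        variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
        return math.sqrt(variance / n)

    # Hypothetical per-run Figure Of Merit samples, for illustration only.
    runs = [19_950_000, 20_020_000, 20_100_000]
    print(f"mean = {sum(runs) / len(runs):.0f}, SE = {standard_error(runs):.2f}, N = {len(runs)}")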

Quicksilver

Input: CTS2

Quicksilver 20230818 - Figure Of Merit, More Is Better
a: 20023333 | b: 19973333 | c: 20090000
SE +/- 40960.69, N = 3; SE +/- 49103.07, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Figure Of Merit, More Is Better
a: 26026667 | b: 26010000 | c: 25920000
SE +/- 92796.07, N = 3; SE +/- 79372.54, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Figure Of Merit, More Is Better
a: 19776667 | b: 19733333 | c: 19820000
SE +/- 23333.33, N = 3; SE +/- 57831.17, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native
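
Across the three Quicksilver inputs, runs a, b, and c land within roughly 0.6% of one another. A small sketch of the arithmetic behind that statement, using the CTS2 figures reported above (the helper name is mine, not part of the benchmark):

    def relative_spread(values):
        # Spread between the best and worst result, as a fraction of the best.
        best, worst = max(values), min(values)
        return (best - worst) / best

    cts2 = {"a": 20_023_333, "b": 19_973_333, "c": 20_090_000}
    print(f"CTS2 run-to-run spread: {relative_spread(cts2.values()) * 100:.2f}%")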

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 59.04 (MIN: 49.45 / MAX: 62.07)
b: 59.22 (MIN: 49.85 / MAX: 62.35)
c: 59.69 (MIN: 53.84 / MAX: 61.85)
SE +/- 0.52, N = 3; SE +/- 0.39, N = 14
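
The PyTorch results are reported in batches per second for forward-only CPU inference. A minimal sketch of what such a throughput measurement can look like, assuming torchvision's ResNet-50 with random weights and a fixed 224x224 input; this is an illustration, not the actual script behind these numbers:

    import time
    import torch
    from torchvision.models import resnet50

    def batches_per_second(model, batch_size=1, warmup=5, iters=20):
        # Time forward-only inference on the CPU and report batches/sec.
        model.eval()
        x = torch.randn(batch_size, 3, 224, 224)
        with torch.no_grad():
            for _ in range(warmup):      # untimed warm-up iterations
                model(x)
            start = time.perf_counter()
            for _ in range(iters):
                model(x)
            elapsed = time.perf_counter() - start
        return iters / elapsed

    print(f"{batches_per_second(resnet50(weights=None)):.2f} batches/sec")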

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 21.94 (MIN: 20.91 / MAX: 22.35)
b: 21.91 (MIN: 20.97 / MAX: 22.39)
c: 21.80 (MIN: 21.22 / MAX: 22.14)
SE +/- 0.08, N = 3; SE +/- 0.08, N = 3

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 47.47 (MIN: 43.56 / MAX: 48.81)
b: 47.41 (MIN: 43.15 / MAX: 48.78)
c: 47.60 (MIN: 43.77 / MAX: 48.91)
SE +/- 0.21, N = 3; SE +/- 0.15, N = 3

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 47.47 (MIN: 43.75 / MAX: 48.83)
b: 47.24 (MIN: 40.13 / MAX: 49.58)
c: 48.12 (MIN: 44.48 / MAX: 49.1)
SE +/- 0.12, N = 3; SE +/- 0.43, N = 13

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 47.57 (MIN: 43.64 / MAX: 49.11)
b: 47.22 (MIN: 43.76 / MAX: 48.92)
c: 47.81 (MIN: 43.73 / MAX: 48.84)
SE +/- 0.18, N = 3; SE +/- 0.32, N = 3

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.68 (MIN: 18.15 / MAX: 18.95)
b: 18.65 (MIN: 17.9 / MAX: 19.22)
c: 18.71 (MIN: 18.25 / MAX: 18.96)
SE +/- 0.05, N = 3; SE +/- 0.17, N = 3

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 47.43 (MIN: 39.54 / MAX: 49.38)
b: 47.24 (MIN: 39.04 / MAX: 49.14)
c: 43.07 (MIN: 39.73 / MAX: 45.38)
SE +/- 0.44, N = 6; SE +/- 0.65, N = 3

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.76 (MIN: 18.21 / MAX: 19.24)
b: 18.67 (MIN: 17.72 / MAX: 18.87)
c: 19.02 (MIN: 18.53 / MAX: 19.21)
SE +/- 0.14, N = 3

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 47.54 (MIN: 43.78 / MAX: 49.15)
b: 45.97 (MIN: 43.46 / MAX: 48.21)
c: 46.60 (MIN: 43.8 / MAX: 47.68)
SE +/- 0.26, N = 3

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.69 (MIN: 18.14 / MAX: 18.93)
b: 18.62 (MIN: 18.09 / MAX: 18.8)
c: 19.38 (MIN: 18.61 / MAX: 19.57)
SE +/- 0.03, N = 3

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.99 (MIN: 18.38 / MAX: 19.33)
b: 18.80 (MIN: 18.29 / MAX: 19.02)
c: 19.10 (MIN: 18.59 / MAX: 19.3)
SE +/- 0.08, N = 3

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.96 (MIN: 18.41 / MAX: 19.25)
b: 18.49 (MIN: 18.01 / MAX: 18.69)
c: 18.82 (MIN: 18.25 / MAX: 19.03)
SE +/- 0.05, N = 3

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 12.19 (MIN: 11.91 / MAX: 12.38)
b: 12.29 (MIN: 12.13 / MAX: 12.42)
c: 12.27 (MIN: 12 / MAX: 12.4)
SE +/- 0.03, N = 3

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 7.52 (MIN: 6.92 / MAX: 8.16)
b: 7.46 (MIN: 6.82 / MAX: 8.06)
c: 7.50 (MIN: 7.02 / MAX: 8.12)
SE +/- 0.02, N = 3

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 7.49 (MIN: 6.96 / MAX: 8.11)
b: 7.45 (MIN: 6.99 / MAX: 8.1)
c: 7.52 (MIN: 7.05 / MAX: 8.13)
SE +/- 0.01, N = 3

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 7.46 (MIN: 6.94 / MAX: 8.17)
b: 7.45 (MIN: 6.98 / MAX: 8.07)
c: 7.52 (MIN: 5.94 / MAX: 8.1)
SE +/- 0.02, N = 3

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 7.47 (MIN: 6.98 / MAX: 8.1)
b: 7.40 (MIN: 6.92 / MAX: 8.06)
c: 7.54 (MIN: 7.01 / MAX: 8.17)
SE +/- 0.01, N = 3

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 7.47 (MIN: 6.97 / MAX: 8.13)
b: 7.52 (MIN: 7.01 / MAX: 8.06)
c: 7.48 (MIN: 7.03 / MAX: 8.13)
SE +/- 0.01, N = 3

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 9.74 | b: 9.73 | c: 9.72
SE +/- 0.01, N = 3
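
The TensorFlow results are reported in images per second. A comparable sketch using the Keras VGG16 application with random weights; again an assumption-laden illustration rather than the actual benchmark script behind these numbers:

    import time
    import numpy as np
    import tensorflow as tf

    def images_per_second(model, batch_size=1, warmup=3, iters=10):
        # Forward-only inference throughput in images/sec.
        x = np.random.rand(batch_size, 224, 224, 3).astype("float32")
        for _ in range(warmup):          # untimed warm-up passes
            model(x, training=False)
        start = time.perf_counter()
        for _ in range(iters):
            model(x, training=False)
        elapsed = time.perf_counter() - start
        return iters * batch_size / elapsed

    vgg16 = tf.keras.applications.VGG16(weights=None)
    print(f"{images_per_second(vgg16):.2f} images/sec")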

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 25.72 | b: 25.81 | c: 25.96
SE +/- 0.03, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 44.41 | b: 44.50 | c: 44.48
SE +/- 0.01, N = 3

TensorFlow

Device: CPU - Batch Size: 32 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 48.89 | b: 48.93 | c: 48.81
SE +/- 0.07, N = 3

TensorFlow

Device: CPU - Batch Size: 64 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 52.95 | b: 53.00 | c: 52.91
SE +/- 0.06, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 311.97 | b: 312.08 | c: 312.14
SE +/- 0.95, N = 3

TensorFlow

Device: CPU - Batch Size: 256 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 57.14 | b: 57.11 | c: 57.15
SE +/- 0.02, N = 3

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 510.66 | b: 508.52 | c: 510.40
SE +/- 0.43, N = 3

TensorFlow

Device: CPU - Batch Size: 512 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 57.97 | b: 57.97 | c: 57.99
SE +/- 0.02, N = 3

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 740.46 | b: 741.77 | c: 741.04
SE +/- 0.53, N = 3

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 21.82 | b: 22.59 | c: 22.37
SE +/- 0.29, N = 15

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 7.19 | b: 7.26 | c: 7.25
SE +/- 0.04, N = 3

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 1070.73 | b: 1073.52 | c: 1075.09
SE +/- 2.11, N = 3

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 1145.66 | b: 1148.38 | c: 1151.27
SE +/- 0.89, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 187.92 | b: 188.50 | c: 188.52
SE +/- 0.56, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 54.70 | b: 54.44 | c: 54.72
SE +/- 0.07, N = 3

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 226.85 | b: 231.61 | c: 222.04
SE +/- 0.94, N = 3

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 69.43 | b: 69.49 | c: 69.17
SE +/- 0.03, N = 3

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 272.16 | b: 272.22 | c: 272.84
SE +/- 0.93, N = 3

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 80.00 | b: 79.77 | c: 79.87
SE +/- 0.06, N = 3

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 311.60 | b: 311.23 | c: 311.73
SE +/- 0.26, N = 3

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 91.12 | b: 91.13 | c: 91.11
SE +/- 0.03, N = 3

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 315.49 | b: 315.83 | c: 316.04
SE +/- 0.17, N = 3

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 95.05 | b: 95.08 | c: 95.04
SE +/- 0.01, N = 3


Phoronix Test Suite v10.8.4