pytorch_all_l

AMD Ryzen 9 5900X 12-Core testing with an ASUS TUF GAMING B550M-PLUS (WI-FI) (1801 BIOS) and MSI NVIDIA GeForce RTX 3080 10GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312292-NE-PYTORCHAL51
Run Management

Result Identifier: pytorch_all
Date: December 29 2023
Test Run Duration: 3 Hours, 37 Minutes


pytorch_all_l (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
Motherboard: ASUS TUF GAMING B550M-PLUS (WI-FI) (1801 BIOS)
Chipset: AMD Starship/Matisse
Memory: 4 x 16 GB DDR4-2666MT/s CRUCIAL
Disk: 1000GB Western Digital WDS100T2B0C-00PXH0
Graphics: MSI NVIDIA GeForce RTX 3080 10GB
Audio: NVIDIA GA102 HD Audio
Monitor: LG TV
Network: Realtek RTL8125 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 6.2.0-39-generic (x86_64)
Desktop: GNOME Shell 42.9
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA 535.129.03
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.2.147
Vulkan: 1.3.242
Compiler: GCC 11.4.0 + CUDA 11.5
File-System: ext4
Screen Resolution: 5120x2880

System Logs
- Transparent Huge Pages: madvise
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa201009
- Python 3.10.12
- Security mitigations: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

pytorch_all_l - Results Summary (batches/sec)

Test (PyTorch)                           pytorch_all
pytorch: CPU - 1 - ResNet-50             37.37
pytorch: CPU - 1 - ResNet-152            14.45
pytorch: CPU - 16 - ResNet-50            26.65
pytorch: CPU - 32 - ResNet-50            26.78
pytorch: CPU - 64 - ResNet-50            26.21
pytorch: CPU - 16 - ResNet-152           11.13
pytorch: CPU - 256 - ResNet-50           23.33
pytorch: CPU - 32 - ResNet-152           11.16
pytorch: CPU - 512 - ResNet-50           26.35
pytorch: CPU - 64 - ResNet-152           11.10
pytorch: CPU - 256 - ResNet-152          11.24
pytorch: CPU - 512 - ResNet-152          11.08
pytorch: CPU - 1 - Efficientnet_v2_l     9.21
pytorch: CPU - 16 - Efficientnet_v2_l    6.50
pytorch: CPU - 32 - Efficientnet_v2_l    6.50
pytorch: CPU - 64 - Efficientnet_v2_l    6.48
pytorch: CPU - 256 - Efficientnet_v2_l   6.57
pytorch: CPU - 512 - Efficientnet_v2_l   6.53
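For context on what these figures represent, the sketch below shows one way a CPU inference throughput number in batches/sec can be measured with torch and torchvision. It is not the Phoronix Test Suite pytorch test profile script; the model constructor, input shape, batch size, warm-up count, and iteration count are illustrative assumptions.

    # Minimal throughput sketch (assumed setup, not the Phoronix test script):
    # time repeated forward passes of ResNet-50 on CPU and report batches/sec.
    import time
    import torch
    import torchvision.models as models

    batch_size = 16                          # one of the batch sizes benchmarked above
    model = models.resnet50(weights=None)    # untrained weights suffice for throughput
    model.eval()
    x = torch.randn(batch_size, 3, 224, 224) # synthetic ImageNet-sized input

    with torch.no_grad():
        for _ in range(5):                   # warm-up passes, excluded from timing
            model(x)
        iterations = 50
        start = time.perf_counter()
        for _ in range(iterations):
            model(x)
        elapsed = time.perf_counter() - start

    print(f"{iterations / elapsed:.2f} batches/sec at batch size {batch_size}")

Higher batches/sec is better; a batch here is one forward pass over batch_size images, so larger batch sizes trade per-batch latency for per-image throughput.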

PyTorch

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better - Result identifier: pytorch_all

Device  Batch Size  Model              batches/sec  SE +/-  N   MIN    MAX
CPU     1           ResNet-50          37.37        0.26    3   31.52  38.8
CPU     1           ResNet-152         14.45        0.12    9   13.01  15.49
CPU     16          ResNet-50          26.65        0.45    15  20.59  29.47
CPU     32          ResNet-50          26.78        0.42    15  20.17  28.27
CPU     64          ResNet-50          26.21        0.59    15  19.72  29.44
CPU     16          ResNet-152         11.13        0.04    3   10.87  11.26
CPU     256         ResNet-50          23.33        0.24    3   19.48  24.45
CPU     32          ResNet-152         11.16        0.05    3   9.15   11.32
CPU     512         ResNet-50          26.35        0.52    15  19.72  28.69
CPU     64          ResNet-152         11.10        0.03    3   10.21  11.22
CPU     256         ResNet-152         11.24        0.13    3   9.07   11.69
CPU     512         ResNet-152         11.08        0.03    3   10.89  11.21
CPU     1           Efficientnet_v2_l  9.21         0.02    3   8.59   9.29
CPU     16          Efficientnet_v2_l  6.50         0.01    3   6.4    6.56
CPU     32          Efficientnet_v2_l  6.50         0.01    3   6.23   6.56
CPU     64          Efficientnet_v2_l  6.48         0.02    3   6.23   6.56
CPU     256         Efficientnet_v2_l  6.57         0.05    3   5.81   6.71
CPU     512         Efficientnet_v2_l  6.53         0.03    3   5.82   6.94
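As a reading aid for the SE and N columns: a standard error of the mean is typically computed as the sample standard deviation of the per-run results divided by the square root of N. The Python sketch below uses made-up run values rather than the recorded benchmark data; the MIN/MAX columns are reported separately by the test suite.

    # Illustrative standard-error calculation over N repeated runs.
    # The run values are hypothetical, not the data behind the table above.
    import math
    import statistics

    runs = [37.1, 37.6, 37.4]  # hypothetical per-run batches/sec, N = 3

    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / math.sqrt(len(runs))  # standard error of the mean

    print(f"{mean:.2f} batches/sec, SE +/- {se:.2f}, N = {len(runs)}")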