1te

ok

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2405272-NE-1TE21756861

Run Management

Result Identifier: 3090 back from dead
Date Run: May 28
Test Duration: 41 Minutes


1te - OpenBenchmarking.org - Phoronix Test Suite

System Details
Processor: AMD Ryzen 9 3900X 12-Core @ 4.50GHz (12 Cores / 24 Threads)
Motherboard: Gigabyte B550M AORUS PRO-P (F14e BIOS)
Chipset: AMD Starship/Matisse
Memory: 128GB
Disk: 2000GB Corsair Force MP600 + PC SN730 NVMe WDC 256GB
Graphics: NVIDIA GeForce RTX 3090 24GB
Audio: NVIDIA GA102 HD Audio
Monitor: Q32V3WG5
Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.04
Kernel: 5.15.0-107-generic (x86_64)
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA
Vulkan: 1.3.242
Compiler: GCC 11.4.0 + CUDA 12.5
File-System: ext4
Screen Resolution: 1024x768

System Logs Notes
- Transparent Huge Pages: madvise
- Scaling Governor: acpi-cpufreq schedutil (Boost: Disabled)
- CPU Microcode: 0x8701021
- Python 3.10.12
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

1te Benchmarks - Result Overview (PyTorch, Device: NVIDIA CUDA GPU, batches/sec, more is better)

Test                                          3090 back from dead
Batch Size: 1 - Model: ResNet-50              208.89
Batch Size: 1 - Model: ResNet-152             71.47
Batch Size: 16 - Model: ResNet-50             205.60
Batch Size: 32 - Model: ResNet-50             204.72
Batch Size: 64 - Model: ResNet-50             206.25
Batch Size: 16 - Model: ResNet-152            72.67
Batch Size: 256 - Model: ResNet-50            206.28
Batch Size: 32 - Model: ResNet-152            73.38
Batch Size: 512 - Model: ResNet-50            207.15
Batch Size: 64 - Model: ResNet-152            73.38
Batch Size: 256 - Model: ResNet-152           73.67
Batch Size: 512 - Model: ResNet-152           72.76
Batch Size: 1 - Model: Efficientnet_v2_l      38.18
Batch Size: 16 - Model: Efficientnet_v2_l     37.84
Batch Size: 32 - Model: Efficientnet_v2_l     37.66
Batch Size: 64 - Model: Efficientnet_v2_l     37.75
Batch Size: 256 - Model: Efficientnet_v2_l    37.89
Batch Size: 512 - Model: Efficientnet_v2_l    37.50

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
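The figures reported below are batches per second for a forward pass on the selected device. As a rough illustration of how such a throughput number can be collected with plain PyTorch, here is a minimal timing sketch; it is not the pytorch-benchmark harness itself, and the model source (torchvision), input shape, warm-up count, and iteration count are assumptions chosen only for the example.

import time

import torch
from torchvision.models import resnet50

def measure_batches_per_sec(batch_size: int = 1, iterations: int = 50, warmup: int = 10) -> float:
    # Illustrative sketch only: the real test profile uses pytorch-benchmark,
    # which handles device setup, repetition, and statistics on its own.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = resnet50().eval().to(device)
    x = torch.randn(batch_size, 3, 224, 224, device=device)

    with torch.no_grad():
        # Warm-up passes so one-time CUDA kernel setup does not skew the timing.
        for _ in range(warmup):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()

        start = time.perf_counter()
        for _ in range(iterations):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    return iterations / elapsed  # forward passes (batches) per second

if __name__ == "__main__":
    print(f"{measure_batches_per_sec(batch_size=1):.2f} batches/sec")

In the results table that follows, each entry also carries the standard error (SE), the number of repeated runs (N), and the minimum and maximum observed values, as reported by the Phoronix Test Suite.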

PyTorch 2.2.1 - Device: NVIDIA CUDA GPU - Result: 3090 back from dead
(OpenBenchmarking.org, batches/sec, more is better)

Model               Batch Size   batches/sec   SE +/-   N    MIN      MAX
ResNet-50           1            208.89        2.02     3    184.84   212.57
ResNet-152          1            71.47         0.87     4    65.98    74.07
ResNet-50           16           205.60        0.54     3    181.67   207.62
ResNet-50           32           204.72        1.75     3    183.33   210.03
ResNet-50           64           206.25        0.65     3    182.91   209.25
ResNet-152          16           72.67         0.44     15   66.15    74.19
ResNet-50           256          206.28        0.41     3    184.63   209.46
ResNet-152          32           73.38         0.31     3    68.15    74.16
ResNet-50           512          207.15        0.29     3    183.26   209.01
ResNet-152          64           73.38         0.06     3    68.42    73.88
ResNet-152          256          73.67         0.09     3    68.62    74.23
ResNet-152          512          72.76         0.65     7    67.70    74.31
Efficientnet_v2_l   1            38.18         0.24     3    35.52    38.74
Efficientnet_v2_l   16           37.84         0.24     3    35.45    38.29
Efficientnet_v2_l   32           37.66         0.02     3    35.70    37.99
Efficientnet_v2_l   64           37.75         0.08     3    35.68    38.06
Efficientnet_v2_l   256          37.89         0.11     3    35.81    38.20
Efficientnet_v2_l   512          37.50         0.16     3    35.25    37.95