1te ok
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2405272-NE-1TE21756861

3090 back from dead

Processor: AMD Ryzen 9 3900X 12-Core @ 4.50GHz (12 Cores / 24 Threads), Motherboard: Gigabyte B550M AORUS PRO-P (F14e BIOS), Chipset: AMD Starship/Matisse, Memory: 128GB, Disk: 2000GB Corsair Force MP600 + PC SN730 NVMe WDC 256GB, Graphics: NVIDIA GeForce RTX 3090 24GB, Audio: NVIDIA GA102 HD Audio, Monitor: Q32V3WG5, Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.04, Kernel: 5.15.0-107-generic (x86_64), Display Server: X Server 1.21.1.4, Display Driver: NVIDIA, Vulkan: 1.3.242, Compiler: GCC 11.4.0 + CUDA 12.5, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Disabled) - CPU Microcode: 0x8701021
Python Notes: Python 3.10.12
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
PyTorch 2.2.1 - Device: NVIDIA CUDA GPU (OpenBenchmarking.org batches/sec, More Is Better)
System under test: 3090 back from dead

Model               Batch Size   batches/sec   SE +/-   N   MIN      MAX
ResNet-50                    1        208.89     2.02   3   184.84   212.57
ResNet-50                   16        205.60     0.54   3   181.67   207.62
ResNet-50                   32        204.72     1.75   3   183.33   210.03
ResNet-50                   64        206.25     0.65   3   182.91   209.25
ResNet-50                  256        206.28     0.41   3   184.63   209.46
ResNet-50                  512        207.15     0.29   3   183.26   209.01
ResNet-152                   1         71.47     0.87   4    65.98    74.07
ResNet-152                  32         73.38     0.31   3    68.15    74.16
ResNet-152                  64         73.38     0.06   3    68.42    73.88
ResNet-152                 256         73.67     0.09   3    68.62    74.23
ResNet-152                 512         72.76     0.65   7    67.70    74.31
Efficientnet_v2_l            1         38.18     0.24   3    35.52    38.74
Efficientnet_v2_l           16         37.84     0.24   3    35.45    38.29
Efficientnet_v2_l           32         37.66     0.02   3    35.70    37.99
Efficientnet_v2_l           64         37.75     0.08   3    35.68    38.06
Efficientnet_v2_l          256         37.89     0.11   3    35.81    38.20
Efficientnet_v2_l          512         37.50     0.16   3    35.25    37.95
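As a quick summary of the results above, a short script (a sketch, not part of the Phoronix Test Suite) can compute the per-model mean throughput and the spread across batch sizes; the values are copied directly from this result file, and the derived figures are illustrative only.

```python
# Throughput figures (batches/sec) copied from this result file,
# keyed by model and batch size. Inference throughput on the RTX 3090
# is essentially flat across batch sizes for all three models.
results = {
    "ResNet-50": {1: 208.89, 16: 205.60, 32: 204.72,
                  64: 206.25, 256: 206.28, 512: 207.15},
    "ResNet-152": {1: 71.47, 32: 73.38, 64: 73.38,
                   256: 73.67, 512: 72.76},
    "Efficientnet_v2_l": {1: 38.18, 16: 37.84, 32: 37.66,
                          64: 37.75, 256: 37.89, 512: 37.50},
}

for model, runs in results.items():
    vals = list(runs.values())
    mean = sum(vals) / len(vals)
    # Spread: best-minus-worst batch size, as a percentage of the mean.
    spread = (max(vals) - min(vals)) / mean * 100
    print(f"{model}: mean {mean:.2f} batches/sec, "
          f"spread {spread:.1f}% across batch sizes")
```

The small spread (roughly 2 to 3 percent per model) suggests these workloads are compute-bound on the GPU rather than sensitive to batching at this scale.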
Testing initiated at 28 May 2024 02:18 by user emx.