Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2405272-NE-1TE21756861
3090 back from dead:
Processor: AMD Ryzen 9 3900X 12-Core @ 4.50GHz (12 Cores / 24 Threads), Motherboard: Gigabyte B550M AORUS PRO-P (F14e BIOS), Chipset: AMD Starship/Matisse, Memory: 128GB, Disk: 2000GB Corsair Force MP600 + PC SN730 NVMe WDC 256GB, Graphics: NVIDIA GeForce RTX 3090 24GB, Audio: NVIDIA GA102 HD Audio, Monitor: Q32V3WG5, Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.04, Kernel: 5.15.0-107-generic (x86_64), Display Server: X Server 1.21.1.4, Display Driver: NVIDIA, Vulkan: 1.3.242, Compiler: GCC 11.4.0 + CUDA 12.5, File-System: ext4, Screen Resolution: 1024x768
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
3090 back from dead . 37.50 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
3090 back from dead . 37.89 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
3090 back from dead . 37.75 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
3090 back from dead . 37.66 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
3090 back from dead . 37.84 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
3090 back from dead . 38.18 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152
batches/sec > Higher Is Better
3090 back from dead . 72.76 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152
batches/sec > Higher Is Better
3090 back from dead . 73.67 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152
batches/sec > Higher Is Better
3090 back from dead . 73.38 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50
batches/sec > Higher Is Better
3090 back from dead . 207.15 |=================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152
batches/sec > Higher Is Better
3090 back from dead . 73.38 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50
batches/sec > Higher Is Better
3090 back from dead . 206.28 |=================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152
batches/sec > Higher Is Better
3090 back from dead . 72.67 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50
batches/sec > Higher Is Better
3090 back from dead . 206.25 |=================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50
batches/sec > Higher Is Better
3090 back from dead . 204.72 |=================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50
batches/sec > Higher Is Better
3090 back from dead . 205.60 |=================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152
batches/sec > Higher Is Better
3090 back from dead . 71.47 |==================================================
PyTorch 2.2.1
Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50
batches/sec > Higher Is Better
3090 back from dead . 208.89 |=================================================
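For context, each batches/sec figure above is a throughput number: batches completed divided by elapsed wall time. A minimal sketch of such a measurement loop is below; it is illustrative only, not the actual PTS/PyTorch test harness, and the names `measure_batches_per_sec` and `fake_batch` are hypothetical stand-ins.

```python
import time

def measure_batches_per_sec(run_batch, warmup=3, iters=10):
    """Time a callable that processes one batch and return batches/sec,
    the same kind of metric reported in the results above."""
    for _ in range(warmup):
        run_batch()  # warm-up iterations are timed separately and discarded
    start = time.perf_counter()
    for _ in range(iters):
        run_batch()
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Dummy CPU workload standing in for a model forward pass (assumption:
# a real harness would instead run model(inputs) on the CUDA device and
# synchronize before reading the clock).
def fake_batch():
    sum(i * i for i in range(10_000))

print(f"{measure_batches_per_sec(fake_batch):.2f} batches/sec")
```

Note that for a GPU workload the timing loop would also need a device synchronization (e.g. waiting for queued kernels to finish) before stopping the clock, otherwise asynchronous launches inflate the reported throughput.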