pytorch emerald rapids

2 x Intel Xeon Platinum 8592+ testing with a Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS (3B05.TEL4P1 BIOS) and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403262-NE-PYTORCHEM92

Result Identifier    Date Run    Test Duration
a                    March 26    17 Minutes
b                    March 26    17 Minutes


Pytorch Emerald Rapids Performance

Processor: 2 x Intel Xeon Platinum 8592+ @ 3.90GHz (128 Cores / 256 Threads)
Motherboard: Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS (3B05.TEL4P1 BIOS)
Chipset: Intel Device 1bce
Memory: 1008GB
Disk: 3201GB Micron_7450_MTFDKCC3T2TFS
Graphics: ASPEED
Network: 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.10
Kernel: 6.6.0-rc5-phx-patched (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1200

System Logs:
- Transparent Huge Pages: madvise
- Scaling Governor: intel_pstate performance (EPP: performance)
- CPU Microcode: 0x21000161
- Python 3.11.6
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

a vs. b Comparison (per-test difference, faster run relative to the slower run):
CPU - 16 - ResNet-152:  4.2% (b faster)
CPU - 32 - ResNet-50:   4%   (b faster)
CPU - 512 - ResNet-50:  3.5% (b faster)
CPU - 64 - ResNet-152:  3.5% (b faster)
CPU - 64 - ResNet-50:   3.3% (b faster)
CPU - 256 - ResNet-50:  3.1% (b faster)
CPU - 512 - ResNet-152: 3%   (a faster)
CPU - 16 - ResNet-50:   2.7% (a faster)
CPU - 1 - ResNet-50:    2.6% (a faster)
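
The percentages above appear to express the gap as the faster run's result divided by the slower run's, minus one. A quick illustrative check in Python, using the CPU - 16 - ResNet-152 values from the overview table below (run b at 17.69 batches/sec versus run a at 16.98):

    a, b = 16.98, 17.69                      # batches/sec for the two runs
    pct = (max(a, b) / min(a, b) - 1) * 100  # relative gap of faster vs. slower
    print(f"{pct:.1f}%")                     # prints 4.2%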

pytorch emerald rapids - results overview (PyTorch, batches/sec, more is better):

Test                         a        b
CPU - 16 - ResNet-152        16.98    17.69
CPU - 256 - ResNet-152       17.36    17.38
CPU - 32 - ResNet-152        17.31    17.52
CPU - 64 - ResNet-152        17.00    17.59
CPU - 512 - ResNet-152       17.91    17.39
CPU - 1 - ResNet-152         19.17    19.53
CPU - 64 - ResNet-50         41.61    42.98
CPU - 256 - ResNet-50        43.42    44.75
CPU - 16 - ResNet-50         44.48    43.30
CPU - 512 - ResNet-50        43.24    44.76
CPU - 32 - ResNet-50         43.13    44.87
CPU - 1 - ResNet-50          49.74    48.46

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
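
The figures below are inference throughput in batches per second for ResNet models on the CPU. The published numbers come from the pytorch-benchmark harness linked above; what follows is only a minimal, illustrative sketch of measuring that kind of throughput with stock PyTorch and torchvision. The model choice, batch size, and iteration counts here are assumptions for illustration, not the test profile's exact settings:

    import time
    import torch
    from torchvision.models import resnet50

    def cpu_batches_per_sec(batch_size=16, warmup=5, iters=20):
        # Untrained ResNet-50 in eval mode; weights do not matter for timing.
        model = resnet50().eval()
        x = torch.randn(batch_size, 3, 224, 224)
        with torch.no_grad():
            for _ in range(warmup):              # warm-up passes, not timed
                model(x)
            start = time.perf_counter()
            for _ in range(iters):               # timed inference passes
                model(x)
            elapsed = time.perf_counter() - start
        return iters / elapsed                   # batches processed per second

    if __name__ == "__main__":
        print(f"{cpu_batches_per_sec():.2f} batches/sec")

Thread settings such as torch.set_num_threads() materially affect CPU throughput, so a sketch like this will not reproduce the numbers from the configured test profile.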

PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, more is better):
b: 17.69 (MIN: 12.47 / MAX: 18.03)
a: 16.98 (MIN: 8.88 / MAX: 17.73)

PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec, more is better):
b: 17.38 (MIN: 9.17 / MAX: 17.74)
a: 17.36 (MIN: 13.87 / MAX: 17.68)

PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, more is better):
b: 17.52 (MIN: 8.47 / MAX: 17.81)
a: 17.31 (MIN: 7.61 / MAX: 17.66)

PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec, more is better):
b: 17.59 (MIN: 7.39 / MAX: 17.91)
a: 17.00 (MIN: 6.37 / MAX: 17.5)

PyTorch 2.2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-152 (batches/sec, more is better):
a: 17.91 (MIN: 6.7 / MAX: 18.43)
b: 17.39 (MIN: 8.92 / MAX: 17.83)

PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, more is better):
b: 19.53 (MIN: 6.21 / MAX: 20.18)
a: 19.17 (MIN: 11.32 / MAX: 19.93)

PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec, more is better):
b: 42.98 (MIN: 18.82 / MAX: 44.16)
a: 41.61 (MIN: 21.64 / MAX: 43.52)

PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, more is better):
b: 44.75 (MIN: 15.82 / MAX: 46.04)
a: 43.42 (MIN: 40.39 / MAX: 44.48)

PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, more is better):
a: 44.48 (MIN: 19.94 / MAX: 45.71)
b: 43.30 (MIN: 18.44 / MAX: 45.93)

PyTorch 2.2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (batches/sec, more is better):
b: 44.76 (MIN: 21.38 / MAX: 45.97)
a: 43.24 (MIN: 40.07 / MAX: 44.05)

PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, more is better):
b: 44.87 (MIN: 37.26 / MAX: 46.02)
a: 43.13 (MIN: 21.59 / MAX: 44.07)

PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, more is better):
a: 49.74 (MIN: 20.74 / MAX: 51.83)
b: 48.46 (MIN: 22.49 / MAX: 51.23)