9684x-march

2 x AMD EPYC 9684X 96-Core testing with an AMD Titanite_4G (RTI1007B BIOS) motherboard and ASPEED graphics on Ubuntu 23.10, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403274-NE-9684XMARC65
Runs

Result Identifier   Date       Test Duration
PRE                 March 27   2 Hours, 34 Minutes
a                   March 27   8 Hours, 3 Minutes
b                   March 27   2 Hours, 46 Minutes
Average                        4 Hours, 28 Minutes


9684x-march Benchmarks (OpenBenchmarking.org, Phoronix Test Suite)

Processor:          2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads)
Motherboard:        AMD Titanite_4G (RTI1007B BIOS)
Chipset:            AMD Device 14a4
Memory:             1520GB
Disk:               3201GB Micron_7450_MTFDKCB3T2TFS + 257GB Flash Drive
Graphics:           ASPEED
Network:            Broadcom NetXtreme BCM5720 PCIe
OS:                 Ubuntu 23.10
Kernel:             6.5.0-25-generic (x86_64)
Compiler:           GCC 13.2.0
File-System:        ext4
Screen Resolution:  640x480

System Notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10113e
- Python 3.11.6

Security Mitigations:
- gather_data_sampling: Not affected
- itlb_multihit: Not affected
- l1tf: Not affected
- mds: Not affected
- meltdown: Not affected
- mmio_stale_data: Not affected
- retbleed: Not affected
- spec_rstack_overflow: Mitigation of Safe RET
- spec_store_bypass: Mitigation of SSB disabled via prctl
- spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
- spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected
- srbds: Not affected
- tsx_async_abort: Not affected

Timed Mesa Compilation

Timed Mesa Compilation 24.0 - Time To Compile (Seconds, Fewer Is Better; SE +/- 0.04, N = 3)

PRE 14.66 | b 14.71 | a 14.76

PyTorch

PyTorch 2.2.1 - Device: CPU (batches/sec, More Is Better; per-run min-max in parentheses)

Batch 1   / ResNet-50          PRE 23.06 (12.95-24.52) | a 23.20 (12.21-25.13) | b 23.24 (13.48-24.22)   SE +/- 0.20, N = 15
Batch 1   / ResNet-152         PRE 9.97 (4.85-10.69)   | a 10.58 (4.55-11.67)  | b 10.60 (4.86-11.57)    SE +/- 0.10, N = 15
Batch 16  / ResNet-50          PRE 20.93 (12.91-21.51) | a 21.53 (12.64-22.28) | b 20.36 (11.37-21.4)    SE +/- 0.16, N = 3
Batch 32  / ResNet-50          PRE 20.19 (11.95-21.04) | a 20.84 (11.24-22.33) | b 21.03 (15.23-21.8)    SE +/- 0.16, N = 15
Batch 64  / ResNet-50          PRE 21.59 (14.02-22.21) | a 21.08 (13.2-22.07)  | b 20.90 (13.13-21.57)   SE +/- 0.23, N = 3
Batch 16  / ResNet-152         PRE 8.93 (8.8-9.04)     | a 9.01 (4.81-9.31)    | b 9.12 (8.99-9.29)      SE +/- 0.09, N = 3
Batch 256 / ResNet-50          PRE 21.20 (12.68-21.88) | a 20.77 (12.97-21.67) | b 20.85 (12.74-21.39)   SE +/- 0.10, N = 3
Batch 32  / ResNet-152         PRE 8.72 (5.23-9.06)    | a 9.34 (4.74-9.74)    | b 9.28 (5.31-9.48)      SE +/- 0.08, N = 3
Batch 512 / ResNet-50          PRE 20.43 (13.46-21.1)  | a 21.01 (11.92-22.65) | b 21.01 (14.13-21.43)   SE +/- 0.14, N = 15
Batch 64  / ResNet-152         PRE 9.21 (4.8-9.43)     | a 8.91 (4.5-9.7)      | b 8.79 (4.6-8.97)       SE +/- 0.09, N = 12
Batch 256 / ResNet-152         PRE 8.92 (5.04-9.16)    | a 9.09 (4.84-10.03)   | b 8.85 (5.25-9.05)      SE +/- 0.10, N = 12
Batch 512 / ResNet-152         PRE 9.47 (5.17-9.87)    | a 9.33 (4.69-9.66)    | b 8.81 (4.87-8.97)      SE +/- 0.10, N = 3
Batch 1   / Efficientnet_v2_l  PRE 6.29 (3.09-6.44)    | a 6.45 (3.05-6.85)    | b 6.50 (3.35-6.62)      SE +/- 0.09, N = 3
Batch 16  / Efficientnet_v2_l  PRE 2.33 (1.76-2.72)    | a 2.33 (1.77-2.9)     | b 2.35 (1.82-2.76)      SE +/- 0.01, N = 3
Batch 32  / Efficientnet_v2_l  PRE 2.33 (1.78-2.8)     | a 2.31 (1.88-2.74)    | b 2.32 (1.94-2.8)       SE +/- 0.01, N = 3
Batch 64  / Efficientnet_v2_l  PRE 2.32 (1.9-2.75)     | a 2.31 (1.53-2.83)    | b 2.33 (1.78-2.77)      SE +/- 0.01, N = 3
Batch 256 / Efficientnet_v2_l  PRE 2.29 (1.79-2.72)    | a 2.33 (1.59-2.78)    | b 2.31 (1.92-2.67)      SE +/- 0.01, N = 3
Batch 512 / Efficientnet_v2_l  PRE 2.31 (1.7-2.84)     | a 2.33 (1.58-2.83)    | b 2.32 (1.79-2.71)      SE +/- 0.01, N = 3
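Each result above carries a standard error over N benchmark runs. As a reminder of how that figure is derived, here is a minimal sketch in plain Python (the sample values are hypothetical illustrations, not taken from this result file):

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (n - 1 in the denominator).
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

# Hypothetical batches/sec readings from three runs of one test.
samples = [23.0, 23.2, 23.4]
print(round(standard_error(samples), 3))  # -> 0.115
```

A small SE relative to the spread between PRE, a, and b (as in most rows above) is what separates a real difference from run-to-run noise.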

TensorFlow

TensorFlow 2.16.1 - Device: CPU (images/sec, More Is Better)

Batch 1   / AlexNet     PRE 21.16   | a 20.78   | b 21.01     SE +/- 0.16, N = 15
Batch 16  / AlexNet     PRE 242.29  | a 247.55  | b 236.56    SE +/- 2.30, N = 15
Batch 32  / AlexNet     PRE 424.06  | a 436.25  | b 461.60    SE +/- 6.62, N = 15
Batch 64  / AlexNet     PRE 765.55  | a 749.46  | b 743.50    SE +/- 5.39, N = 15
Batch 1   / GoogLeNet   PRE 12.58   | a 13.20   | b 13.52     SE +/- 0.14, N = 15
Batch 1   / ResNet-50   PRE 4.05    | a 3.90    | b 4.01
Batch 256 / AlexNet     PRE 1652.23 | a 1604.52 | b 1656.79
Batch 512 / AlexNet     PRE 1980.51 | a 2010.56 | b 2010.60
Batch 16  / GoogLeNet   PRE 112.64  | a 114.26  | b 119.22
Batch 16  / ResNet-50   PRE 39.68   | a 41.26   | b 35.92
Batch 32  / GoogLeNet   PRE 185.16  | a 176.36  | b 190.74
Batch 32  / ResNet-50   PRE 65.88   | a 60.25   | b 66.68
Batch 64  / GoogLeNet   PRE 275.34  | a 273.68  | b 256.87
Batch 64  / ResNet-50   PRE 87.72   | a 88.93   | b 88.95
Batch 256 / GoogLeNet   PRE 400.03  | a 399.46  | b 400.61
Batch 256 / ResNet-50   PRE 119.83  | a 118.88  | b 118.77
Batch 512 / GoogLeNet   PRE 493.31  | a 484.02  | b 494.46
Batch 512 / ResNet-50   PRE 140.59  | a 140.49  | b 141.16

Blender

Blender 4.1 - Compute: CPU-Only (Seconds, Fewer Is Better)

BMW27               PRE 7.55  | a 7.55  | b 7.48
Junkshop            PRE 11.40 | a 11.44 | b 11.61
Classroom           PRE 18.03 | a 18.08 | b 18.04
Fishy Cat           PRE 9.96  | a 9.85  | b 9.94
Barbershop          PRE 67.38 | a 67.66 | b 67.65
Pabellon Barcelona  PRE 22.99 | a 23.10 | b 23.11

RocksDB

RocksDB 9.0 (Op/s, More Is Better)

Overwrite                 PRE 421,049       | a 421,616       | b 439,602
Random Read               PRE 1,105,306,233 | a 1,108,892,776 | b 1,108,469,308
Update Random             PRE 421,266       | a 425,687       | b 427,391
Read While Writing        PRE 27,130,363    | a 26,406,662    | b 26,135,567
Read Random Write Random  PRE 3,619,142     | a 3,643,263     | b 3,638,929

1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

BRL-CAD

BRL-CAD 7.38.2 - VGR Performance Metric (More Is Better)

PRE 5,956,612 | a 5,927,564 | b 5,794,040

1. (CXX) g++ options: -std=c++17 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lnetpbm -lregex_brl -lz_brl -lassimp -ldl -lm -ltk8.6

TensorFlow

TensorFlow 2.16.1 - Device: CPU, Model: VGG-16 (images/sec, More Is Better; results recorded for run b only)

Batch 1     9.39
Batch 16    60.69
Batch 32    76.04
Batch 64    95.91
Batch 256   127.18
Batch 512   135.78
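Per-test results like those above are often collapsed into a single summary figure with a geometric mean of per-test ratios, since a geometric mean treats a 2x speedup and a 2x slowdown symmetrically. A minimal sketch in plain Python, using the RocksDB Op/s results from this file (run b relative to PRE) as input:

```python
import math

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# RocksDB 9.0 Op/s results from this result file (higher is better).
pre = {"Overwrite": 421049, "Random Read": 1105306233, "Update Random": 421266}
b   = {"Overwrite": 439602, "Random Read": 1108469308, "Update Random": 427391}

# Per-test ratio of run b to run PRE; > 1.0 means b was faster on that test.
ratios = [b[test] / pre[test] for test in pre]
print(round(geometric_mean(ratios), 4))
```

A result slightly above 1.0 here matches the overall picture in the tables above: runs PRE, a, and b land within a few percent of each other on most tests.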