Granite Rapids MRDIMMs DDR5 benchmarks for a future article: 2 x Intel Xeon 6980P testing with an Intel AvenueCity v0.01 (BHSDCRB1.IPC.0035.D44.2408292336 BIOS) and ASPEED graphics on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410039-NE-GRANITERA46&grw
Granite Rapids MRDIMMs DDR5: system configuration for "24 x MRDIMM 88800"

Processor: 2 x Intel Xeon 6980P @ 3.90GHz (256 Cores / 512 Threads)
Motherboard: Intel AvenueCity v0.01 (BHSDCRB1.IPC.0035.D44.2408292336 BIOS)
Chipset: Intel Ice Lake IEH
Memory: 1520GB
Disk: 2 x 1920GB KIOXIA KCD8XPUG1T92 + 960GB SAMSUNG MZ1L2960HCJR-00A07
Graphics: ASPEED
Network: Intel I210 + 2 x Intel 10-Gigabit X540-AT2
OS: Ubuntu 24.04
Kernel: 6.10.0-phx (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1200

OpenBenchmarking.org notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate performance (EPP: performance)
- CPU Microcode: 0x10002f0
- OpenJDK Runtime Environment (build 21.0.3-ea+7-Ubuntu-1build1)
- Python 3.12.2
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: BHI_DIS_S; srbds: Not affected; tsx_async_abort: Not affected
Granite Rapids MRDIMMs DDR5: results overview for configuration "24 x MRDIMM 88800" (every value is repeated with full test details in the per-test sections below).
MBW 2018-09-08 (MiB/s, more is better; gcc options: -O3 -march=native):
  Test: Memory Copy - Array Size: 8192 MiB: 15316.41 (SE +/- 26.98, N = 3)
  Test: Memory Copy, Fixed Block Size - Array Size: 8192 MiB: 8766.21 (SE +/- 34.71, N = 3)

Tinymembench 2018-05-28 (MB/s, more is better; gcc options: -O2 -lm):
  Standard Memcpy: 15008.9 (SE +/- 10.01, N = 3)
  Standard Memset: 30001.0 (SE +/- 19.60, N = 3)

Stream 2013-01-17 (MB/s, more is better; gcc options: -mcmodel=medium -O3 -march=native -fopenmp):
  Type: Copy: 861560.8 (SE +/- 7612.22, N = 25)
  Type: Scale: 952124.6 (SE +/- 15910.14, N = 5)
  Type: Add: 937751.9 (SE +/- 13677.37, N = 5)
  Type: Triad: 879963.6 (SE +/- 16674.48, N = 5)
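As a back-of-envelope check on the STREAM figures, the measured Triad rate can be compared against the theoretical peak of the memory subsystem. This sketch assumes one MRDIMM-8800 module per channel, 24 channels in total across both sockets, and an 8-byte data path per channel; none of those assumptions are stated in the result file itself.

```python
# Theoretical peak bandwidth vs. the STREAM Triad result above.
# Assumptions (not in the result file): 24 memory channels populated
# with one MRDIMM-8800 each, 8-byte (64-bit) data bus per channel.
CHANNELS = 24
TRANSFERS_PER_S = 8800e6   # 8800 MT/s per module
BYTES_PER_TRANSFER = 8     # 64-bit data bus

peak_gbs = CHANNELS * TRANSFERS_PER_S * BYTES_PER_TRANSFER / 1e9
triad_gbs = 879963.6 / 1e3  # STREAM reports MB/s (10^6 bytes/s)

print(f"theoretical peak: {peak_gbs:.1f} GB/s")        # 1689.6 GB/s
print(f"STREAM Triad:     {triad_gbs:.1f} GB/s")       # 880.0 GB/s
print(f"efficiency:       {triad_gbs / peak_gbs:.0%}")  # 52%
```

Roughly half of theoretical peak is in the usual range for STREAM on a fully populated two-socket server, so the reported numbers are at least self-consistent under these assumptions.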
SPECFEM3D 4.1.1 (Seconds, fewer is better; gfortran options: -O2 -fopenmp -std=f2008 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz):
  Model: Layered Halfspace: 7.113553561 (SE +/- 0.043473328, N = 3)
  Model: Tomographic Model: 4.379927179 (SE +/- 0.009660353, N = 3)
  Model: Mount St. Helens: 3.955489179 (SE +/- 0.025428571, N = 3)
  Model: Homogeneous Halfspace: 5.417761321 (SE +/- 0.013586641, N = 3)
  Model: Water-layered Halfspace: 9.216693066 (SE +/- 0.111895654, N = 4)

OpenRadioss 2023.09.15 (Seconds, fewer is better):
  Model: Chrysler Neon 1M: 60.82 (SE +/- 0.29, N = 3)

GROMACS 2024 (Ns Per Day, more is better; g++ options: -O3 -lm):
  Implementation: MPI CPU - Input: water_GMX50_bare: 33.31 (SE +/- 0.08, N = 2)
High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better; g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi):
  X Y Z: 104 104 104 - RT: 60: 170.95 (SE +/- 0.19, N = 3)
  X Y Z: 144 144 144 - RT: 60: 169.49 (SE +/- 0.12, N = 3)

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better; gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz; Open MPI 4.1.6):
  Test / Class: BT.C: 804396.26 (SE +/- 1537.19, N = 3)
  Test / Class: EP.D: 32324.72 (SE +/- 1574.53, N = 13)
  Test / Class: LU.C: 769386.67 (SE +/- 2025.53, N = 3)
  Test / Class: SP.B: 373559.46 (SE +/- 3986.08, N = 4)
  Test / Class: IS.D: 15638.56 (SE +/- 36.61, N = 3)
  Test / Class: MG.C: 449470.32 (SE +/- 1588.36, N = 3)
  Test / Class: CG.C: 118207.17 (SE +/- 1589.02, N = 3)
Pennant 1.0.1 (Hydro Cycle Time - Seconds, fewer is better; g++ options: -fopenmp -lmpi_cxx -lmpi):
  Test: leblancbig: 1.342768 (SE +/- 0.011876, N = 15)
  Test: sedovbig: 6.472671 (SE +/- 0.009571, N = 3)

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better; gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi):
  8606553000 (SE +/- 17997194.71, N = 3)

LULESH 2.0.3 (z/s, more is better; g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi):
  124013.27 (SE +/- 801.06, N = 15)
OpenFOAM 10 (Seconds, fewer is better; g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm):
  Input: drivaerFastback, Small Mesh Size - Mesh Time: 23.42
  Input: drivaerFastback, Small Mesh Size - Execution Time: 18.39
  Input: drivaerFastback, Medium Mesh Size - Mesh Time: 137.85
  Input: drivaerFastback, Medium Mesh Size - Execution Time: 71.94

Xcompact3d Incompact3d 2021-03-11 (Seconds, fewer is better; gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz):
  Input: input.i3d 193 Cells Per Direction: 2.62023497 (SE +/- 0.02984810, N = 3)
  Input: X3D-benchmarking input.i3d: 69.81 (SE +/- 0.04, N = 3)
Java JMH (Ops/s, more is better):
  Throughput: 790759705068.81

Timed LLVM Compilation 16.0 (Seconds, fewer is better):
  Build System: Ninja: 76.60 (SE +/- 0.18, N = 3)
  Build System: Unix Makefiles: 198.68 (SE +/- 1.12, N = 3)

Timed Linux Kernel Compilation 6.8 (Seconds, fewer is better):
  Build: defconfig: 23.43 (SE +/- 0.15, N = 15)
  Build: allmodconfig: 131.84 (SE +/- 0.82, N = 3)

Timed Node.js Compilation 21.7.2 (Seconds, fewer is better):
  Time To Compile: 138.99 (SE +/- 1.02, N = 3)
Apache Cassandra 5.0 (Op/s, more is better):
  Test: Writes: 87063 (SE +/- 1099.82, N = 12)

PostgreSQL 17 (gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm):
  Scaling Factor: 100 - Clients: 800 - Mode: Read Write (TPS, more is better): 14452 (SE +/- 174.94, N = 3)
  Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency (ms, fewer is better): 55.37 (SE +/- 0.67, N = 3)
  Scaling Factor: 100 - Clients: 800 - Mode: Read Only (TPS, more is better): 509160 (SE +/- 19798.84, N = 12)
  Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency (ms, fewer is better): 1.596 (SE +/- 0.058, N = 12)
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, more is better): 13757 (SE +/- 20.41, N = 3)
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, fewer is better): 72.69 (SE +/- 0.11, N = 3)
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, more is better): 475621 (SE +/- 13155.72, N = 12)
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, fewer is better): 2.121 (SE +/- 0.060, N = 12)

libxsmm 2-1.17-3645 (GFLOPS/s, more is better; g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2):
  M N K: 128: 7398.5 (SE +/- 660.65, N = 6)
  M N K: 64: 4501.4 (SE +/- 244.26, N = 15)
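The PostgreSQL average latencies above can be cross-checked against the throughputs with Little's law: in a closed workload of N always-busy clients, average latency is approximately clients divided by throughput. This sketch just replays that arithmetic on the reported values; the small deviations on the read-only runs are expected since pgbench measures latency per transaction rather than deriving it.

```python
# Little's law sanity check on the PostgreSQL 17 results above:
# avg latency (ms) ~= clients / TPS * 1000 for a closed workload.
def expected_latency_ms(clients: int, tps: float) -> float:
    return clients / tps * 1000.0

results = [
    # (clients, reported TPS, reported avg latency in ms)
    (800, 14452, 55.37),    # 800 clients, Read Write
    (800, 509160, 1.596),   # 800 clients, Read Only
    (1000, 13757, 72.69),   # 1000 clients, Read Write
    (1000, 475621, 2.121),  # 1000 clients, Read Only
]
for clients, tps, reported in results:
    est = expected_latency_ms(clients, tps)
    print(f"{clients} clients @ {tps} TPS -> {est:.2f} ms (reported {reported} ms)")
```

The read-write rows agree to within a fraction of a millisecond, which suggests the clients really were saturating the server for the whole run.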
System Power Consumption Monitors (Watts, fewer is better; Min / Avg / Max per monitored run, listed in the order recorded):
  Stream 2013-01-17: 198 / 585 / 924
  MBW 2018-09-08: 204.6 / 291.3 / 297.2
  MBW 2018-09-08: 200.8 / 289.6 / 295.1
  High Performance Conjugate Gradient 3.1: 223 / 960 / 1108
  High Performance Conjugate Gradient 3.1: 354 / 1040 / 1107
  Tinymembench 2018-05-28: 213.8 / 286.2 / 394.6
  Apache Cassandra 5.0: 217.7 / 372.0 / 431.6
  PostgreSQL 17: 199.5 / 339.2 / 373.5
  PostgreSQL 17: 197.7 / 488.2 / 566.6
  PostgreSQL 17: 204.9 / 344.0 / 378.0
  PostgreSQL 17: 201.1 / 483.3 / 564.2
  NAS Parallel Benchmarks 3.4: 203 / 476 / 900
  NAS Parallel Benchmarks 3.4: 204 / 435 / 782
  NAS Parallel Benchmarks 3.4: 205 / 399 / 760
  NAS Parallel Benchmarks 3.4: 220.3 / 386.6 / 564.3
  NAS Parallel Benchmarks 3.4: 202.8 / 367.3 / 498.6
  NAS Parallel Benchmarks 3.4: 201.5 / 368.8 / 500.7
  NAS Parallel Benchmarks 3.4: 199.6 / 357.7 / 502.0
  Xcompact3d Incompact3d 2021-03-11: 205 / 424 / 852
  Xcompact3d Incompact3d 2021-03-11: 202 / 807 / 1051
  OpenFOAM 10: 282 / 523 / 764
  OpenFOAM 10: 221 / 652 / 934
  OpenRadioss 2023.09.15: 209 / 534 / 864
  SPECFEM3D 4.1.1: 204 / 422 / 804
  SPECFEM3D 4.1.1: 204 / 470 / 805
  SPECFEM3D 4.1.1: 201 / 421 / 756
  SPECFEM3D 4.1.1: 205 / 429 / 801
  SPECFEM3D 4.1.1: 198 / 402 / 790
  LULESH 2.0.3: 198 / 496 / 911
  Timed Linux Kernel Compilation 6.8: 198 / 366 / 848
  Timed Linux Kernel Compilation 6.8: 204 / 555 / 811
  Timed LLVM Compilation 16.0: 203 / 470 / 858
  Timed LLVM Compilation 16.0: 192 / 359 / 827
  Timed Node.js Compilation 21.7.2: 207 / 445 / 834
  GROMACS 2024: 184 / 310 / 809
  Java JMH: 206 / 766 / 882
  libxsmm 2-1.17-3645: 203 / 466 / 879
  libxsmm 2-1.17-3645: 188.8 / 310.1 / 387.9
  Algebraic Multi-Grid Benchmark 1.2: 194 / 564 / 954
  Pennant 1.0.1: 200 / 394 / 681
  Pennant 1.0.1: 207 / 455 / 805
System Power Consumption Monitor (Phoronix Test Suite System Monitoring, Watts), overall across all runs: Min: 183.7 / Avg: 465.09 / Max: 1108.2
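Combining the average monitored power with each benchmark's wall time gives a rough energy-per-run figure. The pairing of power monitors to specific runs below is an assumption based on their ordering in the exported result (e.g. that the ~470 W LLVM monitor belongs to the faster Ninja build), so treat these as illustrative estimates only.

```python
# Rough energy per run: average monitored power x wall time.
# The (seconds, watts) pairings are assumed from result ordering,
# not confirmed by the export itself.
runs = [
    # (name, run time in seconds, assumed avg watts)
    ("Timed LLVM Compilation, Ninja", 76.60, 470),
    ("Timed Linux Kernel Compilation, allmodconfig", 131.84, 555),
    ("Timed Node.js Compilation", 138.99, 445),
]
for name, secs, watts in runs:
    watt_hours = watts * secs / 3600.0  # 1 Wh = 3600 J
    print(f"{name}: ~{watt_hours:.1f} Wh per run")
```

On this data a full LLVM Ninja build costs on the order of 10 Wh, which is the kind of figure the eventual article could use when comparing MRDIMM against conventional DDR5 configurations.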
Phoronix Test Suite v10.8.5