5x-ssd-btrfs-raid-10-zstd_direct

EXT4 on Dell MD34XX SAS drive (/gnu) on node 125 (berlin)

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2206272-APTE-220627998
Test Runs

Identifier                            Date      Test Duration
2 x 240GB MZ7LM240HMHQ0D3             June 18   1 Hour, 1 Minute
ext4 on SAN, node 129                 June 18   52 Minutes
btrfs on SAN, node 129                June 18   1 Hour, 5 Minutes
btrfs+zstd on SAN, node 129           June 18   1 Hour
btrfs+zstd/noatime on SAN, node 129   June 18   1 Hour, 1 Minute
btrfs+zstd raid10/6 SSDs, node 125    June 27   51 Minutes
ext4 on Dell MD34XX /gnu, node 125    June 27   41 Minutes

Average test duration: 56 minutes.



System Details

Common to all runs:
  Processor:   2 x AMD EPYC 7451 24-Core (48 Cores / 96 Threads)
  Motherboard: Dell 08V001
  Chipset:     AMD 17h
  Memory:      192GB
  Graphics:    Matrox G200eW3
  Network:     2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA + 2 x Broadcom NetXtreme BCM5720 2-port PCIe
  OS:          Guix
  File-System: btrfs (as captured at run time)

June 18 runs (2 x 240GB MZ7LM240HMHQ0D3 and the node 129 SAN runs):
  BIOS:          1.17.0
  Disk:          2 x 240GB MZ7LM240HMHQ0D3 + 3 x 240GB SanDisk SSD PLUS + 4 x 109951GB Compellent Vol + 4 x 10995GB Compellent Vol
  Kernel:        5.17.14-gnu (x86_64)
  Compiler:      GCC 12.1.0
  CPU Microcode: 0x800126c

June 27 runs (btrfs+zstd raid10/6 SSDs and ext4 on Dell MD34XX /gnu, node 125):
  BIOS:          1.12.2
  Disk:          6 x 8002GB Samsung SSD 870 + 1000GB PERC H730P Adp + 39978GB MD34xx
  Kernel:        5.17.6-gnu (x86_64)
  Compiler:      GCC 10.3.0
  CPU Microcode: 0x8001250

Kernel Details (all runs): Transparent Huge Pages: always

Disk Details (mount options as captured, identical for every run except the subvolume ID): MQ-DEADLINE / compress-force=zstd:3,degraded,relatime,rw,space_cache=v2,ssd,subvol=/@home (subvolid=259 on the June 18 runs, subvolid=257 on the June 27 runs) / RAID10 Block Size: 4096

Security Details (all runs): itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite 10.8.4): bar chart comparing the seven configurations across all eight Flexible IO Tester results (Random Read/Write and Sequential Read/Write, MB/s and IOPS; Linux AIO, Direct, 4KB); relative scale 100% to 267%.

Results Summary (Flexible IO Tester; Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory)

Identifier                            Rand Read       Rand Write      Seq Read        Seq Write
                                      MB/s   IOPS     MB/s   IOPS     MB/s   IOPS     MB/s   IOPS
2 x 240GB MZ7LM240HMHQ0D3             238    61013    41.5   10632    139.9  35827    64.1   16409
ext4 on SAN, node 129                 243    62313    39.5   10100    147.9  37860    71.5   18312
btrfs on SAN, node 129                251    64147    40.3   10307    133.7  34220    71.1   18193
btrfs+zstd on SAN, node 129           226    57892    40.5   10365    146.0  37380    67.3   17227
btrfs+zstd/noatime on SAN, node 129   240    61450    39.5   10103    125.7  32173    65.2   16678
btrfs+zstd raid10/6 SSDs, node 125    293    74920    100.0  25621    297    76160    99.5   25480
ext4 on Dell MD34XX /gnu, node 125    293    75033    105.3  26967    295    75575    119    30575
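The overview percentages can be reproduced from the table above by normalizing each result to the slowest configuration; a quick sketch in Python (values taken from the random-write MB/s column, which supplies the 267% top of the overview scale):

```python
# Normalize each configuration's result to the slowest one (= 100%),
# as the Result Overview chart does. Values: Random Write MB/s column.
rand_write_mbps = {
    "2 x 240GB MZ7LM240HMHQ0D3": 41.5,
    "ext4 on SAN, node 129": 39.5,
    "btrfs on SAN, node 129": 40.3,
    "btrfs+zstd on SAN, node 129": 40.5,
    "btrfs+zstd/noatime on SAN, node 129": 39.5,
    "btrfs+zstd raid10/6 SSDs, node 125": 100.0,
    "ext4 on Dell MD34XX /gnu, node 125": 105.3,
}

slowest = min(rand_write_mbps.values())
relative = {k: round(100 * v / slowest) for k, v in rand_write_mbps.items()}

# Fastest configuration lands at 267%, matching the overview scale.
print(relative["ext4 on Dell MD34XX /gnu, node 125"])  # → 267
```

Whether OpenBenchmarking normalizes to the slowest result is an assumption here, but it matches the 100%-267% range shown in the overview.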

Flexible IO Tester

FIO, the Flexible I/O Tester, is an advanced Linux disk benchmark supporting multiple I/O engines and a wealth of options. FIO was written by Jens Axboe to test the Linux I/O subsystem and schedulers. Learn more via the OpenBenchmarking.org test page.
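The parameters in the result titles (Linux AIO engine, unbuffered direct I/O, 4KB blocks) correspond to an fio job roughly like the following. This is a sketch, not the exact PTS test profile: the job name, file size, directory, and queue depth are illustrative assumptions.

```ini
; Approximation of the "Random Read - Linux AIO - Buffered: No - Direct: Yes - 4KB" run.
; Job name, size, iodepth, and directory are assumptions, not the PTS profile.
[randread-4k]
ioengine=libaio     ; Engine: Linux AIO
rw=randread         ; Type: Random Read (randwrite/read/write for the other runs)
direct=1            ; Direct: Yes (bypass the page cache)
buffered=0          ; Buffered: No
bs=4k               ; Block Size: 4KB
iodepth=64
size=1g
directory=/gnu/fio-test
```

Saved as a job file, this would be run with `fio <jobfile>`; the Disk Target "Default Test Directory" means PTS places the test files on the filesystem under test.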

Flexible IO Tester 3.29: Random Read (MB/s, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg      SE      N    Min    Max
2 x 240GB MZ7LM240HMHQ0D3             238.20   9.41    15   159    293
ext4 on SAN, node 129                 243.40   7.31    15   215    291
btrfs on SAN, node 129                250.73   9.03    15   214    293
btrfs+zstd on SAN, node 129           226.25   13.45   12   104    292
btrfs+zstd/noatime on SAN, node 129   240.17   6.95    12   215    291
btrfs+zstd raid10/6 SSDs, node 125    292.60   3.04    5    285    302
ext4 on Dell MD34XX /gnu, node 125    293.33   3.67    3    286    297

Compiled with: (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native (same options for all FIO results below)

Flexible IO Tester 3.29: Random Read (IOPS, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg        SE        N    Min     Max
2 x 240GB MZ7LM240HMHQ0D3             61013.33   2402.36   15   40800   75000
ext4 on SAN, node 129                 62313.33   1881.33   15   55100   74600
btrfs on SAN, node 129                64146.67   2316.58   15   54800   74900
btrfs+zstd on SAN, node 129           57891.67   3436.93   12   26700   74600
btrfs+zstd/noatime on SAN, node 129   61450.00   1786.76   12   54900   74500
btrfs+zstd raid10/6 SSDs, node 125    74920.00   766.42    5    73000   77300
ext4 on Dell MD34XX /gnu, node 125    75033.33   968.39    3    73100   76100

Flexible IO Tester 3.29: Random Write (MB/s, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg      SE     N    Min    Max
2 x 240GB MZ7LM240HMHQ0D3             41.53    0.82   12   37.8   47.3
ext4 on SAN, node 129                 39.47    0.27   3    39.1   40
btrfs on SAN, node 129                40.28    0.63   15   33.7   43.4
btrfs+zstd on SAN, node 129           40.54    0.84   14   36.3   48.6
btrfs+zstd/noatime on SAN, node 129   39.48    0.67   15   34.6   43.8
btrfs+zstd raid10/6 SSDs, node 125    99.98    7.01   12   29.1   121
ext4 on Dell MD34XX /gnu, node 125    105.35   1.27   15   96.2   114

Flexible IO Tester 3.29: Random Write (IOPS, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg        SE        N    Min     Max
2 x 240GB MZ7LM240HMHQ0D3             10632.33   207.95    12   9684    12100
ext4 on SAN, node 129                 10100.00   57.74     3    10000   10200
btrfs on SAN, node 129                10306.60   160.36    15   8636    11100
btrfs+zstd on SAN, node 129           10364.71   214.31    14   9284    12400
btrfs+zstd/noatime on SAN, node 129   10103.07   171.55    15   8862    11200
btrfs+zstd raid10/6 SSDs, node 125    25621.25   1796.16   12   7455    30900
ext4 on Dell MD34XX /gnu, node 125    26966.67   321.26    15   24600   29100

Flexible IO Tester 3.29: Sequential Read (MB/s, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg      SE      N    Min    Max
2 x 240GB MZ7LM240HMHQ0D3             139.91   22.37   15   49.8   294
ext4 on SAN, node 129                 147.91   22.79   15   48.8   269
btrfs on SAN, node 129                133.65   20.90   15   47.7   251
btrfs+zstd on SAN, node 129           145.99   22.26   15   49.8   259
btrfs+zstd/noatime on SAN, node 129   125.67   22.01   15   49.6   293
btrfs+zstd raid10/6 SSDs, node 125    297.40   2.74    15   277    318
ext4 on Dell MD34XX /gnu, node 125    295.33   4.97    12   247    314

Flexible IO Tester 3.29: Sequential Read (IOPS, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg        SE        N    Min     Max
2 x 240GB MZ7LM240HMHQ0D3             35826.67   5728.15   15   12700   75200
ext4 on SAN, node 129                 37860.00   5835.83   15   12500   69000
btrfs on SAN, node 129                34220.00   5352.50   15   12200   64100
btrfs+zstd on SAN, node 129           37380.00   5700.20   15   12700   66300
btrfs+zstd/noatime on SAN, node 129   32173.33   5629.58   15   12700   74900
btrfs+zstd raid10/6 SSDs, node 125    76160.00   703.58    15   71000   81500
ext4 on Dell MD34XX /gnu, node 125    75575.00   1263.06   12   63300   80400

Flexible IO Tester 3.29: Sequential Write (MB/s, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg      SE     N    Min    Max
2 x 240GB MZ7LM240HMHQ0D3             64.14    6.76   15   36     121
ext4 on SAN, node 129                 71.52    8.90   15   36.3   116
btrfs on SAN, node 129                71.09    8.55   15   39.5   118
btrfs+zstd on SAN, node 129           67.30    7.72   15   37.9   110
btrfs+zstd/noatime on SAN, node 129   65.16    6.67   15   37     117
btrfs+zstd raid10/6 SSDs, node 125    99.54    6.14   15   47.1   123
ext4 on Dell MD34XX /gnu, node 125    119.38   1.05   8    114    123

Flexible IO Tester 3.29: Sequential Write (IOPS, more is better)
Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory

Identifier                            Avg        SE        N    Min     Max
2 x 240GB MZ7LM240HMHQ0D3             16409.33   1730.43   15   9206    31000
ext4 on SAN, node 129                 18312.00   2276.32   15   9280    29800
btrfs on SAN, node 129                18193.33   2190.64   15   10100   30300
btrfs+zstd on SAN, node 129           17227.33   1971.17   15   9710    28100
btrfs+zstd/noatime on SAN, node 129   16678.40   1706.12   15   9460    29900
btrfs+zstd raid10/6 SSDs, node 125    25480.00   1575.26   15   12100   31600
ext4 on Dell MD34XX /gnu, node 125    30575.00   268.43    8    29200   31500
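Each result above reports a standard error (SE) over N runs; this is the usual standard error of the mean (sample standard deviation divided by the square root of N). As a check, the ext4 on Dell MD34XX random-read row (N = 3, min 286, avg 293.33, max 297) is consistent with samples of 286, 297, and 297, which reproduce the reported SE of 3.67. A minimal sketch, using made-up raw samples rather than the benchmark's actual per-run data:

```python
import math
import statistics

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample stddev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run throughputs (MB/s), chosen to be consistent with
# the ext4 on Dell MD34XX /gnu random-read row (N=3, min 286, max 297).
runs = [286.0, 297.0, 297.0]
print(round(statistics.mean(runs), 2), round(standard_error(runs), 2))  # → 293.33 3.67
```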