5x-ssd-btrfs-raid-10-zstd_direct

EXT4 on Dell MD34XX SAS drive (/gnu) on node 125 (berlin)

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2206272-APTE-220627998
Run Details

  Identifier                            Date           Test Duration
  2 x 240GB MZ7LM240HMHQ0D3             June 18 2022   1 Hour, 1 Minute
  ext4 on SAN, node 129                 June 18 2022   52 Minutes
  btrfs on SAN, node 129                June 18 2022   1 Hour, 5 Minutes
  btrfs+zstd on SAN, node 129           June 18 2022   1 Hour
  btrfs+zstd/noatime on SAN, node 129   June 18 2022   1 Hour, 1 Minute
  btrfs+zstd raid10/6 SSDs, node 125    June 27 2022   51 Minutes
  ext4 on Dell MD34XX /gnu, node 125    June 27 2022   41 Minutes

  Average Test Duration: 56 Minutes



System Details

  Common to all runs:
    Processor:   2 x AMD EPYC 7451 24-Core (48 Cores / 96 Threads)
    Chipset:     AMD 17h
    Memory:      192GB
    Graphics:    Matrox G200eW3
    Network:     2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA + 2 x Broadcom NetXtreme BCM5720 2-port PCIe
    OS:          Guix
    File-System: btrfs

  Node 129 runs (2 x 240GB MZ7LM240HMHQ0D3; ext4, btrfs, btrfs+zstd, btrfs+zstd/noatime on SAN):
    Motherboard: Dell 08V001 (1.17.0 BIOS)
    Disk:        2 x 240GB MZ7LM240HMHQ0D3 + 3 x 240GB SanDisk SSD PLUS + 4 x 109951GB Compellent Vol + 4 x 10995GB Compellent Vol
    Kernel:      5.17.14-gnu (x86_64)
    Compiler:    GCC 12.1.0

  Node 125 runs (btrfs+zstd raid10/6 SSDs; ext4 on Dell MD34XX /gnu):
    Motherboard: Dell 08V001 (1.12.2 BIOS)
    Disk:        6 x 8002GB Samsung SSD 870 + 1000GB PERC H730P Adp + 39978GB MD34xx
    Kernel:      5.17.6-gnu (x86_64)
    Compiler:    GCC 10.3.0

Kernel Details

  Transparent Huge Pages: always (all runs)

Disk Details

  All runs: MQ-DEADLINE / compress-force=zstd:3,degraded,relatime,rw,space_cache=v2,ssd,subvol=/@home / RAID10 Block Size: 4096 (subvolid=259 on the node 129 runs, subvolid=257 on the node 125 runs)

Processor Details

  CPU Microcode: 0x800126c (node 129 runs), 0x8001250 (node 125 runs)

Security Details

  itlb_multihit: Not affected
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling
  srbds: Not affected
  tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): eight Flexible IO Tester results — Random Write, Sequential Read, Sequential Write, and Random Read (Linux AIO - No Buffered - Direct - 4KB), each reported as MB/s and IOPS — compared across the seven run identifiers on a normalized scale from 100% to 267%.

Result Summary

  Identifier                            Rand Read        Rand Write       Seq Read         Seq Write
                                        MB/s   IOPS      MB/s   IOPS      MB/s   IOPS      MB/s   IOPS
  2 x 240GB MZ7LM240HMHQ0D3             238    61013     41.5   10632     139.9  35827     64.1   16409
  ext4 on SAN, node 129                 243    62313     39.5   10100     147.9  37860     71.5   18312
  btrfs on SAN, node 129                251    64147     40.3   10307     133.7  34220     71.1   18193
  btrfs+zstd on SAN, node 129           226    57892     40.5   10365     146.0  37380     67.3   17227
  btrfs+zstd/noatime on SAN, node 129   240    61450     39.5   10103     125.7  32173     65.2   16678
  btrfs+zstd raid10/6 SSDs, node 125    293    74920     100.0  25621     297    76160     99.5   25480
  ext4 on Dell MD34XX /gnu, node 125    293    75033     105.3  26967     295    75575     119.0  30575

All fio results use the Linux AIO engine, Buffered: No, Direct: Yes, 4KB block size, Default Test Directory. (OpenBenchmarking.org)
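To put the overview's percentage scale in context, the widest spread in the summary is random-write throughput. A quick sketch of the relative comparison, using values taken from the table above:

```python
# Relative random-write throughput (MB/s) from the result summary:
# fastest run (ext4 on Dell MD34XX /gnu, node 125) vs. the slowest
# SAN run (ext4 on SAN, node 129, tied with btrfs+zstd/noatime).
fastest = 105.3
slowest = 39.5
ratio = fastest / slowest
print(f"{ratio:.2f}x ({ratio * 100:.0f}%)")  # → 2.67x (267%)
```

The 267% figure matches the upper bound of the normalized scale in the result overview.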

Flexible IO Tester

FIO, the Flexible I/O Tester, is an advanced Linux disk benchmark supporting multiple I/O engines and a wealth of options. FIO was written by Jens Axboe for testing the Linux I/O subsystem and schedulers. Learn more via the OpenBenchmarking.org test page.
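As a rough illustration (not the exact job file the Phoronix Test Suite generates), a standalone fio job matching the parameters used in these runs — Linux AIO engine, unbuffered direct I/O, 4 KB blocks — might look like:

```ini
; random-read-4k.fio — hypothetical job file approximating the
; "Random Read - Linux AIO - Buffered: No - Direct: Yes - 4KB" test
[global]
ioengine=libaio    ; Engine: Linux AIO
direct=1           ; Direct: Yes (bypass the page cache)
buffered=0         ; Buffered: No
bs=4k              ; Block Size: 4KB
runtime=20
time_based

[rand-read]
rw=randread
size=1g
```

Run it with `fio random-read-4k.fio` from the directory under test; the sequential and write variants swap `rw=randread` for `read`, `randwrite`, or `write`.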

Flexible IO Tester 3.29 - Type: Random Read - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, more is better

  ext4 on Dell MD34XX /gnu, node 125    293   (SE +/- 3.67,  N = 3;   Min: 286 / Avg: 293.33 / Max: 297)
  btrfs+zstd raid10/6 SSDs, node 125    293   (SE +/- 3.04,  N = 5;   Min: 285 / Avg: 292.6  / Max: 302)
  btrfs+zstd/noatime on SAN, node 129   240   (SE +/- 6.95,  N = 12;  Min: 215 / Avg: 240.17 / Max: 291)
  btrfs+zstd on SAN, node 129           226   (SE +/- 13.45, N = 12;  Min: 104 / Avg: 226.25 / Max: 292)
  btrfs on SAN, node 129                251   (SE +/- 9.03,  N = 15;  Min: 214 / Avg: 250.73 / Max: 293)
  ext4 on SAN, node 129                 243   (SE +/- 7.31,  N = 15;  Min: 215 / Avg: 243.4  / Max: 291)
  2 x 240GB MZ7LM240HMHQ0D3             238   (SE +/- 9.41,  N = 15;  Min: 159 / Avg: 238.2  / Max: 293)

1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native (all fio results below were built with the same flags)
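The "SE +/- x, N = y" annotations are standard errors of the mean over y runs. As a sketch of how such a value is derived — using a hypothetical set of three runs that happens to reproduce the reported mean, SE, min, and max of the ext4-on-MD34XX result (the raw per-run samples are not included in this file):

```python
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stdev / sqrt(N)."""
    return statistics.stdev(samples) / len(samples) ** 0.5

# Hypothetical runs consistent with "293.33, SE +/- 3.67, N = 3,
# Min: 286 / Max: 297" — illustrative only.
runs = [286.0, 297.0, 297.0]
mean = statistics.mean(runs)
se = standard_error(runs)
print(round(mean, 2), round(se, 2))  # → 293.33 3.67
```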

Flexible IO Tester 3.29 - Type: Random Read - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, more is better

  ext4 on Dell MD34XX /gnu, node 125    75033   (SE +/- 968.39,  N = 3;   Min: 73100 / Avg: 75033.33 / Max: 76100)
  btrfs+zstd raid10/6 SSDs, node 125    74920   (SE +/- 766.42,  N = 5;   Min: 73000 / Avg: 74920    / Max: 77300)
  btrfs+zstd/noatime on SAN, node 129   61450   (SE +/- 1786.76, N = 12;  Min: 54900 / Avg: 61450    / Max: 74500)
  btrfs+zstd on SAN, node 129           57892   (SE +/- 3436.93, N = 12;  Min: 26700 / Avg: 57891.67 / Max: 74600)
  btrfs on SAN, node 129                64147   (SE +/- 2316.58, N = 15;  Min: 54800 / Avg: 64146.67 / Max: 74900)
  ext4 on SAN, node 129                 62313   (SE +/- 1881.33, N = 15;  Min: 55100 / Avg: 62313.33 / Max: 74600)
  2 x 240GB MZ7LM240HMHQ0D3             61013   (SE +/- 2402.36, N = 15;  Min: 40800 / Avg: 61013.33 / Max: 75000)

Flexible IO Tester 3.29 - Type: Random Write - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, more is better

  ext4 on Dell MD34XX /gnu, node 125    105.3   (SE +/- 1.27, N = 15;  Min: 96.2 / Avg: 105.35 / Max: 114)
  btrfs+zstd raid10/6 SSDs, node 125    100.0   (SE +/- 7.01, N = 12;  Min: 29.1 / Avg: 99.98  / Max: 121)
  btrfs+zstd/noatime on SAN, node 129   39.5    (SE +/- 0.67, N = 15;  Min: 34.6 / Avg: 39.48  / Max: 43.8)
  btrfs+zstd on SAN, node 129           40.5    (SE +/- 0.84, N = 14;  Min: 36.3 / Avg: 40.54  / Max: 48.6)
  btrfs on SAN, node 129                40.3    (SE +/- 0.63, N = 15;  Min: 33.7 / Avg: 40.28  / Max: 43.4)
  ext4 on SAN, node 129                 39.5    (SE +/- 0.27, N = 3;   Min: 39.1 / Avg: 39.47  / Max: 40)
  2 x 240GB MZ7LM240HMHQ0D3             41.5    (SE +/- 0.82, N = 12;  Min: 37.8 / Avg: 41.53  / Max: 47.3)

Flexible IO Tester 3.29 - Type: Random Write - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, more is better

  ext4 on Dell MD34XX /gnu, node 125    26967   (SE +/- 321.26,  N = 15;  Min: 24600 / Avg: 26966.67 / Max: 29100)
  btrfs+zstd raid10/6 SSDs, node 125    25621   (SE +/- 1796.16, N = 12;  Min: 7455  / Avg: 25621.25 / Max: 30900)
  btrfs+zstd/noatime on SAN, node 129   10103   (SE +/- 171.55,  N = 15;  Min: 8862  / Avg: 10103.07 / Max: 11200)
  btrfs+zstd on SAN, node 129           10365   (SE +/- 214.31,  N = 14;  Min: 9284  / Avg: 10364.71 / Max: 12400)
  btrfs on SAN, node 129                10307   (SE +/- 160.36,  N = 15;  Min: 8636  / Avg: 10306.6  / Max: 11100)
  ext4 on SAN, node 129                 10100   (SE +/- 57.74,   N = 3;   Min: 10000 / Avg: 10100    / Max: 10200)
  2 x 240GB MZ7LM240HMHQ0D3             10632   (SE +/- 207.95,  N = 12;  Min: 9684  / Avg: 10632.33 / Max: 12100)

Flexible IO Tester 3.29 - Type: Sequential Read - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, more is better

  ext4 on Dell MD34XX /gnu, node 125    295.0   (SE +/- 4.97,  N = 12;  Min: 247  / Avg: 295.33 / Max: 314)
  btrfs+zstd raid10/6 SSDs, node 125    297.0   (SE +/- 2.74,  N = 15;  Min: 277  / Avg: 297.4  / Max: 318)
  btrfs+zstd/noatime on SAN, node 129   125.7   (SE +/- 22.01, N = 15;  Min: 49.6 / Avg: 125.67 / Max: 293)
  btrfs+zstd on SAN, node 129           146.0   (SE +/- 22.26, N = 15;  Min: 49.8 / Avg: 145.99 / Max: 259)
  btrfs on SAN, node 129                133.7   (SE +/- 20.90, N = 15;  Min: 47.7 / Avg: 133.65 / Max: 251)
  ext4 on SAN, node 129                 147.9   (SE +/- 22.79, N = 15;  Min: 48.8 / Avg: 147.91 / Max: 269)
  2 x 240GB MZ7LM240HMHQ0D3             139.9   (SE +/- 22.37, N = 15;  Min: 49.8 / Avg: 139.91 / Max: 294)

Flexible IO Tester 3.29 - Type: Sequential Read - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, more is better

  ext4 on Dell MD34XX /gnu, node 125    75575   (SE +/- 1263.06, N = 12;  Min: 63300 / Avg: 75575    / Max: 80400)
  btrfs+zstd raid10/6 SSDs, node 125    76160   (SE +/- 703.58,  N = 15;  Min: 71000 / Avg: 76160    / Max: 81500)
  btrfs+zstd/noatime on SAN, node 129   32173   (SE +/- 5629.58, N = 15;  Min: 12700 / Avg: 32173.33 / Max: 74900)
  btrfs+zstd on SAN, node 129           37380   (SE +/- 5700.20, N = 15;  Min: 12700 / Avg: 37380    / Max: 66300)
  btrfs on SAN, node 129                34220   (SE +/- 5352.50, N = 15;  Min: 12200 / Avg: 34220    / Max: 64100)
  ext4 on SAN, node 129                 37860   (SE +/- 5835.83, N = 15;  Min: 12500 / Avg: 37860    / Max: 69000)
  2 x 240GB MZ7LM240HMHQ0D3             35827   (SE +/- 5728.15, N = 15;  Min: 12700 / Avg: 35826.67 / Max: 75200)

Flexible IO Tester 3.29 - Type: Sequential Write - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, more is better

  ext4 on Dell MD34XX /gnu, node 125    119.0   (SE +/- 1.05, N = 8;   Min: 114  / Avg: 119.38 / Max: 123)
  btrfs+zstd raid10/6 SSDs, node 125    99.5    (SE +/- 6.14, N = 15;  Min: 47.1 / Avg: 99.54  / Max: 123)
  btrfs+zstd/noatime on SAN, node 129   65.2    (SE +/- 6.67, N = 15;  Min: 37   / Avg: 65.16  / Max: 117)
  btrfs+zstd on SAN, node 129           67.3    (SE +/- 7.72, N = 15;  Min: 37.9 / Avg: 67.3   / Max: 110)
  btrfs on SAN, node 129                71.1    (SE +/- 8.55, N = 15;  Min: 39.5 / Avg: 71.09  / Max: 118)
  ext4 on SAN, node 129                 71.5    (SE +/- 8.90, N = 15;  Min: 36.3 / Avg: 71.52  / Max: 116)
  2 x 240GB MZ7LM240HMHQ0D3             64.1    (SE +/- 6.76, N = 15;  Min: 36   / Avg: 64.14  / Max: 121)

Flexible IO Tester 3.29 - Type: Sequential Write - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, more is better

  ext4 on Dell MD34XX /gnu, node 125    30575   (SE +/- 268.43,  N = 8;   Min: 29200 / Avg: 30575    / Max: 31500)
  btrfs+zstd raid10/6 SSDs, node 125    25480   (SE +/- 1575.26, N = 15;  Min: 12100 / Avg: 25480    / Max: 31600)
  btrfs+zstd/noatime on SAN, node 129   16678   (SE +/- 1706.12, N = 15;  Min: 9460  / Avg: 16678.4  / Max: 29900)
  btrfs+zstd on SAN, node 129           17227   (SE +/- 1971.17, N = 15;  Min: 9710  / Avg: 17227.33 / Max: 28100)
  btrfs on SAN, node 129                18193   (SE +/- 2190.64, N = 15;  Min: 10100 / Avg: 18193.33 / Max: 30300)
  ext4 on SAN, node 129                 18312   (SE +/- 2276.32, N = 15;  Min: 9280  / Avg: 18312    / Max: 29800)
  2 x 240GB MZ7LM240HMHQ0D3             16409   (SE +/- 1730.43, N = 15;  Min: 9206  / Avg: 16409.33 / Max: 31000)
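As a sanity check on the MB/s and IOPS pairs: at the 4 KiB block size used in every run, the two figures are consistent with MiB-based throughput reporting, i.e. IOPS ≈ throughput × 256. (The MiB interpretation is an inference from the numbers, not something stated in the result file.)

```python
# With 4 KiB blocks, 1 MiB/s of throughput is 1048576 / 4096 = 256 IOPS.
# Pairs taken from the tables above (avg MB/s, reported IOPS) for the
# ext4 on Dell MD34XX /gnu, node 125 runs.
pairs = [
    (293.33, 75033),  # Random Read
    (119.38, 30575),  # Sequential Write
]
for throughput, iops in pairs:
    predicted = throughput * 256
    error = abs(predicted - iops) / iops
    print(f"{throughput} -> ~{predicted:.0f} IOPS (reported {iops}, {error:.2%} off)")
```

Both predictions land within a fraction of a percent of the reported IOPS, the residual being rounding in the displayed averages.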

8 Results Shown

Flexible IO Tester:
  Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS