pts-disk-different-nvmes

AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211135-NE-PTSDISKDI63
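
The result ID above is all that is needed to reproduce the comparison locally; a minimal sketch, assuming the Phoronix Test Suite is already installed (the interactive prompts for test selection and for naming your run are omitted):

    # Download this result file's test selection, run it locally, and merge
    # your numbers into the same comparison graphs.
    phoronix-test-suite benchmark 2211135-NE-PTSDISKDI63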

Run Management

Highlight
Result
Toggle/Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
ZFS zraid1 4xNVME Pool
November 12 2022
  2 Hours, 25 Minutes
ext4 mdadm raid5 4xNVME
November 12 2022
  3 Hours, 9 Minutes
ext4 Crucial P5 Plus 1TB NVME
November 13 2022
  3 Hours, 15 Minutes
ZFS zraid1 8xNVME Pool
November 13 2022
  2 Hours, 19 Minutes
ZFS zraid1 8xNVME Pool no Compression
November 13 2022
  2 Hours, 19 Minutes
Invert Behavior (Only Show Selected Data)
  2 Hours, 41 Minutes



System Details

  Processor:          AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
  Motherboard:        Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
  Chipset:            AMD 17h
  Memory:             64GB
  Disk:               Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
  Graphics:           NVIDIA Quadro P400
  Audio:              NVIDIA GP107GL HD Audio
  Monitor:            DELL S2340T
  Network:            4 x Intel I350 + Intel 8265 / 8275
  OS:                 Debian 11
  Kernel:             5.10.0-19-amd64 (x86_64)
  Compiler:           GCC 10.2.1 20210110
  File-Systems:       zfs, ext4
  Screen Resolution:  1920x1080

System Logs

  - Transparent Huge Pages: always
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
  - CPU Microcode: 0x8001137
  - ZFS zraid1 4xNVME Pool, ZFS zraid1 8xNVME Pool, ZFS zraid1 8xNVME Pool no Compression: NONE
  - Python 3.9.2
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
  - ext4 mdadm raid5 4xNVME: NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0] Block Size: 4096
  - ext4 Crucial P5 Plus 1TB NVME: NONE / relatime,rw / Block Size: 4096
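
The exact pool and array creation commands are not recorded in this result file; the following is only a rough sketch of how configurations like the ones above are typically built. Device names, ashift, and the compression handling are assumptions, while the mdadm member partitions and ext4 block size come from the notes above.

    # ZFS raidz1 pool across four NVMe drives (the 8-drive pools are analogous);
    # the "no Compression" run presumably had compression disabled explicitly.
    zpool create -o ashift=12 tank raidz1 nvme1n1 nvme2n1 nvme3n1 nvme4n1
    zfs set compression=off tank        # for the "no Compression" pool only (assumed)

    # mdadm RAID5 across four NVMe partitions, formatted as ext4 with 4096-byte blocks
    # (the result notes show relatime,rw,stripe=384 as the resulting mount options).
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
    mkfs.ext4 -b 4096 /dev/md0
    mount -o relatime /dev/md0 /mnt/raid5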

Result Overview: normalized comparison (higher is better) of the five configurations - ZFS zraid1 4xNVME Pool, ext4 mdadm raid5 4xNVME, ext4 Crucial P5 Plus 1TB NVME, ZFS zraid1 8xNVME Pool, and ZFS zraid1 8xNVME Pool no Compression - across Dbench, FS-Mark, Flexible IO Tester, Compile Bench, PostMark, and SQLite.

Summary table: per-test results for all 31 benchmarks (Compile Bench, Dbench, Flexible IO Tester, FS-Mark, PostMark, SQLite) across the five configurations. The individual results are broken out in the sections below.

Compile Bench

Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. The current test is set up to use the makej mode with 10 initial directories. Learn more via the OpenBenchmarking.org test page.
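
For orientation, a sketch of invoking compilebench directly with the parameters named above (10 initial directories, makej mode); the working directory is an assumption and the PTS profile may pass additional options:

    # Age the filesystem under test with 10 kernel-tree-like directories,
    # then run the "makej" workload against them.
    compilebench -D /mnt/test -i 10 --makej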

Compile Bench 0.6 - Test: Compile (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  1398.57   (SE +/- 6.06, N = 3)
    ZFS zraid1 8xNVME Pool                  1390.71   (SE +/- 7.47, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   1247.36   (SE +/- 3.89, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           1483.07   (SE +/- 0.00, N = 3)
    ext4 mdadm raid5 4xNVME                 1478.00   (SE +/- 11.73, N = 3)

Compile Bench 0.6 - Test: Initial Create (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  222.04   (SE +/- 1.57, N = 3)
    ZFS zraid1 8xNVME Pool                  221.83   (SE +/- 0.50, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   206.77   (SE +/- 1.46, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           423.67   (SE +/- 1.23, N = 3)
    ext4 mdadm raid5 4xNVME                 411.07   (SE +/- 0.77, N = 3)

Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  1212.73   (SE +/- 8.00, N = 3)
    ZFS zraid1 8xNVME Pool                  1217.80   (SE +/- 8.89, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   1694.75   (SE +/- 19.98, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           2747.01   (SE +/- 8.94, N = 3)
    ext4 mdadm raid5 4xNVME                 2642.03   (SE +/- 42.24, N = 3)

Dbench

Dbench is a benchmark designed by the Samba project as a free alternative to netbench, but it contains only the file-system calls and so measures disk performance. Learn more via the OpenBenchmarking.org test page.
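
As a rough illustration (not the exact PTS invocation), dbench takes the client count as its positional argument plus a target directory and time limit; a 12-client run against the filesystem under test might look like:

    # 12 simulated clients, 60-second run, on the mount point being tested
    dbench -D /mnt/test -t 60 12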

Dbench 4.0 - 12 Clients (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  2626.03   (SE +/- 5.07, N = 3)
    ZFS zraid1 8xNVME Pool                  2472.42   (SE +/- 2.03, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   2783.64   (SE +/- 16.46, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           658.79    (SE +/- 1.20, N = 3)
    ext4 mdadm raid5 4xNVME                 2484.85   (SE +/- 21.52, N = 3)
    Compiler flags (all Dbench results): (CC) gcc options: -lpopt -O2

Dbench 4.0 - 1 Client (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  410.17   (SE +/- 0.85, N = 3)
    ZFS zraid1 8xNVME Pool                  380.35   (SE +/- 0.25, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   458.91   (SE +/- 1.41, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           102.99   (SE +/- 0.03, N = 3)
    ext4 mdadm raid5 4xNVME                 453.17   (SE +/- 0.61, N = 3)

Flexible IO Tester

FIO, the Flexible I/O Tester, is an advanced Linux disk benchmark supporting multiple I/O engines and a wealth of options. FIO was written by Jens Axboe for testing the Linux I/O subsystem and schedulers. Learn more via the OpenBenchmarking.org test page.
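
The parameters in the result titles below map roughly onto fio's command-line options as sketched here. This is an approximation of the test profile, not the exact options PTS generates; the job name, file size, runtime, and queue depth are assumptions.

    # Random Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB,
    # against a file in the default test directory.  For the other variants,
    # swap --rw (randread/randwrite/read/write) and --bs (4k/2M).
    fio --name=rand-read-2mb --directory=/mnt/test \
        --rw=randread --ioengine=libaio --buffered=0 --direct=1 \
        --bs=2M --size=4g --runtime=60 --iodepth=32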

Flexible IO Tester 3.29 - Random Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  4245
    ZFS zraid1 8xNVME Pool                  4037
    ZFS zraid1 8xNVME Pool no Compression   4170
    ext4 Crucial P5 Plus 1TB NVME           3538
    Reported SE values: +/- 58.64, +/- 13.37, +/- 41.66 (N = 3)
    Compiler flags (all FIO results): (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native

Flexible IO Tester 3.29 - Random Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  2119
    ZFS zraid1 8xNVME Pool                  2015
    ZFS zraid1 8xNVME Pool no Compression   2081
    ext4 Crucial P5 Plus 1TB NVME           1765
    ext4 mdadm raid5 4xNVME                 6666
    Reported SE values: +/- 29.49, +/- 6.57, +/- 20.66, +/- 14.99 (N = 3)

Flexible IO Tester 3.29 - Random Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  194    (SE +/- 0.88, N = 3)
    ZFS zraid1 8xNVME Pool                  224    (SE +/- 0.67, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   219    (SE +/- 2.67, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           1286   (SE +/- 3.48, N = 3)
    ext4 mdadm raid5 4xNVME                 1199   (SE +/- 5.17, N = 3)

Flexible IO Tester 3.29 - Random Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  49567    (SE +/- 185.59, N = 3)
    ZFS zraid1 8xNVME Pool                  57433    (SE +/- 176.38, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   56167    (SE +/- 633.33, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           329667   (SE +/- 881.92, N = 3)
    ext4 mdadm raid5 4xNVME                 306667   (SE +/- 1333.33, N = 3)

Flexible IO Tester 3.29 - Random Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  2458
    ZFS zraid1 8xNVME Pool                  2661
    ZFS zraid1 8xNVME Pool no Compression   2681
    ext4 Crucial P5 Plus 1TB NVME           3252
    ext4 mdadm raid5 4xNVME                 958
    Reported SE values: +/- 17.98, +/- 6.39, +/- 10.27, +/- 3.53 (N = 3)

Flexible IO Tester 3.29 - Random Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  1225
    ZFS zraid1 8xNVME Pool                  1327
    ZFS zraid1 8xNVME Pool no Compression   1337
    ext4 Crucial P5 Plus 1TB NVME           1623
    ext4 mdadm raid5 4xNVME                 476
    Reported SE values: +/- 8.99, +/- 4.91, +/- 1.76 (N = 3)

Flexible IO Tester 3.29 - Random Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  162
    ZFS zraid1 8xNVME Pool                  177
    ZFS zraid1 8xNVME Pool no Compression   188
    ext4 Crucial P5 Plus 1TB NVME           878
    ext4 mdadm raid5 4xNVME                 324
    Reported SE values: +/- 2.67, +/- 2.00 (N = 3)

Flexible IO Tester 3.29 - Random Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  41500    (SE +/- 57.74, N = 3)
    ZFS zraid1 8xNVME Pool                  45367    (SE +/- 33.33, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   48200    (SE +/- 57.74, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           224667   (SE +/- 666.67, N = 3)
    ext4 mdadm raid5 4xNVME                 82967    (SE +/- 569.60, N = 3)

Flexible IO Tester 3.29 - Sequential Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  4075
    ZFS zraid1 8xNVME Pool                  4005
    ZFS zraid1 8xNVME Pool no Compression   3998
    ext4 Crucial P5 Plus 1TB NVME           3538
    Reported SE values: +/- 37.36, +/- 50.44, +/- 47.58 (N = 3)

Flexible IO Tester 3.29 - Sequential Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  2034
    ZFS zraid1 8xNVME Pool                  1999
    ZFS zraid1 8xNVME Pool no Compression   1995
    ext4 Crucial P5 Plus 1TB NVME           1765
    ext4 mdadm raid5 4xNVME                 6706
    Reported SE values: +/- 18.75, +/- 25.32, +/- 23.68, +/- 3.61 (N = 3)

Flexible IO Tester 3.29 - Sequential Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  1287   (SE +/- 7.09, N = 3)
    ZFS zraid1 8xNVME Pool                  1275   (SE +/- 13.25, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   1294   (SE +/- 4.84, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           1311   (SE +/- 17.89, N = 3)
    ext4 mdadm raid5 4xNVME                 865    (SE +/- 11.46, N = 3)

Flexible IO Tester 3.29 - Sequential Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  329333   (SE +/- 1855.92, N = 3)
    ZFS zraid1 8xNVME Pool                  326333   (SE +/- 3382.96, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   331000   (SE +/- 1000.00, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           335667   (SE +/- 4666.67, N = 3)
    ext4 mdadm raid5 4xNVME                 221333   (SE +/- 2728.45, N = 3)

Flexible IO Tester 3.29 - Sequential Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  2065   (SE +/- 6.36, N = 3)
    ZFS zraid1 8xNVME Pool                  2381   (SE +/- 4.67, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   2446   (SE +/- 23.91, N = 15)
    ext4 Crucial P5 Plus 1TB NVME           3236   (SE +/- 5.70, N = 3)
    ext4 mdadm raid5 4xNVME                 1308   (SE +/- 1.73, N = 3)

Flexible IO Tester 3.29 - Sequential Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  1029   (SE +/- 2.91, N = 3)
    ZFS zraid1 8xNVME Pool                  1186   (SE +/- 2.33, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   1220   (SE +/- 11.98, N = 15)
    ext4 Crucial P5 Plus 1TB NVME           1614   (SE +/- 2.85, N = 3)
    ext4 mdadm raid5 4xNVME                 650    (SE +/- 0.88, N = 3)

Flexible IO Tester 3.29 - Sequential Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better)
    ZFS zraid1 4xNVME Pool                  670   (SE +/- 1.86, N = 3)
    ZFS zraid1 8xNVME Pool                  665   (SE +/- 1.67, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   682   (SE +/- 2.08, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           891   (SE +/- 3.93, N = 3)
    ext4 mdadm raid5 4xNVME                 449   (SE +/- 1.15, N = 3)

Flexible IO Tester 3.29 - Sequential Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better)
    ZFS zraid1 4xNVME Pool                  171333   (SE +/- 666.67, N = 3)
    ZFS zraid1 8xNVME Pool                  170667   (SE +/- 333.33, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   174667   (SE +/- 333.33, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           228000   (SE +/- 1000.00, N = 3)
    ext4 mdadm raid5 4xNVME                 115000   (SE +/- 577.35, N = 3)

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.
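
A hedged sketch of corresponding fs_mark invocations for the test cases below; the target directory is an assumption, file sizes are given in bytes, and the exact option set the PTS profile uses may differ:

    # 1000 files of 1MB each (default sync behaviour)
    fs_mark -d /mnt/test -n 1000 -s 1048576
    # "No Sync/FSync" variant: -S 0 disables the sync/fsync calls
    fs_mark -d /mnt/test -n 1000 -s 1048576 -S 0
    # threaded (-t) and sub-directory (-D) variants follow the same pattern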

FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s, more is better)
    ZFS zraid1 4xNVME Pool                  646.5   (SE +/- 5.49, N = 15)
    ZFS zraid1 8xNVME Pool                  646.7   (SE +/- 1.87, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   589.7   (SE +/- 6.13, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           584.1   (SE +/- 13.64, N = 12)
    ext4 mdadm raid5 4xNVME                 254.0   (SE +/- 9.50, N = 15)

FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s, more is better)
    ZFS zraid1 4xNVME Pool                  1632.7   (SE +/- 4.95, N = 3)
    ZFS zraid1 8xNVME Pool                  1655.6   (SE +/- 6.18, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   1240.0   (SE +/- 2.97, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           522.1    (SE +/- 125.73, N = 9)
    ext4 mdadm raid5 4xNVME                 537.9    (SE +/- 4.55, N = 12)

FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s, more is better)
    ZFS zraid1 4xNVME Pool                  684.2   (SE +/- 7.90, N = 3)
    ZFS zraid1 8xNVME Pool                  644.7   (SE +/- 3.56, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   592.1   (SE +/- 3.57, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           435.0   (SE +/- 16.63, N = 12)
    ext4 mdadm raid5 4xNVME                 284.7   (SE +/- 2.79, N = 3)

FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync (Files/s, more is better)
    ZFS zraid1 4xNVME Pool                  1359.3   (SE +/- 12.01, N = 15)
    ZFS zraid1 8xNVME Pool                  1340.7   (SE +/- 10.88, N = 15)
    ZFS zraid1 8xNVME Pool no Compression   1160.9   (SE +/- 13.85, N = 4)
    ext4 Crucial P5 Plus 1TB NVME           1791.6   (SE +/- 16.52, N = 3)
    ext4 mdadm raid5 4xNVME                 1733.3   (SE +/- 19.84, N = 4)

PostMark

This is a test of NetApp's PostMark benchmark, designed to simulate small-file workloads similar to those endured by web and mail servers. This test profile sets PostMark to perform 25,000 transactions on 500 files simultaneously, with file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
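
PostMark is driven by a small command script read from standard input; a sketch roughly matching the parameters above (the scratch location is an assumption, and the byte values are one interpretation of "5 and 512 kilobytes"):

    # Feed a PostMark configuration on stdin and run the transaction phase.
    postmark <<'EOF'
    set location /mnt/test
    set number 500
    set transactions 25000
    set size 5120 524288
    run
    quit
    EOF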

PostMark 1.51 - Disk Transaction Performance (TPS, more is better)
    ZFS zraid1 4xNVME Pool                  3289
    ZFS zraid1 8xNVME Pool                  3275
    ZFS zraid1 8xNVME Pool no Compression   3275
    ext4 Crucial P5 Plus 1TB NVME           5137
    ext4 mdadm raid5 4xNVME                 5068
    Reported SE values: +/- 29.00, +/- 14.33, +/- 35.33, +/- 34.00 (N = 3)
    Compiler flags: (CC) gcc options: -O3

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
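
The test measures wall-clock time for a fixed number of INSERTs into an indexed table; a minimal sketch of the same idea using the sqlite3 shell (database path, table layout, and row count are illustrative, not the PTS profile):

    # Create an indexed table on the filesystem under test, then time a batch of inserts.
    sqlite3 /mnt/test/bench.db "CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT); CREATE INDEX t_payload ON t(payload);"
    time sqlite3 /mnt/test/bench.db \
      "WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM c WHERE x < 100000)
       INSERT INTO t(payload) SELECT hex(randomblob(16)) FROM c;"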

SQLite 3.30.1 - Threads / Copies: 1 (Seconds, fewer is better)
    ZFS zraid1 4xNVME Pool                  7.852   (SE +/- 0.071, N = 7)
    ZFS zraid1 8xNVME Pool                  8.380   (SE +/- 0.096, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   8.379   (SE +/- 0.104, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           8.367   (SE +/- 0.067, N = 3)
    ext4 mdadm raid5 4xNVME                 8.810   (SE +/- 0.069, N = 10)
    Compiler flags (all SQLite results): (CC) gcc options: -O2 -lz -lm -ldl -lpthread

SQLite 3.30.1 - Threads / Copies: 8 (Seconds, fewer is better)
    ZFS zraid1 4xNVME Pool                  11.27   (SE +/- 0.08, N = 3)
    ZFS zraid1 8xNVME Pool                  12.86   (SE +/- 0.17, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   12.83   (SE +/- 0.17, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           15.77   (SE +/- 0.01, N = 3)
    ext4 mdadm raid5 4xNVME                 20.01   (SE +/- 0.05, N = 3)

SQLite 3.30.1 - Threads / Copies: 32 (Seconds, fewer is better)
    ZFS zraid1 4xNVME Pool                  23.19   (SE +/- 0.08, N = 3)
    ZFS zraid1 8xNVME Pool                  27.00   (SE +/- 0.04, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   26.89   (SE +/- 0.07, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           27.58   (SE +/- 0.00, N = 3)
    ext4 mdadm raid5 4xNVME                 31.79   (SE +/- 0.01, N = 3)

SQLite 3.30.1 - Threads / Copies: 64 (Seconds, fewer is better)
    ZFS zraid1 4xNVME Pool                  37.45   (SE +/- 0.01, N = 3)
    ZFS zraid1 8xNVME Pool                  40.30   (SE +/- 0.05, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   40.00   (SE +/- 0.01, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           40.98   (SE +/- 0.01, N = 3)
    ext4 mdadm raid5 4xNVME                 45.72   (SE +/- 0.13, N = 3)

SQLite 3.30.1 - Threads / Copies: 128 (Seconds, fewer is better)
    ZFS zraid1 4xNVME Pool                  76.09   (SE +/- 0.10, N = 3)
    ZFS zraid1 8xNVME Pool                  78.15   (SE +/- 0.10, N = 3)
    ZFS zraid1 8xNVME Pool no Compression   77.05   (SE +/- 0.27, N = 3)
    ext4 Crucial P5 Plus 1TB NVME           59.89   (SE +/- 0.03, N = 3)
    ext4 mdadm raid5 4xNVME                 80.15   (SE +/- 0.28, N = 3)

31 Results Shown

Compile Bench:
  Compile
  Initial Create
  Read Compiled Tree
Dbench:
  12 Clients
  1 Clients
Flexible IO Tester:
  Rand Read - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Rand Write - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Seq Read - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Seq Write - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
FS-Mark:
  1000 Files, 1MB Size
  5000 Files, 1MB Size, 4 Threads
  4000 Files, 32 Sub Dirs, 1MB Size
  1000 Files, 1MB Size, No Sync/FSync
PostMark
SQLite:
  1
  8
  32
  64
  128