pts-disk-different-nvmes

AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211127-NE-PTSDISKDI57

Result Identifier          Date                Test Duration
ZFS zraid1 4xNVME Pool     November 12 2022    3 Hours, 23 Minutes
ext4 mdadm raid5 4xNVME    November 12 2022    4 Hours, 33 Minutes
Average                                        3 Hours, 58 Minutes


System Information

Processor:          AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
Motherboard:        Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
Chipset:            AMD 17h
Memory:             64GB
Disk:               Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
Graphics:           NVIDIA Quadro P400
Audio:              NVIDIA GP107GL HD Audio
Monitor:            DELL S2340T
Network:            4 x Intel I350 + Intel 8265 / 8275
OS:                 Debian 11
Kernel:             5.10.0-19-amd64 (x86_64)
Compiler:           GCC 10.2.1 20210110
File-Systems:       zfs, ext4
Screen Resolution:  1920x1080

System Logs:
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x8001137
- ZFS zraid1 4xNVME Pool: NONE
- Python 3.9.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- ext4 mdadm raid5 4xNVME: NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0] Block Size: 4096

ZFS zraid1 4xNVME Pool vs. ext4 mdadm raid5 4xNVME Comparison

[Chart: per-test percentage differences between the two configurations, ranging from roughly 5% up to 518.7%, across the Flexible IO Tester, FS-Mark, Compile Bench, Dbench, PostMark, and SQLite results detailed below.]
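
The percentages express how much faster the leading configuration was on a given test. For example, in the 4KB random-read IOPS result below, the ext4 array reaches 306667 IOPS against 49567 IOPS for the ZFS pool; 306667 / 49567 is roughly 6.187, so ext4 is about 518.7% faster, which is the chart's largest delta.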

Result Overview

Test                                                     ZFS zraid1 4xNVME Pool   ext4 mdadm raid5 4xNVME
Compile Bench: Compile (MB/s)                            1398.57                  1478.00
Compile Bench: Initial Create (MB/s)                     222.04                   411.07
Compile Bench: Read Compiled Tree (MB/s)                 1212.73                  2642.03
Dbench: 12 Clients (MB/s)                                2626.03                  2484.85
FIO: Rand Read - 2MB (MB/s)                              4245                     -
FIO: Rand Read - 2MB (IOPS)                              2119                     6666
FIO: Rand Read - 4KB (MB/s)                              194                      1199
FIO: Rand Read - 4KB (IOPS)                              49567                    306667
FIO: Rand Write - 2MB (MB/s)                             2458                     958
FIO: Rand Write - 2MB (IOPS)                             1225                     476
FIO: Rand Write - 4KB (MB/s)                             162                      324
FIO: Rand Write - 4KB (IOPS)                             41500                    82967
FIO: Seq Read - 2MB (MB/s)                               4075                     -
FIO: Seq Read - 2MB (IOPS)                               2034                     6706
FIO: Seq Read - 4KB (MB/s)                               1287                     865
FIO: Seq Read - 4KB (IOPS)                               329333                   221333
FIO: Seq Write - 2MB (MB/s)                              2065                     1308
FIO: Seq Write - 2MB (IOPS)                              1029                     650
FIO: Seq Write - 4KB (MB/s)                              670                      449
FIO: Seq Write - 4KB (IOPS)                              171333                   115000
FS-Mark: 1000 Files, 1MB Size (Files/s)                  646.5                    254.0
FS-Mark: 5000 Files, 1MB Size, 4 Threads (Files/s)       1632.7                   537.9
FS-Mark: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s)     684.2                    284.7
FS-Mark: 1000 Files, 1MB Size, No Sync/FSync (Files/s)   1359.3                   1733.3
Dbench: 1 Clients (MB/s)                                 410.171                  453.170
PostMark: Disk Transaction Performance (TPS)             3289                     5068
SQLite: 1 Thread / Copy (Seconds)                        7.852                    8.810
SQLite: 8 Threads / Copies (Seconds)                     11.267                   20.005
SQLite: 32 Threads / Copies (Seconds)                    23.191                   31.793
SQLite: 64 Threads / Copies (Seconds)                    37.448                   45.720
SQLite: 128 Threads / Copies (Seconds)                   76.088                   80.145

All FIO rows use IO Engine: Linux AIO, Buffered: No, Direct: Yes, Disk Target: Default Test Directory. Higher is better except for the SQLite times; a dash marks a value absent from the result file.

Compile Bench

Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating, and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. This test is set up to use the makej mode with 10 initial directories. Learn more via the OpenBenchmarking.org test page.
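
To make the workload concrete, here is a rough Python sketch of the kind of tree churn compilebench measures. It is a hypothetical illustration, not the actual benchmark (which replays recorded kernel-tree file layouts); the directory counts, file sizes, and path are arbitrary:

    import os, time

    ROOT = "/tmp/cb_demo"    # hypothetical scratch directory
    DATA = b"x" * 65536      # a 64 KiB chunk stands in for source/object files

    def create_tree(n_dirs=10, files_per_dir=100):
        # Simulate the "initial create" phase: populate directories with files.
        written = 0
        for d in range(n_dirs):
            path = os.path.join(ROOT, f"dir{d}")
            os.makedirs(path, exist_ok=True)
            for f in range(files_per_dir):
                with open(os.path.join(path, f"file{f}"), "wb") as fh:
                    fh.write(DATA)
                written += len(DATA)
        return written

    def read_tree():
        # Simulate the "read compiled tree" phase: walk and re-read everything.
        total = 0
        for dirpath, _, files in os.walk(ROOT):
            for name in files:
                with open(os.path.join(dirpath, name), "rb") as fh:
                    total += len(fh.read())
        return total

    t0 = time.time(); n = create_tree()
    print(f"create: {n / (time.time() - t0) / 1e6:.1f} MB/s")
    t0 = time.time(); n = read_tree()
    print(f"read:   {n / (time.time() - t0) / 1e6:.1f} MB/s")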

Compile Bench 0.6 - Test: Compile (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  1398.57 (SE +/- 6.06, N = 3)
  ext4 mdadm raid5 4xNVME: 1478.00 (SE +/- 11.73, N = 3)

Compile Bench 0.6 - Test: Initial Create (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  222.04 (SE +/- 1.57, N = 3)
  ext4 mdadm raid5 4xNVME: 411.07 (SE +/- 0.77, N = 3)

Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  1212.73 (SE +/- 8.00, N = 3)
  ext4 mdadm raid5 4xNVME: 2642.03 (SE +/- 42.24, N = 3)

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.
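
IOR runs are launched through mpirun, which is why every IOR entry below reports an MPI launch error instead of a score. For reference, a minimal Python sketch of how one of these configurations might be invoked (assuming the ior binary and an MPI runtime are installed; the process count and target path are illustrative):

    import subprocess

    # -a POSIX selects the POSIX I/O backend, -b is the per-task block size,
    # -t the transfer size, -w/-r request write and read phases, and -o names
    # the test file on the filesystem under test.
    cmd = ["mpirun", "-np", "4", "ior",
           "-a", "POSIX", "-b", "8m", "-t", "2m", "-w", "-r",
           "-o", "/mnt/target/ior_testfile"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)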

Block Size: 8MB - Disk Target: Default Test Directory

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Dbench

Dbench is a benchmark designed by the Samba project as a free alternative to netbench, but dbench contains only file-system calls for testing disk performance. Learn more via the OpenBenchmarking.org test page.
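
A minimal Python sketch of driving the two client counts used in this comparison (assuming the dbench binary and its default loadfile are installed; the 60-second runtime is arbitrary):

    import subprocess

    for clients in (1, 12):
        # dbench takes the client count as a positional argument;
        # -t caps the run at the given number of seconds.
        out = subprocess.run(["dbench", "-t", "60", str(clients)],
                             capture_output=True, text=True)
        print(out.stdout.strip().splitlines()[-1])  # throughput summary line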

Dbench 4.0 - 12 Clients (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  2626.03 (SE +/- 5.07, N = 3)
  ext4 mdadm raid5 4xNVME: 2484.85 (SE +/- 21.52, N = 3)
  1. (CC) gcc options: -lpopt -O2

IOR

Block Size: 64MB - Disk Target: Default Test Directory

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Flexible IO Tester

FIO, the Flexible I/O Tester, is an advanced Linux disk benchmark supporting multiple I/O engines and a wealth of options. FIO was written by Jens Axboe for testing the Linux I/O subsystem and schedulers. Learn more via the OpenBenchmarking.org test page.
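
Every FIO result below uses the same engine and buffering settings, varying only the access pattern and block size. A sketch of an equivalent standalone invocation from Python (assuming fio is on PATH; the file size, runtime, and target directory are illustrative):

    import subprocess

    def run_fio(rw, bs, directory="/mnt/target"):
        # rw is fio's pattern name: randread, randwrite, read, or write.
        cmd = ["fio", "--name=pts-style", f"--rw={rw}", f"--bs={bs}",
               "--ioengine=libaio",           # Linux AIO engine, as tested here
               "--direct=1", "--buffered=0",  # unbuffered direct I/O
               f"--directory={directory}", "--size=1g",
               "--runtime=30", "--time_based",
               "--output-format=json"]        # JSON output carries bw and iops
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    print(run_fio("randread", "4k")[:300])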

Flexible IO Tester 3.29 - Type: Random Read - Block Size: 2MB (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  4245 (SE +/- 58.64, N = 3)
  (no result reported for ext4 mdadm raid5 4xNVME)

Flexible IO Tester 3.29 - Type: Random Read - Block Size: 2MB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  2119 (SE +/- 29.49, N = 3)
  ext4 mdadm raid5 4xNVME: 6666 (SE +/- 14.99, N = 3)

(All Flexible IO Tester results use IO Engine: Linux AIO, Buffered: No, Direct: Yes, Disk Target: Default Test Directory.)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native (applies to all Flexible IO Tester results)

IOR

Block Size: 16MB - Disk Target: /mnt/mdadm_raid5_4disks

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Flexible IO Tester

Flexible IO Tester 3.29 - Type: Random Read - Block Size: 4KB (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  194 (SE +/- 0.88, N = 3)
  ext4 mdadm raid5 4xNVME: 1199 (SE +/- 5.17, N = 3)

Flexible IO Tester 3.29 - Type: Random Read - Block Size: 4KB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  49567 (SE +/- 185.59, N = 3)
  ext4 mdadm raid5 4xNVME: 306667 (SE +/- 1333.33, N = 3)

Flexible IO Tester 3.29 - Type: Random Write - Block Size: 2MB (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  2458 (SE +/- 17.98, N = 3)
  ext4 mdadm raid5 4xNVME: 958 (SE +/- 3.53, N = 3)

IOR

Block Size: 64MB - Disk Target: /mnt/mdadm_raid5_4disks

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Flexible IO Tester

Flexible IO Tester 3.29 - Type: Random Write - Block Size: 2MB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  1225 (SE +/- 8.99, N = 3)
  ext4 mdadm raid5 4xNVME: 476 (SE +/- 1.76, N = 3)

Flexible IO Tester 3.29 - Type: Random Write - Block Size: 4KB (MB/s, more is better)
  SE +/- 2.00, N = 3
  ZFS zraid1 4xNVME Pool:  162
  ext4 mdadm raid5 4xNVME: 324

Flexible IO Tester 3.29 - Type: Random Write - Block Size: 4KB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  41500 (SE +/- 57.74, N = 3)
  ext4 mdadm raid5 4xNVME: 82967 (SE +/- 569.60, N = 3)

Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 2MB (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  4075 (SE +/- 37.36, N = 3)
  (no result reported for ext4 mdadm raid5 4xNVME)

IOR

Block Size: 1024MB - Disk Target: /mnt/mdadm_raid5_4disks

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Flexible IO Tester

Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 2MB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  2034 (SE +/- 18.75, N = 3)
  ext4 mdadm raid5 4xNVME: 6706 (SE +/- 3.61, N = 3)

IOR

Block Size: 16MB - Disk Target: Default Test Directory

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Block Size: 32MB - Disk Target: Default Test Directory

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Flexible IO Tester

Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 4KB (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  1287 (SE +/- 7.09, N = 3)
  ext4 mdadm raid5 4xNVME: 865 (SE +/- 11.46, N = 3)

Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 4KB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  329333 (SE +/- 1855.92, N = 3)
  ext4 mdadm raid5 4xNVME: 221333 (SE +/- 2728.45, N = 3)

Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 2MB (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  2065 (SE +/- 6.36, N = 3)
  ext4 mdadm raid5 4xNVME: 1308 (SE +/- 1.73, N = 3)

Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 2MB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  1029 (SE +/- 2.91, N = 3)
  ext4 mdadm raid5 4xNVME: 650 (SE +/- 0.88, N = 3)

IOR

Block Size: 256MB - Disk Target: /nvme_pool

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Flexible IO Tester

Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 4KB (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  670 (SE +/- 1.86, N = 3)
  ext4 mdadm raid5 4xNVME: 449 (SE +/- 1.15, N = 3)

IOR

Block Size: 64MB - Disk Target: /nvme_pool

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Block Size: 256MB - Disk Target: /mnt/mdadm_raid5_4disks

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Flexible IO Tester

Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 4KB (IOPS, more is better)
  ZFS zraid1 4xNVME Pool:  171333 (SE +/- 666.67, N = 3)
  ext4 mdadm raid5 4xNVME: 115000 (SE +/- 577.35, N = 3)

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.
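
fs_mark is driven entirely by command-line switches; a Python sketch reproducing the four variants below (assuming the fs_mark binary is installed; the target directory is illustrative and the flag mapping is an assumption based on fs_mark's documented options):

    import subprocess

    BASE = ["fs_mark", "-d", "/mnt/target/fsmark"]

    # -n file count, -s file size in bytes (1MB = 1048576), -t threads,
    # -D subdirectory count, -S 0 disables the sync/fsync calls.
    variants = [
        ["-n", "1000", "-s", "1048576"],                 # 1000 Files, 1MB Size
        ["-n", "5000", "-s", "1048576", "-t", "4"],      # 5000 Files, 4 Threads
        ["-n", "4000", "-s", "1048576", "-D", "32"],     # 4000 Files, 32 Sub Dirs
        ["-n", "1000", "-s", "1048576", "-S", "0"],      # No Sync/FSync
    ]
    for extra in variants:
        subprocess.run(BASE + extra)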

FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s, more is better)
  ZFS zraid1 4xNVME Pool:  646.5 (SE +/- 5.49, N = 15)
  ext4 mdadm raid5 4xNVME: 254.0 (SE +/- 9.50, N = 15)

FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s, more is better)
  ZFS zraid1 4xNVME Pool:  1632.7 (SE +/- 4.95, N = 3)
  ext4 mdadm raid5 4xNVME: 537.9 (SE +/- 4.55, N = 12)

IOR

Block Size: 4MB - Disk Target: /mnt/single_ssd

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

FS-Mark

FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s, more is better)
  ZFS zraid1 4xNVME Pool:  684.2 (SE +/- 7.90, N = 3)
  ext4 mdadm raid5 4xNVME: 284.7 (SE +/- 2.79, N = 3)

FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync (Files/s, more is better)
  ZFS zraid1 4xNVME Pool:  1359.3 (SE +/- 12.01, N = 15)
  ext4 mdadm raid5 4xNVME: 1733.3 (SE +/- 19.84, N = 4)

IOR

For each of the following block size / disk target combinations, both configurations failed identically - the test quit with a non-zero exit status on all three runs (E: mpirun was unable to launch the specified application as it could not access):

Block Size: 2MB - Disk Target: Default Test Directory
Block Size: 4MB - Disk Target: Default Test Directory
Block Size: 512MB - Disk Target: /mnt/single_ssd
Block Size: 1024MB - Disk Target: /mnt/single_ssd
Block Size: 64MB - Disk Target: /mnt/single_ssd
Block Size: 256MB - Disk Target: /mnt/single_ssd
Block Size: 16MB - Disk Target: /mnt/single_ssd
Block Size: 32MB - Disk Target: /mnt/single_ssd
Block Size: 8MB - Disk Target: /mnt/single_ssd
Block Size: 32MB - Disk Target: /mnt/mdadm_raid5_4disks
Block Size: 2MB - Disk Target: /mnt/single_ssd
Block Size: 1024MB - Disk Target: /nvme_pool

IOzone

The IOzone benchmark tests hard disk drive / file-system performance. Learn more via the OpenBenchmarking.org test page.
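
The failed entry below is IOzone's 8GB write run. A sketch of an equivalent direct invocation from Python (assuming iozone is installed; the 1MB record size is an assumption, not taken from the test profile):

    import subprocess

    # -i 0 selects the write/rewrite test, -s the file size, -r the record size.
    subprocess.run(["iozone", "-i", "0", "-s", "8g", "-r", "1m"])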

8GB Write Performance

ZFS zraid1 4xNVME Pool: The test run did not produce a result on any of the three attempts.

ext4 mdadm raid5 4xNVME: The test run did not produce a result on any of the three attempts.

IOR

Block Size: 2MB - Disk Target: /mnt/mdadm_raid5_4disks

ZFS zraid1 4xNVME Pool: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

ext4 mdadm raid5 4xNVME: The test quit with a non-zero exit status on all three runs. E: mpirun was unable to launch the specified application as it could not access

Dbench

Dbench 4.0 - 1 Clients (MB/s, more is better)
  ZFS zraid1 4xNVME Pool:  410.17 (SE +/- 0.85, N = 3)
  ext4 mdadm raid5 4xNVME: 453.17 (SE +/- 0.61, N = 3)
  1. (CC) gcc options: -lpopt -O2

IOR

For each of the following block size / disk target combinations, both configurations failed identically - the test quit with a non-zero exit status on all three runs (E: mpirun was unable to launch the specified application as it could not access):

Block Size: 4MB - Disk Target: /mnt/mdadm_raid5_4disks
Block Size: 2MB - Disk Target: /nvme_pool
Block Size: 8MB - Disk Target: /mnt/mdadm_raid5_4disks
Block Size: 4MB - Disk Target: /nvme_pool
Block Size: 256MB - Disk Target: Default Test Directory
Block Size: 8MB - Disk Target: /nvme_pool
Block Size: 512MB - Disk Target: Default Test Directory
Block Size: 16MB - Disk Target: /nvme_pool
Block Size: 1024MB - Disk Target: Default Test Directory
Block Size: 32MB - Disk Target: /nvme_pool
Block Size: 512MB - Disk Target: /mnt/mdadm_raid5_4disks
Block Size: 512MB - Disk Target: /nvme_pool

PostMark

This is a test of NetApp's PostMark benchmark, designed to simulate the small-file workloads typical of web and mail servers. This test profile sets PostMark to perform 25,000 transactions across 500 files held simultaneously, with file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
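
A rough Python sketch of the workload shape those settings describe; this is a greatly simplified hypothetical stand-in, not the real benchmark, which mixes create/delete and read/append transactions and reports a transactions-per-second figure:

    import os, random, time

    ROOT = "/tmp/pm_demo"        # hypothetical working directory
    os.makedirs(ROOT, exist_ok=True)
    rnd = random.Random(0)

    def make_file(i):
        size = rnd.randint(5 * 1024, 512 * 1024)   # 5KB..512KB, per the profile
        with open(os.path.join(ROOT, f"f{i}"), "wb") as fh:
            fh.write(os.urandom(size))

    for i in range(500):         # 500 files held simultaneously
        make_file(i)

    t0 = time.time()
    for _ in range(25_000):      # 25,000 transactions
        i = rnd.randrange(500)
        if rnd.random() < 0.5:   # delete and recreate the file
            os.remove(os.path.join(ROOT, f"f{i}"))
            make_file(i)
        else:                    # read it back in full
            with open(os.path.join(ROOT, f"f{i}"), "rb") as fh:
                fh.read()
    print(f"{25_000 / (time.time() - t0):.0f} transactions/sec")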

PostMark 1.51 - Disk Transaction Performance (TPS, more is better)
  SE +/- 34.00, N = 3
  ZFS zraid1 4xNVME Pool:  3289
  ext4 mdadm raid5 4xNVME: 5068
  1. (CC) gcc options: -O3

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
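
In other words, the score is wall-clock time for a fixed batch of inserts against an indexed table. A minimal single-threaded Python equivalent (row count, schema, and per-insert commit are illustrative choices; committing each insert is what forces writes out to the filesystem):

    import sqlite3, time

    con = sqlite3.connect("/tmp/sqlite_demo.db")
    con.execute("CREATE TABLE IF NOT EXISTS t (k INTEGER, v TEXT)")
    con.execute("CREATE INDEX IF NOT EXISTS t_k ON t (k)")  # indexed, as tested

    t0 = time.time()
    for i in range(10_000):
        con.execute("INSERT INTO t VALUES (?, ?)", (i, "payload"))
        con.commit()             # one transaction per insert hits the disk
    print(f"{time.time() - t0:.3f} seconds")
    con.close()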

SQLite 3.30.1 - Threads / Copies: 1 (Seconds, fewer is better)
  ZFS zraid1 4xNVME Pool:  7.852 (SE +/- 0.071, N = 7)
  ext4 mdadm raid5 4xNVME: 8.810 (SE +/- 0.069, N = 10)

SQLite 3.30.1 - Threads / Copies: 8 (Seconds, fewer is better)
  ZFS zraid1 4xNVME Pool:  11.27 (SE +/- 0.08, N = 3)
  ext4 mdadm raid5 4xNVME: 20.01 (SE +/- 0.05, N = 3)

SQLite 3.30.1 - Threads / Copies: 32 (Seconds, fewer is better)
  ZFS zraid1 4xNVME Pool:  23.19 (SE +/- 0.08, N = 3)
  ext4 mdadm raid5 4xNVME: 31.79 (SE +/- 0.01, N = 3)

SQLite 3.30.1 - Threads / Copies: 64 (Seconds, fewer is better)
  ZFS zraid1 4xNVME Pool:  37.45 (SE +/- 0.01, N = 3)
  ext4 mdadm raid5 4xNVME: 45.72 (SE +/- 0.13, N = 3)

SQLite 3.30.1 - Threads / Copies: 128 (Seconds, fewer is better)
  ZFS zraid1 4xNVME Pool:  76.09 (SE +/- 0.10, N = 3)
  ext4 mdadm raid5 4xNVME: 80.15 (SE +/- 0.28, N = 3)

1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread (applies to all SQLite results)

31 Results Shown

Compile Bench:
  Compile
  Initial Create
  Read Compiled Tree
Dbench: 12 Clients
Flexible IO Tester:
  Rand Read - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Rand Write - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Seq Read - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
  Seq Write - Linux AIO - No - Yes - 2MB - Default Test Directory:
    MB/s
    IOPS
  Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory:
    MB/s
    IOPS
FS-Mark:
  1000 Files, 1MB Size
  5000 Files, 1MB Size, 4 Threads
  4000 Files, 32 Sub Dirs, 1MB Size
  1000 Files, 1MB Size, No Sync/FSync
Dbench: 1 Clients
PostMark
SQLite (Threads / Copies):
  1
  8
  32
  64
  128