pts-disk-different-nvmes AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211142-NE-PTSDISKDI73&grs&sor.
pts-disk-different-nvmes

Tested configurations:
  - ZFS raidz1 4xNVME
  - ext4 soft raid5 4xNVME
  - ext4 Crucial P5 Plus 1TB NVME
  - ZFS raidz1 8xNVME
  - ZFS raidz1 8xNVME no Compression
  - ZFS mirror 8xNVME
  - ext4 soft raid5 8xNVME
  - ext4 WD_Black SN770 2TB NVMe

System under test:
  Processor: AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
  Motherboard: Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
  Chipset: AMD 17h
  Memory: 64GB
  Disk: Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
  Graphics: NVIDIA Quadro P400
  Audio: NVIDIA GP107GL HD Audio
  Monitor: DELL S2340T
  Network: 4 x Intel I350 + Intel 8265 / 8275
  OS: Debian 11
  Kernel: 5.10.0-19-amd64 (x86_64)
  Compiler: GCC 10.2.1 20210110
  File-System: zfs (ZFS configurations) / ext4 (ext4 configurations)
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8001137
Disk Scheduler Details: ZFS raidz1 4xNVME, ZFS raidz1 8xNVME, ZFS raidz1 8xNVME no Compression, ZFS mirror 8xNVME: NONE
Python Details: Python 3.9.2
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT vulnerable; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
Disk Details:
  ext4 soft raid5 4xNVME: NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0] / Block Size: 4096
  ext4 Crucial P5 Plus 1TB NVME: NONE / relatime,rw / Block Size: 4096
  ext4 soft raid5 8xNVME: NONE / relatime,rw,stripe=896 / raid5 nvme9n1[8] nvme8n1[6] nvme7n1[5] nvme6n1[4] nvme4n1[3] nvme3n1[2] nvme2n1[1] nvme1n1[0] / Block Size: 4096
  ext4 WD_Black SN770 2TB NVMe: NONE / relatime,rw / Block Size: 4096
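The disk details above identify md RAID5 member layouts and ZFS pool topologies but not the exact creation commands. A hypothetical sketch of two of the tested layouts follows; device names, `ashift`, and partitioning are assumptions not recorded in this export.

```shell
# ZFS raidz1 across four NVMe drives. ZFS compression defaults to on for
# these PTS runs; the "no Compression" configuration would instead set
# compression=off on the pool or dataset.
zpool create -o ashift=12 tank raidz1 nvme1n1 nvme2n1 nvme3n1 nvme4n1

# ext4 on md software RAID5 across the same four drives, matching the
# member list reported in the disk details (nvme1n1p1..nvme4n1p1):
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
mkfs.ext4 /dev/md0
```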
Benchmarks run (full per-test results charted below):
  Flexible IO Tester 3.29: Random Write, Random Read, Sequential Write, Sequential Read (Linux AIO, Buffered: No, Direct: Yes; 4KB and 2MB block sizes; Default Test Directory), reported in both MB/s and IOPS
  Dbench 4.0: 1 Clients, 12 Clients
  FS-Mark 3.3: 5000 Files 1MB Size 4 Threads; 1000 Files 1MB Size; 1000 Files 1MB Size No Sync/FSync; 4000 Files 32 Sub Dirs 1MB Size
  SQLite 3.30.1: Threads / Copies: 1, 8, 32, 64, 128
  Compile Bench 0.6: Read Compiled Tree, Initial Create, Compile
  PostMark 1.51: Disk Transaction Performance
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 WD_Black SN770 2TB NVMe: 1491
  ext4 Crucial P5 Plus 1TB NVME: 878
  ext4 soft raid5 4xNVME: 324
  ext4 soft raid5 8xNVME: 293
  ZFS mirror 8xNVME: 267
  ZFS raidz1 8xNVME no Compression: 188
  ZFS raidz1 8xNVME: 177
  ZFS raidz1 4xNVME: 162
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
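A roughly equivalent standalone fio invocation for this scenario might look as follows; the file size, runtime, and queue depth are assumptions, since the export only records the parameters shown in the chart title.

```shell
# 4KB random write, Linux AIO, unbuffered direct I/O, against a file in
# the filesystem under test (directory path is hypothetical):
fio --name=randwrite-4k \
    --directory=/mnt/testdir \
    --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k \
    --size=1g --runtime=60 --time_based
```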
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 WD_Black SN770 2TB NVMe: 381400 (SE +/- 3841.87, N = 5)
  ext4 Crucial P5 Plus 1TB NVME: 224667 (SE +/- 666.67, N = 3)
  ext4 soft raid5 4xNVME: 82967 (SE +/- 569.60, N = 3)
  ext4 soft raid5 8xNVME: 75067 (SE +/- 88.19, N = 3)
  ZFS mirror 8xNVME: 68333 (SE +/- 88.19, N = 3)
  ZFS raidz1 8xNVME no Compression: 48200 (SE +/- 57.74, N = 3)
  ZFS raidz1 8xNVME: 45367 (SE +/- 33.33, N = 3)
  ZFS raidz1 4xNVME: 41500 (SE +/- 57.74, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
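The MB/s and IOPS charts report the same runs in two units, so for a fixed 4 KiB block size one can be cross-checked against the other. The reported "MB/s" figures line up with MiB/s (2^20 bytes), as this sanity check over two rows of this result file shows:

```python
BLOCK = 4096  # 4 KiB block size used by these fio runs, in bytes

def iops_to_mibps(iops, block=BLOCK):
    """Convert an IOPS figure to MiB/s for a fixed block size."""
    return iops * block / 2**20

# ZFS raidz1 4xNVME: 41500 IOPS was charted alongside 162 MB/s
print(round(iops_to_mibps(41500), 1))   # close to 162

# ext4 WD_Black SN770 2TB NVMe: 381400 IOPS alongside 1491 MB/s
print(round(iops_to_mibps(381400), 1))  # close to 1491
```

The small residual differences are expected, since MB/s and IOPS are averaged independently over repeated runs.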
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 WD_Black SN770 2TB NVMe: 414000 (SE +/- 2886.75, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 329667 (SE +/- 881.92, N = 3)
  ext4 soft raid5 4xNVME: 306667 (SE +/- 1333.33, N = 3)
  ext4 soft raid5 8xNVME: 272667 (SE +/- 333.33, N = 3)
  ZFS mirror 8xNVME: 224333 (SE +/- 666.67, N = 3)
  ZFS raidz1 8xNVME: 57433 (SE +/- 176.38, N = 3)
  ZFS raidz1 8xNVME no Compression: 56167 (SE +/- 633.33, N = 3)
  ZFS raidz1 4xNVME: 49567 (SE +/- 185.59, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 WD_Black SN770 2TB NVMe: 1616 (SE +/- 10.97, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 1286 (SE +/- 3.48, N = 3)
  ext4 soft raid5 4xNVME: 1199 (SE +/- 5.17, N = 3)
  ext4 soft raid5 8xNVME: 1065 (SE +/- 2.03, N = 3)
  ZFS mirror 8xNVME: 877 (SE +/- 1.86, N = 3)
  ZFS raidz1 8xNVME: 224 (SE +/- 0.67, N = 3)
  ZFS raidz1 8xNVME no Compression: 219 (SE +/- 2.67, N = 3)
  ZFS raidz1 4xNVME: 194 (SE +/- 0.88, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ext4 soft raid5 8xNVME: 13225
  ext4 soft raid5 4xNVME: 6666
  ZFS mirror 8xNVME: 3305
  ZFS raidz1 4xNVME: 2119
  ZFS raidz1 8xNVME no Compression: 2081
  ZFS raidz1 8xNVME: 2015
  ext4 WD_Black SN770 2TB NVMe: 1771
  ext4 Crucial P5 Plus 1TB NVME: 1765
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ext4 soft raid5 8xNVME: 13200
  ext4 soft raid5 4xNVME: 6706
  ZFS mirror 8xNVME: 2073
  ZFS raidz1 4xNVME: 2034
  ZFS raidz1 8xNVME: 1999
  ZFS raidz1 8xNVME no Compression: 1995
  ext4 WD_Black SN770 2TB NVMe: 1775
  ext4 Crucial P5 Plus 1TB NVME: 1765
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Dbench 4.0 - 1 Clients - MB/s, More Is Better
  ext4 WD_Black SN770 2TB NVMe: 579.24 (SE +/- 0.89, N = 3)
  ZFS raidz1 8xNVME no Compression: 458.91 (SE +/- 1.41, N = 3)
  ext4 soft raid5 4xNVME: 453.17 (SE +/- 0.61, N = 3)
  ZFS mirror 8xNVME: 435.70 (SE +/- 0.70, N = 3)
  ext4 soft raid5 8xNVME: 423.83 (SE +/- 1.02, N = 3)
  ZFS raidz1 4xNVME: 410.17 (SE +/- 0.85, N = 3)
  ZFS raidz1 8xNVME: 380.35 (SE +/- 0.25, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 102.99 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 12 Clients - MB/s, More Is Better
  ext4 WD_Black SN770 2TB NVMe: 3196.68 (SE +/- 2.70, N = 3)
  ZFS raidz1 8xNVME no Compression: 2783.64 (SE +/- 16.46, N = 3)
  ZFS mirror 8xNVME: 2730.75 (SE +/- 3.38, N = 3)
  ZFS raidz1 4xNVME: 2626.03 (SE +/- 5.07, N = 3)
  ext4 soft raid5 4xNVME: 2484.85 (SE +/- 21.52, N = 3)
  ZFS raidz1 8xNVME: 2472.42 (SE +/- 2.03, N = 3)
  ext4 soft raid5 8xNVME: 2330.16 (SE +/- 16.88, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 658.79 (SE +/- 1.20, N = 3)
1. (CC) gcc options: -lpopt -O2
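The two Dbench charts differ only in client count. A standalone run approximating them might look like the following; the runtime is an assumption, as PTS's exact arguments are not recorded in this export.

```shell
# dbench takes the number of simulated clients as its positional
# argument and reports an aggregate throughput in MB/s:
dbench -t 60 1    # 1 client
dbench -t 60 12   # 12 clients
```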
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 WD_Black SN770 2TB NVMe: 1514 (SE +/- 9.82, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 891 (SE +/- 3.93, N = 3)
  ZFS mirror 8xNVME: 771 (SE +/- 0.67, N = 3)
  ZFS raidz1 8xNVME no Compression: 682 (SE +/- 2.08, N = 3)
  ZFS raidz1 4xNVME: 670 (SE +/- 1.86, N = 3)
  ZFS raidz1 8xNVME: 665 (SE +/- 1.67, N = 3)
  ext4 soft raid5 4xNVME: 449 (SE +/- 1.15, N = 3)
  ext4 soft raid5 8xNVME: 387 (SE +/- 2.33, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 WD_Black SN770 2TB NVMe: 387333 (SE +/- 2603.42, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 228000 (SE +/- 1000.00, N = 3)
  ZFS mirror 8xNVME: 197333 (SE +/- 333.33, N = 3)
  ZFS raidz1 8xNVME no Compression: 174667 (SE +/- 333.33, N = 3)
  ZFS raidz1 4xNVME: 171333 (SE +/- 666.67, N = 3)
  ZFS raidz1 8xNVME: 170667 (SE +/- 333.33, N = 3)
  ext4 soft raid5 4xNVME: 115000 (SE +/- 577.35, N = 3)
  ext4 soft raid5 8xNVME: 99067 (SE +/- 617.34, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ZFS mirror 8xNVME: 1783
  ext4 WD_Black SN770 2TB NVMe: 1659
  ext4 Crucial P5 Plus 1TB NVME: 1623
  ZFS raidz1 8xNVME no Compression: 1337
  ZFS raidz1 8xNVME: 1327
  ZFS raidz1 4xNVME: 1225
  ext4 soft raid5 8xNVME: 512
  ext4 soft raid5 4xNVME: 476
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ZFS mirror 8xNVME: 3573
  ext4 WD_Black SN770 2TB NVMe: 3325
  ext4 Crucial P5 Plus 1TB NVME: 3252
  ZFS raidz1 8xNVME no Compression: 2681
  ZFS raidz1 8xNVME: 2661
  ZFS raidz1 4xNVME: 2458
  ext4 soft raid5 8xNVME: 1032
  ext4 soft raid5 4xNVME: 958
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 Crucial P5 Plus 1TB NVME: 335667 (SE +/- 4666.67, N = 3)
  ZFS mirror 8xNVME: 332000 (SE +/- 2516.61, N = 3)
  ZFS raidz1 8xNVME no Compression: 331000 (SE +/- 1000.00, N = 3)
  ZFS raidz1 4xNVME: 329333 (SE +/- 1855.92, N = 3)
  ZFS raidz1 8xNVME: 326333 (SE +/- 3382.96, N = 3)
  ext4 soft raid5 4xNVME: 221333 (SE +/- 2728.45, N = 3)
  ext4 soft raid5 8xNVME: 199667 (SE +/- 333.33, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 91333 (SE +/- 202.76, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
  ext4 Crucial P5 Plus 1TB NVME: 1311
  ZFS mirror 8xNVME: 1296
  ZFS raidz1 8xNVME no Compression: 1294
  ZFS raidz1 4xNVME: 1287
  ZFS raidz1 8xNVME: 1275
  ext4 soft raid5 4xNVME: 865
  ext4 soft raid5 8xNVME: 779
  ext4 WD_Black SN770 2TB NVMe: 357
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads - Files/s, More Is Better
  ZFS mirror 8xNVME: 1711.4 (SE +/- 2.79, N = 3)
  ZFS raidz1 8xNVME: 1655.6 (SE +/- 6.18, N = 3)
  ZFS raidz1 4xNVME: 1632.7 (SE +/- 4.95, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 1631.5 (SE +/- 8.47, N = 3)
  ZFS raidz1 8xNVME no Compression: 1240.0 (SE +/- 2.97, N = 3)
  ext4 soft raid5 8xNVME: 570.0 (SE +/- 4.70, N = 3)
  ext4 soft raid5 4xNVME: 537.9 (SE +/- 4.55, N = 12)
  ext4 Crucial P5 Plus 1TB NVME: 522.1 (SE +/- 125.73, N = 9)
FS-Mark 3.3 - Test: 1000 Files, 1MB Size - Files/s, More Is Better
  ext4 WD_Black SN770 2TB NVMe: 693.8 (SE +/- 9.94, N = 3)
  ZFS raidz1 8xNVME: 646.7 (SE +/- 1.87, N = 3)
  ZFS raidz1 4xNVME: 646.5 (SE +/- 5.49, N = 15)
  ZFS raidz1 8xNVME no Compression: 589.7 (SE +/- 6.13, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 584.1 (SE +/- 13.64, N = 12)
  ZFS mirror 8xNVME: 581.1 (SE +/- 4.66, N = 3)
  ext4 soft raid5 8xNVME: 264.4 (SE +/- 0.85, N = 3)
  ext4 soft raid5 4xNVME: 254.0 (SE +/- 9.50, N = 15)
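Standalone fs_mark invocations approximating the charted scenarios could look like the following sketch; the target directory is hypothetical and the sync-method flag is an assumption about how the "No Sync/FSync" variant is achieved.

```shell
# -d target directory, -n number of files, -s file size in bytes,
# -t worker threads, -S sync method (0 disables per-file sync/fsync)
fs_mark -d /mnt/testdir -n 1000 -s 1048576           # 1000 files, 1MB size
fs_mark -d /mnt/testdir -n 5000 -s 1048576 -t 4      # 5000 files, 4 threads
fs_mark -d /mnt/testdir -n 1000 -s 1048576 -S 0      # no sync/fsync
```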
Flexible IO Tester 3.29 - IOPS, More Is Better
Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ZFS mirror 8xNVME: 1774
  ext4 WD_Black SN770 2TB NVMe: 1669
  ext4 Crucial P5 Plus 1TB NVME: 1614
  ZFS raidz1 8xNVME no Compression: 1220
  ZFS raidz1 8xNVME: 1186
  ZFS raidz1 4xNVME: 1029
  ext4 soft raid5 8xNVME: 814
  ext4 soft raid5 4xNVME: 650
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ZFS mirror 8xNVME: 3555 (SE +/- 11.93, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 3345 (SE +/- 0.67, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 3236 (SE +/- 5.70, N = 3)
  ZFS raidz1 8xNVME no Compression: 2446 (SE +/- 23.91, N = 15)
  ZFS raidz1 8xNVME: 2381 (SE +/- 4.67, N = 3)
  ZFS raidz1 4xNVME: 2065 (SE +/- 6.36, N = 3)
  ext4 soft raid5 8xNVME: 1636 (SE +/- 0.67, N = 3)
  ext4 soft raid5 4xNVME: 1308 (SE +/- 1.73, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
SQLite 3.30.1 - Threads / Copies: 32 - Seconds, Fewer Is Better
  ZFS mirror 8xNVME: 17.74 (SE +/- 0.05, N = 3)
  ZFS raidz1 4xNVME: 23.19 (SE +/- 0.08, N = 3)
  ZFS raidz1 8xNVME no Compression: 26.89 (SE +/- 0.07, N = 3)
  ZFS raidz1 8xNVME: 27.00 (SE +/- 0.04, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 27.58 (SE +/- 0.00, N = 3)
  ext4 soft raid5 4xNVME: 31.79 (SE +/- 0.01, N = 3)
  ext4 soft raid5 8xNVME: 35.99 (SE +/- 0.02, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 45.16 (SE +/- 0.09, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
SQLite 3.30.1 - Threads / Copies: 8 - Seconds, Fewer Is Better
  ZFS mirror 8xNVME: 9.043 (SE +/- 0.057, N = 3)
  ZFS raidz1 4xNVME: 11.267 (SE +/- 0.079, N = 3)
  ZFS raidz1 8xNVME no Compression: 12.827 (SE +/- 0.170, N = 3)
  ZFS raidz1 8xNVME: 12.857 (SE +/- 0.174, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 15.767 (SE +/- 0.009, N = 3)
  ext4 soft raid5 4xNVME: 20.005 (SE +/- 0.047, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 21.562 (SE +/- 0.045, N = 3)
  ext4 soft raid5 8xNVME: 22.829 (SE +/- 0.168, N = 15)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Compile Bench 0.6 - Test: Read Compiled Tree - MB/s, More Is Better
  ext4 Crucial P5 Plus 1TB NVME: 2747.01 (SE +/- 8.94, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 2738.07 (SE +/- 16.46, N = 3)
  ext4 soft raid5 8xNVME: 2683.76 (SE +/- 15.05, N = 3)
  ext4 soft raid5 4xNVME: 2642.03 (SE +/- 42.24, N = 3)
  ZFS raidz1 8xNVME no Compression: 1694.75 (SE +/- 19.98, N = 3)
  ZFS mirror 8xNVME: 1454.31 (SE +/- 12.22, N = 3)
  ZFS raidz1 8xNVME: 1217.80 (SE +/- 8.89, N = 3)
  ZFS raidz1 4xNVME: 1212.73 (SE +/- 8.00, N = 3)
Compile Bench 0.6 - Test: Initial Create - MB/s, More Is Better
  ext4 Crucial P5 Plus 1TB NVME: 423.67 (SE +/- 1.23, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 420.47 (SE +/- 2.28, N = 3)
  ext4 soft raid5 8xNVME: 413.46 (SE +/- 3.20, N = 3)
  ext4 soft raid5 4xNVME: 411.07 (SE +/- 0.77, N = 3)
  ZFS raidz1 4xNVME: 222.04 (SE +/- 1.57, N = 3)
  ZFS raidz1 8xNVME: 221.83 (SE +/- 0.50, N = 3)
  ZFS mirror 8xNVME: 216.21 (SE +/- 1.18, N = 3)
  ZFS raidz1 8xNVME no Compression: 206.77 (SE +/- 1.46, N = 3)
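The three Compile Bench phases (Initial Create, Compile, Read Compiled Tree) come from one run of the tool, which simulates creating, compiling, and reading back kernel-like source trees. A hypothetical standalone invocation; the directory and counts are assumptions not recorded in this export:

```shell
# -D working directory, -i number of initial tree creations,
# -r number of subsequent random operations; each phase is
# reported as an MB/s figure as in the charts above.
compilebench -D /mnt/testdir -i 10 -r 30
```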
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ZFS mirror 8xNVME: 6617
  ZFS raidz1 4xNVME: 4245
  ZFS raidz1 8xNVME no Compression: 4170
  ZFS raidz1 8xNVME: 4037
  ext4 WD_Black SN770 2TB NVMe: 3550
  ext4 Crucial P5 Plus 1TB NVME: 3538
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
SQLite 3.30.1 - Threads / Copies: 64 - Seconds, Fewer Is Better
  ZFS mirror 8xNVME: 36.60 (SE +/- 0.08, N = 3)
  ZFS raidz1 4xNVME: 37.45 (SE +/- 0.01, N = 3)
  ZFS raidz1 8xNVME no Compression: 40.00 (SE +/- 0.01, N = 3)
  ZFS raidz1 8xNVME: 40.30 (SE +/- 0.05, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 40.98 (SE +/- 0.01, N = 3)
  ext4 soft raid5 4xNVME: 45.72 (SE +/- 0.13, N = 3)
  ext4 soft raid5 8xNVME: 50.31 (SE +/- 0.21, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 67.63 (SE +/- 0.07, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
SQLite 3.30.1 - Threads / Copies: 1 - Seconds, Fewer Is Better
  ext4 WD_Black SN770 2TB NVMe: 6.487 (SE +/- 0.057, N = 3)
  ZFS mirror 8xNVME: 7.252 (SE +/- 0.055, N = 3)
  ZFS raidz1 4xNVME: 7.852 (SE +/- 0.071, N = 7)
  ext4 Crucial P5 Plus 1TB NVME: 8.367 (SE +/- 0.067, N = 3)
  ZFS raidz1 8xNVME no Compression: 8.379 (SE +/- 0.104, N = 3)
  ZFS raidz1 8xNVME: 8.380 (SE +/- 0.096, N = 3)
  ext4 soft raid5 4xNVME: 8.810 (SE +/- 0.069, N = 10)
  ext4 soft raid5 8xNVME: 10.426 (SE +/- 0.093, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
PostMark 1.51 - Disk Transaction Performance - TPS, More Is Better
  ext4 WD_Black SN770 2TB NVMe: 5211
  ext4 soft raid5 8xNVME: 5137
  ext4 Crucial P5 Plus 1TB NVME: 5137
  ext4 soft raid5 4xNVME: 5068
  ZFS mirror 8xNVME: 3318
  ZFS raidz1 4xNVME: 3289
  ZFS raidz1 8xNVME no Compression: 3275
  ZFS raidz1 8xNVME: 3275
1. (CC) gcc options: -O3
FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync - Files/s, More Is Better
  ext4 WD_Black SN770 2TB NVMe: 1806.6 (SE +/- 2.57, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 1791.6 (SE +/- 16.52, N = 3)
  ext4 soft raid5 8xNVME: 1782.9 (SE +/- 11.02, N = 3)
  ext4 soft raid5 4xNVME: 1733.3 (SE +/- 19.84, N = 4)
  ZFS raidz1 4xNVME: 1359.3 (SE +/- 12.01, N = 15)
  ZFS raidz1 8xNVME: 1340.7 (SE +/- 10.88, N = 15)
  ZFS mirror 8xNVME: 1338.4 (SE +/- 7.95, N = 3)
  ZFS raidz1 8xNVME no Compression: 1160.9 (SE +/- 13.85, N = 4)
SQLite 3.30.1 - Threads / Copies: 128 - Seconds, Fewer Is Better
  ext4 Crucial P5 Plus 1TB NVME: 59.89 (SE +/- 0.03, N = 3)
  ZFS raidz1 4xNVME: 76.09 (SE +/- 0.10, N = 3)
  ZFS raidz1 8xNVME no Compression: 77.05 (SE +/- 0.27, N = 3)
  ZFS mirror 8xNVME: 77.88 (SE +/- 0.16, N = 3)
  ZFS raidz1 8xNVME: 78.15 (SE +/- 0.10, N = 3)
  ext4 soft raid5 4xNVME: 80.15 (SE +/- 0.28, N = 3)
  ext4 soft raid5 8xNVME: 83.86 (SE +/- 0.16, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 90.49 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Compile Bench 0.6 - Test: Compile - MB/s, More Is Better
  ext4 WD_Black SN770 2TB NVMe: 1513.97 (SE +/- 3.97, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 1483.07 (SE +/- 0.00, N = 3)
  ext4 soft raid5 4xNVME: 1478.00 (SE +/- 11.73, N = 3)
  ext4 soft raid5 8xNVME: 1474.80 (SE +/- 12.18, N = 3)
  ZFS mirror 8xNVME: 1408.42 (SE +/- 9.96, N = 3)
  ZFS raidz1 4xNVME: 1398.57 (SE +/- 6.06, N = 3)
  ZFS raidz1 8xNVME: 1390.71 (SE +/- 7.47, N = 3)
  ZFS raidz1 8xNVME no Compression: 1247.36 (SE +/- 3.89, N = 3)
Flexible IO Tester 3.29 - MB/s, More Is Better
Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
  ZFS mirror 8xNVME: 4154
  ZFS raidz1 4xNVME: 4075
  ZFS raidz1 8xNVME: 4005
  ZFS raidz1 8xNVME no Compression: 3998
  ext4 WD_Black SN770 2TB NVMe: 3558
  ext4 Crucial P5 Plus 1TB NVME: 3538
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size - Files/s, More Is Better
  ZFS raidz1 4xNVME: 684.2 (SE +/- 7.90, N = 3)
  ZFS mirror 8xNVME: 658.2 (SE +/- 5.50, N = 15)
  ZFS raidz1 8xNVME: 644.7 (SE +/- 3.56, N = 3)
  ZFS raidz1 8xNVME no Compression: 592.1 (SE +/- 3.57, N = 3)
  ext4 WD_Black SN770 2TB NVMe: 558.2 (SE +/- 63.54, N = 11)
  ext4 Crucial P5 Plus 1TB NVME: 435.0 (SE +/- 16.63, N = 12)
  ext4 soft raid5 8xNVME: 286.6 (SE +/- 6.52, N = 12)
  ext4 soft raid5 4xNVME: 284.7 (SE +/- 2.79, N = 3)
Phoronix Test Suite v10.8.5