pts-disk-different-nvmes
AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211142-NE-PTSDISKDI73&sro&gru
pts-disk-different-nvmes - System Details

Common hardware and software (all eight configurations):
  Processor:         AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
  Motherboard:       Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
  Chipset:           AMD 17h
  Memory:            64GB
  Disk:              Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
  Graphics:          NVIDIA Quadro P400
  Audio:             NVIDIA GP107GL HD Audio
  Monitor:           DELL S2340T
  Network:           4 x Intel I350 + Intel 8265 / 8275
  OS:                Debian 11
  Kernel:            5.10.0-19-amd64 (x86_64)
  Compiler:          GCC 10.2.1 20210110
  Screen Resolution: 1920x1080

Configurations under test (File-System):
  ZFS raidz1 4xNVME                 - zfs
  ext4 soft raid5 4xNVME            - ext4
  ext4 Crucial P5 Plus 1TB NVME     - ext4
  ZFS raidz1 8xNVME                 - zfs
  ZFS raidz1 8xNVME no Compression  - zfs
  ZFS mirror 8xNVME                 - zfs
  ext4 soft raid5 8xNVME            - ext4
  ext4 WD_Black SN770 2TB NVMe      - ext4

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8001137
Disk Scheduler Details: ZFS raidz1 4xNVME, ZFS raidz1 8xNVME, ZFS raidz1 8xNVME no Compression, ZFS mirror 8xNVME: NONE
Python Details: Python 3.9.2
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT vulnerable; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
Disk Details:
  ext4 soft raid5 4xNVME: NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0] / Block Size: 4096
  ext4 Crucial P5 Plus 1TB NVME: NONE / relatime,rw / Block Size: 4096
  ext4 soft raid5 8xNVME: NONE / relatime,rw,stripe=896 / raid5 nvme9n1[8] nvme8n1[6] nvme7n1[5] nvme6n1[4] nvme4n1[3] nvme3n1[2] nvme2n1[1] nvme1n1[0] / Block Size: 4096
  ext4 WD_Black SN770 2TB NVMe: NONE / relatime,rw / Block Size: 4096
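The Disk Details above record the mdadm raid5 member lists and the ext4 stripe factors, but not the exact creation commands, and the ZFS pool layouts are only named. Below is a minimal sketch of how layouts like these could be assembled, assuming the nvme1-4/6-9 device names from the Disk Details footnote and mdadm's default 512 KiB chunk, which is consistent with the reported stripe values (3 data disks x 128 blocks per chunk = 384; 7 x 128 = 896). Each layout was a separate test run; the commands are shown together only for illustration.

    # Hypothetical reconstruction of the tested layouts; device names follow
    # the Disk Details footnote. Dry-run by default -- this is a sketch, not
    # the exact commands behind the published result.
    import subprocess

    DRY_RUN = True

    def sh(cmd: str) -> None:
        print(cmd)
        if not DRY_RUN:
            subprocess.run(cmd, shell=True, check=True)

    nvmes_8 = [f"/dev/nvme{i}n1" for i in (1, 2, 3, 4, 6, 7, 8, 9)]
    nvmes_4 = nvmes_8[:4]

    # "ZFS raidz1 4xNVME" (pool name assumed)
    sh("zpool create tank raidz1 " + " ".join(nvmes_4))
    # "ZFS raidz1 8xNVME no Compression"
    sh("zfs set compression=off tank")
    # "ZFS mirror 8xNVME" -- four 2-way mirror vdevs is one plausible reading
    pairs = zip(nvmes_8[::2], nvmes_8[1::2])
    sh("zpool create tank " + " ".join(f"mirror {a} {b}" for a, b in pairs))

    # "ext4 soft raid5 8xNVME" -- mdadm's default 512 KiB chunk matches the
    # reported ext4 stripe=896 (7 data disks x 128 four-KiB blocks per chunk)
    sh("mdadm --create /dev/md0 --level=5 --raid-devices=8 " + " ".join(nvmes_8))
    sh("mkfs.ext4 /dev/md0")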
pts-disk-different-nvmes - Result Overview

Configurations (columns):
  A = ZFS raidz1 4xNVME
  B = ext4 soft raid5 4xNVME
  C = ext4 Crucial P5 Plus 1TB NVME
  D = ZFS raidz1 8xNVME
  E = ZFS raidz1 8xNVME no Compression
  F = ZFS mirror 8xNVME
  G = ext4 soft raid5 8xNVME
  H = ext4 WD_Black SN770 2TB NVMe

All fio tests: IO Engine: Linux AIO, Buffered: No, Direct: Yes, Disk Target: Default Test Directory. SQLite results are Seconds (fewer is better); all other results, more is better. "-" = result not reported for that configuration.

Test | A | B | C | D | E | F | G | H
fs-mark: 1000 Files, 1MB Size (Files/s) | 646.5 | 254.0 | 584.1 | 646.7 | 589.7 | 581.1 | 264.4 | 693.8
fs-mark: 5000 Files, 1MB Size, 4 Threads (Files/s) | 1632.7 | 537.9 | 522.1 | 1655.6 | 1240.0 | 1711.4 | 570.0 | 1631.5
fs-mark: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s) | 684.2 | 284.7 | 435.0 | 644.7 | 592.1 | 658.2 | 286.6 | 558.2
fs-mark: 1000 Files, 1MB Size, No Sync/FSync (Files/s) | 1359.3 | 1733.3 | 1791.6 | 1340.7 | 1160.9 | 1338.4 | 1782.9 | 1806.6
fio: Rand Read - 2MB (IOPS) | 2119 | 6666 | 1765 | 2015 | 2081 | 3305 | 13225 | 1771
fio: Rand Read - 4KB (IOPS) | 49567 | 306667 | 329667 | 57433 | 56167 | 224333 | 272667 | 414000
fio: Rand Write - 2MB (IOPS) | 1225 | 476 | 1623 | 1327 | 1337 | 1783 | 512 | 1659
fio: Rand Write - 4KB (IOPS) | 41500 | 82967 | 224667 | 45367 | 48200 | 68333 | 75067 | 381400
fio: Seq Read - 2MB (IOPS) | 2034 | 6706 | 1765 | 1999 | 1995 | 2073 | 13200 | 1775
fio: Seq Read - 4KB (IOPS) | 329333 | 221333 | 335667 | 326333 | 331000 | 332000 | 199667 | 91333
fio: Seq Write - 2MB (IOPS) | 1029 | 650 | 1614 | 1186 | 1220 | 1774 | 814 | 1669
fio: Seq Write - 4KB (IOPS) | 171333 | 115000 | 228000 | 170667 | 174667 | 197333 | 99067 | 387333
fio: Rand Read - 2MB (MB/s) | 4245 | - | 3538 | 4037 | 4170 | 6617 | - | 3550
fio: Rand Read - 4KB (MB/s) | 194 | 1199 | 1286 | 224 | 219 | 877 | 1065 | 1616
fio: Rand Write - 2MB (MB/s) | 2458 | 958 | 3252 | 2661 | 2681 | 3573 | 1032 | 3325
fio: Rand Write - 4KB (MB/s) | 162 | 324 | 878 | 177 | 188 | 267 | 293 | 1491
fio: Seq Read - 2MB (MB/s) | 4075 | - | 3538 | 4005 | 3998 | 4154 | - | 3558
fio: Seq Read - 4KB (MB/s) | 1287 | 865 | 1311 | 1275 | 1294 | 1296 | 779 | 357
fio: Seq Write - 2MB (MB/s) | 2065 | 1308 | 3236 | 2381 | 2446 | 3555 | 1636 | 3345
fio: Seq Write - 4KB (MB/s) | 670 | 449 | 891 | 665 | 682 | 771 | 387 | 1514
dbench: 12 Clients (MB/s) | 2626.03 | 2484.85 | 658.794 | 2472.42 | 2783.64 | 2730.75 | 2330.16 | 3196.68
dbench: 1 Clients (MB/s) | 410.171 | 453.170 | 102.985 | 380.354 | 458.913 | 435.704 | 423.832 | 579.240
compilebench: Compile (MB/s) | 1398.57 | 1478.00 | 1483.07 | 1390.71 | 1247.36 | 1408.42 | 1474.80 | 1513.97
compilebench: Initial Create (MB/s) | 222.04 | 411.07 | 423.67 | 221.83 | 206.77 | 216.21 | 413.46 | 420.47
compilebench: Read Compiled Tree (MB/s) | 1212.73 | 2642.03 | 2747.01 | 1217.80 | 1694.75 | 1454.31 | 2683.76 | 2738.07
postmark: Disk Transaction Performance (TPS) | 3289 | 5068 | 5137 | 3275 | 3275 | 3318 | 5137 | 5211
sqlite: 1 (Seconds) | 7.852 | 8.810 | 8.367 | 8.380 | 8.379 | 7.252 | 10.426 | 6.487
sqlite: 8 (Seconds) | 11.267 | 20.005 | 15.767 | 12.857 | 12.827 | 9.043 | 22.829 | 21.562
sqlite: 32 (Seconds) | 23.191 | 31.793 | 27.578 | 26.999 | 26.892 | 17.744 | 35.993 | 45.163
sqlite: 64 (Seconds) | 37.448 | 45.720 | 40.979 | 40.301 | 40.003 | 36.598 | 50.309 | 67.626
sqlite: 128 (Seconds) | 76.088 | 80.145 | 59.890 | 78.153 | 77.047 | 77.880 | 83.859 | 90.490
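One way to read the overview matrix is to normalize each column against a single-drive baseline. An illustrative script, with two rows copied verbatim from the table and column C chosen arbitrarily as the baseline:

    # Illustrative only: relative performance of each configuration against a
    # baseline column. Values are copied from two rows of the table above.
    COLS = ["A", "B", "C", "D", "E", "F", "G", "H"]
    ROWS = {
        "fio: Rand Read - 4KB (IOPS)": [49567, 306667, 329667, 57433, 56167, 224333, 272667, 414000],
        "sqlite: 128 (Seconds)":       [76.088, 80.145, 59.890, 78.153, 77.047, 77.880, 83.859, 90.490],
    }
    BASE = COLS.index("C")  # ext4 Crucial P5 Plus 1TB NVME, a single-drive baseline

    for test, vals in ROWS.items():
        # For IOPS, >1.0x is faster than baseline; for Seconds, >1.0x is slower.
        rel = " ".join(f"{c}={v / vals[BASE]:.2f}x" for c, v in zip(COLS, vals))
        print(f"{test}: {rel}")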
FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s, more is better)
  ZFS mirror 8xNVME                  581.1  (SE +/- 4.66, N = 3)
  ZFS raidz1 4xNVME                  646.5  (SE +/- 5.49, N = 15)
  ZFS raidz1 8xNVME                  646.7  (SE +/- 1.87, N = 3)
  ZFS raidz1 8xNVME no Compression   589.7  (SE +/- 6.13, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      584.1  (SE +/- 13.64, N = 12)
  ext4 WD_Black SN770 2TB NVMe       693.8  (SE +/- 9.94, N = 3)
  ext4 soft raid5 4xNVME             254.0  (SE +/- 9.50, N = 15)
  ext4 soft raid5 8xNVME             264.4  (SE +/- 0.85, N = 3)
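Each result here is the mean of N runs together with its standard error, SE = sample standard deviation / sqrt(N). A quick illustration of how such a figure is computed; the per-run samples are invented (the export carries only the mean and SE, not the raw runs), chosen to average to the first row's 581.1:

    # How a "SE +/- ..., N = ..." figure is computed: standard error of the mean.
    import statistics

    runs = [576.8, 580.2, 586.3]  # hypothetical Files/s results from N = 3 runs
    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / len(runs) ** 0.5  # sample stdev / sqrt(N)
    print(f"{mean:.1f} (SE +/- {se:.2f}, N = {len(runs)})")  # -> 581.1 (SE +/- 2.78, N = 3)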
FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s, more is better)
  ZFS mirror 8xNVME                 1711.4  (SE +/- 2.79, N = 3)
  ZFS raidz1 4xNVME                 1632.7  (SE +/- 4.95, N = 3)
  ZFS raidz1 8xNVME                 1655.6  (SE +/- 6.18, N = 3)
  ZFS raidz1 8xNVME no Compression  1240.0  (SE +/- 2.97, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      522.1  (SE +/- 125.73, N = 9)
  ext4 WD_Black SN770 2TB NVMe      1631.5  (SE +/- 8.47, N = 3)
  ext4 soft raid5 4xNVME             537.9  (SE +/- 4.55, N = 12)
  ext4 soft raid5 8xNVME             570.0  (SE +/- 4.70, N = 3)
FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s, more is better)
  ZFS mirror 8xNVME                  658.2  (SE +/- 5.50, N = 15)
  ZFS raidz1 4xNVME                  684.2  (SE +/- 7.90, N = 3)
  ZFS raidz1 8xNVME                  644.7  (SE +/- 3.56, N = 3)
  ZFS raidz1 8xNVME no Compression   592.1  (SE +/- 3.57, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      435.0  (SE +/- 16.63, N = 12)
  ext4 WD_Black SN770 2TB NVMe       558.2  (SE +/- 63.54, N = 11)
  ext4 soft raid5 4xNVME             284.7  (SE +/- 2.79, N = 3)
  ext4 soft raid5 8xNVME             286.6  (SE +/- 6.52, N = 12)
FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync (Files/s, more is better)
  ZFS mirror 8xNVME                 1338.4  (SE +/- 7.95, N = 3)
  ZFS raidz1 4xNVME                 1359.3  (SE +/- 12.01, N = 15)
  ZFS raidz1 8xNVME                 1340.7  (SE +/- 10.88, N = 15)
  ZFS raidz1 8xNVME no Compression  1160.9  (SE +/- 13.85, N = 4)
  ext4 Crucial P5 Plus 1TB NVME     1791.6  (SE +/- 16.52, N = 3)
  ext4 WD_Black SN770 2TB NVMe      1806.6  (SE +/- 2.57, N = 3)
  ext4 soft raid5 4xNVME            1733.3  (SE +/- 19.84, N = 4)
  ext4 soft raid5 8xNVME            1782.9  (SE +/- 11.02, N = 3)
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                   3305
  ZFS raidz1 4xNVME                   2119
  ZFS raidz1 8xNVME                   2015
  ZFS raidz1 8xNVME no Compression    2081
  ext4 Crucial P5 Plus 1TB NVME       1765
  ext4 WD_Black SN770 2TB NVMe        1771
  ext4 soft raid5 4xNVME              6666
  ext4 soft raid5 8xNVME             13225
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 12.06, N = 3; SE +/- 29.49, N = 3; SE +/- 6.57, N = 3; SE +/- 20.66, N = 3; SE +/- 3.67, N = 3; SE +/- 14.99, N = 3; SE +/- 143.61, N = 4
  (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native (applies to all Flexible IO Tester results below)
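For reference, the parameters in these fio graph titles map onto a command line roughly like the following; the exact arguments the Phoronix Test Suite passes (file size, runtime) are not part of this export, so those values are placeholders:

    # Approximate fio invocation matching the graph parameters above
    # (engine Linux AIO, unbuffered, direct I/O, 2MB random reads).
    import subprocess

    cmd = [
        "fio",
        "--name=randread-2m",
        "--ioengine=libaio",        # "IO Engine: Linux AIO"
        "--rw=randread",            # use read/write/randwrite for the other graphs
        "--bs=2M",                  # "Block Size: 2MB"
        "--direct=1",               # "Direct: Yes" / "Buffered: No"
        "--directory=/mnt/testfs",  # "Default Test Directory" = fs under test (hypothetical mount)
        "--size=1g",                # placeholder
        "--runtime=20",             # placeholder
        "--group_reporting",
    ]
    subprocess.run(cmd, check=True)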
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                 224333  (SE +/- 666.67, N = 3)
  ZFS raidz1 4xNVME                  49567  (SE +/- 185.59, N = 3)
  ZFS raidz1 8xNVME                  57433  (SE +/- 176.38, N = 3)
  ZFS raidz1 8xNVME no Compression   56167  (SE +/- 633.33, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     329667  (SE +/- 881.92, N = 3)
  ext4 WD_Black SN770 2TB NVMe      414000  (SE +/- 2886.75, N = 3)
  ext4 soft raid5 4xNVME            306667  (SE +/- 1333.33, N = 3)
  ext4 soft raid5 8xNVME            272667  (SE +/- 333.33, N = 3)
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                   1783
  ZFS raidz1 4xNVME                   1225
  ZFS raidz1 8xNVME                   1327
  ZFS raidz1 8xNVME no Compression    1337
  ext4 Crucial P5 Plus 1TB NVME       1623
  ext4 WD_Black SN770 2TB NVMe        1659
  ext4 soft raid5 4xNVME               476
  ext4 soft raid5 8xNVME               512
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 4.16, N = 3; SE +/- 8.99, N = 3; SE +/- 4.91, N = 3; SE +/- 1.20, N = 3; SE +/- 1.76, N = 3; SE +/- 1.45, N = 3
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                  68333  (SE +/- 88.19, N = 3)
  ZFS raidz1 4xNVME                  41500  (SE +/- 57.74, N = 3)
  ZFS raidz1 8xNVME                  45367  (SE +/- 33.33, N = 3)
  ZFS raidz1 8xNVME no Compression   48200  (SE +/- 57.74, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     224667  (SE +/- 666.67, N = 3)
  ext4 WD_Black SN770 2TB NVMe      381400  (SE +/- 3841.87, N = 5)
  ext4 soft raid5 4xNVME             82967  (SE +/- 569.60, N = 3)
  ext4 soft raid5 8xNVME             75067  (SE +/- 88.19, N = 3)
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                   2073
  ZFS raidz1 4xNVME                   2034
  ZFS raidz1 8xNVME                   1999
  ZFS raidz1 8xNVME no Compression    1995
  ext4 Crucial P5 Plus 1TB NVME       1765
  ext4 WD_Black SN770 2TB NVMe        1775
  ext4 soft raid5 4xNVME              6706
  ext4 soft raid5 8xNVME             13200
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 18.84, N = 3; SE +/- 18.75, N = 3; SE +/- 25.32, N = 3; SE +/- 23.68, N = 3; SE +/- 3.61, N = 3; SE +/- 57.74, N = 3
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                 332000  (SE +/- 2516.61, N = 3)
  ZFS raidz1 4xNVME                 329333  (SE +/- 1855.92, N = 3)
  ZFS raidz1 8xNVME                 326333  (SE +/- 3382.96, N = 3)
  ZFS raidz1 8xNVME no Compression  331000  (SE +/- 1000.00, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     335667  (SE +/- 4666.67, N = 3)
  ext4 WD_Black SN770 2TB NVMe       91333  (SE +/- 202.76, N = 3)
  ext4 soft raid5 4xNVME            221333  (SE +/- 2728.45, N = 3)
  ext4 soft raid5 8xNVME            199667  (SE +/- 333.33, N = 3)
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                   1774
  ZFS raidz1 4xNVME                   1029
  ZFS raidz1 8xNVME                   1186
  ZFS raidz1 8xNVME no Compression    1220
  ext4 Crucial P5 Plus 1TB NVME       1614
  ext4 WD_Black SN770 2TB NVMe        1669
  ext4 soft raid5 4xNVME               650
  ext4 soft raid5 8xNVME               814
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 6.01, N = 3; SE +/- 2.91, N = 3; SE +/- 2.33, N = 3; SE +/- 11.98, N = 15; SE +/- 2.85, N = 3; SE +/- 0.67, N = 3; SE +/- 0.88, N = 3
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS, more is better)
  ZFS mirror 8xNVME                 197333  (SE +/- 333.33, N = 3)
  ZFS raidz1 4xNVME                 171333  (SE +/- 666.67, N = 3)
  ZFS raidz1 8xNVME                 170667  (SE +/- 333.33, N = 3)
  ZFS raidz1 8xNVME no Compression  174667  (SE +/- 333.33, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     228000  (SE +/- 1000.00, N = 3)
  ext4 WD_Black SN770 2TB NVMe      387333  (SE +/- 2603.42, N = 3)
  ext4 soft raid5 4xNVME            115000  (SE +/- 577.35, N = 3)
  ext4 soft raid5 8xNVME             99067  (SE +/- 617.34, N = 3)
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                   6617
  ZFS raidz1 4xNVME                   4245
  ZFS raidz1 8xNVME                   4037
  ZFS raidz1 8xNVME no Compression    4170
  ext4 Crucial P5 Plus 1TB NVME       3538
  ext4 WD_Black SN770 2TB NVMe        3550
  (No MB/s results reported for the ext4 soft raid5 configurations.)
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 23.92, N = 3; SE +/- 58.64, N = 3; SE +/- 13.37, N = 3; SE +/- 41.66, N = 3; SE +/- 7.67, N = 3
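The MB/s graphs restate the IOPS graphs scaled by the block size, so the two views can be cross-checked; for the 2MB random-read results:

    # Cross-check: throughput should equal IOPS times the 2 MB block size
    # (small deviations arise because each graph averages its runs separately).
    iops = {"ZFS mirror 8xNVME": 3305, "ZFS raidz1 4xNVME": 2119}
    reported = {"ZFS mirror 8xNVME": 6617, "ZFS raidz1 4xNVME": 4245}
    for cfg, v in iops.items():
        print(f"{cfg}: {v} IOPS x 2 MB = {2 * v} MB/s (graph reports {reported[cfg]})")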
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                    877  (SE +/- 1.86, N = 3)
  ZFS raidz1 4xNVME                    194  (SE +/- 0.88, N = 3)
  ZFS raidz1 8xNVME                    224  (SE +/- 0.67, N = 3)
  ZFS raidz1 8xNVME no Compression     219  (SE +/- 2.67, N = 3)
  ext4 Crucial P5 Plus 1TB NVME       1286  (SE +/- 3.48, N = 3)
  ext4 WD_Black SN770 2TB NVMe        1616  (SE +/- 10.97, N = 3)
  ext4 soft raid5 4xNVME              1199  (SE +/- 5.17, N = 3)
  ext4 soft raid5 8xNVME              1065  (SE +/- 2.03, N = 3)
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                   3573
  ZFS raidz1 4xNVME                   2458
  ZFS raidz1 8xNVME                   2661
  ZFS raidz1 8xNVME no Compression    2681
  ext4 Crucial P5 Plus 1TB NVME       3252
  ext4 WD_Black SN770 2TB NVMe        3325
  ext4 soft raid5 4xNVME               958
  ext4 soft raid5 8xNVME              1032
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 8.25, N = 3; SE +/- 17.98, N = 3; SE +/- 6.39, N = 3; SE +/- 10.27, N = 3; SE +/- 2.65, N = 3; SE +/- 3.53, N = 3; SE +/- 2.60, N = 3
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                    267
  ZFS raidz1 4xNVME                    162
  ZFS raidz1 8xNVME                    177
  ZFS raidz1 8xNVME no Compression     188
  ext4 Crucial P5 Plus 1TB NVME        878
  ext4 WD_Black SN770 2TB NVMe        1491
  ext4 soft raid5 4xNVME               324
  ext4 soft raid5 8xNVME               293
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 2.67, N = 3; SE +/- 14.63, N = 5; SE +/- 2.00, N = 3; SE +/- 0.33, N = 3
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                   4154
  ZFS raidz1 4xNVME                   4075
  ZFS raidz1 8xNVME                   4005
  ZFS raidz1 8xNVME no Compression    3998
  ext4 Crucial P5 Plus 1TB NVME       3538
  ext4 WD_Black SN770 2TB NVMe        3558
  (No MB/s results reported for the ext4 soft raid5 configurations.)
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 37.36, N = 3; SE +/- 37.36, N = 3; SE +/- 50.44, N = 3; SE +/- 47.58, N = 3
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                   1296
  ZFS raidz1 4xNVME                   1287
  ZFS raidz1 8xNVME                   1275
  ZFS raidz1 8xNVME no Compression    1294
  ext4 Crucial P5 Plus 1TB NVME       1311
  ext4 WD_Black SN770 2TB NVMe         357
  ext4 soft raid5 4xNVME               865
  ext4 soft raid5 8xNVME               779
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 9.94, N = 3; SE +/- 7.09, N = 3; SE +/- 13.25, N = 3; SE +/- 4.84, N = 3; SE +/- 17.89, N = 3; SE +/- 11.46, N = 3; SE +/- 0.67, N = 3
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                   3555  (SE +/- 11.93, N = 3)
  ZFS raidz1 4xNVME                   2065  (SE +/- 6.36, N = 3)
  ZFS raidz1 8xNVME                   2381  (SE +/- 4.67, N = 3)
  ZFS raidz1 8xNVME no Compression    2446  (SE +/- 23.91, N = 15)
  ext4 Crucial P5 Plus 1TB NVME       3236  (SE +/- 5.70, N = 3)
  ext4 WD_Black SN770 2TB NVMe        3345  (SE +/- 0.67, N = 3)
  ext4 soft raid5 4xNVME              1308  (SE +/- 1.73, N = 3)
  ext4 soft raid5 8xNVME              1636  (SE +/- 0.67, N = 3)
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s, more is better)
  ZFS mirror 8xNVME                    771  (SE +/- 0.67, N = 3)
  ZFS raidz1 4xNVME                    670  (SE +/- 1.86, N = 3)
  ZFS raidz1 8xNVME                    665  (SE +/- 1.67, N = 3)
  ZFS raidz1 8xNVME no Compression     682  (SE +/- 2.08, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        891  (SE +/- 3.93, N = 3)
  ext4 WD_Black SN770 2TB NVMe        1514  (SE +/- 9.82, N = 3)
  ext4 soft raid5 4xNVME               449  (SE +/- 1.15, N = 3)
  ext4 soft raid5 8xNVME               387  (SE +/- 2.33, N = 3)
Dbench 4.0 - 12 Clients (MB/s, more is better)
  ZFS mirror 8xNVME                 2730.75  (SE +/- 3.38, N = 3)
  ZFS raidz1 4xNVME                 2626.03  (SE +/- 5.07, N = 3)
  ZFS raidz1 8xNVME                 2472.42  (SE +/- 2.03, N = 3)
  ZFS raidz1 8xNVME no Compression  2783.64  (SE +/- 16.46, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      658.79  (SE +/- 1.20, N = 3)
  ext4 WD_Black SN770 2TB NVMe      3196.68  (SE +/- 2.70, N = 3)
  ext4 soft raid5 4xNVME            2484.85  (SE +/- 21.52, N = 3)
  ext4 soft raid5 8xNVME            2330.16  (SE +/- 16.88, N = 3)
  (CC) gcc options: -lpopt -O2 (applies to both Dbench results)
Dbench 4.0 - 1 Clients (MB/s, more is better)
  ZFS mirror 8xNVME                  435.70  (SE +/- 0.70, N = 3)
  ZFS raidz1 4xNVME                  410.17  (SE +/- 0.85, N = 3)
  ZFS raidz1 8xNVME                  380.35  (SE +/- 0.25, N = 3)
  ZFS raidz1 8xNVME no Compression   458.91  (SE +/- 1.41, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      102.99  (SE +/- 0.03, N = 3)
  ext4 WD_Black SN770 2TB NVMe       579.24  (SE +/- 0.89, N = 3)
  ext4 soft raid5 4xNVME             453.17  (SE +/- 0.61, N = 3)
  ext4 soft raid5 8xNVME             423.83  (SE +/- 1.02, N = 3)
Compile Bench 0.6 - Test: Compile (MB/s, more is better)
  ZFS mirror 8xNVME                 1408.42  (SE +/- 9.96, N = 3)
  ZFS raidz1 4xNVME                 1398.57  (SE +/- 6.06, N = 3)
  ZFS raidz1 8xNVME                 1390.71  (SE +/- 7.47, N = 3)
  ZFS raidz1 8xNVME no Compression  1247.36  (SE +/- 3.89, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     1483.07  (SE +/- 0.00, N = 3)
  ext4 WD_Black SN770 2TB NVMe      1513.97  (SE +/- 3.97, N = 3)
  ext4 soft raid5 4xNVME            1478.00  (SE +/- 11.73, N = 3)
  ext4 soft raid5 8xNVME            1474.80  (SE +/- 12.18, N = 3)
Compile Bench 0.6 - Test: Initial Create (MB/s, more is better)
  ZFS mirror 8xNVME                  216.21  (SE +/- 1.18, N = 3)
  ZFS raidz1 4xNVME                  222.04  (SE +/- 1.57, N = 3)
  ZFS raidz1 8xNVME                  221.83  (SE +/- 0.50, N = 3)
  ZFS raidz1 8xNVME no Compression   206.77  (SE +/- 1.46, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      423.67  (SE +/- 1.23, N = 3)
  ext4 WD_Black SN770 2TB NVMe       420.47  (SE +/- 2.28, N = 3)
  ext4 soft raid5 4xNVME             411.07  (SE +/- 0.77, N = 3)
  ext4 soft raid5 8xNVME             413.46  (SE +/- 3.20, N = 3)
Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, more is better)
  ZFS mirror 8xNVME                 1454.31  (SE +/- 12.22, N = 3)
  ZFS raidz1 4xNVME                 1212.73  (SE +/- 8.00, N = 3)
  ZFS raidz1 8xNVME                 1217.80  (SE +/- 8.89, N = 3)
  ZFS raidz1 8xNVME no Compression  1694.75  (SE +/- 19.98, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     2747.01  (SE +/- 8.94, N = 3)
  ext4 WD_Black SN770 2TB NVMe      2738.07  (SE +/- 16.46, N = 3)
  ext4 soft raid5 4xNVME            2642.03  (SE +/- 42.24, N = 3)
  ext4 soft raid5 8xNVME            2683.76  (SE +/- 15.05, N = 3)
PostMark 1.51 - Disk Transaction Performance (TPS, more is better)
  ZFS mirror 8xNVME                   3318
  ZFS raidz1 4xNVME                   3289
  ZFS raidz1 8xNVME                   3275
  ZFS raidz1 8xNVME no Compression    3275
  ext4 Crucial P5 Plus 1TB NVME       5137
  ext4 WD_Black SN770 2TB NVMe        5211
  ext4 soft raid5 4xNVME              5068
  ext4 soft raid5 8xNVME              5137
  SE entries as exported (fewer entries than results; per-configuration mapping not preserved): SE +/- 14.67, N = 3; SE +/- 29.00, N = 3; SE +/- 14.33, N = 3; SE +/- 35.33, N = 3; SE +/- 58.25, N = 5; SE +/- 34.00, N = 3; SE +/- 35.33, N = 3
  (CC) gcc options: -O3
SQLite 3.30.1 - Threads / Copies: 1 (Seconds, fewer is better)
  ZFS mirror 8xNVME                  7.252  (SE +/- 0.055, N = 3)
  ZFS raidz1 4xNVME                  7.852  (SE +/- 0.071, N = 7)
  ZFS raidz1 8xNVME                  8.380  (SE +/- 0.096, N = 3)
  ZFS raidz1 8xNVME no Compression   8.379  (SE +/- 0.104, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      8.367  (SE +/- 0.067, N = 3)
  ext4 WD_Black SN770 2TB NVMe       6.487  (SE +/- 0.057, N = 3)
  ext4 soft raid5 4xNVME             8.810  (SE +/- 0.069, N = 10)
  ext4 soft raid5 8xNVME            10.426  (SE +/- 0.093, N = 3)
  (CC) gcc options: -O2 -lz -lm -ldl -lpthread (applies to all SQLite results)
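The SQLite test times bulk insertions against a database on the filesystem under test, a workload dominated by sync cost; the later graphs scale the number of concurrent copies. A minimal sketch of that kind of workload (the table layout, row count, path, and commit pattern are assumptions, not the exact pts/sqlite harness):

    # Minimal sketch of a sync-heavy SQLite insert workload.
    import sqlite3
    import time

    DB_PATH = "/mnt/testfs/bench.db"  # hypothetical path on the fs under test
    ROWS = 1_000

    con = sqlite3.connect(DB_PATH)
    con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, payload TEXT)")
    start = time.perf_counter()
    for i in range(ROWS):
        con.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 64))
        con.commit()  # committing per row forces a sync, which is what stresses the fs
    print(f"{time.perf_counter() - start:.3f} seconds for {ROWS} committed inserts")
    con.close()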
SQLite 3.30.1 - Threads / Copies: 8 (Seconds, fewer is better)
  ZFS mirror 8xNVME                  9.043  (SE +/- 0.057, N = 3)
  ZFS raidz1 4xNVME                 11.267  (SE +/- 0.079, N = 3)
  ZFS raidz1 8xNVME                 12.857  (SE +/- 0.174, N = 3)
  ZFS raidz1 8xNVME no Compression  12.827  (SE +/- 0.170, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     15.767  (SE +/- 0.009, N = 3)
  ext4 WD_Black SN770 2TB NVMe      21.562  (SE +/- 0.045, N = 3)
  ext4 soft raid5 4xNVME            20.005  (SE +/- 0.047, N = 3)
  ext4 soft raid5 8xNVME            22.829  (SE +/- 0.168, N = 15)
SQLite 3.30.1 - Threads / Copies: 32 (Seconds, fewer is better)
  ZFS mirror 8xNVME                 17.74  (SE +/- 0.05, N = 3)
  ZFS raidz1 4xNVME                 23.19  (SE +/- 0.08, N = 3)
  ZFS raidz1 8xNVME                 27.00  (SE +/- 0.04, N = 3)
  ZFS raidz1 8xNVME no Compression  26.89  (SE +/- 0.07, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     27.58  (SE +/- 0.00, N = 3)
  ext4 WD_Black SN770 2TB NVMe      45.16  (SE +/- 0.09, N = 3)
  ext4 soft raid5 4xNVME            31.79  (SE +/- 0.01, N = 3)
  ext4 soft raid5 8xNVME            35.99  (SE +/- 0.02, N = 3)
SQLite 3.30.1 - Threads / Copies: 64 (Seconds, fewer is better)
  ZFS mirror 8xNVME                 36.60  (SE +/- 0.08, N = 3)
  ZFS raidz1 4xNVME                 37.45  (SE +/- 0.01, N = 3)
  ZFS raidz1 8xNVME                 40.30  (SE +/- 0.05, N = 3)
  ZFS raidz1 8xNVME no Compression  40.00  (SE +/- 0.01, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     40.98  (SE +/- 0.01, N = 3)
  ext4 WD_Black SN770 2TB NVMe      67.63  (SE +/- 0.07, N = 3)
  ext4 soft raid5 4xNVME            45.72  (SE +/- 0.13, N = 3)
  ext4 soft raid5 8xNVME            50.31  (SE +/- 0.21, N = 3)
SQLite 3.30.1 - Threads / Copies: 128 (Seconds, fewer is better)
  ZFS mirror 8xNVME                 77.88  (SE +/- 0.16, N = 3)
  ZFS raidz1 4xNVME                 76.09  (SE +/- 0.10, N = 3)
  ZFS raidz1 8xNVME                 78.15  (SE +/- 0.10, N = 3)
  ZFS raidz1 8xNVME no Compression  77.05  (SE +/- 0.27, N = 3)
  ext4 Crucial P5 Plus 1TB NVME     59.89  (SE +/- 0.03, N = 3)
  ext4 WD_Black SN770 2TB NVMe      90.49  (SE +/- 0.01, N = 3)
  ext4 soft raid5 4xNVME            80.15  (SE +/- 0.28, N = 3)
  ext4 soft raid5 8xNVME            83.86  (SE +/- 0.16, N = 3)
Phoronix Test Suite v10.8.5