pts-disk-different-nvmes AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211142-NE-PTSDISKDI73&grr&sro
Tested configurations: ZFS raidz1 4xNVME, ext4 soft raid5 4xNVME, ext4 Crucial P5 Plus 1TB NVME, ZFS raidz1 8xNVME, ZFS raidz1 8xNVME no Compression, ZFS mirror 8xNVME, ext4 soft raid5 8xNVME, ext4 WD_Black SN770 2TB NVMe

System Under Test:
  Processor:         AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
  Motherboard:       Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
  Chipset:           AMD 17h
  Memory:            64GB
  Disk:              Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
  Graphics:          NVIDIA Quadro P400
  Audio:             NVIDIA GP107GL HD Audio
  Monitor:           DELL S2340T
  Network:           4 x Intel I350 + Intel 8265 / 8275
  OS:                Debian 11
  Kernel:            5.10.0-19-amd64 (x86_64)
  Compiler:          GCC 10.2.1 20210110
  File-System:       zfs (ZFS configurations) / ext4 (ext4 configurations)
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8001137
Disk Scheduler Details: ZFS raidz1 4xNVME, ZFS raidz1 8xNVME, ZFS raidz1 8xNVME no Compression, ZFS mirror 8xNVME: NONE
Python Details: Python 3.9.2
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT vulnerable; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
Disk Details:
  ext4 soft raid5 4xNVME: NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0] / Block Size: 4096
  ext4 Crucial P5 Plus 1TB NVME: NONE / relatime,rw / Block Size: 4096
  ext4 soft raid5 8xNVME: NONE / relatime,rw,stripe=896 / raid5 nvme9n1[8] nvme8n1[6] nvme7n1[5] nvme6n1[4] nvme4n1[3] nvme3n1[2] nvme2n1[1] nvme1n1[0] / Block Size: 4096
  ext4 WD_Black SN770 2TB NVMe: NONE / relatime,rw / Block Size: 4096
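For reference, arrays matching the Disk Details above could be assembled roughly as follows. This is a sketch only: the device names, raid levels, and the disabled-compression variant come from the details above, but the array name (md0), pool name (tank), mount points, and the assumption that the main ZFS runs had compression enabled are illustrative and not recorded in this export.

```shell
# Sketch only -- destructive commands. Device names follow the Disk Details
# above; md0/tank and the mount points are hypothetical.

# ext4 soft raid5 4xNVME (mdadm software RAID5 over four partitions)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
mkfs.ext4 /dev/md0
mount -o relatime /dev/md0 /mnt/raid5

# ZFS raidz1 8xNVME (note nvme5n1 is absent from the exported device list)
zpool create tank raidz1 \
    /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
    /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1

# "no Compression" variant
zfs set compression=off tank

# ZFS mirror 8xNVME: striped two-way mirror vdevs, e.g.
# zpool create tank mirror /dev/nvme1n1 /dev/nvme2n1 \
#                   mirror /dev/nvme3n1 /dev/nvme4n1 ...
```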
Dbench 4.0 - 12 Clients (MB/s, more is better):
  ZFS mirror 8xNVME                   2730.75  (SE +/- 3.38, N = 3)
  ZFS raidz1 4xNVME                   2626.03  (SE +/- 5.07, N = 3)
  ZFS raidz1 8xNVME                   2472.42  (SE +/- 2.03, N = 3)
  ZFS raidz1 8xNVME no Compression    2783.64  (SE +/- 16.46, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        658.79  (SE +/- 1.20, N = 3)
  ext4 WD_Black SN770 2TB NVMe        3196.68  (SE +/- 2.70, N = 3)
  ext4 soft raid5 4xNVME              2484.85  (SE +/- 21.52, N = 3)
  ext4 soft raid5 8xNVME              2330.16  (SE +/- 16.88, N = 3)
  1. (CC) gcc options: -lpopt -O2

Dbench 4.0 - 1 Client (MB/s, more is better):
  ZFS mirror 8xNVME                    435.70  (SE +/- 0.70, N = 3)
  ZFS raidz1 4xNVME                    410.17  (SE +/- 0.85, N = 3)
  ZFS raidz1 8xNVME                    380.35  (SE +/- 0.25, N = 3)
  ZFS raidz1 8xNVME no Compression     458.91  (SE +/- 1.41, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        102.99  (SE +/- 0.03, N = 3)
  ext4 WD_Black SN770 2TB NVMe         579.24  (SE +/- 0.89, N = 3)
  ext4 soft raid5 4xNVME               453.17  (SE +/- 0.61, N = 3)
  ext4 soft raid5 8xNVME               423.83  (SE +/- 1.02, N = 3)
  1. (CC) gcc options: -lpopt -O2
FS-Mark 3.3 - 4000 Files, 32 Sub Dirs, 1MB Size (Files/s, more is better):
  ZFS mirror 8xNVME                    658.2  (SE +/- 5.50, N = 15)
  ZFS raidz1 4xNVME                    684.2  (SE +/- 7.90, N = 3)
  ZFS raidz1 8xNVME                    644.7  (SE +/- 3.56, N = 3)
  ZFS raidz1 8xNVME no Compression     592.1  (SE +/- 3.57, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        435.0  (SE +/- 16.63, N = 12)
  ext4 WD_Black SN770 2TB NVMe         558.2  (SE +/- 63.54, N = 11)
  ext4 soft raid5 4xNVME               284.7  (SE +/- 2.79, N = 3)
  ext4 soft raid5 8xNVME               286.6  (SE +/- 6.52, N = 12)

FS-Mark 3.3 - 5000 Files, 1MB Size, 4 Threads (Files/s, more is better):
  ZFS mirror 8xNVME                   1711.4  (SE +/- 2.79, N = 3)
  ZFS raidz1 4xNVME                   1632.7  (SE +/- 4.95, N = 3)
  ZFS raidz1 8xNVME                   1655.6  (SE +/- 6.18, N = 3)
  ZFS raidz1 8xNVME no Compression    1240.0  (SE +/- 2.97, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        522.1  (SE +/- 125.73, N = 9)
  ext4 WD_Black SN770 2TB NVMe        1631.5  (SE +/- 8.47, N = 3)
  ext4 soft raid5 4xNVME               537.9  (SE +/- 4.55, N = 12)
  ext4 soft raid5 8xNVME               570.0  (SE +/- 4.70, N = 3)

FS-Mark 3.3 - 1000 Files, 1MB Size, No Sync/FSync (Files/s, more is better):
  ZFS mirror 8xNVME                   1338.4  (SE +/- 7.95, N = 3)
  ZFS raidz1 4xNVME                   1359.3  (SE +/- 12.01, N = 15)
  ZFS raidz1 8xNVME                   1340.7  (SE +/- 10.88, N = 15)
  ZFS raidz1 8xNVME no Compression    1160.9  (SE +/- 13.85, N = 4)
  ext4 Crucial P5 Plus 1TB NVME       1791.6  (SE +/- 16.52, N = 3)
  ext4 WD_Black SN770 2TB NVMe        1806.6  (SE +/- 2.57, N = 3)
  ext4 soft raid5 4xNVME              1733.3  (SE +/- 19.84, N = 4)
  ext4 soft raid5 8xNVME              1782.9  (SE +/- 11.02, N = 3)

FS-Mark 3.3 - 1000 Files, 1MB Size (Files/s, more is better):
  ZFS mirror 8xNVME                    581.1  (SE +/- 4.66, N = 3)
  ZFS raidz1 4xNVME                    646.5  (SE +/- 5.49, N = 15)
  ZFS raidz1 8xNVME                    646.7  (SE +/- 1.87, N = 3)
  ZFS raidz1 8xNVME no Compression     589.7  (SE +/- 6.13, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        584.1  (SE +/- 13.64, N = 12)
  ext4 WD_Black SN770 2TB NVMe         693.8  (SE +/- 9.94, N = 3)
  ext4 soft raid5 4xNVME               254.0  (SE +/- 9.50, N = 15)
  ext4 soft raid5 8xNVME               264.4  (SE +/- 0.85, N = 3)
SQLite 3.30.1 - Threads / Copies: 128 (Seconds, fewer is better):
  ZFS mirror 8xNVME                    77.88  (SE +/- 0.16, N = 3)
  ZFS raidz1 4xNVME                    76.09  (SE +/- 0.10, N = 3)
  ZFS raidz1 8xNVME                    78.15  (SE +/- 0.10, N = 3)
  ZFS raidz1 8xNVME no Compression     77.05  (SE +/- 0.27, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        59.89  (SE +/- 0.03, N = 3)
  ext4 WD_Black SN770 2TB NVMe         90.49  (SE +/- 0.01, N = 3)
  ext4 soft raid5 4xNVME               80.15  (SE +/- 0.28, N = 3)
  ext4 soft raid5 8xNVME               83.86  (SE +/- 0.16, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
PostMark 1.51 - Disk Transaction Performance (TPS, more is better):
  ZFS mirror 8xNVME                    3318
  ZFS raidz1 4xNVME                    3289
  ZFS raidz1 8xNVME                    3275
  ZFS raidz1 8xNVME no Compression     3275
  ext4 Crucial P5 Plus 1TB NVME        5137
  ext4 WD_Black SN770 2TB NVMe         5211
  ext4 soft raid5 4xNVME               5068
  ext4 soft raid5 8xNVME               5137
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 14.67 (N = 3), +/- 29.00 (N = 3), +/- 14.33 (N = 3), +/- 35.33 (N = 3), +/- 58.25 (N = 5), +/- 34.00 (N = 3), +/- 35.33 (N = 3)
  1. (CC) gcc options: -O3
Flexible IO Tester 3.29 - Seq Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                    1774
  ZFS raidz1 4xNVME                    1029
  ZFS raidz1 8xNVME                    1186
  ZFS raidz1 8xNVME no Compression     1220
  ext4 Crucial P5 Plus 1TB NVME        1614
  ext4 WD_Black SN770 2TB NVMe         1669
  ext4 soft raid5 4xNVME                650
  ext4 soft raid5 8xNVME                814
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 6.01 (N = 3), +/- 2.91 (N = 3), +/- 2.33 (N = 3), +/- 11.98 (N = 15), +/- 2.85 (N = 3), +/- 0.67 (N = 3), +/- 0.88 (N = 3)

Flexible IO Tester 3.29 - Seq Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better):
  ZFS mirror 8xNVME                    3555  (SE +/- 11.93, N = 3)
  ZFS raidz1 4xNVME                    2065  (SE +/- 6.36, N = 3)
  ZFS raidz1 8xNVME                    2381  (SE +/- 4.67, N = 3)
  ZFS raidz1 8xNVME no Compression     2446  (SE +/- 23.91, N = 15)
  ext4 Crucial P5 Plus 1TB NVME        3236  (SE +/- 5.70, N = 3)
  ext4 WD_Black SN770 2TB NVMe         3345  (SE +/- 0.67, N = 3)
  ext4 soft raid5 4xNVME               1308  (SE +/- 1.73, N = 3)
  ext4 soft raid5 8xNVME               1636  (SE +/- 0.67, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
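The fio IOPS and bandwidth tables are two views of the same measurement: throughput is roughly IOPS times the block size, and the figures here are consistent with fio reporting MiB/s under the "MB/s" label. A quick sanity check on the sequential-write numbers for the ZFS mirror 8xNVME configuration (the small residual difference is expected, since the IOPS and bandwidth figures are averaged over separate runs):

```python
# Sanity check: fio bandwidth ~= IOPS x block size, interpreted as MiB/s.
# Figures are taken from the sequential-write tables above (ZFS mirror 8xNVME).

def approx_bandwidth_mib_s(iops: float, block_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed block size."""
    return iops * block_bytes / 2**20

# 2MB sequential write: 1774 IOPS was reported alongside 3555 MB/s.
bw_2m = approx_bandwidth_mib_s(1774, 2 * 2**20)   # 3548.0

# 4KB sequential write: 197333 IOPS alongside 771 MB/s.
bw_4k = approx_bandwidth_mib_s(197333, 4096)      # ~770.8

for derived, reported in [(bw_2m, 3555), (bw_4k, 771)]:
    # The derived and reported figures agree to well under 2%.
    assert abs(derived - reported) / reported < 0.02
```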
SQLite 3.30.1 - Threads / Copies: 64 (Seconds, fewer is better):
  ZFS mirror 8xNVME                    36.60  (SE +/- 0.08, N = 3)
  ZFS raidz1 4xNVME                    37.45  (SE +/- 0.01, N = 3)
  ZFS raidz1 8xNVME                    40.30  (SE +/- 0.05, N = 3)
  ZFS raidz1 8xNVME no Compression     40.00  (SE +/- 0.01, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        40.98  (SE +/- 0.01, N = 3)
  ext4 WD_Black SN770 2TB NVMe         67.63  (SE +/- 0.07, N = 3)
  ext4 soft raid5 4xNVME               45.72  (SE +/- 0.13, N = 3)
  ext4 soft raid5 8xNVME               50.31  (SE +/- 0.21, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Flexible IO Tester 3.29 - Rand Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                   68333  (SE +/- 88.19, N = 3)
  ZFS raidz1 4xNVME                   41500  (SE +/- 57.74, N = 3)
  ZFS raidz1 8xNVME                   45367  (SE +/- 33.33, N = 3)
  ZFS raidz1 8xNVME no Compression    48200  (SE +/- 57.74, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      224667  (SE +/- 666.67, N = 3)
  ext4 WD_Black SN770 2TB NVMe       381400  (SE +/- 3841.87, N = 5)
  ext4 soft raid5 4xNVME              82967  (SE +/- 569.60, N = 3)
  ext4 soft raid5 8xNVME              75067  (SE +/- 88.19, N = 3)

Flexible IO Tester 3.29 - Rand Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better):
  ZFS mirror 8xNVME                     267
  ZFS raidz1 4xNVME                     162
  ZFS raidz1 8xNVME                     177
  ZFS raidz1 8xNVME no Compression      188
  ext4 Crucial P5 Plus 1TB NVME         878
  ext4 WD_Black SN770 2TB NVMe         1491
  ext4 soft raid5 4xNVME                324
  ext4 soft raid5 8xNVME                293
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 2.67 (N = 3), +/- 14.63 (N = 5), +/- 2.00 (N = 3), +/- 0.33 (N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Rand Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                    3305
  ZFS raidz1 4xNVME                    2119
  ZFS raidz1 8xNVME                    2015
  ZFS raidz1 8xNVME no Compression     2081
  ext4 Crucial P5 Plus 1TB NVME        1765
  ext4 WD_Black SN770 2TB NVMe         1771
  ext4 soft raid5 4xNVME               6666
  ext4 soft raid5 8xNVME              13225
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 12.06 (N = 3), +/- 29.49 (N = 3), +/- 6.57 (N = 3), +/- 20.66 (N = 3), +/- 3.67 (N = 3), +/- 14.99 (N = 3), +/- 143.61 (N = 4)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Seq Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                  332000  (SE +/- 2516.61, N = 3)
  ZFS raidz1 4xNVME                  329333  (SE +/- 1855.92, N = 3)
  ZFS raidz1 8xNVME                  326333  (SE +/- 3382.96, N = 3)
  ZFS raidz1 8xNVME no Compression   331000  (SE +/- 1000.00, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      335667  (SE +/- 4666.67, N = 3)
  ext4 WD_Black SN770 2TB NVMe        91333  (SE +/- 202.76, N = 3)
  ext4 soft raid5 4xNVME             221333  (SE +/- 2728.45, N = 3)
  ext4 soft raid5 8xNVME             199667  (SE +/- 333.33, N = 3)

Flexible IO Tester 3.29 - Seq Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better):
  ZFS mirror 8xNVME                    1296
  ZFS raidz1 4xNVME                    1287
  ZFS raidz1 8xNVME                    1275
  ZFS raidz1 8xNVME no Compression     1294
  ext4 Crucial P5 Plus 1TB NVME        1311
  ext4 WD_Black SN770 2TB NVMe          357
  ext4 soft raid5 4xNVME                865
  ext4 soft raid5 8xNVME                779
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 9.94 (N = 3), +/- 7.09 (N = 3), +/- 13.25 (N = 3), +/- 4.84 (N = 3), +/- 17.89 (N = 3), +/- 11.46 (N = 3), +/- 0.67 (N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Rand Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                  224333  (SE +/- 666.67, N = 3)
  ZFS raidz1 4xNVME                   49567  (SE +/- 185.59, N = 3)
  ZFS raidz1 8xNVME                   57433  (SE +/- 176.38, N = 3)
  ZFS raidz1 8xNVME no Compression    56167  (SE +/- 633.33, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      329667  (SE +/- 881.92, N = 3)
  ext4 WD_Black SN770 2TB NVMe       414000  (SE +/- 2886.75, N = 3)
  ext4 soft raid5 4xNVME             306667  (SE +/- 1333.33, N = 3)
  ext4 soft raid5 8xNVME             272667  (SE +/- 333.33, N = 3)

Flexible IO Tester 3.29 - Rand Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better):
  ZFS mirror 8xNVME                     877  (SE +/- 1.86, N = 3)
  ZFS raidz1 4xNVME                     194  (SE +/- 0.88, N = 3)
  ZFS raidz1 8xNVME                     224  (SE +/- 0.67, N = 3)
  ZFS raidz1 8xNVME no Compression      219  (SE +/- 2.67, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        1286  (SE +/- 3.48, N = 3)
  ext4 WD_Black SN770 2TB NVMe         1616  (SE +/- 10.97, N = 3)
  ext4 soft raid5 4xNVME               1199  (SE +/- 5.17, N = 3)
  ext4 soft raid5 8xNVME               1065  (SE +/- 2.03, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Seq Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                    2073
  ZFS raidz1 4xNVME                    2034
  ZFS raidz1 8xNVME                    1999
  ZFS raidz1 8xNVME no Compression     1995
  ext4 Crucial P5 Plus 1TB NVME        1765
  ext4 WD_Black SN770 2TB NVMe         1775
  ext4 soft raid5 4xNVME               6706
  ext4 soft raid5 8xNVME              13200
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 18.84 (N = 3), +/- 18.75 (N = 3), +/- 25.32 (N = 3), +/- 23.68 (N = 3), +/- 3.61 (N = 3), +/- 57.74 (N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Rand Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better; the ext4 soft raid5 configurations are absent from this graph in the export):
  ZFS mirror 8xNVME                    6617
  ZFS raidz1 4xNVME                    4245
  ZFS raidz1 8xNVME                    4037
  ZFS raidz1 8xNVME no Compression     4170
  ext4 Crucial P5 Plus 1TB NVME        3538
  ext4 WD_Black SN770 2TB NVMe         3550
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 23.92 (N = 3), +/- 58.64 (N = 3), +/- 13.37 (N = 3), +/- 41.66 (N = 3), +/- 7.67 (N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Seq Read - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better; the ext4 soft raid5 configurations are absent from this graph in the export):
  ZFS mirror 8xNVME                    4154
  ZFS raidz1 4xNVME                    4075
  ZFS raidz1 8xNVME                    4005
  ZFS raidz1 8xNVME no Compression     3998
  ext4 Crucial P5 Plus 1TB NVME        3538
  ext4 WD_Black SN770 2TB NVMe         3558
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 37.36 (N = 3), +/- 37.36 (N = 3), +/- 50.44 (N = 3), +/- 47.58 (N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Rand Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                    1783
  ZFS raidz1 4xNVME                    1225
  ZFS raidz1 8xNVME                    1327
  ZFS raidz1 8xNVME no Compression     1337
  ext4 Crucial P5 Plus 1TB NVME        1623
  ext4 WD_Black SN770 2TB NVMe         1659
  ext4 soft raid5 4xNVME                476
  ext4 soft raid5 8xNVME                512
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 4.16 (N = 3), +/- 8.99 (N = 3), +/- 4.91 (N = 3), +/- 1.20 (N = 3), +/- 1.76 (N = 3), +/- 1.45 (N = 3)

Flexible IO Tester 3.29 - Rand Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Default Test Directory (MB/s, more is better):
  ZFS mirror 8xNVME                    3573
  ZFS raidz1 4xNVME                    2458
  ZFS raidz1 8xNVME                    2661
  ZFS raidz1 8xNVME no Compression     2681
  ext4 Crucial P5 Plus 1TB NVME        3252
  ext4 WD_Black SN770 2TB NVMe         3325
  ext4 soft raid5 4xNVME                958
  ext4 soft raid5 8xNVME               1032
  SE, as exported (fewer SE entries than results; per-config pairing uncertain): +/- 8.25 (N = 3), +/- 17.98 (N = 3), +/- 6.39 (N = 3), +/- 10.27 (N = 3), +/- 2.65 (N = 3), +/- 3.53 (N = 3), +/- 2.60 (N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Seq Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (IOPS, more is better):
  ZFS mirror 8xNVME                  197333  (SE +/- 333.33, N = 3)
  ZFS raidz1 4xNVME                  171333  (SE +/- 666.67, N = 3)
  ZFS raidz1 8xNVME                  170667  (SE +/- 333.33, N = 3)
  ZFS raidz1 8xNVME no Compression   174667  (SE +/- 333.33, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      228000  (SE +/- 1000.00, N = 3)
  ext4 WD_Black SN770 2TB NVMe       387333  (SE +/- 2603.42, N = 3)
  ext4 soft raid5 4xNVME             115000  (SE +/- 577.35, N = 3)
  ext4 soft raid5 8xNVME              99067  (SE +/- 617.34, N = 3)

Flexible IO Tester 3.29 - Seq Write - Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory (MB/s, more is better):
  ZFS mirror 8xNVME                     771  (SE +/- 0.67, N = 3)
  ZFS raidz1 4xNVME                     670  (SE +/- 1.86, N = 3)
  ZFS raidz1 8xNVME                     665  (SE +/- 1.67, N = 3)
  ZFS raidz1 8xNVME no Compression      682  (SE +/- 2.08, N = 3)
  ext4 Crucial P5 Plus 1TB NVME         891  (SE +/- 3.93, N = 3)
  ext4 WD_Black SN770 2TB NVMe         1514  (SE +/- 9.82, N = 3)
  ext4 soft raid5 4xNVME                449  (SE +/- 1.15, N = 3)
  ext4 soft raid5 8xNVME                387  (SE +/- 2.33, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
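The fio parameters named in the result titles map onto command-line options roughly as follows. This is a sketch only: the Phoronix Test Suite drives fio through its own test profile, and the job size, runtime, and queue depth are assumptions not recorded in this export.

```shell
# Approximate standalone fio invocation for "Seq Write - Linux AIO -
# Buffered: No - Direct: Yes - Block Size: 4KB - Default Test Directory".
# --size, --runtime, and the implicit iodepth are assumed values.
fio --name=seqwrite --directory=/mnt/target \
    --ioengine=libaio --buffered=0 --direct=1 \
    --rw=write --bs=4k \
    --size=1g --runtime=30 --time_based
```

For the other graphs, substitute --rw=read, --rw=randread, or --rw=randwrite and --bs=2m as the titles indicate.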
SQLite 3.30.1 - Threads / Copies: 32 (Seconds, fewer is better):
  ZFS mirror 8xNVME                    17.74  (SE +/- 0.05, N = 3)
  ZFS raidz1 4xNVME                    23.19  (SE +/- 0.08, N = 3)
  ZFS raidz1 8xNVME                    27.00  (SE +/- 0.04, N = 3)
  ZFS raidz1 8xNVME no Compression     26.89  (SE +/- 0.07, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        27.58  (SE +/- 0.00, N = 3)
  ext4 WD_Black SN770 2TB NVMe         45.16  (SE +/- 0.09, N = 3)
  ext4 soft raid5 4xNVME               31.79  (SE +/- 0.01, N = 3)
  ext4 soft raid5 8xNVME               35.99  (SE +/- 0.02, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Compile Bench 0.6 - Test: Compile (MB/s, more is better):
  ZFS mirror 8xNVME                  1408.42  (SE +/- 9.96, N = 3)
  ZFS raidz1 4xNVME                  1398.57  (SE +/- 6.06, N = 3)
  ZFS raidz1 8xNVME                  1390.71  (SE +/- 7.47, N = 3)
  ZFS raidz1 8xNVME no Compression   1247.36  (SE +/- 3.89, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      1483.07  (SE +/- 0.00, N = 3)
  ext4 WD_Black SN770 2TB NVMe       1513.97  (SE +/- 3.97, N = 3)
  ext4 soft raid5 4xNVME             1478.00  (SE +/- 11.73, N = 3)
  ext4 soft raid5 8xNVME             1474.80  (SE +/- 12.18, N = 3)
SQLite 3.30.1 - Threads / Copies: 8 (Seconds, fewer is better):
  ZFS mirror 8xNVME                    9.043  (SE +/- 0.057, N = 3)
  ZFS raidz1 4xNVME                   11.267  (SE +/- 0.079, N = 3)
  ZFS raidz1 8xNVME                   12.857  (SE +/- 0.174, N = 3)
  ZFS raidz1 8xNVME no Compression    12.827  (SE +/- 0.170, N = 3)
  ext4 Crucial P5 Plus 1TB NVME       15.767  (SE +/- 0.009, N = 3)
  ext4 WD_Black SN770 2TB NVMe        21.562  (SE +/- 0.045, N = 3)
  ext4 soft raid5 4xNVME              20.005  (SE +/- 0.047, N = 3)
  ext4 soft raid5 8xNVME              22.829  (SE +/- 0.168, N = 15)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
SQLite 3.30.1 - Threads / Copies: 1 (Seconds, fewer is better):
  ZFS mirror 8xNVME                    7.252  (SE +/- 0.055, N = 3)
  ZFS raidz1 4xNVME                    7.852  (SE +/- 0.071, N = 7)
  ZFS raidz1 8xNVME                    8.380  (SE +/- 0.096, N = 3)
  ZFS raidz1 8xNVME no Compression     8.379  (SE +/- 0.104, N = 3)
  ext4 Crucial P5 Plus 1TB NVME        8.367  (SE +/- 0.067, N = 3)
  ext4 WD_Black SN770 2TB NVMe         6.487  (SE +/- 0.057, N = 3)
  ext4 soft raid5 4xNVME               8.810  (SE +/- 0.069, N = 10)
  ext4 soft raid5 8xNVME              10.426  (SE +/- 0.093, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, more is better):
  ZFS mirror 8xNVME                  1454.31  (SE +/- 12.22, N = 3)
  ZFS raidz1 4xNVME                  1212.73  (SE +/- 8.00, N = 3)
  ZFS raidz1 8xNVME                  1217.80  (SE +/- 8.89, N = 3)
  ZFS raidz1 8xNVME no Compression   1694.75  (SE +/- 19.98, N = 3)
  ext4 Crucial P5 Plus 1TB NVME      2747.01  (SE +/- 8.94, N = 3)
  ext4 WD_Black SN770 2TB NVMe       2738.07  (SE +/- 16.46, N = 3)
  ext4 soft raid5 4xNVME             2642.03  (SE +/- 42.24, N = 3)
  ext4 soft raid5 8xNVME             2683.76  (SE +/- 15.05, N = 3)
Compile Bench 0.6 - Test: Initial Create (MB/s, more is better):
  ZFS mirror 8xNVME                   216.21  (SE +/- 1.18, N = 3)
  ZFS raidz1 4xNVME                   222.04  (SE +/- 1.57, N = 3)
  ZFS raidz1 8xNVME                   221.83  (SE +/- 0.50, N = 3)
  ZFS raidz1 8xNVME no Compression    206.77  (SE +/- 1.46, N = 3)
  ext4 Crucial P5 Plus 1TB NVME       423.67  (SE +/- 1.23, N = 3)
  ext4 WD_Black SN770 2TB NVMe        420.47  (SE +/- 2.28, N = 3)
  ext4 soft raid5 4xNVME              411.07  (SE +/- 0.77, N = 3)
  ext4 soft raid5 8xNVME              413.46  (SE +/- 3.20, N = 3)
Phoronix Test Suite v10.8.5