pts-disk-different-nvmes
AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211133-NE-PTSDISKDI69&sro&grs.
System details (common to all configurations)
  Processor: AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
  Motherboard: Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
  Chipset: AMD 17h
  Memory: 64GB
  Disk: Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
  Graphics: NVIDIA Quadro P400
  Audio: NVIDIA GP107GL HD Audio
  Monitor: DELL S2340T
  Network: 4 x Intel I350 + Intel 8265 / 8275
  OS: Debian 11
  Kernel: 5.10.0-19-amd64 (x86_64)
  Compiler: GCC 10.2.1 20210110
  File-System: zfs (ZFS pool configurations) / ext4 (mdadm raid5 and Crucial P5 Plus configurations)
  Screen Resolution: 1920x1080

Configurations tested: ZFS zraid1 4xNVME Pool, ext4 mdadm raid5 4xNVME, ext4 Crucial P5 Plus 1TB NVME, ZFS zraid1 8xNVME Pool, ZFS zraid1 8xNVME Pool no Compression, ZFS mirror 8xNVME Pool

Kernel Details - Transparent Huge Pages: always
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8001137
Disk Scheduler Details - ZFS zraid1 4xNVME Pool, ZFS zraid1 8xNVME Pool, ZFS zraid1 8xNVME Pool no Compression, ZFS mirror 8xNVME Pool: NONE
Python Details - Python 3.9.2
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Disk Details - ext4 mdadm raid5 4xNVME: NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0] Block Size: 4096 - ext4 Crucial P5 Plus 1TB NVME: NONE / relatime,rw / Block Size: 4096
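The result file records the pool and array layouts but not the commands used to build them. As a configuration sketch only (device names, pool name, and options such as ashift are assumptions, not recorded settings), arrays like the tested ones can be created along these lines:

```shell
# Hypothetical setup sketch, not taken from the result file.

# ZFS raidz1 ("zraid1") pool across 4 NVMe drives:
zpool create -o ashift=12 tank raidz1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# ZFS mirrored pool across 8 NVMe drives (4 x 2-way mirror vdevs):
zpool create -o ashift=12 tank \
    mirror /dev/nvme1n1 /dev/nvme2n1 mirror /dev/nvme3n1 /dev/nvme4n1 \
    mirror /dev/nvme5n1 /dev/nvme6n1 mirror /dev/nvme7n1 /dev/nvme8n1

# The "no Compression" variant disables the dataset compression property:
zfs set compression=off tank

# mdadm RAID5 over 4 NVMe partitions, formatted ext4
# (matching the member list in Disk Details above):
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
mkfs.ext4 /dev/md0
```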
Overview
Benchmarks run: Flexible IO Tester 3.29 (sequential and random read/write, Linux AIO, 4KB and 2MB block sizes, reported as both IOPS and MB/s), Dbench 4.0 (1 and 12 clients), FS-Mark 3.3 (four file/directory mixes), Compile Bench 0.6 (Read Compiled Tree, Initial Create, Compile), SQLite 3.30.1 (1/8/32/64/128 threads), and PostMark 1.51.
Configurations compared: ZFS zraid1 4xNVME Pool, ext4 mdadm raid5 4xNVME, ext4 Crucial P5 Plus 1TB NVME, ZFS zraid1 8xNVME Pool, ZFS zraid1 8xNVME Pool no Compression, ZFS mirror 8xNVME Pool. Per-test results follow.
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                 224333  (SE +/- 666.67, N = 3)
  ZFS zraid1 4xNVME Pool:                  49567  (SE +/- 185.59, N = 3)
  ZFS zraid1 8xNVME Pool:                  57433  (SE +/- 176.38, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   56167  (SE +/- 633.33, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          329667  (SE +/- 881.92, N = 3)
  ext4 mdadm raid5 4xNVME:                306667  (SE +/- 1333.33, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
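The chart's parameter string maps directly onto fio options. As a sketch (this is an assumed invocation, not the exact Phoronix Test Suite command; the directory, size, and runtime are arbitrary placeholders), the 4KB random-read workload above corresponds to roughly:

```shell
# Assumed fio invocation matching the chart parameters:
# Linux AIO engine, unbuffered direct I/O, 4KB blocks, random read.
fio --name=randread-4k \
    --directory=/mnt/testpool \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k \
    --size=1G --runtime=60 --time_based \
    --group_reporting
```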
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                    877  (SE +/- 1.86, N = 3)
  ZFS zraid1 4xNVME Pool:                    194  (SE +/- 0.88, N = 3)
  ZFS zraid1 8xNVME Pool:                    224  (SE +/- 0.67, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:     219  (SE +/- 2.67, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:            1286  (SE +/- 3.48, N = 3)
  ext4 mdadm raid5 4xNVME:                  1199  (SE +/- 5.17, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
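The IOPS and MB/s charts for the same workload are two views of one measurement: throughput is IOPS times block size, with the "MB/s" column here behaving as MiB/s (2^20 bytes). A quick arithmetic sanity check, not part of the original result file, against the 4KB random-read numbers above:

```shell
# Cross-check: derived bandwidth (IOPS x 4096 bytes / 2^20) should land within
# a couple MB/s of the reported figures for mirror 8x, zraid1 4x, and Crucial P5.
awk 'BEGIN {
    n = split("224333 49567 329667", iops, " ");
    split("877 194 1286", reported, " ");
    for (i = 1; i <= n; i++) {
        derived = iops[i] * 4096 / 1048576;   # 4 KiB blocks -> MiB/s
        printf "derived %.1f MB/s vs reported %s MB/s\n", derived, reported[i];
    }
}'
```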
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                    267
  ZFS zraid1 4xNVME Pool:                    162
  ZFS zraid1 8xNVME Pool:                    177
  ZFS zraid1 8xNVME Pool no Compression:     188
  ext4 Crucial P5 Plus 1TB NVME:             878
  ext4 mdadm raid5 4xNVME:                   324
  SE values reported (N = 3): +/- 2.67, +/- 2.00 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  68333  (SE +/- 88.19, N = 3)
  ZFS zraid1 4xNVME Pool:                  41500  (SE +/- 57.74, N = 3)
  ZFS zraid1 8xNVME Pool:                  45367  (SE +/- 33.33, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   48200  (SE +/- 57.74, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          224667  (SE +/- 666.67, N = 3)
  ext4 mdadm raid5 4xNVME:                 82967  (SE +/- 569.60, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Dbench 4.0 - 1 Clients
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                 435.70  (SE +/- 0.70, N = 3)
  ZFS zraid1 4xNVME Pool:                 410.17  (SE +/- 0.85, N = 3)
  ZFS zraid1 8xNVME Pool:                 380.35  (SE +/- 0.25, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:  458.91  (SE +/- 1.41, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          102.99  (SE +/- 0.03, N = 3)
  ext4 mdadm raid5 4xNVME:                453.17  (SE +/- 0.61, N = 3)
1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 12 Clients
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                2730.75  (SE +/- 3.38, N = 3)
  ZFS zraid1 4xNVME Pool:                2626.03  (SE +/- 5.07, N = 3)
  ZFS zraid1 8xNVME Pool:                2472.42  (SE +/- 2.03, N = 3)
  ZFS zraid1 8xNVME Pool no Compression: 2783.64  (SE +/- 16.46, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          658.79  (SE +/- 1.20, N = 3)
  ext4 mdadm raid5 4xNVME:               2484.85  (SE +/- 21.52, N = 3)
1. (CC) gcc options: -lpopt -O2
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   2073
  ZFS zraid1 4xNVME Pool:                   2034
  ZFS zraid1 8xNVME Pool:                   1999
  ZFS zraid1 8xNVME Pool no Compression:    1995
  ext4 Crucial P5 Plus 1TB NVME:            1765
  ext4 mdadm raid5 4xNVME:                  6706
  SE values reported (N = 3): +/- 18.84, +/- 18.75, +/- 25.32, +/- 23.68, +/- 3.61 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   3305
  ZFS zraid1 4xNVME Pool:                   2119
  ZFS zraid1 8xNVME Pool:                   2015
  ZFS zraid1 8xNVME Pool no Compression:    2081
  ext4 Crucial P5 Plus 1TB NVME:            1765
  ext4 mdadm raid5 4xNVME:                  6666
  SE values reported (N = 3): +/- 12.06, +/- 29.49, +/- 6.57, +/- 20.66, +/- 14.99 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   1783
  ZFS zraid1 4xNVME Pool:                   1225
  ZFS zraid1 8xNVME Pool:                   1327
  ZFS zraid1 8xNVME Pool no Compression:    1337
  ext4 Crucial P5 Plus 1TB NVME:            1623
  ext4 mdadm raid5 4xNVME:                   476
  SE values reported (N = 3): +/- 4.16, +/- 8.99, +/- 4.91, +/- 1.76 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   3573
  ZFS zraid1 4xNVME Pool:                   2458
  ZFS zraid1 8xNVME Pool:                   2661
  ZFS zraid1 8xNVME Pool no Compression:    2681
  ext4 Crucial P5 Plus 1TB NVME:            3252
  ext4 mdadm raid5 4xNVME:                   958
  SE values reported (N = 3): +/- 8.25, +/- 17.98, +/- 6.39, +/- 10.27, +/- 3.53 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads
Files/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                 1711.4  (SE +/- 2.79, N = 3)
  ZFS zraid1 4xNVME Pool:                 1632.7  (SE +/- 4.95, N = 3)
  ZFS zraid1 8xNVME Pool:                 1655.6  (SE +/- 6.18, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:  1240.0  (SE +/- 2.97, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:           522.1  (SE +/- 125.73, N = 9)
  ext4 mdadm raid5 4xNVME:                 537.9  (SE +/- 4.55, N = 12)
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   1774  (SE +/- 6.01, N = 3)
  ZFS zraid1 4xNVME Pool:                   1029  (SE +/- 2.91, N = 3)
  ZFS zraid1 8xNVME Pool:                   1186  (SE +/- 2.33, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:    1220  (SE +/- 11.98, N = 15)
  ext4 Crucial P5 Plus 1TB NVME:            1614  (SE +/- 2.85, N = 3)
  ext4 mdadm raid5 4xNVME:                   650  (SE +/- 0.88, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   3555  (SE +/- 11.93, N = 3)
  ZFS zraid1 4xNVME Pool:                   2065  (SE +/- 6.36, N = 3)
  ZFS zraid1 8xNVME Pool:                   2381  (SE +/- 4.67, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:    2446  (SE +/- 23.91, N = 15)
  ext4 Crucial P5 Plus 1TB NVME:            3236  (SE +/- 5.70, N = 3)
  ext4 mdadm raid5 4xNVME:                  1308  (SE +/- 1.73, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
FS-Mark 3.3 - Test: 1000 Files, 1MB Size
Files/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  581.1  (SE +/- 4.66, N = 3)
  ZFS zraid1 4xNVME Pool:                  646.5  (SE +/- 5.49, N = 15)
  ZFS zraid1 8xNVME Pool:                  646.7  (SE +/- 1.87, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   589.7  (SE +/- 6.13, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:           584.1  (SE +/- 13.64, N = 12)
  ext4 mdadm raid5 4xNVME:                 254.0  (SE +/- 9.50, N = 15)
FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size
Files/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  658.2  (SE +/- 5.50, N = 15)
  ZFS zraid1 4xNVME Pool:                  684.2  (SE +/- 7.90, N = 3)
  ZFS zraid1 8xNVME Pool:                  644.7  (SE +/- 3.56, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   592.1  (SE +/- 3.57, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:           435.0  (SE +/- 16.63, N = 12)
  ext4 mdadm raid5 4xNVME:                 284.7  (SE +/- 2.79, N = 3)
Compile Bench 0.6 - Test: Read Compiled Tree
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                1454.31  (SE +/- 12.22, N = 3)
  ZFS zraid1 4xNVME Pool:                1212.73  (SE +/- 8.00, N = 3)
  ZFS zraid1 8xNVME Pool:                1217.80  (SE +/- 8.89, N = 3)
  ZFS zraid1 8xNVME Pool no Compression: 1694.75  (SE +/- 19.98, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:         2747.01  (SE +/- 8.94, N = 3)
  ext4 mdadm raid5 4xNVME:               2642.03  (SE +/- 42.24, N = 3)
SQLite 3.30.1 - Threads / Copies: 8
Seconds, Fewer Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  9.043  (SE +/- 0.057, N = 3)
  ZFS zraid1 4xNVME Pool:                 11.267  (SE +/- 0.079, N = 3)
  ZFS zraid1 8xNVME Pool:                 12.857  (SE +/- 0.174, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:  12.827  (SE +/- 0.170, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          15.767  (SE +/- 0.009, N = 3)
  ext4 mdadm raid5 4xNVME:                20.005  (SE +/- 0.047, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Compile Bench 0.6 - Test: Initial Create
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                 216.21  (SE +/- 1.18, N = 3)
  ZFS zraid1 4xNVME Pool:                 222.04  (SE +/- 1.57, N = 3)
  ZFS zraid1 8xNVME Pool:                 221.83  (SE +/- 0.50, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:  206.77  (SE +/- 1.46, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          423.67  (SE +/- 1.23, N = 3)
  ext4 mdadm raid5 4xNVME:                411.07  (SE +/- 0.77, N = 3)
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                    771  (SE +/- 0.67, N = 3)
  ZFS zraid1 4xNVME Pool:                    670  (SE +/- 1.86, N = 3)
  ZFS zraid1 8xNVME Pool:                    665  (SE +/- 1.67, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:     682  (SE +/- 2.08, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:             891  (SE +/- 3.93, N = 3)
  ext4 mdadm raid5 4xNVME:                   449  (SE +/- 1.15, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                 197333  (SE +/- 333.33, N = 3)
  ZFS zraid1 4xNVME Pool:                 171333  (SE +/- 666.67, N = 3)
  ZFS zraid1 8xNVME Pool:                 170667  (SE +/- 333.33, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:  174667  (SE +/- 333.33, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          228000  (SE +/- 1000.00, N = 3)
  ext4 mdadm raid5 4xNVME:                115000  (SE +/- 577.35, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   6617
  ZFS zraid1 4xNVME Pool:                   4245
  ZFS zraid1 8xNVME Pool:                   4037
  ZFS zraid1 8xNVME Pool no Compression:    4170
  ext4 Crucial P5 Plus 1TB NVME:            3538
  ext4 mdadm raid5 4xNVME: no result reported for this chart
  SE values reported (N = 3): +/- 23.92, +/- 58.64, +/- 13.37, +/- 41.66 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
SQLite 3.30.1 - Threads / Copies: 32
Seconds, Fewer Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  17.74  (SE +/- 0.05, N = 3)
  ZFS zraid1 4xNVME Pool:                  23.19  (SE +/- 0.08, N = 3)
  ZFS zraid1 8xNVME Pool:                  27.00  (SE +/- 0.04, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   26.89  (SE +/- 0.07, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:           27.58  (SE +/- 0.00, N = 3)
  ext4 mdadm raid5 4xNVME:                 31.79  (SE +/- 0.01, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
PostMark 1.51 - Disk Transaction Performance
TPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   3318
  ZFS zraid1 4xNVME Pool:                   3289
  ZFS zraid1 8xNVME Pool:                   3275
  ZFS zraid1 8xNVME Pool no Compression:    3275
  ext4 Crucial P5 Plus 1TB NVME:            5137
  ext4 mdadm raid5 4xNVME:                  5068
  SE values reported (N = 3): +/- 14.67, +/- 29.00, +/- 14.33, +/- 35.33, +/- 34.00 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -O3
FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync
Files/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                 1338.4  (SE +/- 7.95, N = 3)
  ZFS zraid1 4xNVME Pool:                 1359.3  (SE +/- 12.01, N = 15)
  ZFS zraid1 8xNVME Pool:                 1340.7  (SE +/- 10.88, N = 15)
  ZFS zraid1 8xNVME Pool no Compression:  1160.9  (SE +/- 13.85, N = 4)
  ext4 Crucial P5 Plus 1TB NVME:          1791.6  (SE +/- 16.52, N = 3)
  ext4 mdadm raid5 4xNVME:                1733.3  (SE +/- 19.84, N = 4)
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
IOPS, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                 332000  (SE +/- 2516.61, N = 3)
  ZFS zraid1 4xNVME Pool:                 329333  (SE +/- 1855.92, N = 3)
  ZFS zraid1 8xNVME Pool:                 326333  (SE +/- 3382.96, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:  331000  (SE +/- 1000.00, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:          335667  (SE +/- 4666.67, N = 3)
  ext4 mdadm raid5 4xNVME:                221333  (SE +/- 2728.45, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   1296  (SE +/- 9.94, N = 3)
  ZFS zraid1 4xNVME Pool:                   1287  (SE +/- 7.09, N = 3)
  ZFS zraid1 8xNVME Pool:                   1275  (SE +/- 13.25, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:    1294  (SE +/- 4.84, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:            1311  (SE +/- 17.89, N = 3)
  ext4 mdadm raid5 4xNVME:                   865  (SE +/- 11.46, N = 3)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
SQLite 3.30.1 - Threads / Copies: 128
Seconds, Fewer Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  77.88  (SE +/- 0.16, N = 3)
  ZFS zraid1 4xNVME Pool:                  76.09  (SE +/- 0.10, N = 3)
  ZFS zraid1 8xNVME Pool:                  78.15  (SE +/- 0.10, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   77.05  (SE +/- 0.27, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:           59.89  (SE +/- 0.03, N = 3)
  ext4 mdadm raid5 4xNVME:                 80.15  (SE +/- 0.28, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
SQLite 3.30.1 - Threads / Copies: 64
Seconds, Fewer Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  36.60  (SE +/- 0.08, N = 3)
  ZFS zraid1 4xNVME Pool:                  37.45  (SE +/- 0.01, N = 3)
  ZFS zraid1 8xNVME Pool:                  40.30  (SE +/- 0.05, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   40.00  (SE +/- 0.01, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:           40.98  (SE +/- 0.01, N = 3)
  ext4 mdadm raid5 4xNVME:                 45.72  (SE +/- 0.13, N = 3)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
SQLite 3.30.1 - Threads / Copies: 1
Seconds, Fewer Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                  7.252  (SE +/- 0.055, N = 3)
  ZFS zraid1 4xNVME Pool:                  7.852  (SE +/- 0.071, N = 7)
  ZFS zraid1 8xNVME Pool:                  8.380  (SE +/- 0.096, N = 3)
  ZFS zraid1 8xNVME Pool no Compression:   8.379  (SE +/- 0.104, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:           8.367  (SE +/- 0.067, N = 3)
  ext4 mdadm raid5 4xNVME:                 8.810  (SE +/- 0.069, N = 10)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Compile Bench 0.6 - Test: Compile
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                1408.42  (SE +/- 9.96, N = 3)
  ZFS zraid1 4xNVME Pool:                1398.57  (SE +/- 6.06, N = 3)
  ZFS zraid1 8xNVME Pool:                1390.71  (SE +/- 7.47, N = 3)
  ZFS zraid1 8xNVME Pool no Compression: 1247.36  (SE +/- 3.89, N = 3)
  ext4 Crucial P5 Plus 1TB NVME:         1483.07  (SE +/- 0.00, N = 3)
  ext4 mdadm raid5 4xNVME:               1478.00  (SE +/- 11.73, N = 3)
Flexible IO Tester 3.29 - Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
MB/s, More Is Better (OpenBenchmarking.org)
  ZFS mirror 8xNVME Pool:                   4154
  ZFS zraid1 4xNVME Pool:                   4075
  ZFS zraid1 8xNVME Pool:                   4005
  ZFS zraid1 8xNVME Pool no Compression:    3998
  ext4 Crucial P5 Plus 1TB NVME:            3538
  ext4 mdadm raid5 4xNVME: no result reported for this chart
  SE values reported (N = 3): +/- 37.36, +/- 37.36, +/- 50.44, +/- 47.58 (per-configuration pairing not preserved in the export)
1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Phoronix Test Suite v10.8.5