pts-disk-different-nvmes AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211135-NE-PTSDISKDI63&grw .
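This result file is public, so the same test selection can be fetched and re-run locally for side-by-side comparison (assumes a working Phoronix Test Suite install; the benchmarks are downloaded and built on first run):

    phoronix-test-suite benchmark 2211135-NE-PTSDISKDI63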
pts-disk-different-nvmes - System Details

Common hardware/software across all five configurations:
  Processor:         AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
  Motherboard:       Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
  Chipset:           AMD 17h
  Memory:            64GB
  Disk:              Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
  Graphics:          NVIDIA Quadro P400
  Audio:             NVIDIA GP107GL HD Audio
  Monitor:           DELL S2340T
  Network:           4 x Intel I350 + Intel 8265 / 8275
  OS:                Debian 11
  Kernel:            5.10.0-19-amd64 (x86_64)
  Compiler:          GCC 10.2.1 20210110
  Screen Resolution: 1920x1080

Configurations compared (File-System: zfs for the three ZFS pool configurations, ext4 for the two ext4 configurations):
  1. ZFS zraid1 4xNVME Pool
  2. ext4 mdadm raid5 4xNVME
  3. ext4 Crucial P5 Plus 1TB NVME
  4. ZFS zraid1 8xNVME Pool
  5. ZFS zraid1 8xNVME Pool no Compression

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8001137
Disk Scheduler Details: ZFS zraid1 4xNVME Pool, ZFS zraid1 8xNVME Pool, ZFS zraid1 8xNVME Pool no Compression: NONE
Python Details: Python 3.9.2
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT vulnerable; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
Disk Details:
  ext4 mdadm raid5 4xNVME:       NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0]; Block Size: 4096
  ext4 Crucial P5 Plus 1TB NVME: NONE / relatime,rw; Block Size: 4096
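The export does not record the commands used to build these storage layouts. Based on the disk details above (mdadm raid5 over nvme1n1p1-nvme4n1p1 mounted relatime with stripe=384; "zraid1" pools, i.e. ZFS raidz1, over four or eight of the WD_BLACK SN770 drives, one variant with compression disabled), a rough shell sketch of equivalent setups follows. The array/pool names, mount points, and the whole-disk device names for the ZFS pools are illustrative assumptions, not taken from this result:

    # ext4 on mdadm raid5 over the four NVMe partitions named in the disk details
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
    mkfs.ext4 /dev/md0                       # 4096-byte block size matches "Block Size: 4096"
    mount -o relatime /dev/md0 /mnt/raid5    # mount point assumed

    # ZFS raidz1 ("zraid1") pool over four drives; pool name and member devices assumed
    zpool create tank raidz1 nvme1n1 nvme2n1 nvme3n1 nvme4n1
    # the 8xNVME pool configurations would list eight member drives instead

    # "no Compression" variant: explicitly disable dataset compression
    zfs set compression=off tank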
pts-disk-different-nvmes - Results Overview

All fio rows use IO Engine: Linux AIO; Buffered: No; Direct: Yes; Disk Target: Default Test Directory. Units: compilebench and dbench rows are MB/s; fio rows are MB/s or IOPS as labeled; fs-mark rows are Files/s; postmark is TPS; sqlite rows are seconds (fewer is better; all other rows, more is better). A "-" marks a value absent from this export.

Column key:
  A = ZFS zraid1 4xNVME Pool
  B = ext4 mdadm raid5 4xNVME
  C = ext4 Crucial P5 Plus 1TB NVME
  D = ZFS zraid1 8xNVME Pool
  E = ZFS zraid1 8xNVME Pool no Compression

  Test                                                 A          B          C          D          E
  compilebench: Compile                          1398.57    1478.00    1483.07    1390.71    1247.36
  compilebench: Initial Create                    222.04     411.07     423.67     221.83     206.77
  compilebench: Read Compiled Tree               1212.73    2642.03    2747.01    1217.80    1694.75
  dbench: 12 Clients                             2626.03    2484.85     658.794   2472.42    2783.64
  dbench: 1 Client                                410.171    453.170    102.985    380.354    458.913
  fio: Rand Read, 2MB (MB/s)                        4245          -       3538       4037       4170
  fio: Rand Read, 2MB (IOPS)                        2119       6666       1765       2015       2081
  fio: Rand Read, 4KB (MB/s)                         194       1199       1286        224        219
  fio: Rand Read, 4KB (IOPS)                       49567     306667     329667      57433      56167
  fio: Rand Write, 2MB (MB/s)                       2458        958       3252       2661       2681
  fio: Rand Write, 2MB (IOPS)                       1225        476       1623       1327       1337
  fio: Rand Write, 4KB (MB/s)                        162        324        878        177        188
  fio: Rand Write, 4KB (IOPS)                      41500      82967     224667      45367      48200
  fio: Seq Read, 2MB (MB/s)                         4075          -       3538       4005       3998
  fio: Seq Read, 2MB (IOPS)                         2034       6706       1765       1999       1995
  fio: Seq Read, 4KB (MB/s)                         1287        865       1311       1275       1294
  fio: Seq Read, 4KB (IOPS)                       329333     221333     335667     326333     331000
  fio: Seq Write, 2MB (MB/s)                        2065       1308       3236       2381       2446
  fio: Seq Write, 2MB (IOPS)                        1029        650       1614       1186       1220
  fio: Seq Write, 4KB (MB/s)                         670        449        891        665        682
  fio: Seq Write, 4KB (IOPS)                      171333     115000     228000     170667     174667
  fs-mark: 1000 Files, 1MB Size                    646.5      254.0      584.1      646.7      589.7
  fs-mark: 5000 Files, 1MB Size, 4 Threads        1632.7      537.9      522.1     1655.6     1240.0
  fs-mark: 4000 Files, 32 Sub Dirs, 1MB Size       684.2      284.7      435.0      644.7      592.1
  fs-mark: 1000 Files, 1MB Size, No Sync/FSync    1359.3     1733.3     1791.6     1340.7     1160.9
  postmark: Disk Transaction Performance            3289       5068       5137       3275       3275
  sqlite: 1                                        7.852      8.810      8.367      8.380      8.379
  sqlite: 8                                       11.267     20.005     15.767     12.857     12.827
  sqlite: 32                                      23.191     31.793     27.578     26.999     26.892
  sqlite: 64                                      37.448     45.720     40.979     40.301     40.003
  sqlite: 128                                     76.088     80.145     59.890     78.153     77.047
Compile Bench 0.6 - Test: Compile (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                   1398.57   SE +/- 6.06, N = 3
  ext4 mdadm raid5 4xNVME                  1478.00   SE +/- 11.73, N = 3
  ext4 Crucial P5 Plus 1TB NVME            1483.07   SE +/- 0.00, N = 3
  ZFS zraid1 8xNVME Pool                   1390.71   SE +/- 7.47, N = 3
  ZFS zraid1 8xNVME Pool no Compression    1247.36   SE +/- 3.89, N = 3

Compile Bench 0.6 - Test: Initial Create (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                    222.04   SE +/- 1.57, N = 3
  ext4 mdadm raid5 4xNVME                   411.07   SE +/- 0.77, N = 3
  ext4 Crucial P5 Plus 1TB NVME             423.67   SE +/- 1.23, N = 3
  ZFS zraid1 8xNVME Pool                    221.83   SE +/- 0.50, N = 3
  ZFS zraid1 8xNVME Pool no Compression     206.77   SE +/- 1.46, N = 3

Compile Bench 0.6 - Test: Read Compiled Tree (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                   1212.73   SE +/- 8.00, N = 3
  ext4 mdadm raid5 4xNVME                  2642.03   SE +/- 42.24, N = 3
  ext4 Crucial P5 Plus 1TB NVME            2747.01   SE +/- 8.94, N = 3
  ZFS zraid1 8xNVME Pool                   1217.80   SE +/- 8.89, N = 3
  ZFS zraid1 8xNVME Pool no Compression    1694.75   SE +/- 19.98, N = 3
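Compile Bench approximates the directory- and small-file behaviour of kernel compiles (creating, compiling, and reading simulated object trees). The exact arguments PTS passes are not part of this export; run standalone, compilebench takes a working directory plus initial-tree and run counts, along these lines (path and counts illustrative):

    # -D working directory, -i initial trees to create, -r runs to perform
    compilebench -D /mnt/test -i 10 -r 30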
Dbench 4.0 - 12 Clients (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                   2626.03   SE +/- 5.07, N = 3
  ext4 mdadm raid5 4xNVME                  2484.85   SE +/- 21.52, N = 3
  ext4 Crucial P5 Plus 1TB NVME             658.79   SE +/- 1.20, N = 3
  ZFS zraid1 8xNVME Pool                   2472.42   SE +/- 2.03, N = 3
  ZFS zraid1 8xNVME Pool no Compression    2783.64   SE +/- 16.46, N = 3

Dbench 4.0 - 1 Client (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                    410.17   SE +/- 0.85, N = 3
  ext4 mdadm raid5 4xNVME                   453.17   SE +/- 0.61, N = 3
  ext4 Crucial P5 Plus 1TB NVME             102.99   SE +/- 0.03, N = 3
  ZFS zraid1 8xNVME Pool                    380.35   SE +/- 0.25, N = 3
  ZFS zraid1 8xNVME Pool no Compression     458.91   SE +/- 1.41, N = 3

Both Dbench results: 1. (CC) gcc options: -lpopt -O2

All Flexible IO Tester 3.29 results below share IO Engine: Linux AIO - Buffered: No - Direct: Yes - Disk Target: Default Test Directory, and a single build footnote: 1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
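The fio parameters in the headings below map one-to-one onto fio job options. PTS's generated job files are not included in this export; a minimal hand-written equivalent of the first test (random read, 2MB blocks, Linux AIO, direct/unbuffered I/O) might look like the following, where the directory, size, and runtime values are assumptions:

    # randread-2m.fio -- illustrative job file, not the one PTS generated
    [global]
    ioengine=libaio
    # "Buffered: No - Direct: Yes"
    direct=1
    buffered=0
    # "Disk Target: Default Test Directory" (path assumed)
    directory=/mnt/test
    size=1g
    runtime=30
    time_based

    [randread-2m]
    rw=randread
    bs=2m

    # run with: fio randread-2m.fio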
Flexible IO Tester 3.29 - Type: Random Read - Block Size: 2MB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                      4245
  ext4 Crucial P5 Plus 1TB NVME               3538
  ZFS zraid1 8xNVME Pool                      4037
  ZFS zraid1 8xNVME Pool no Compression       4170
  No MB/s value was exported for ext4 mdadm raid5 4xNVME. SE values as exported (per-bar pairing not preserved; N = 3): +/- 58.64, +/- 13.37, +/- 41.66.
Flexible IO Tester 3.29 - Type: Random Read - Block Size: 2MB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                      2119
  ext4 mdadm raid5 4xNVME                     6666
  ext4 Crucial P5 Plus 1TB NVME               1765
  ZFS zraid1 8xNVME Pool                      2015
  ZFS zraid1 8xNVME Pool no Compression       2081
  SE values as exported (four SEs for five results; per-bar pairing not preserved; N = 3): +/- 29.49, +/- 14.99, +/- 6.57, +/- 20.66.

Flexible IO Tester 3.29 - Type: Random Read - Block Size: 4KB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                       194   SE +/- 0.88, N = 3
  ext4 mdadm raid5 4xNVME                     1199   SE +/- 5.17, N = 3
  ext4 Crucial P5 Plus 1TB NVME               1286   SE +/- 3.48, N = 3
  ZFS zraid1 8xNVME Pool                       224   SE +/- 0.67, N = 3
  ZFS zraid1 8xNVME Pool no Compression        219   SE +/- 2.67, N = 3

Flexible IO Tester 3.29 - Type: Random Read - Block Size: 4KB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                     49567   SE +/- 185.59, N = 3
  ext4 mdadm raid5 4xNVME                   306667   SE +/- 1333.33, N = 3
  ext4 Crucial P5 Plus 1TB NVME             329667   SE +/- 881.92, N = 3
  ZFS zraid1 8xNVME Pool                     57433   SE +/- 176.38, N = 3
  ZFS zraid1 8xNVME Pool no Compression      56167   SE +/- 633.33, N = 3
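As a consistency check, each fio MB/s chart and its IOPS companion describe the same run, related by throughput = IOPS x block size. Taking the ZFS zraid1 4xNVME random-read figures above (the close agreement suggests the exported "MB/s" is binary, i.e. MiB/s):

    4KB: 49,567 IOPS x 4,096 B = 203,026,432 B/s ~ 193.6 MiB/s   (chart: 194)
    2MB:  2,119 IOPS x 2 MiB                     ~ 4,238 MiB/s   (chart: 4,245; gap within rounding and run variance)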
Flexible IO Tester 3.29 - Type: Random Write - Block Size: 2MB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                      2458
  ext4 mdadm raid5 4xNVME                      958
  ext4 Crucial P5 Plus 1TB NVME               3252
  ZFS zraid1 8xNVME Pool                      2661
  ZFS zraid1 8xNVME Pool no Compression       2681
  SE values as exported (four SEs for five results; per-bar pairing not preserved; N = 3): +/- 17.98, +/- 3.53, +/- 10.27, +/- 6.39.

Flexible IO Tester 3.29 - Type: Random Write - Block Size: 2MB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                      1225
  ext4 mdadm raid5 4xNVME                      476
  ext4 Crucial P5 Plus 1TB NVME               1623
  ZFS zraid1 8xNVME Pool                      1327
  ZFS zraid1 8xNVME Pool no Compression       1337
  SE values as exported (three SEs for five results; per-bar pairing not preserved; N = 3): +/- 8.99, +/- 1.76, +/- 4.91.

Flexible IO Tester 3.29 - Type: Random Write - Block Size: 4KB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                       162
  ext4 mdadm raid5 4xNVME                      324
  ext4 Crucial P5 Plus 1TB NVME                878
  ZFS zraid1 8xNVME Pool                       177
  ZFS zraid1 8xNVME Pool no Compression        188
  SE values as exported (two SEs for five results; per-bar pairing not preserved; N = 3): +/- 2.00, +/- 2.67.

Flexible IO Tester 3.29 - Type: Random Write - Block Size: 4KB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                     41500   SE +/- 57.74, N = 3
  ext4 mdadm raid5 4xNVME                    82967   SE +/- 569.60, N = 3
  ext4 Crucial P5 Plus 1TB NVME             224667   SE +/- 666.67, N = 3
  ZFS zraid1 8xNVME Pool                     45367   SE +/- 33.33, N = 3
  ZFS zraid1 8xNVME Pool no Compression      48200   SE +/- 57.74, N = 3
Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 2MB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                      4075
  ext4 Crucial P5 Plus 1TB NVME               3538
  ZFS zraid1 8xNVME Pool                      4005
  ZFS zraid1 8xNVME Pool no Compression       3998
  No MB/s value was exported for ext4 mdadm raid5 4xNVME. SE values as exported (per-bar pairing not preserved; N = 3): +/- 37.36, +/- 50.44, +/- 47.58.

Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 2MB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                      2034
  ext4 mdadm raid5 4xNVME                     6706
  ext4 Crucial P5 Plus 1TB NVME               1765
  ZFS zraid1 8xNVME Pool                      1999
  ZFS zraid1 8xNVME Pool no Compression       1995
  SE values as exported (four SEs for five results; per-bar pairing not preserved; N = 3): +/- 18.75, +/- 3.61, +/- 25.32, +/- 23.68.

Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 4KB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                      1287   SE +/- 7.09, N = 3
  ext4 mdadm raid5 4xNVME                      865   SE +/- 11.46, N = 3
  ext4 Crucial P5 Plus 1TB NVME               1311   SE +/- 17.89, N = 3
  ZFS zraid1 8xNVME Pool                      1275   SE +/- 13.25, N = 3
  ZFS zraid1 8xNVME Pool no Compression       1294   SE +/- 4.84, N = 3

Flexible IO Tester 3.29 - Type: Sequential Read - Block Size: 4KB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                    329333   SE +/- 1855.92, N = 3
  ext4 mdadm raid5 4xNVME                   221333   SE +/- 2728.45, N = 3
  ext4 Crucial P5 Plus 1TB NVME             335667   SE +/- 4666.67, N = 3
  ZFS zraid1 8xNVME Pool                    326333   SE +/- 3382.96, N = 3
  ZFS zraid1 8xNVME Pool no Compression     331000   SE +/- 1000.00, N = 3
Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 2MB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                      2065   SE +/- 6.36, N = 3
  ext4 mdadm raid5 4xNVME                     1308   SE +/- 1.73, N = 3
  ext4 Crucial P5 Plus 1TB NVME               3236   SE +/- 5.70, N = 3
  ZFS zraid1 8xNVME Pool                      2381   SE +/- 4.67, N = 3
  ZFS zraid1 8xNVME Pool no Compression       2446   SE +/- 23.91, N = 15

Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 2MB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                      1029   SE +/- 2.91, N = 3
  ext4 mdadm raid5 4xNVME                      650   SE +/- 0.88, N = 3
  ext4 Crucial P5 Plus 1TB NVME               1614   SE +/- 2.85, N = 3
  ZFS zraid1 8xNVME Pool                      1186   SE +/- 2.33, N = 3
  ZFS zraid1 8xNVME Pool no Compression       1220   SE +/- 11.98, N = 15

Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 4KB (MB/s; more is better)
  ZFS zraid1 4xNVME Pool                       670   SE +/- 1.86, N = 3
  ext4 mdadm raid5 4xNVME                      449   SE +/- 1.15, N = 3
  ext4 Crucial P5 Plus 1TB NVME                891   SE +/- 3.93, N = 3
  ZFS zraid1 8xNVME Pool                       665   SE +/- 1.67, N = 3
  ZFS zraid1 8xNVME Pool no Compression        682   SE +/- 2.08, N = 3

Flexible IO Tester 3.29 - Type: Sequential Write - Block Size: 4KB (IOPS; more is better)
  ZFS zraid1 4xNVME Pool                    171333   SE +/- 666.67, N = 3
  ext4 mdadm raid5 4xNVME                   115000   SE +/- 577.35, N = 3
  ext4 Crucial P5 Plus 1TB NVME             228000   SE +/- 1000.00, N = 3
  ZFS zraid1 8xNVME Pool                    170667   SE +/- 333.33, N = 3
  ZFS zraid1 8xNVME Pool no Compression     174667   SE +/- 333.33, N = 3
FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s; more is better)
  ZFS zraid1 4xNVME Pool                     646.5   SE +/- 5.49, N = 15
  ext4 mdadm raid5 4xNVME                    254.0   SE +/- 9.50, N = 15
  ext4 Crucial P5 Plus 1TB NVME              584.1   SE +/- 13.64, N = 12
  ZFS zraid1 8xNVME Pool                     646.7   SE +/- 1.87, N = 3
  ZFS zraid1 8xNVME Pool no Compression      589.7   SE +/- 6.13, N = 3

FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s; more is better)
  ZFS zraid1 4xNVME Pool                    1632.7   SE +/- 4.95, N = 3
  ext4 mdadm raid5 4xNVME                    537.9   SE +/- 4.55, N = 12
  ext4 Crucial P5 Plus 1TB NVME              522.1   SE +/- 125.73, N = 9
  ZFS zraid1 8xNVME Pool                    1655.6   SE +/- 6.18, N = 3
  ZFS zraid1 8xNVME Pool no Compression     1240.0   SE +/- 2.97, N = 3

FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s; more is better)
  ZFS zraid1 4xNVME Pool                     684.2   SE +/- 7.90, N = 3
  ext4 mdadm raid5 4xNVME                    284.7   SE +/- 2.79, N = 3
  ext4 Crucial P5 Plus 1TB NVME              435.0   SE +/- 16.63, N = 12
  ZFS zraid1 8xNVME Pool                     644.7   SE +/- 3.56, N = 3
  ZFS zraid1 8xNVME Pool no Compression      592.1   SE +/- 3.57, N = 3

FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync (Files/s; more is better)
  ZFS zraid1 4xNVME Pool                    1359.3   SE +/- 12.01, N = 15
  ext4 mdadm raid5 4xNVME                   1733.3   SE +/- 19.84, N = 4
  ext4 Crucial P5 Plus 1TB NVME             1791.6   SE +/- 16.52, N = 3
  ZFS zraid1 8xNVME Pool                    1340.7   SE +/- 10.88, N = 15
  ZFS zraid1 8xNVME Pool no Compression     1160.9   SE +/- 13.85, N = 4
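The four FS-Mark scenarios correspond to fs_mark's file-count, file-size, thread, sub-directory, and sync flags. A rough standalone mapping for illustration (target directory assumed; -s takes bytes; reading "No Sync/FSync" as -S 0 is an assumption about the PTS label):

    fs_mark -d /mnt/test -n 1000 -s 1048576          # 1000 Files, 1MB Size
    fs_mark -d /mnt/test -n 5000 -s 1048576 -t 4     # 5000 Files, 1MB Size, 4 Threads
    fs_mark -d /mnt/test -n 4000 -s 1048576 -D 32    # 4000 Files, 32 Sub Dirs, 1MB Size
    fs_mark -d /mnt/test -n 1000 -s 1048576 -S 0     # 1000 Files, 1MB Size, No Sync/FSync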
PostMark 1.51 - Disk Transaction Performance (TPS; more is better)
  ZFS zraid1 4xNVME Pool                      3289
  ext4 mdadm raid5 4xNVME                     5068
  ext4 Crucial P5 Plus 1TB NVME               5137
  ZFS zraid1 8xNVME Pool                      3275
  ZFS zraid1 8xNVME Pool no Compression       3275
  SE values as exported (four SEs for five results; per-bar pairing not preserved; N = 3): +/- 34.00, +/- 35.33, +/- 29.00, +/- 14.33.
  1. (CC) gcc options: -O3
SQLite 3.30.1 - Threads / Copies: 1 (Seconds; fewer is better)
  ZFS zraid1 4xNVME Pool                     7.852   SE +/- 0.071, N = 7
  ext4 mdadm raid5 4xNVME                    8.810   SE +/- 0.069, N = 10
  ext4 Crucial P5 Plus 1TB NVME              8.367   SE +/- 0.067, N = 3
  ZFS zraid1 8xNVME Pool                     8.380   SE +/- 0.096, N = 3
  ZFS zraid1 8xNVME Pool no Compression      8.379   SE +/- 0.104, N = 3

SQLite 3.30.1 - Threads / Copies: 8 (Seconds; fewer is better)
  ZFS zraid1 4xNVME Pool                     11.27   SE +/- 0.08, N = 3
  ext4 mdadm raid5 4xNVME                    20.01   SE +/- 0.05, N = 3
  ext4 Crucial P5 Plus 1TB NVME              15.77   SE +/- 0.01, N = 3
  ZFS zraid1 8xNVME Pool                     12.86   SE +/- 0.17, N = 3
  ZFS zraid1 8xNVME Pool no Compression      12.83   SE +/- 0.17, N = 3

SQLite 3.30.1 - Threads / Copies: 32 (Seconds; fewer is better)
  ZFS zraid1 4xNVME Pool                     23.19   SE +/- 0.08, N = 3
  ext4 mdadm raid5 4xNVME                    31.79   SE +/- 0.01, N = 3
  ext4 Crucial P5 Plus 1TB NVME              27.58   SE +/- 0.00, N = 3
  ZFS zraid1 8xNVME Pool                     27.00   SE +/- 0.04, N = 3
  ZFS zraid1 8xNVME Pool no Compression      26.89   SE +/- 0.07, N = 3

SQLite 3.30.1 - Threads / Copies: 64 (Seconds; fewer is better)
  ZFS zraid1 4xNVME Pool                     37.45   SE +/- 0.01, N = 3
  ext4 mdadm raid5 4xNVME                    45.72   SE +/- 0.13, N = 3
  ext4 Crucial P5 Plus 1TB NVME              40.98   SE +/- 0.01, N = 3
  ZFS zraid1 8xNVME Pool                     40.30   SE +/- 0.05, N = 3
  ZFS zraid1 8xNVME Pool no Compression      40.00   SE +/- 0.01, N = 3

SQLite 3.30.1 - Threads / Copies: 128 (Seconds; fewer is better)
  ZFS zraid1 4xNVME Pool                     76.09   SE +/- 0.10, N = 3
  ext4 mdadm raid5 4xNVME                    80.15   SE +/- 0.28, N = 3
  ext4 Crucial P5 Plus 1TB NVME              59.89   SE +/- 0.03, N = 3
  ZFS zraid1 8xNVME Pool                     78.15   SE +/- 0.10, N = 3
  ZFS zraid1 8xNVME Pool no Compression      77.05   SE +/- 0.27, N = 3

All SQLite results: 1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Phoronix Test Suite v10.8.5