pts-disk-different-nvmes: AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211146-NE-PTSDISKDI38&grr .
Test configurations compared:
  ZFS raidz1 4xNVME
  ext4 soft raid5 4xNVME
  ext4 Crucial P5 Plus 1TB NVME
  ZFS raidz1 8xNVME
  ZFS raidz1 8xNVME no Compression
  ZFS mirror 8xNVME
  ext4 soft raid5 8xNVME

System under test (common to all configurations):
  Processor: AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
  Motherboard: Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
  Chipset: AMD 17h
  Memory: 64GB
  Disk: Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
  Graphics: NVIDIA Quadro P400
  Audio: NVIDIA GP107GL HD Audio
  Monitor: DELL S2340T
  Network: 4 x Intel I350 + Intel 8265 / 8275
  OS: Debian 11
  Kernel: 5.10.0-19-amd64 (x86_64)
  Compiler: GCC 10.2.1 20210110
  File-System: zfs on the ZFS configurations, ext4 on the ext4 configurations
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8001137
Disk Scheduler Details: ZFS raidz1 4xNVME, ZFS raidz1 8xNVME, ZFS raidz1 8xNVME no Compression, ZFS mirror 8xNVME: NONE
Python Details: Python 3.9.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Disk Details:
  ext4 soft raid5 4xNVME: NONE / relatime,rw,stripe=384 / raid5 nvme4n1p1[4] nvme3n1p1[2] nvme2n1p1[1] nvme1n1p1[0] / Block Size: 4096
  ext4 Crucial P5 Plus 1TB NVME: NONE / relatime,rw / Block Size: 4096
  ext4 soft raid5 8xNVME: NONE / relatime,rw,stripe=896 / raid5 nvme9n1[8] nvme8n1[6] nvme7n1[5] nvme6n1[4] nvme4n1[3] nvme3n1[2] nvme2n1[1] nvme1n1[0] / Block Size: 4096
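The stripe=384 and stripe=896 mount options on the two ext4 software RAID5 volumes follow from the array geometry: ext4's stripe width is data disks x chunk size / filesystem block size. The export does not state the md chunk size, so the 512 KiB value below is an assumption (it is mdadm's default); a minimal Python check:

    # Relates the ext4 "stripe=" mount option above to the md RAID5 geometry.
    # Assumption: mdadm's default 512 KiB chunk size (not stated in the report).

    def ext4_stripe_width(total_disks, chunk_kib=512, fs_block_kib=4):
        """Stripe width in filesystem blocks = data disks * chunk size / block size."""
        data_disks = total_disks - 1  # RAID5 spends one disk's worth of capacity on parity
        return data_disks * chunk_kib // fs_block_kib

    print(ext4_stripe_width(4))  # 384 -> matches stripe=384 on the 4-drive ext4 soft raid5
    print(ext4_stripe_width(8))  # 896 -> matches stripe=896 on the 8-drive ext4 soft raid5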
Benchmarks included in this comparison (per-configuration results, with standard errors, appear in the detailed sections below):
  Dbench 4.0: 12 Clients; 1 Clients
  FS-Mark 3.3: 5000 Files, 1MB Size, 4 Threads; 4000 Files, 32 Sub Dirs, 1MB Size; 1000 Files, 1MB Size, No Sync/FSync; 1000 Files, 1MB Size
  SQLite 3.30.1: Threads / Copies: 1, 8, 32, 64, 128
  PostMark 1.51: Disk Transaction Performance
  Flexible IO Tester 3.29: Sequential Read, Random Read, Sequential Write, and Random Write with the Linux AIO engine (Buffered: No, Direct: Yes) at 4KB and 2MB block sizes against the Default Test Directory, with both IOPS and MB/s reported
  Compile Bench 0.6: Compile; Read Compiled Tree; Initial Create
Dbench 4.0 - 12 Clients (MB/s; more is better)
  ZFS raidz1 4xNVME: 2626.03 (SE +/- 5.07, N = 3)
  ext4 soft raid5 4xNVME: 2484.85 (SE +/- 21.52, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 658.79 (SE +/- 1.20, N = 3)
  ZFS raidz1 8xNVME: 2472.42 (SE +/- 2.03, N = 3)
  ZFS raidz1 8xNVME no Compression: 2783.64 (SE +/- 16.46, N = 3)
  ZFS mirror 8xNVME: 2730.75 (SE +/- 3.38, N = 3)
  ext4 soft raid5 8xNVME: 2330.16 (SE +/- 16.88, N = 3)
  1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 1 Clients (MB/s; more is better)
  ZFS raidz1 4xNVME: 410.17 (SE +/- 0.85, N = 3)
  ext4 soft raid5 4xNVME: 453.17 (SE +/- 0.61, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 102.99 (SE +/- 0.03, N = 3)
  ZFS raidz1 8xNVME: 380.35 (SE +/- 0.25, N = 3)
  ZFS raidz1 8xNVME no Compression: 458.91 (SE +/- 1.41, N = 3)
  ZFS mirror 8xNVME: 435.70 (SE +/- 0.70, N = 3)
  ext4 soft raid5 8xNVME: 423.83 (SE +/- 1.02, N = 3)
  1. (CC) gcc options: -lpopt -O2
FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s; more is better)
  ZFS raidz1 4xNVME: 1632.7 (SE +/- 4.95, N = 3)
  ext4 soft raid5 4xNVME: 537.9 (SE +/- 4.55, N = 12)
  ext4 Crucial P5 Plus 1TB NVME: 522.1 (SE +/- 125.73, N = 9)
  ZFS raidz1 8xNVME: 1655.6 (SE +/- 6.18, N = 3)
  ZFS raidz1 8xNVME no Compression: 1240.0 (SE +/- 2.97, N = 3)
  ZFS mirror 8xNVME: 1711.4 (SE +/- 2.79, N = 3)
  ext4 soft raid5 8xNVME: 570.0 (SE +/- 4.70, N = 3)
FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s; more is better)
  ZFS raidz1 4xNVME: 684.2 (SE +/- 7.90, N = 3)
  ext4 soft raid5 4xNVME: 284.7 (SE +/- 2.79, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 435.0 (SE +/- 16.63, N = 12)
  ZFS raidz1 8xNVME: 644.7 (SE +/- 3.56, N = 3)
  ZFS raidz1 8xNVME no Compression: 592.1 (SE +/- 3.57, N = 3)
  ZFS mirror 8xNVME: 658.2 (SE +/- 5.50, N = 15)
  ext4 soft raid5 8xNVME: 286.6 (SE +/- 6.52, N = 12)
FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync (Files/s; more is better)
  ZFS raidz1 4xNVME: 1359.3 (SE +/- 12.01, N = 15)
  ext4 soft raid5 4xNVME: 1733.3 (SE +/- 19.84, N = 4)
  ext4 Crucial P5 Plus 1TB NVME: 1791.6 (SE +/- 16.52, N = 3)
  ZFS raidz1 8xNVME: 1340.7 (SE +/- 10.88, N = 15)
  ZFS raidz1 8xNVME no Compression: 1160.9 (SE +/- 13.85, N = 4)
  ZFS mirror 8xNVME: 1338.4 (SE +/- 7.95, N = 3)
  ext4 soft raid5 8xNVME: 1782.9 (SE +/- 11.02, N = 3)
FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s; more is better)
  ZFS raidz1 4xNVME: 646.5 (SE +/- 5.49, N = 15)
  ext4 soft raid5 4xNVME: 254.0 (SE +/- 9.50, N = 15)
  ext4 Crucial P5 Plus 1TB NVME: 584.1 (SE +/- 13.64, N = 12)
  ZFS raidz1 8xNVME: 646.7 (SE +/- 1.87, N = 3)
  ZFS raidz1 8xNVME no Compression: 589.7 (SE +/- 6.13, N = 3)
  ZFS mirror 8xNVME: 581.1 (SE +/- 4.66, N = 3)
  ext4 soft raid5 8xNVME: 264.4 (SE +/- 0.85, N = 3)
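Since each of the FS-Mark profiles above writes 1MB files, the Files/s figures can also be read as a rough MB/s of file data written; this ignores metadata and sync overhead, so treat it only as an approximation. A small Python sketch using the 5000 Files / 4 Threads numbers:

    # Rough conversion of FS-Mark Files/s into data throughput, since these profiles use 1MB files.
    # Ignores metadata and fsync overhead, so the figures are only approximate.

    file_size_mb = 1
    files_per_sec = {  # FS-Mark "5000 Files, 1MB Size, 4 Threads" results from above
        "ZFS raidz1 4xNVME": 1632.7,
        "ZFS mirror 8xNVME": 1711.4,
        "ext4 soft raid5 8xNVME": 570.0,
    }
    for config, rate in files_per_sec.items():
        print(f"{config}: ~{rate * file_size_mb:.0f} MB/s of file data written")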
SQLite 3.30.1 - Threads / Copies: 128 (Seconds; fewer is better)
  ZFS raidz1 4xNVME: 76.09 (SE +/- 0.10, N = 3)
  ext4 soft raid5 4xNVME: 80.15 (SE +/- 0.28, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 59.89 (SE +/- 0.03, N = 3)
  ZFS raidz1 8xNVME: 78.15 (SE +/- 0.10, N = 3)
  ZFS raidz1 8xNVME no Compression: 77.05 (SE +/- 0.27, N = 3)
  ZFS mirror 8xNVME: 77.88 (SE +/- 0.16, N = 3)
  ext4 soft raid5 8xNVME: 83.86 (SE +/- 0.16, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
PostMark 1.51 - Disk Transaction Performance (TPS; more is better)
  ZFS raidz1 4xNVME: 3289
  ext4 soft raid5 4xNVME: 5068
  ext4 Crucial P5 Plus 1TB NVME: 5137
  ZFS raidz1 8xNVME: 3275
  ZFS raidz1 8xNVME no Compression: 3275
  ZFS mirror 8xNVME: 3318
  ext4 soft raid5 8xNVME: 5137
  Standard errors as reported (the export lists six SE entries for the seven results): SE +/- 34.00, N = 3; SE +/- 35.33, N = 3; SE +/- 29.00, N = 3; SE +/- 14.33, N = 3; SE +/- 14.67, N = 3; SE +/- 35.33, N = 3
  1. (CC) gcc options: -O3
Flexible IO Tester 3.29 - Type: Sequential Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 1029
  ext4 soft raid5 4xNVME: 650
  ext4 Crucial P5 Plus 1TB NVME: 1614
  ZFS raidz1 8xNVME: 1186
  ZFS raidz1 8xNVME no Compression: 1220
  ZFS mirror 8xNVME: 1774
  ext4 soft raid5 8xNVME: 814
  Standard errors as reported (the export lists six SE entries for the seven results): SE +/- 2.91, N = 3; SE +/- 0.88, N = 3; SE +/- 2.85, N = 3; SE +/- 2.33, N = 3; SE +/- 11.98, N = 15; SE +/- 6.01, N = 3
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 2065 (SE +/- 6.36, N = 3)
  ext4 soft raid5 4xNVME: 1308 (SE +/- 1.73, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 3236 (SE +/- 5.70, N = 3)
  ZFS raidz1 8xNVME: 2381 (SE +/- 4.67, N = 3)
  ZFS raidz1 8xNVME no Compression: 2446 (SE +/- 23.91, N = 15)
  ZFS mirror 8xNVME: 3555 (SE +/- 11.93, N = 3)
  ext4 soft raid5 8xNVME: 1636 (SE +/- 0.67, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
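The paired fio charts report the same runs two ways, so throughput should come out close to IOPS times block size; the small differences are down to fio averaging IOPS and bandwidth separately and to rounding in the export. A quick Python cross-check against the sequential-write 2MB numbers above:

    # Cross-check the two sequential-write 2MB charts: MB/s should be close to IOPS x 2 MB.
    # Values are as reported above; the small gaps come from separate averaging and rounding.

    block_mb = 2
    pairs = {  # config: (reported IOPS, reported MB/s)
        "ZFS raidz1 4xNVME": (1029, 2065),
        "ext4 Crucial P5 Plus 1TB NVME": (1614, 3236),
        "ZFS mirror 8xNVME": (1774, 3555),
    }
    for config, (iops, mbps) in pairs.items():
        print(f"{config}: {iops} IOPS x {block_mb} MB = {iops * block_mb} MB/s vs reported {mbps} MB/s")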
SQLite 3.30.1 - Threads / Copies: 64 (Seconds; fewer is better)
  ZFS raidz1 4xNVME: 37.45 (SE +/- 0.01, N = 3)
  ext4 soft raid5 4xNVME: 45.72 (SE +/- 0.13, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 40.98 (SE +/- 0.01, N = 3)
  ZFS raidz1 8xNVME: 40.30 (SE +/- 0.05, N = 3)
  ZFS raidz1 8xNVME no Compression: 40.00 (SE +/- 0.01, N = 3)
  ZFS mirror 8xNVME: 36.60 (SE +/- 0.08, N = 3)
  ext4 soft raid5 8xNVME: 50.31 (SE +/- 0.21, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Flexible IO Tester 3.29 - Type: Random Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 2119
  ext4 soft raid5 4xNVME: 6666
  ext4 Crucial P5 Plus 1TB NVME: 1765
  ZFS raidz1 8xNVME: 2015
  ZFS raidz1 8xNVME no Compression: 2081
  ZFS mirror 8xNVME: 3305
  ext4 soft raid5 8xNVME: 13225
  Standard errors as reported (the export lists six SE entries for the seven results): SE +/- 29.49, N = 3; SE +/- 14.99, N = 3; SE +/- 6.57, N = 3; SE +/- 20.66, N = 3; SE +/- 12.06, N = 3; SE +/- 143.61, N = 4
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 329333 (SE +/- 1855.92, N = 3)
  ext4 soft raid5 4xNVME: 221333 (SE +/- 2728.45, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 335667 (SE +/- 4666.67, N = 3)
  ZFS raidz1 8xNVME: 326333 (SE +/- 3382.96, N = 3)
  ZFS raidz1 8xNVME no Compression: 331000 (SE +/- 1000.00, N = 3)
  ZFS mirror 8xNVME: 332000 (SE +/- 2516.61, N = 3)
  ext4 soft raid5 8xNVME: 199667 (SE +/- 333.33, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 1287 (SE +/- 7.09, N = 3)
  ext4 soft raid5 4xNVME: 865 (SE +/- 11.46, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 1311 (SE +/- 17.89, N = 3)
  ZFS raidz1 8xNVME: 1275 (SE +/- 13.25, N = 3)
  ZFS raidz1 8xNVME no Compression: 1294 (SE +/- 4.84, N = 3)
  ZFS mirror 8xNVME: 1296 (SE +/- 9.94, N = 3)
  ext4 soft raid5 8xNVME: 779 (SE +/- 0.67, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 49567 (SE +/- 185.59, N = 3)
  ext4 soft raid5 4xNVME: 306667 (SE +/- 1333.33, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 329667 (SE +/- 881.92, N = 3)
  ZFS raidz1 8xNVME: 57433 (SE +/- 176.38, N = 3)
  ZFS raidz1 8xNVME no Compression: 56167 (SE +/- 633.33, N = 3)
  ZFS mirror 8xNVME: 224333 (SE +/- 666.67, N = 3)
  ext4 soft raid5 8xNVME: 272667 (SE +/- 333.33, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 194 (SE +/- 0.88, N = 3)
  ext4 soft raid5 4xNVME: 1199 (SE +/- 5.17, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 1286 (SE +/- 3.48, N = 3)
  ZFS raidz1 8xNVME: 224 (SE +/- 0.67, N = 3)
  ZFS raidz1 8xNVME no Compression: 219 (SE +/- 2.67, N = 3)
  ZFS mirror 8xNVME: 877 (SE +/- 1.86, N = 3)
  ext4 soft raid5 8xNVME: 1065 (SE +/- 2.03, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 2034
  ext4 soft raid5 4xNVME: 6706
  ext4 Crucial P5 Plus 1TB NVME: 1765
  ZFS raidz1 8xNVME: 1999
  ZFS raidz1 8xNVME no Compression: 1995
  ZFS mirror 8xNVME: 2073
  ext4 soft raid5 8xNVME: 13200
  Standard errors as reported (the export lists six SE entries for the seven results): SE +/- 18.75, N = 3; SE +/- 3.61, N = 3; SE +/- 25.32, N = 3; SE +/- 23.68, N = 3; SE +/- 18.84, N = 3; SE +/- 57.74, N = 3
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 4245
  ext4 Crucial P5 Plus 1TB NVME: 3538
  ZFS raidz1 8xNVME: 4037
  ZFS raidz1 8xNVME no Compression: 4170
  ZFS mirror 8xNVME: 6617
  (The two ext4 soft raid5 configurations are not included in this chart in the export.)
  Standard errors as reported (four SE entries for the five results): SE +/- 58.64, N = 3; SE +/- 13.37, N = 3; SE +/- 41.66, N = 3; SE +/- 23.92, N = 3
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Read, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 4075
  ext4 Crucial P5 Plus 1TB NVME: 3538
  ZFS raidz1 8xNVME: 4005
  ZFS raidz1 8xNVME no Compression: 3998
  ZFS mirror 8xNVME: 4154
  (The two ext4 soft raid5 configurations are not included in this chart in the export.)
  Standard errors as reported (four SE entries for the five results): SE +/- 37.36, N = 3; SE +/- 50.44, N = 3; SE +/- 47.58, N = 3; SE +/- 37.36, N = 3
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 1225
  ext4 soft raid5 4xNVME: 476
  ext4 Crucial P5 Plus 1TB NVME: 1623
  ZFS raidz1 8xNVME: 1327
  ZFS raidz1 8xNVME no Compression: 1337
  ZFS mirror 8xNVME: 1783
  ext4 soft raid5 8xNVME: 512
  Standard errors as reported (the export lists five SE entries for the seven results): SE +/- 8.99, N = 3; SE +/- 1.76, N = 3; SE +/- 4.91, N = 3; SE +/- 4.16, N = 3; SE +/- 1.45, N = 3
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 2MB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 2458
  ext4 soft raid5 4xNVME: 958
  ext4 Crucial P5 Plus 1TB NVME: 3252
  ZFS raidz1 8xNVME: 2661
  ZFS raidz1 8xNVME no Compression: 2681
  ZFS mirror 8xNVME: 3573
  ext4 soft raid5 8xNVME: 1032
  Standard errors as reported (the export lists six SE entries for the seven results): SE +/- 17.98, N = 3; SE +/- 3.53, N = 3; SE +/- 10.27, N = 3; SE +/- 6.39, N = 3; SE +/- 8.25, N = 3; SE +/- 2.60, N = 3
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 41500 (SE +/- 57.74, N = 3)
  ext4 soft raid5 4xNVME: 82967 (SE +/- 569.60, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 224667 (SE +/- 666.67, N = 3)
  ZFS raidz1 8xNVME: 45367 (SE +/- 33.33, N = 3)
  ZFS raidz1 8xNVME no Compression: 48200 (SE +/- 57.74, N = 3)
  ZFS mirror 8xNVME: 68333 (SE +/- 88.19, N = 3)
  ext4 soft raid5 8xNVME: 75067 (SE +/- 88.19, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Random Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 162
  ext4 soft raid5 4xNVME: 324
  ext4 Crucial P5 Plus 1TB NVME: 878
  ZFS raidz1 8xNVME: 177
  ZFS raidz1 8xNVME no Compression: 188
  ZFS mirror 8xNVME: 267
  ext4 soft raid5 8xNVME: 293
  Standard errors as reported (the export lists three SE entries for the seven results): SE +/- 2.00, N = 3; SE +/- 2.67, N = 3; SE +/- 0.33, N = 3
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (IOPS; more is better)
  ZFS raidz1 4xNVME: 171333 (SE +/- 666.67, N = 3)
  ext4 soft raid5 4xNVME: 115000 (SE +/- 577.35, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 228000 (SE +/- 1000.00, N = 3)
  ZFS raidz1 8xNVME: 170667 (SE +/- 333.33, N = 3)
  ZFS raidz1 8xNVME no Compression: 174667 (SE +/- 333.33, N = 3)
  ZFS mirror 8xNVME: 197333 (SE +/- 333.33, N = 3)
  ext4 soft raid5 8xNVME: 99067 (SE +/- 617.34, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.29 - Type: Sequential Write, IO Engine: Linux AIO, Buffered: No, Direct: Yes, Block Size: 4KB, Disk Target: Default Test Directory (MB/s; more is better)
  ZFS raidz1 4xNVME: 670 (SE +/- 1.86, N = 3)
  ext4 soft raid5 4xNVME: 449 (SE +/- 1.15, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 891 (SE +/- 3.93, N = 3)
  ZFS raidz1 8xNVME: 665 (SE +/- 1.67, N = 3)
  ZFS raidz1 8xNVME no Compression: 682 (SE +/- 2.08, N = 3)
  ZFS mirror 8xNVME: 771 (SE +/- 0.67, N = 3)
  ext4 soft raid5 8xNVME: 387 (SE +/- 2.33, N = 3)
  1. (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Compile Bench 0.6 - Test: Compile (MB/s; more is better)
  ZFS raidz1 4xNVME: 1398.57 (SE +/- 6.06, N = 3)
  ext4 soft raid5 4xNVME: 1478.00 (SE +/- 11.73, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 1483.07 (SE +/- 0.00, N = 3)
  ZFS raidz1 8xNVME: 1390.71 (SE +/- 7.47, N = 3)
  ZFS raidz1 8xNVME no Compression: 1247.36 (SE +/- 3.89, N = 3)
  ZFS mirror 8xNVME: 1408.42 (SE +/- 9.96, N = 3)
  ext4 soft raid5 8xNVME: 1474.80 (SE +/- 12.18, N = 3)
SQLite 3.30.1 - Threads / Copies: 8 (Seconds; fewer is better)
  ZFS raidz1 4xNVME: 11.267 (SE +/- 0.079, N = 3)
  ext4 soft raid5 4xNVME: 20.005 (SE +/- 0.047, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 15.767 (SE +/- 0.009, N = 3)
  ZFS raidz1 8xNVME: 12.857 (SE +/- 0.174, N = 3)
  ZFS raidz1 8xNVME no Compression: 12.827 (SE +/- 0.170, N = 3)
  ZFS mirror 8xNVME: 9.043 (SE +/- 0.057, N = 3)
  ext4 soft raid5 8xNVME: 22.829 (SE +/- 0.168, N = 15)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
SQLite 3.30.1 - Threads / Copies: 32 (Seconds; fewer is better)
  ZFS raidz1 4xNVME: 23.19 (SE +/- 0.08, N = 3)
  ext4 soft raid5 4xNVME: 31.79 (SE +/- 0.01, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 27.58 (SE +/- 0.00, N = 3)
  ZFS raidz1 8xNVME: 27.00 (SE +/- 0.04, N = 3)
  ZFS raidz1 8xNVME no Compression: 26.89 (SE +/- 0.07, N = 3)
  ZFS mirror 8xNVME: 17.74 (SE +/- 0.05, N = 3)
  ext4 soft raid5 8xNVME: 35.99 (SE +/- 0.02, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
SQLite 3.30.1 - Threads / Copies: 1 (Seconds; fewer is better)
  ZFS raidz1 4xNVME: 7.852 (SE +/- 0.071, N = 7)
  ext4 soft raid5 4xNVME: 8.810 (SE +/- 0.069, N = 10)
  ext4 Crucial P5 Plus 1TB NVME: 8.367 (SE +/- 0.067, N = 3)
  ZFS raidz1 8xNVME: 8.380 (SE +/- 0.096, N = 3)
  ZFS raidz1 8xNVME no Compression: 8.379 (SE +/- 0.104, N = 3)
  ZFS mirror 8xNVME: 7.252 (SE +/- 0.055, N = 3)
  ext4 soft raid5 8xNVME: 10.426 (SE +/- 0.093, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
Compile Bench 0.6 - Test: Read Compiled Tree (MB/s; more is better)
  ZFS raidz1 4xNVME: 1212.73 (SE +/- 8.00, N = 3)
  ext4 soft raid5 4xNVME: 2642.03 (SE +/- 42.24, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 2747.01 (SE +/- 8.94, N = 3)
  ZFS raidz1 8xNVME: 1217.80 (SE +/- 8.89, N = 3)
  ZFS raidz1 8xNVME no Compression: 1694.75 (SE +/- 19.98, N = 3)
  ZFS mirror 8xNVME: 1454.31 (SE +/- 12.22, N = 3)
  ext4 soft raid5 8xNVME: 2683.76 (SE +/- 15.05, N = 3)
Compile Bench 0.6 - Test: Initial Create (MB/s; more is better)
  ZFS raidz1 4xNVME: 222.04 (SE +/- 1.57, N = 3)
  ext4 soft raid5 4xNVME: 411.07 (SE +/- 0.77, N = 3)
  ext4 Crucial P5 Plus 1TB NVME: 423.67 (SE +/- 1.23, N = 3)
  ZFS raidz1 8xNVME: 221.83 (SE +/- 0.50, N = 3)
  ZFS raidz1 8xNVME no Compression: 206.77 (SE +/- 1.46, N = 3)
  ZFS mirror 8xNVME: 216.21 (SE +/- 1.18, N = 3)
  ext4 soft raid5 8xNVME: 413.46 (SE +/- 3.20, N = 3)
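For readers who want a single relative figure per configuration, one option (an illustration only, not how OpenBenchmarking.org computes its own indices) is to normalize each test to a baseline configuration, invert the "fewer is better" tests, and take a geometric mean. A Python sketch over a small subset of the results above:

    # Illustration only: normalize a few of the results above to a baseline configuration,
    # invert the "fewer is better" tests, and take the geometric mean per configuration.
    from math import prod

    baseline = "ZFS raidz1 4xNVME"
    configs = ["ZFS raidz1 4xNVME", "ZFS mirror 8xNVME", "ext4 soft raid5 8xNVME"]

    tests = [  # (test, higher_is_better, {config: result}) -- subset of the results above
        ("Dbench 12 Clients, MB/s", True,
         {"ZFS raidz1 4xNVME": 2626.03, "ZFS mirror 8xNVME": 2730.75, "ext4 soft raid5 8xNVME": 2330.16}),
        ("SQLite 128 Threads/Copies, seconds", False,
         {"ZFS raidz1 4xNVME": 76.09, "ZFS mirror 8xNVME": 77.88, "ext4 soft raid5 8xNVME": 83.86}),
        ("fio Seq Write 2MB, MB/s", True,
         {"ZFS raidz1 4xNVME": 2065, "ZFS mirror 8xNVME": 3555, "ext4 soft raid5 8xNVME": 1636}),
    ]

    for config in configs:
        ratios = []
        for _name, higher_is_better, results in tests:
            ratio = results[config] / results[baseline]
            ratios.append(ratio if higher_is_better else 1.0 / ratio)
        print(f"{config}: {prod(ratios) ** (1 / len(ratios)):.2f}x vs {baseline}")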
Phoronix Test Suite v10.8.5