pts-disk-different-nvmes AMD Ryzen Threadripper 1900X 8-Core testing with a Gigabyte X399 DESIGNARE EX-CF (F13a BIOS) and NVIDIA Quadro P400 on Debian 11 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211120-NE-PTSDISKDI55&grs
pts-disk-different-nvmes - System Details (configuration: ZFS zraid1 4xNVME Pool)

Processor: AMD Ryzen Threadripper 1900X 8-Core @ 3.80GHz (8 Cores / 16 Threads)
Motherboard: Gigabyte X399 DESIGNARE EX-CF (F13a BIOS)
Chipset: AMD 17h
Memory: 64GB
Disk: Samsung SSD 960 EVO 500GB + 8 x 2000GB Western Digital WD_BLACK SN770 2TB + 1000GB CT1000P5PSSD8
Graphics: NVIDIA Quadro P400
Audio: NVIDIA GP107GL HD Audio
Monitor: DELL S2340T
Network: 4 x Intel I350 + Intel 8265 / 8275
OS: Debian 11
Kernel: 5.10.0-19-amd64 (x86_64)
Compiler: GCC 10.2.1 20210110
File-System: zfs
Screen Resolution: 1920x1080

Notes (OpenBenchmarking.org):
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x8001137
- Python 3.9.2
- Security:
  - itlb_multihit: Not affected
  - l1tf: Not affected
  - mds: Not affected
  - meltdown: Not affected
  - mmio_stale_data: Not affected
  - retbleed: Mitigation of untrained return thunk; SMT vulnerable
  - spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
  - spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  - spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected
  - srbds: Not affected
  - tsx_async_abort: Not affected
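For context, the configuration label "ZFS zraid1 4xNVME Pool" presumably describes a single raidz1 vdev built from four of the NVMe drives listed above. A minimal sketch of how such a pool might be created follows; the device names, the /nvme_pool mountpoint (taken from the ior test entry below), and all pool properties are assumptions, not recorded in this export.

```shell
# Hypothetical recreation of a 4-drive raidz1 NVMe pool (device names assumed).
zpool create -m /nvme_pool nvme_pool raidz1 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Verify layout and the properties most relevant to benchmarking.
zpool status nvme_pool
zfs get recordsize,compression,atime nvme_pool
```

With defaults, recordsize=128K and compression=off on this ZFS release; either can noticeably shift the fio and fs-mark numbers below, and the export does not record which values were in effect.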
pts-disk-different-nvmes - Result Overview (ZFS zraid1 4xNVME Pool):

postmark: Disk Transaction Performance: 3289 TPS
compilebench: Read Compiled Tree: 1212.73 MB/s
compilebench: Initial Create: 222.04 MB/s
compilebench: Compile: 1398.57 MB/s
dbench: 1 Clients: 410.171 MB/s
dbench: 12 Clients: 2626.03 MB/s
fs-mark: 1000 Files, 1MB Size, No Sync/FSync: 1359.3 Files/s
fs-mark: 4000 Files, 32 Sub Dirs, 1MB Size: 684.2 Files/s
fs-mark: 5000 Files, 1MB Size, 4 Threads: 1632.7 Files/s
fs-mark: 1000 Files, 1MB Size: 646.5 Files/s
fio: Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory: 171333 IOPS / 670 MB/s
fio: Seq Write - Linux AIO - No - Yes - 2MB - Default Test Directory: 1029 IOPS / 2065 MB/s
fio: Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory: 329333 IOPS / 1287 MB/s
fio: Seq Read - Linux AIO - No - Yes - 2MB - Default Test Directory: 2034 IOPS / 4075 MB/s
fio: Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory: 41500 IOPS / 162 MB/s
fio: Rand Write - Linux AIO - No - Yes - 2MB - Default Test Directory: 1225 IOPS / 2458 MB/s
fio: Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory: 49567 IOPS / 194 MB/s
fio: Rand Read - Linux AIO - No - Yes - 2MB - Default Test Directory: 2119 IOPS / 4245 MB/s
sqlite: 128: 76.088 Seconds
sqlite: 64: 37.448 Seconds
sqlite: 32: 23.191 Seconds
sqlite: 8: 11.267 Seconds
sqlite: 1: 7.852 Seconds
ior: 2MB - /nvme_pool: (no value recorded in this export)
All detailed results below are for the single configuration under test, ZFS zraid1 4xNVME Pool.

PostMark 1.51 - Disk Transaction Performance (TPS, More Is Better): 3289. (CC) gcc options: -O3

Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, More Is Better): 1212.73 (SE +/- 8.00, N = 3)

Compile Bench 0.6 - Test: Initial Create (MB/s, More Is Better): 222.04 (SE +/- 1.57, N = 3)

Compile Bench 0.6 - Test: Compile (MB/s, More Is Better): 1398.57 (SE +/- 6.06, N = 3)

Dbench 4.0 - 1 Clients (MB/s, More Is Better): 410.17 (SE +/- 0.85, N = 3). (CC) gcc options: -lpopt -O2

Dbench 4.0 - 12 Clients (MB/s, More Is Better): 2626.03 (SE +/- 5.07, N = 3). (CC) gcc options: -lpopt -O2

FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync (Files/s, More Is Better): 1359.3 (SE +/- 12.01, N = 15)

FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s, More Is Better): 684.2 (SE +/- 7.90, N = 3)

FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s, More Is Better): 1632.7 (SE +/- 4.95, N = 3)

FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s, More Is Better): 646.5 (SE +/- 5.49, N = 15)
Flexible IO Tester 3.29 - IO Engine: Linux AIO, Buffered: No, Direct: Yes, Disk Target: Default Test Directory (IOPS and MB/s, More Is Better). All fio tests share these build flags: (CC) gcc options: -rdynamic -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native

Type: Sequential Write, Block Size: 4KB: 171333 IOPS (SE +/- 666.67, N = 3); 670 MB/s (SE +/- 1.86, N = 3)
Type: Sequential Write, Block Size: 2MB: 1029 IOPS (SE +/- 2.91, N = 3); 2065 MB/s (SE +/- 6.36, N = 3)
Type: Sequential Read, Block Size: 4KB: 329333 IOPS (SE +/- 1855.92, N = 3); 1287 MB/s (SE +/- 7.09, N = 3)
Type: Sequential Read, Block Size: 2MB: 2034 IOPS (SE +/- 18.75, N = 3); 4075 MB/s (SE +/- 37.36, N = 3)
Type: Random Write, Block Size: 4KB: 41500 IOPS (SE +/- 57.74, N = 3); 162 MB/s
Type: Random Write, Block Size: 2MB: 1225 IOPS (SE +/- 8.99, N = 3); 2458 MB/s (SE +/- 17.98, N = 3)
Type: Random Read, Block Size: 4KB: 49567 IOPS (SE +/- 185.59, N = 3); 194 MB/s (SE +/- 0.88, N = 3)
Type: Random Read, Block Size: 2MB: 2119 IOPS (SE +/- 29.49, N = 3); 4245 MB/s (SE +/- 58.64, N = 3)
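The fio option strings above map onto a job file roughly as sketched below, shown for the sequential-write 4KB case ("Linux AIO" is fio's libaio engine; Buffered: No / Direct: Yes correspond to buffered=0 / direct=1). The directory, size, and runtime values are illustrative assumptions; the exact job parameters used by the Phoronix Test Suite are not part of this export.

```ini
; Sketch of one fio case above: Seq Write, Linux AIO, unbuffered, direct, 4KB blocks.
[seq-write-4k]
ioengine=libaio       ; "IO Engine: Linux AIO"
rw=write              ; sequential write (use read / randwrite / randread for the other cases)
bs=4k                 ; "Block Size: 4KB" (bs=2m for the 2MB cases)
buffered=0            ; "Buffered: No"
direct=1              ; "Direct: Yes"
directory=/nvme_pool  ; assumed stand-in for "Default Test Directory"
size=1g               ; assumed working-set size
runtime=60            ; assumed run length
time_based=1
```

Run with `fio seq-write-4k.fio`; swapping `rw` and `bs` as noted reproduces the other seven option sets.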
SQLite 3.30.1 (Seconds, Fewer Is Better). (CC) gcc options: -O2 -lz -lm -ldl -lpthread

Threads / Copies: 128: 76.09 (SE +/- 0.10, N = 3)
Threads / Copies: 64: 37.45 (SE +/- 0.01, N = 3)
Threads / Copies: 32: 23.19 (SE +/- 0.08, N = 3)
Threads / Copies: 8: 11.27 (SE +/- 0.08, N = 3)
Threads / Copies: 1: 7.852 (SE +/- 0.071, N = 7)
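As a sanity check on the fio figures, each MB/s value should be roughly its IOPS counterpart multiplied by the block size. Reading the MB/s column as MiB/s, the pairs agree to within about 0.5%; the small residual is expected because IOPS and bandwidth are averaged over runs independently. A short script using only values from the results above:

```python
# Cross-check fio's reported MB/s against IOPS x block size.
# All numbers are copied from the results above; the MiB/s reading
# of the MB/s column is an interpretation, not stated in the export.
cases = [
    # (test, IOPS, block size in bytes, reported MB/s)
    ("Seq Write 4KB",  171333, 4 * 1024,    670),
    ("Seq Write 2MB",    1029, 2 * 1024**2, 2065),
    ("Seq Read 4KB",   329333, 4 * 1024,    1287),
    ("Seq Read 2MB",     2034, 2 * 1024**2, 4075),
    ("Rand Write 4KB",  41500, 4 * 1024,    162),
    ("Rand Write 2MB",   1225, 2 * 1024**2, 2458),
    ("Rand Read 4KB",   49567, 4 * 1024,    194),
    ("Rand Read 2MB",    2119, 2 * 1024**2, 4245),
]

for name, iops, bs, reported in cases:
    derived = iops * bs / 2**20  # bytes/s -> MiB/s
    print(f"{name}: derived {derived:.1f} vs reported {reported} MB/s")
```

The same arithmetic also makes the raidz1 write penalty visible: random 4KB writes (41500 IOPS) land far below random 4KB reads (49567 IOPS) and sequential 4KB writes (171333 IOPS).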
Phoronix Test Suite v10.8.5