HDD and SSD file-system tests on Linux 4.16 for a future article on Phoronix.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1803302-FO-1803294FO72
HTML result view exported from: https://openbenchmarking.org/result/1803302-FO-1803294FO72&gru&sro.
Linux 4.16 File-System Tests

System Details

TR150 SSD: EXT4 / F2FS / Btrfs / XFS and Seagate HDD: XFS / Btrfs / EXT4 (bare metal):
  Processor: 2 x Intel Xeon Gold 6138 @ 3.70GHz (40 Cores / 80 Threads)
  Motherboard: TYAN S7106 (V1.00 BIOS)
  Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 12 x 8192 MB DDR4-2666MT/s Micron 9ASF1G72PZ-2G6B1
  Disk: 256GB Samsung SSD 850 + 2000GB Seagate ST2000DM006-2DM1 + 2 x 120GB TOSHIBA-TR150
  Graphics: llvmpipe 95360MB
  Monitor: VE228
  Network: Intel I210 Gigabit Connection
  OS: Ubuntu 18.04, Kernel: 4.16.0-999-generic (x86_64) 20180323
  Desktop: GNOME Shell 3.28.0, Display Server: X Server 1.19.6
  OpenGL: 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 256 bits), Compiler: GCC 7.3.0
  File-System: ext4 / f2fs / btrfs / xfs, as tested per configuration
  Screen Resolution: 1920x1080

Virtio ZFS HDD Raid 0 / Raid 0 2 / Raid 10 / Raid 10 WBU (KVM guests):
  Processor: Common KVM @ 3.91GHz (2 Cores; 4 Cores for some runs)
  Motherboard: QEMU Standard PC (i440FX + PIIX, 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)
  Memory: 2048MB
  Disk: 34GB QEMU HDD + 30GB
  Monitor: 2115
  Graphics: bochsdrmfb
  OS: Debian 9.4, Kernel: 4.9.0-6-amd64 (x86_64)
  Compiler: GCC 6.3.0 20170516
  Screen Resolution: 1024x768
  System Layer: qemu

XenServer 7.4 Adaptec 6805 Raid 1 PV (Xen guest):
  Processor: AMD Turion II Neo N54L @ 2.20GHz (2 Cores)
  Memory: 4096MB
  Disk: 15GB vm-other
  System Layer: Xen 4.7.4-4.1 Hypervisor

Proxmox ZFS Raid 1 WT / WB (KVM guests):
  Processor: Common KVM @ 2.20GHz (2 Cores)
  Motherboard: QEMU Standard PC (i440FX + PIIX, 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)
  Graphics: bochsdrmfb
  Screen Resolution: 1024x768
  System Layer: qemu

Compiler Details

All TR150 SSD and Seagate HDD configurations share one identical GCC 7.3.0 build configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v

All Virtio ZFS, XenServer 7.4 Adaptec 6805 Raid 1 PV, and Proxmox ZFS configurations share one identical GCC 6.3.0 build configuration: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v

Disk Details (I/O scheduler / mount options)

- TR150 SSD: EXT4: CFQ / data=ordered,relatime,rw
- TR150 SSD: F2FS: CFQ / acl,active_logs=6,background_gc=on,extent_cache,flush_merge,inline_data,inline_dentry,inline_xattr,lazytime,mode=adaptive,no_heap,relatime,rw,user_xattr
- TR150 SSD: Btrfs: CFQ / relatime,rw,space_cache,ssd,subvol=/,subvolid=5
- TR150 SSD: XFS and Seagate HDD: XFS: CFQ / attr2,inode64,noquota,relatime,rw
- Seagate HDD: Btrfs: CFQ / relatime,rw,space_cache,subvol=/,subvolid=5
- Seagate HDD: EXT4: CFQ / data=ordered,relatime,rw
- Virtio ZFS HDD Raid 0 / Raid 0 2 / Raid 10 / Raid 10 WBU: CFQ / data=ordered,discard,noatime,rw
- XenServer 7.4 Adaptec 6805 Raid 1 PV and Proxmox ZFS Raid 1 WT / WB: none / data=ordered,discard,noatime,rw

Processor Details

- TR150 SSD and Seagate HDD configurations: Scaling Governor: intel_pstate powersave

Python Details

- TR150 SSD and Seagate HDD configurations: Python 2.7.14+ + Python 3.6.5rc1
- Virtio ZFS, XenServer, and Proxmox configurations: Python 2.7.13 + Python 3.5.3

Security Details

- All TR150 SSD, Seagate HDD, and Virtio ZFS configurations: KPTI + __user pointer sanitization + Full generic retpoline Protection
- XenServer 7.4 Adaptec 6805 Raid 1 PV: __user pointer sanitization + Full AMD retpoline Protection
- Proxmox ZFS Raid 1 WT / WB: __user pointer sanitization + Full generic retpoline Protection
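The mount options recorded in the Disk Details above can be reproduced at boot via /etc/fstab. A hypothetical entry for the ext4 case (data=ordered,relatime,rw); the device node and mount point are placeholders:

```
# Hypothetical /etc/fstab line using the ext4 options recorded above;
# /dev/sdX1 and /mnt/target are placeholders, not from this result file.
/dev/sdX1  /mnt/target  ext4  rw,relatime,data=ordered  0  2
```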
Results Overview

Values per configuration, in this order: BlogBench Read (score) | BlogBench Write (score) | AIO-Stress Rand Write (MB/s) | FIO Rand Read (MB/s) | FIO Rand Write (MB/s) | FIO Seq Read (MB/s) | FIO Seq Write (MB/s) | Dbench 6 Clients (MB/s) | IOzone 4Kb 8GB Write (MB/s) | Compile Bench Compile (MB/s) | Compile Bench Initial Create (MB/s) | Compile Bench Read Compiled Tree (MB/s) | SQLite Insertions (sec) | Unpack linux-4.15.tar.xz (sec) | Git Commands (sec). Per-test results with standard errors follow below.

- TR150 SSD: EXT4: 2097180 | 10325 | 2301.32 | 230 | 275 | 227 | 413 | 369.21 | 103.49 | 1686.54 | 505.73 | 837.97 | 41.27 | 6.40 | 6.31
- TR150 SSD: F2FS: 2054932 | 9156 | 3017.61 | 212 | 282 | 228 | 416 | 276.31 | 88.99 | 2271.93 | 550.41 | 878.02 | 34.89 | 6.67 | 6.40
- TR150 SSD: Btrfs: 2269750 | 3458 | 2936.17 | 212 | 73.28 | 327 | 83.68 | 247.49 | 93.20 | 1371.80 | 109.66 | 881.31 | 99.08 | 9.44 | 6.36
- TR150 SSD: XFS: 2084376 | 4039 | 2971.42 | 212 | 273 | 202 | 395 | 442.69 | 96.10 | 2101.38 | 395.06 | 892.28 | 36.76 | 6.82 | 6.65
- Seagate HDD: XFS: 2197243 | 2126 | 2979.14 | 1.53 | 1.17 | 148 | 145 | 20.21 | 153.08 | 2092.61 | 402.86 | 833.32 | 417.70 | 7.65 | 6.76
- Seagate HDD: Btrfs: 2293433 | 2370 | 2953.92 | 1.52 | 23.02 | 1.54 | 4.04 | 45.74 | 161.01 | 2177.85 | 262.66 | 804.92 | 1012.38 | 7.25 | 6.59
- Seagate HDD: EXT4: 2415373 | 6473 | 2410.29 | 1.45 | 1.03 | 155 | 149 | 25.73 | 155.45 | 1678.76 | 494.62 | 801.75 | 576.02 | 6.67 | 6.57
- Virtio ZFS HDD Raid 0 2: 214407 | 1930 | 1341.61 | 258 | 197 | 255 | 197 | 72.29 | 101.53 | 538.22 | 255.10 | 807.12 | 246.03 | 15.97 | 6.67
- Virtio ZFS HDD Raid 10: 195626 | 1864 | 1376.59 | 2.98 | 217 | 250 | 219 | 56.88 | 109.03 | 495.46 | 241.59 | 887.85 | 329.31 | 14.52 | 7.15
- Virtio ZFS HDD Raid 10 WBU: 228394 | 1966 | 748.84 | 2.83 | 207 | 270 | 228 | 1553.66 | 152.17 | 300.82 | 212.93 | 746.27 | 3.34 | 10.65 | 7.57
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 183352 | 721 | 47.81 | 0.64 | 1.01 | 68.33 | 95.57 | 212.75 | 79.12 | 75.83 | 68.28 | 230.68 | 23.65 | 15.93 | 17.81
- Proxmox ZFS Raid 1 WT: 257117 | 550 | 87.08 | 144 | 0.42 | 170 | 0.54 | 50.29 | 38.57 | 39.55 | 51.18 | 326.85 | 480.57 | 29.63 | 25.70
- Proxmox ZFS Raid 1 WB: 243293 | 678 | 333.49 | 1.17 | 17.46 | 165 | 21.60 | 48.41 | 66.76 | 119.87 | 83.03 | 315.73 | 400.77 | 30.03 | 20.08

No results were recorded for the Virtio ZFS HDD Raid 0 run.
BlogBench 1.0, Test: Read (Final Score; more is better)
- Proxmox ZFS Raid 1 WB: 243293 (SE +/- 1753.55, N = 3)
- Proxmox ZFS Raid 1 WT: 257117 (SE +/- 378.62, N = 3)
- Seagate HDD: Btrfs: 2293433 (SE +/- 1570.58, N = 3)
- Seagate HDD: EXT4: 2415373 (SE +/- 16397.30, N = 3)
- Seagate HDD: XFS: 2197243 (SE +/- 39188.34, N = 3)
- TR150 SSD: Btrfs: 2269750 (SE +/- 12906.26, N = 3)
- TR150 SSD: EXT4: 2097180 (SE +/- 5745.23, N = 3)
- TR150 SSD: F2FS: 2054932 (SE +/- 146937.12, N = 6)
- TR150 SSD: XFS: 2084376 (SE +/- 37384.19, N = 6)
- Virtio ZFS HDD Raid 0 2: 214407 (SE +/- 26762.45, N = 6)
- Virtio ZFS HDD Raid 10: 195626 (SE +/- 9034.98, N = 6)
- Virtio ZFS HDD Raid 10 WBU: 228394 (SE +/- 11329.23, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 183352 (SE +/- 1813.14, N = 3)
1. (CC) gcc options: -O2 -pthread
BlogBench 1.0, Test: Write (Final Score; more is better)
- Proxmox ZFS Raid 1 WB: 678 (SE +/- 41.65, N = 3)
- Proxmox ZFS Raid 1 WT: 550 (SE +/- 100.50, N = 3)
- Seagate HDD: Btrfs: 2370 (SE +/- 46.68, N = 3)
- Seagate HDD: EXT4: 6473 (SE +/- 23.54, N = 3)
- Seagate HDD: XFS: 2126 (SE +/- 40.70, N = 3)
- TR150 SSD: Btrfs: 3458 (SE +/- 25.15, N = 3)
- TR150 SSD: EXT4: 10325 (SE +/- 88.82, N = 3)
- TR150 SSD: F2FS: 9156 (SE +/- 83.15, N = 3)
- TR150 SSD: XFS: 4039 (SE +/- 92.81, N = 3)
- Virtio ZFS HDD Raid 0 2: 1930 (SE +/- 190.23, N = 3)
- Virtio ZFS HDD Raid 10: 1864 (SE +/- 52.35, N = 3)
- Virtio ZFS HDD Raid 10 WBU: 1966 (SE +/- 97.59, N = 3)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 721 (SE +/- 30.51, N = 3)
1. (CC) gcc options: -O2 -pthread
Flexible IO Tester 3.1, Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS; more is better)
- Proxmox ZFS Raid 1 WT: 36733
- TR150 SSD: Btrfs: 54367
- TR150 SSD: EXT4: 58967
- TR150 SSD: F2FS: 54200
- TR150 SSD: XFS: 54400
- Virtio ZFS HDD Raid 0 2: 65633
Standard errors as exported: SE +/- 392.99, N = 3; SE +/- 133.33, N = 3; SE +/- 233.33, N = 3; SE +/- 200.00, N = 3; SE +/- 1017.08, N = 3
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
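The FIO parameters above (Linux AIO engine, unbuffered, direct I/O, 4KB blocks) map naturally onto a fio job file. This is a hypothetical sketch, not the job the pts/fio test profile actually generates; the size and directory values are placeholders:

```ini
; Hypothetical fio 3.x job approximating the random-read test parameters above.
[randread-4k]
ioengine=libaio        ; "Linux AIO"
direct=1               ; Direct: Yes
buffered=0             ; Buffered: No
rw=randread
bs=4k
size=1g                ; placeholder working-set size
directory=/mnt/target  ; placeholder for the default test directory
```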
Flexible IO Tester 3.1, Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS; more is better)
- Proxmox ZFS Raid 1 WB: 16100
- Seagate HDD: Btrfs: 27400
- TR150 SSD: Btrfs: 18750
- TR150 SSD: EXT4: 70250
- TR150 SSD: F2FS: 72167
- TR150 SSD: XFS: 69967
- Virtio ZFS HDD Raid 0 2: 50500
- Virtio ZFS HDD Raid 10: 55567
- Virtio ZFS HDD Raid 10 WBU: 53000
Standard errors as exported: SE +/- 853.91, N = 6; SE +/- 1010.53, N = 6; SE +/- 466.67, N = 3; SE +/- 1005.21, N = 6; SE +/- 896.29, N = 3; SE +/- 405.52, N = 3; SE +/- 1325.64, N = 6
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
Flexible IO Tester 3.1, Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS; more is better)
- Proxmox ZFS Raid 1 WB: 41933
- Proxmox ZFS Raid 1 WT: 43533
- Seagate HDD: EXT4: 39767
- Seagate HDD: XFS: 37800
- TR150 SSD: Btrfs: 83633
- TR150 SSD: EXT4: 58300
- TR150 SSD: F2FS: 58467
- TR150 SSD: XFS: 51667
- Virtio ZFS HDD Raid 0 2: 65167
- Virtio ZFS HDD Raid 10: 63683
- Virtio ZFS HDD Raid 10 WBU: 68940
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 19100
Standard errors as exported: SE +/- 1772.32, N = 6; SE +/- 338.30, N = 3; SE +/- 1174.07, N = 6; SE +/- 1550.63, N = 3; SE +/- 100.00, N = 3; SE +/- 145.30, N = 3; SE +/- 463.08, N = 3; SE +/- 1156.62, N = 3; SE +/- 833.23, N = 6; SE +/- 970.36, N = 5; SE +/- 2134.71, N = 5
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
Flexible IO Tester 3.1, Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (IOPS; more is better)
- Seagate HDD: EXT4: 38233 (SE +/- 783.87, N = 3)
- Seagate HDD: XFS: 37050 (SE +/- 617.12, N = 4)
- TR150 SSD: Btrfs: 21500 (SE +/- 1238.28, N = 6)
- TR150 SSD: EXT4: 106000 (SE +/- 1000.00, N = 3)
- TR150 SSD: F2FS: 106333 (SE +/- 333.33, N = 3)
- TR150 SSD: XFS: 101167 (SE +/- 5437.01, N = 6)
- Virtio ZFS HDD Raid 0 2: 50150 (SE +/- 1815.44, N = 6)
- Virtio ZFS HDD Raid 10: 56000 (SE +/- 351.19, N = 3)
- Virtio ZFS HDD Raid 10 WBU: 58317 (SE +/- 2293.38, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 24433 (SE +/- 384.42, N = 3)
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
AIO-Stress 0.21, Test: Random Write (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 333.49 (SE +/- 18.55, N = 6)
- Proxmox ZFS Raid 1 WT: 87.08 (SE +/- 4.25, N = 6)
- Seagate HDD: Btrfs: 2953.92 (SE +/- 38.74, N = 3)
- Seagate HDD: EXT4: 2410.29 (SE +/- 40.49, N = 3)
- Seagate HDD: XFS: 2979.14 (SE +/- 22.47, N = 3)
- TR150 SSD: Btrfs: 2936.17 (SE +/- 38.20, N = 3)
- TR150 SSD: EXT4: 2301.32 (SE +/- 45.90, N = 3)
- TR150 SSD: F2FS: 3017.61 (SE +/- 50.73, N = 3)
- TR150 SSD: XFS: 2971.42 (SE +/- 26.35, N = 3)
- Virtio ZFS HDD Raid 0 2: 1341.61 (SE +/- 47.09, N = 6)
- Virtio ZFS HDD Raid 10: 1376.59 (SE +/- 33.30, N = 6)
- Virtio ZFS HDD Raid 10 WBU: 748.84 (SE +/- 163.17, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 47.81 (SE +/- 0.84, N = 6)
1. (CC) gcc options: -pthread -laio
Flexible IO Tester 3.1, Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 1.17
- Proxmox ZFS Raid 1 WT: 144.00
- Seagate HDD: Btrfs: 1.52
- Seagate HDD: EXT4: 1.45
- Seagate HDD: XFS: 1.53
- TR150 SSD: Btrfs: 212.00
- TR150 SSD: EXT4: 230.00
- TR150 SSD: F2FS: 212.00
- TR150 SSD: XFS: 212.00
- Virtio ZFS HDD Raid 0 2: 258.00
- Virtio ZFS HDD Raid 10: 2.98
- Virtio ZFS HDD Raid 10 WBU: 2.83
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 0.64
Standard errors as exported: SE +/- 0.02, N = 3; SE +/- 1.76, N = 3; SE +/- 0.00, N = 3; SE +/- 0.01, N = 3; SE +/- 0.00, N = 3; SE +/- 0.67, N = 3; SE +/- 1.00, N = 3; SE +/- 1.00, N = 3; SE +/- 3.18, N = 3; SE +/- 0.16, N = 6; SE +/- 0.24, N = 6; SE +/- 0.02, N = 6
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
Flexible IO Tester 3.1, Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 17.46
- Proxmox ZFS Raid 1 WT: 0.42
- Seagate HDD: Btrfs: 23.02
- Seagate HDD: EXT4: 1.03
- Seagate HDD: XFS: 1.17
- TR150 SSD: Btrfs: 73.28
- TR150 SSD: EXT4: 275.00
- TR150 SSD: F2FS: 282.00
- TR150 SSD: XFS: 273.00
- Virtio ZFS HDD Raid 0 2: 197.00
- Virtio ZFS HDD Raid 10: 217.00
- Virtio ZFS HDD Raid 10 WBU: 207.00
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 1.01
Standard errors as exported: SE +/- 9.25, N = 6; SE +/- 0.02, N = 6; SE +/- 16.80, N = 6; SE +/- 0.06, N = 6; SE +/- 0.04, N = 6; SE +/- 3.34, N = 6; SE +/- 4.01, N = 6; SE +/- 1.73, N = 3; SE +/- 3.91, N = 6; SE +/- 3.76, N = 3; SE +/- 5.07, N = 6; SE +/- 0.01, N = 3
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
Flexible IO Tester 3.1, Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 165.00
- Proxmox ZFS Raid 1 WT: 170.00
- Seagate HDD: Btrfs: 1.54
- Seagate HDD: EXT4: 155.00
- Seagate HDD: XFS: 148.00
- TR150 SSD: Btrfs: 327.00
- TR150 SSD: EXT4: 227.00
- TR150 SSD: F2FS: 228.00
- TR150 SSD: XFS: 202.00
- Virtio ZFS HDD Raid 0 2: 255.00
- Virtio ZFS HDD Raid 10: 250.00
- Virtio ZFS HDD Raid 10 WBU: 270.00
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 68.33
Standard errors as exported: SE +/- 7.05, N = 6; SE +/- 1.33, N = 3; SE +/- 0.01, N = 3; SE +/- 4.53, N = 6; SE +/- 6.23, N = 3; SE +/- 0.67, N = 3; SE +/- 1.76, N = 3; SE +/- 4.58, N = 3; SE +/- 3.43, N = 6; SE +/- 4.16, N = 5; SE +/- 9.06, N = 6
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
Flexible IO Tester 3.1, Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 21.60 (SE +/- 4.64, N = 6)
- Proxmox ZFS Raid 1 WT: 0.54 (SE +/- 0.02, N = 6)
- Seagate HDD: Btrfs: 4.04 (SE +/- 0.11, N = 6)
- Seagate HDD: EXT4: 149.00 (SE +/- 3.00, N = 3)
- Seagate HDD: XFS: 145.00 (SE +/- 2.50, N = 4)
- TR150 SSD: Btrfs: 83.68 (SE +/- 4.96, N = 6)
- TR150 SSD: EXT4: 413.00 (SE +/- 3.67, N = 3)
- TR150 SSD: F2FS: 416.00 (SE +/- 1.67, N = 3)
- TR150 SSD: XFS: 395.00 (SE +/- 20.47, N = 6)
- Virtio ZFS HDD Raid 0 2: 197.00 (SE +/- 6.45, N = 6)
- Virtio ZFS HDD Raid 10: 219.00 (SE +/- 1.67, N = 3)
- Virtio ZFS HDD Raid 10 WBU: 228.00 (SE +/- 8.92, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 95.57 (SE +/- 1.55, N = 3)
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
Dbench 4.0, Client Count: 6 (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 48.41 (SE +/- 0.25, N = 3)
- Proxmox ZFS Raid 1 WT: 50.29 (SE +/- 0.17, N = 3)
- Seagate HDD: Btrfs: 45.74 (SE +/- 0.59, N = 6)
- Seagate HDD: EXT4: 25.73 (SE +/- 0.67, N = 6)
- Seagate HDD: XFS: 20.21 (SE +/- 0.02, N = 3)
- TR150 SSD: Btrfs: 247.49 (SE +/- 2.85, N = 3)
- TR150 SSD: EXT4: 369.21 (SE +/- 8.35, N = 6)
- TR150 SSD: F2FS: 276.31 (SE +/- 0.15, N = 3)
- TR150 SSD: XFS: 442.69 (SE +/- 1.98, N = 3)
- Virtio ZFS HDD Raid 0 2: 72.29 (SE +/- 0.05, N = 3)
- Virtio ZFS HDD Raid 10: 56.88 (SE +/- 0.41, N = 3)
- Virtio ZFS HDD Raid 10 WBU: 1553.66 (SE +/- 2.57, N = 3)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 212.75 (SE +/- 0.88, N = 3)
1. (CC) gcc options: -lpopt -O2
IOzone 3.465, Record Size: 4Kb - File Size: 8GB - Disk Test: Write Performance (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 66.76 (SE +/- 3.27, N = 6)
- Proxmox ZFS Raid 1 WT: 38.57 (SE +/- 0.71, N = 6)
- Seagate HDD: Btrfs: 161.01 (SE +/- 6.56, N = 6)
- Seagate HDD: EXT4: 155.45 (SE +/- 3.66, N = 6)
- Seagate HDD: XFS: 153.08 (SE +/- 1.89, N = 3)
- TR150 SSD: Btrfs: 93.20 (SE +/- 3.44, N = 6)
- TR150 SSD: EXT4: 103.49 (SE +/- 1.81, N = 6)
- TR150 SSD: F2FS: 88.99 (SE +/- 2.07, N = 6)
- TR150 SSD: XFS: 96.10 (SE +/- 4.92, N = 6)
- Virtio ZFS HDD Raid 0 2: 101.53 (SE +/- 13.18, N = 6)
- Virtio ZFS HDD Raid 10: 109.03 (SE +/- 12.21, N = 6)
- Virtio ZFS HDD Raid 10 WBU: 152.17 (SE +/- 19.27, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 79.12 (SE +/- 5.09, N = 6)
1. (CC) gcc options: -O3
Compile Bench 0.6, Test: Compile (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 119.87 (SE +/- 2.12, N = 6)
- Proxmox ZFS Raid 1 WT: 39.55 (SE +/- 0.55, N = 3)
- Seagate HDD: Btrfs: 2177.85 (SE +/- 35.99, N = 3)
- Seagate HDD: EXT4: 1678.76 (SE +/- 9.62, N = 3)
- Seagate HDD: XFS: 2092.61 (SE +/- 2.15, N = 3)
- TR150 SSD: Btrfs: 1371.80 (SE +/- 24.66, N = 6)
- TR150 SSD: EXT4: 1686.54 (SE +/- 13.09, N = 3)
- TR150 SSD: F2FS: 2271.93 (SE +/- 54.64, N = 6)
- TR150 SSD: XFS: 2101.38 (SE +/- 4.55, N = 3)
- Virtio ZFS HDD Raid 0 2: 538.22 (SE +/- 56.60, N = 6)
- Virtio ZFS HDD Raid 10: 495.46 (SE +/- 37.60, N = 6)
- Virtio ZFS HDD Raid 10 WBU: 300.82 (SE +/- 47.76, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 75.83 (SE +/- 1.07, N = 6)
Compile Bench 0.6, Test: Initial Create (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 83.03 (SE +/- 4.47, N = 3)
- Proxmox ZFS Raid 1 WT: 51.18 (SE +/- 1.21, N = 3)
- Seagate HDD: Btrfs: 262.66 (SE +/- 6.83, N = 3)
- Seagate HDD: EXT4: 494.62 (SE +/- 11.14, N = 3)
- Seagate HDD: XFS: 402.86 (SE +/- 9.05, N = 3)
- TR150 SSD: Btrfs: 109.66 (SE +/- 14.76, N = 3)
- TR150 SSD: EXT4: 505.73 (SE +/- 8.16, N = 3)
- TR150 SSD: F2FS: 550.41 (SE +/- 9.94, N = 3)
- TR150 SSD: XFS: 395.06 (SE +/- 4.83, N = 3)
- Virtio ZFS HDD Raid 0 2: 255.10 (SE +/- 15.11, N = 3)
- Virtio ZFS HDD Raid 10: 241.59 (SE +/- 4.61, N = 3)
- Virtio ZFS HDD Raid 10 WBU: 212.93 (SE +/- 4.80, N = 3)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 68.28 (SE +/- 0.68, N = 3)
Compile Bench 0.6, Test: Read Compiled Tree (MB/s; more is better)
- Proxmox ZFS Raid 1 WB: 315.73 (SE +/- 4.38, N = 3)
- Proxmox ZFS Raid 1 WT: 326.85 (SE +/- 6.19, N = 3)
- Seagate HDD: Btrfs: 804.92 (SE +/- 7.06, N = 3)
- Seagate HDD: EXT4: 801.75 (SE +/- 13.33, N = 3)
- Seagate HDD: XFS: 833.32 (SE +/- 12.74, N = 3)
- TR150 SSD: Btrfs: 881.31 (SE +/- 11.29, N = 3)
- TR150 SSD: EXT4: 837.97 (SE +/- 44.68, N = 3)
- TR150 SSD: F2FS: 878.02 (SE +/- 6.75, N = 3)
- TR150 SSD: XFS: 892.28 (SE +/- 6.60, N = 3)
- Virtio ZFS HDD Raid 0 2: 807.12 (SE +/- 77.29, N = 3)
- Virtio ZFS HDD Raid 10: 887.85 (SE +/- 71.49, N = 3)
- Virtio ZFS HDD Raid 10 WBU: 746.27 (SE +/- 38.31, N = 3)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 230.68 (SE +/- 1.31, N = 3)
SQLite 3.22, Timed SQLite Insertions (Seconds; fewer is better)
- Proxmox ZFS Raid 1 WB: 400.77 (SE +/- 5.53, N = 6)
- Proxmox ZFS Raid 1 WT: 480.57 (SE +/- 13.85, N = 6)
- Seagate HDD: Btrfs: 1012.38 (SE +/- 2.51, N = 3)
- Seagate HDD: EXT4: 576.02 (SE +/- 9.59, N = 4)
- Seagate HDD: XFS: 417.70 (SE +/- 0.96, N = 3)
- TR150 SSD: Btrfs: 99.08 (SE +/- 1.94, N = 3)
- TR150 SSD: EXT4: 41.27 (SE +/- 0.61, N = 5)
- TR150 SSD: F2FS: 34.89 (SE +/- 0.53, N = 5)
- TR150 SSD: XFS: 36.76 (SE +/- 0.51, N = 6)
- Virtio ZFS HDD Raid 0 2: 246.03 (SE +/- 3.85, N = 3)
- Virtio ZFS HDD Raid 10: 329.31 (SE +/- 6.15, N = 6)
- Virtio ZFS HDD Raid 10 WBU: 3.34 (SE +/- 0.05, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 23.65 (SE +/- 0.55, N = 6)
1. (CC) gcc options: -O2 -ldl -lpthread
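The SQLite result measures how long a batch of insertions takes, so the filesystem's commit/fsync latency dominates rather than CPU speed. A minimal Python sketch of that style of workload, assuming one transaction per row; the actual test profile's schema and row count differ:

```python
import sqlite3
import time

def timed_insertions(n, path=":memory:"):
    """Time n single-row transactions; returns (elapsed seconds, row count)."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
    start = time.perf_counter()
    for _ in range(n):
        conn.execute("INSERT INTO t (payload) VALUES (?)", ("x" * 64,))
        conn.commit()  # committing per row stresses the disk, not the CPU
    elapsed = time.perf_counter() - start
    rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return elapsed, rows
```

Pointing `path` at a file on the filesystem under test reproduces the disk-bound behavior; `:memory:` above only demonstrates the shape of the workload.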
Unpacking The Linux Kernel, linux-4.15.tar.xz (Seconds; fewer is better)
- Proxmox ZFS Raid 1 WB: 30.03 (SE +/- 2.11, N = 8)
- Proxmox ZFS Raid 1 WT: 29.63 (SE +/- 2.46, N = 8)
- Seagate HDD: Btrfs: 7.25 (SE +/- 0.16, N = 8)
- Seagate HDD: EXT4: 6.67 (SE +/- 0.08, N = 8)
- Seagate HDD: XFS: 7.65 (SE +/- 0.16, N = 8)
- TR150 SSD: Btrfs: 9.44 (SE +/- 0.16, N = 4)
- TR150 SSD: EXT4: 6.40 (SE +/- 0.04, N = 4)
- TR150 SSD: F2FS: 6.67 (SE +/- 0.08, N = 8)
- TR150 SSD: XFS: 6.82 (SE +/- 0.08, N = 8)
- Virtio ZFS HDD Raid 0 2: 15.97 (SE +/- 0.97, N = 8)
- Virtio ZFS HDD Raid 10: 14.52 (SE +/- 1.36, N = 8)
- Virtio ZFS HDD Raid 10 WBU: 10.65 (SE +/- 2.27, N = 8)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 15.93 (SE +/- 1.09, N = 8)
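The unpack test is essentially the wall-clock time of a tar extraction onto the filesystem under test. A self-contained sketch, with a small generated archive standing in for linux-4.15.tar.xz:

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"
# Build a small stand-in archive (the real test extracts linux-4.15.tar.xz,
# which is many thousands of files).
mkdir tree
seq 1 1000 > tree/file.txt
tar -cJf sample.tar.xz tree
rm -rf tree
# The benchmark's measurement is essentially the elapsed time of this step:
start=$(date +%s)
tar -xJf sample.tar.xz
echo "unpack took $(( $(date +%s) - start ))s"
```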
Git, Time To Complete Common Git Commands (Seconds; fewer is better)
- Proxmox ZFS Raid 1 WB: 20.08 (SE +/- 0.72, N = 6)
- Proxmox ZFS Raid 1 WT: 25.70 (SE +/- 1.57, N = 6)
- Seagate HDD: Btrfs: 6.59 (SE +/- 0.11, N = 4)
- Seagate HDD: EXT4: 6.57 (SE +/- 0.17, N = 6)
- Seagate HDD: XFS: 6.76 (SE +/- 0.10, N = 3)
- TR150 SSD: Btrfs: 6.36 (SE +/- 0.07, N = 3)
- TR150 SSD: EXT4: 6.31 (SE +/- 0.03, N = 3)
- TR150 SSD: F2FS: 6.40 (SE +/- 0.05, N = 3)
- TR150 SSD: XFS: 6.65 (SE +/- 0.12, N = 3)
- Virtio ZFS HDD Raid 0 2: 6.67 (SE +/- 0.10, N = 5)
- Virtio ZFS HDD Raid 10: 7.15 (SE +/- 0.27, N = 6)
- Virtio ZFS HDD Raid 10 WBU: 7.57 (SE +/- 1.23, N = 6)
- XenServer 7.4 Adaptec 6805 Raid 1 PV: 17.81 (SE +/- 0.27, N = 6)
1. Git version: 2.15.1 on the TR150 SSD and Seagate HDD configurations; 2.11.0 on the Virtio, XenServer, and Proxmox configurations.
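The Git test times a sequence of everyday commands against a local repository. A rough self-contained sketch; the actual test profile operates on a much larger pre-seeded repository:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email bench@example.com
git config user.name bench
seq 1 100 > data.txt
git add data.txt
git commit -q -m "initial commit"
# Timing everyday commands like these is what the benchmark reports:
start=$(date +%s)
git status > /dev/null
git log --oneline > /dev/null
echo "commands took $(( $(date +%s) - start ))s"
```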
Phoronix Test Suite v10.8.4