Linux 4.16 File-System Tests

HDD and SSD file-system tests on Linux 4.16 for a future article on Phoronix.

HTML result view exported from: https://openbenchmarking.org/result/1804012-FO-1803294FO33&grs&sro&rro.

Linux 4.16 File-System TestsProcessorMotherboardChipsetMemoryDiskGraphicsMonitorNetworkOSKernelDesktopDisplay ServerOpenGLCompilerFile-SystemScreen ResolutionSystem LayerTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native2 x Intel Xeon Gold 6138 @ 3.70GHz (40 Cores / 80 Threads)TYAN S7106 (V1.00 BIOS)Intel Sky Lake-E DMI3 Registers12 x 8192 MB DDR4-2666MT/s Micron 9ASF1G72PZ-2G6B1256GB Samsung SSD 850 + 2000GB Seagate ST2000DM006-2DM1 + 2 x 120GB TOSHIBA-TR150llvmpipe 95360MBVE228Intel I210 Gigabit ConnectionUbuntu 18.044.16.0-999-generic (x86_64) 20180323GNOME Shell 3.28.0X Server 1.19.63.3 Mesa 18.0.0-rc5 (LLVM 6.0 256 bits)GCC 7.3.0ext41920x1080f2fsbtrfsxfsbtrfsext4Common KVM @ 3.91GHz (2 Cores)QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)2048MB34GB QEMU HDD + 30GB 2115bochsdrmfbDebian 9.44.9.0-6-amd64 (x86_64)GCC 6.3.0 201705161024x768qemuCommon KVM @ 3.91GHz (4 Cores)34GB QEMU HDDAMD Turion II Neo N54L @ 2.20GHz (2 Cores)4096MB15GBvm-other Xen 4.7.4-4.1 HypervisorCommon KVM @ 2.20GHz (2 Cores)QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)bochsdrmfb1024x768qemuCommon KVM @ 2.20GHz (1 Core)OpenBenchmarking.orgCompiler Details- TR150 SSD: EXT4: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- TR150 SSD: F2FS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- TR150 SSD: Btrfs: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- TR150 SSD: XFS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Seagate HDD: XFS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Seagate HDD: Btrfs: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Seagate HDD: EXT4: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Virtio ZFS HDD Raid 0: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Virtio ZFS HDD Raid 0 2: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Virtio ZFS HDD Raid 10: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Virtio ZFS HDD Raid 10 WBU: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- XenServer 7.4 Adaptec 6805 Raid 1 PV: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WT: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB metadata: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB metadata throughput: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB 2: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB ZFS 0.7.6: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -vDisk Details- TR150 SSD: EXT4: CFQ / data=ordered,relatime,rw- TR150 SSD: F2FS: CFQ / acl,active_logs=6,background_gc=on,extent_cache,flush_merge,inline_data,inline_dentry,inline_xattr,lazytime,mode=adaptive,no_heap,relatime,rw,user_xattr- TR150 SSD: Btrfs: CFQ / relatime,rw,space_cache,ssd,subvol=/,subvolid=5- TR150 SSD: XFS: CFQ / attr2,inode64,noquota,relatime,rw- Seagate HDD: XFS: CFQ / attr2,inode64,noquota,relatime,rw- Seagate HDD: Btrfs: CFQ / relatime,rw,space_cache,subvol=/,subvolid=5- Seagate HDD: EXT4: CFQ / data=ordered,relatime,rw- Virtio ZFS HDD Raid 0: CFQ / data=ordered,discard,noatime,rw- Virtio ZFS HDD Raid 0 2: CFQ / data=ordered,discard,noatime,rw- Virtio ZFS HDD Raid 10: CFQ / data=ordered,discard,noatime,rw- Virtio ZFS HDD Raid 10 WBU: CFQ / data=ordered,discard,noatime,rw- XenServer 7.4 Adaptec 6805 Raid 1 PV: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WT: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB metadata: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB metadata throughput: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB 2: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB ZFS 0.7.6: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: none / data=ordered,discard,noatime,rwProcessor Details- TR150 SSD: EXT4, TR150 SSD: F2FS, TR150 SSD: Btrfs, TR150 SSD: XFS, Seagate HDD: XFS, Seagate HDD: Btrfs, Seagate HDD: EXT4: Scaling Governor: intel_pstate powersavePython Details- TR150 SSD: EXT4: Python 2.7.14+ + Python 3.6.5rc1- TR150 SSD: F2FS: Python 2.7.14+ + Python 3.6.5rc1- TR150 SSD: Btrfs: Python 2.7.14+ + Python 3.6.5rc1- TR150 SSD: XFS: Python 2.7.14+ + Python 3.6.5rc1- Seagate HDD: XFS: Python 2.7.14+ + Python 3.6.5rc1- Seagate HDD: Btrfs: Python 2.7.14+ + Python 3.6.5rc1- Seagate HDD: EXT4: Python 2.7.14+ + Python 3.6.5rc1- Virtio ZFS HDD Raid 0: Python 2.7.13 + Python 3.5.3- Virtio ZFS HDD Raid 0 2: Python 2.7.13 + Python 3.5.3- Virtio ZFS HDD Raid 10: Python 2.7.13 + Python 3.5.3- Virtio ZFS HDD Raid 10 WBU: Python 2.7.13 + Python 3.5.3- XenServer 7.4 Adaptec 6805 Raid 1 PV: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WT: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB metadata: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB metadata throughput: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB 2: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB ZFS 0.7.6: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: Python 2.7.13 + Python 3.5.3Security Details- TR150 SSD: EXT4: KPTI + __user pointer sanitization + Full generic retpoline Protection- TR150 SSD: F2FS: KPTI + __user pointer sanitization + Full generic retpoline Protection- TR150 SSD: Btrfs: KPTI + __user pointer sanitization + Full generic retpoline Protection- TR150 SSD: XFS: KPTI + __user pointer sanitization + Full generic retpoline Protection- Seagate HDD: XFS: KPTI + __user pointer sanitization + Full generic retpoline Protection- Seagate HDD: Btrfs: KPTI + __user pointer sanitization + Full generic retpoline Protection- Seagate HDD: EXT4: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 0: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 0 2: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 10: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 10 WBU: KPTI + __user pointer sanitization + Full generic retpoline Protection- XenServer 7.4 Adaptec 6805 Raid 1 PV: __user pointer sanitization + Full AMD retpoline Protection- Proxmox ZFS Raid 1 WT: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB metadata: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB metadata throughput: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB 2: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB ZFS 0.7.6: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: __user pointer sanitization + Full generic retpoline Protection

Linux 4.16 File-System Testsdbench: 6compilebench: Compilefio: Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directorysqlite: Timed SQLite Insertionscompilebench: Read Compiled Treeblogbench: Readgit: Time To Complete Common Git Commandsunpack-linux: linux-4.15.tar.xzcompilebench: Initial Createiozone: 4Kb - 8GB - Write Performanceblogbench: Writefio: Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directoryfio: Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directoryfio: Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directoryaio-stress: Rand WriteTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native369.211686.5423041.27837.9720971806.316.40505.73103.49103254132272752301.32276.312271.9321234.89878.0220549326.406.67550.4188.9991564162282823017.61247.491371.8021299.08881.3122697506.369.44109.6693.20345883.6832773.282936.17442.692101.3821236.76892.2820843766.656.82395.0696.1040393952022732971.4220.212092.611.53417.70833.3221972436.767.65402.86153.0821261451481.172979.1445.742177.851.521012.38804.9222934336.597.25262.66161.0123704.041.5423.022953.9225.731678.761.45576.02801.7524153736.576.67494.62155.4564731491551.032410.2972.29538.22258246.03807.122144076.6715.97255.10101.5319301972551971341.6156.88495.462.98329.31887.851956267.1514.52241.59109.0318642192502171376.591553.66300.822.833.34746.272283947.5710.65212.93152.171966228270207748.84212.7575.830.6423.65230.6818335217.8115.9368.2879.1272195.5768.331.0147.8150.2939.55144480.57326.8525711725.7029.6351.1838.575500.541700.4287.0848.41119.871.17400.77315.7324329320.0830.0383.0366.7667821.6016517.46333.4941.18129.161.09489.60313.9223504120.5630.8381.4966.207394.041589.40275.8541.46127.591.10436.47320.4123311620.6425.4272.6567.437542.651728.48267.7749.10121.691.54396.77313.4924109620.3931.5683.2968.577345.581608.24256.5048.44122.371.18391.26301.2521089220.5731.6373.8157.0943323.50243.229.00216.53OpenBenchmarking.org

Dbench

Client Count: 6

OpenBenchmarking.orgMB/s, More Is BetterDbench 4.0Client Count: 6XenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB30060090012001500SE +/- 0.88, N = 3SE +/- 2.57, N = 3SE +/- 0.41, N = 3SE +/- 0.05, N = 3SE +/- 1.98, N = 3SE +/- 0.15, N = 3SE +/- 8.35, N = 6SE +/- 2.85, N = 3SE +/- 0.02, N = 3SE +/- 0.67, N = 6SE +/- 0.59, N = 6SE +/- 0.17, N = 3SE +/- 0.42, N = 3SE +/- 0.88, N = 6SE +/- 0.20, N = 3SE +/- 0.32, N = 3SE +/- 0.25, N = 3212.751553.6656.8872.29442.69276.31369.21247.4920.2125.7345.7450.2941.4641.1848.4449.1048.411. (CC) gcc options: -lpopt -O2

Compile Bench

Test: Compile

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: CompileXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB5001000150020002500SE +/- 1.07, N = 6SE +/- 47.76, N = 6SE +/- 37.60, N = 6SE +/- 56.60, N = 6SE +/- 4.55, N = 3SE +/- 54.64, N = 6SE +/- 13.09, N = 3SE +/- 24.66, N = 6SE +/- 2.15, N = 3SE +/- 9.62, N = 3SE +/- 35.99, N = 3SE +/- 0.55, N = 3SE +/- 3.34, N = 6SE +/- 2.28, N = 6SE +/- 1.37, N = 3SE +/- 1.93, N = 6SE +/- 2.12, N = 675.83300.82495.46538.222101.382271.931686.541371.802092.611678.762177.8539.55127.59129.16122.37121.69119.87

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB60120180240300SE +/- 0.02, N = 6SE +/- 0.24, N = 6SE +/- 0.16, N = 6SE +/- 3.18, N = 3SE +/- 1.00, N = 3SE +/- 1.00, N = 3SE +/- 0.67, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 1.76, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 6SE +/- 0.09, N = 6SE +/- 0.02, N = 30.642.832.98258.00212.00212.00230.00212.001.531.451.52144.001.101.091.181.541.171. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: BtrfsProxmox ZFS Raid 1 WB15K30K45K60K75KSE +/- 1325.64, N = 6SE +/- 405.52, N = 3SE +/- 896.29, N = 3SE +/- 1005.21, N = 6SE +/- 466.67, N = 3SE +/- 1010.53, N = 6SE +/- 853.91, N = 65300055567505006996772167702501875027400161001. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

SQLite

Timed SQLite Insertions

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.22Timed SQLite InsertionsXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB2004006008001000SE +/- 0.55, N = 6SE +/- 0.05, N = 6SE +/- 6.15, N = 6SE +/- 3.85, N = 3SE +/- 0.51, N = 6SE +/- 0.53, N = 5SE +/- 0.61, N = 5SE +/- 1.94, N = 3SE +/- 0.96, N = 3SE +/- 9.59, N = 4SE +/- 2.51, N = 3SE +/- 13.85, N = 6SE +/- 6.63, N = 3SE +/- 12.09, N = 6SE +/- 9.84, N = 6SE +/- 13.56, N = 6SE +/- 5.53, N = 623.653.34329.31246.0336.7634.8941.2799.08417.70576.021012.38480.57436.47489.60391.26396.77400.771. (CC) gcc options: -O2 -ldl -lpthread

Compile Bench

Test: Read Compiled Tree

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: Read Compiled TreeXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB2004006008001000SE +/- 1.31, N = 3SE +/- 38.31, N = 3SE +/- 71.49, N = 3SE +/- 77.29, N = 3SE +/- 6.60, N = 3SE +/- 6.75, N = 3SE +/- 44.68, N = 3SE +/- 11.29, N = 3SE +/- 12.74, N = 3SE +/- 13.33, N = 3SE +/- 7.06, N = 3SE +/- 6.19, N = 3SE +/- 3.01, N = 3SE +/- 3.86, N = 3SE +/- 1.76, N = 3SE +/- 3.38, N = 3SE +/- 4.38, N = 3230.68746.27887.85807.12892.28878.02837.97881.31833.32801.75804.92326.85320.41313.92301.25313.49315.73

BlogBench

Test: Read

OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.0Test: ReadXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB500K1000K1500K2000K2500KSE +/- 1813.14, N = 3SE +/- 11329.23, N = 6SE +/- 9034.98, N = 6SE +/- 26762.45, N = 6SE +/- 37384.19, N = 6SE +/- 146937.12, N = 6SE +/- 5745.23, N = 3SE +/- 12906.26, N = 3SE +/- 39188.34, N = 3SE +/- 16397.30, N = 3SE +/- 1570.58, N = 3SE +/- 378.62, N = 3SE +/- 2712.85, N = 3SE +/- 2308.83, N = 3SE +/- 3692.20, N = 3SE +/- 3022.99, N = 3SE +/- 1753.55, N = 318335222839419562621440720843762054932209718022697502197243241537322934332571172331162350412108922410962432931. (CC) gcc options: -O2 -pthread

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryVirtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsProxmox ZFS Raid 1 WT14K28K42K56K70KSE +/- 1017.08, N = 3SE +/- 200.00, N = 3SE +/- 233.33, N = 3SE +/- 133.33, N = 3SE +/- 392.99, N = 36563354400542005896754367367331. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Git

Time To Complete Common Git Commands

OpenBenchmarking.orgSeconds, Fewer Is BetterGitTime To Complete Common Git CommandsXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB612182430SE +/- 0.27, N = 6SE +/- 1.23, N = 6SE +/- 0.27, N = 6SE +/- 0.10, N = 5SE +/- 0.12, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.07, N = 3SE +/- 0.10, N = 3SE +/- 0.17, N = 6SE +/- 0.11, N = 4SE +/- 1.57, N = 6SE +/- 0.84, N = 6SE +/- 0.81, N = 6SE +/- 0.76, N = 6SE +/- 0.95, N = 6SE +/- 0.72, N = 617.817.577.156.676.656.406.316.366.766.576.5925.7020.6420.5620.5720.3920.081. XenServer 7.4 Adaptec 6805 Raid 1 PV: git version 2.11.02. Virtio ZFS HDD Raid 10 WBU: git version 2.11.03. Virtio ZFS HDD Raid 10: git version 2.11.04. Virtio ZFS HDD Raid 0 2: git version 2.11.05. TR150 SSD: XFS: git version 2.15.16. TR150 SSD: F2FS: git version 2.15.17. TR150 SSD: EXT4: git version 2.15.18. TR150 SSD: Btrfs: git version 2.15.19. Seagate HDD: XFS: git version 2.15.110. Seagate HDD: EXT4: git version 2.15.111. Seagate HDD: Btrfs: git version 2.15.112. Proxmox ZFS Raid 1 WT: git version 2.11.013. Proxmox ZFS Raid 1 WB metadata throughput: git version 2.11.014. Proxmox ZFS Raid 1 WB metadata: git version 2.11.015. Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: git version 2.11.016. Proxmox ZFS Raid 1 WB 2: git version 2.11.017. Proxmox ZFS Raid 1 WB: git version 2.11.0

Unpacking The Linux Kernel

linux-4.15.tar.xz

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernellinux-4.15.tar.xzXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB714212835SE +/- 1.09, N = 8SE +/- 2.27, N = 8SE +/- 1.36, N = 8SE +/- 0.97, N = 8SE +/- 0.08, N = 8SE +/- 0.08, N = 8SE +/- 0.04, N = 4SE +/- 0.16, N = 4SE +/- 0.16, N = 8SE +/- 0.08, N = 8SE +/- 0.16, N = 8SE +/- 2.46, N = 8SE +/- 0.38, N = 4SE +/- 2.71, N = 8SE +/- 1.75, N = 8SE +/- 3.11, N = 8SE +/- 2.11, N = 815.9310.6514.5215.976.826.676.409.447.656.677.2529.6325.4230.8331.6331.5630.03

Compile Bench

Test: Initial Create

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: Initial CreateXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB120240360480600SE +/- 0.68, N = 3SE +/- 4.80, N = 3SE +/- 4.61, N = 3SE +/- 15.11, N = 3SE +/- 4.83, N = 3SE +/- 9.94, N = 3SE +/- 8.16, N = 3SE +/- 14.76, N = 3SE +/- 9.05, N = 3SE +/- 11.14, N = 3SE +/- 6.83, N = 3SE +/- 1.21, N = 3SE +/- 8.35, N = 3SE +/- 4.88, N = 3SE +/- 7.58, N = 3SE +/- 6.20, N = 3SE +/- 4.47, N = 368.28212.93241.59255.10395.06550.41505.73109.66402.86494.62262.6651.1872.6581.4973.8183.2983.03

IOzone

Record Size: 4Kb - File Size: 8GB - Disk Test: Write Performance

OpenBenchmarking.orgMB/s, More Is BetterIOzone 3.465Record Size: 4Kb - File Size: 8GB - Disk Test: Write PerformanceXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB4080120160200SE +/- 5.09, N = 6SE +/- 19.27, N = 6SE +/- 12.21, N = 6SE +/- 13.18, N = 6SE +/- 4.92, N = 6SE +/- 2.07, N = 6SE +/- 1.81, N = 6SE +/- 3.44, N = 6SE +/- 1.89, N = 3SE +/- 3.66, N = 6SE +/- 6.56, N = 6SE +/- 0.71, N = 6SE +/- 3.85, N = 6SE +/- 2.80, N = 6SE +/- 6.70, N = 6SE +/- 2.93, N = 6SE +/- 3.27, N = 679.12152.17109.03101.5396.1088.99103.4993.20153.08155.45161.0138.5767.4366.2057.0968.5766.761. (CC) gcc options: -O3

BlogBench

Test: Write

OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.0Test: WriteXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB2K4K6K8K10KSE +/- 30.51, N = 3SE +/- 97.59, N = 3SE +/- 52.35, N = 3SE +/- 190.23, N = 3SE +/- 92.81, N = 3SE +/- 83.15, N = 3SE +/- 88.82, N = 3SE +/- 25.15, N = 3SE +/- 40.70, N = 3SE +/- 23.54, N = 3SE +/- 46.68, N = 3SE +/- 100.50, N = 3SE +/- 15.30, N = 3SE +/- 18.93, N = 3SE +/- 48.38, N = 3SE +/- 28.82, N = 3SE +/- 41.65, N = 3721196618641930403991561032534582126647323705507547394337346781. (CC) gcc options: -O2 -pthread

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT420K40K60K80K100KSE +/- 384.42, N = 3SE +/- 2293.38, N = 6SE +/- 351.19, N = 3SE +/- 1815.44, N = 6SE +/- 5437.01, N = 6SE +/- 333.33, N = 3SE +/- 1000.00, N = 3SE +/- 1238.28, N = 6SE +/- 617.12, N = 4SE +/- 783.87, N = 3244335831756000501501011671063331060002150037050382331. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB90180270360450SE +/- 1.55, N = 3SE +/- 8.92, N = 6SE +/- 1.67, N = 3SE +/- 6.45, N = 6SE +/- 20.47, N = 6SE +/- 1.67, N = 3SE +/- 3.67, N = 3SE +/- 4.96, N = 6SE +/- 2.50, N = 4SE +/- 3.00, N = 3SE +/- 0.11, N = 6SE +/- 0.02, N = 6SE +/- 0.52, N = 6SE +/- 0.51, N = 6SE +/- 4.59, N = 6SE +/- 0.99, N = 6SE +/- 4.64, N = 695.57228.00219.00197.00395.00416.00413.0083.68145.00149.004.040.542.654.0423.505.5821.601. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Proxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB20K40K60K80K100KSE +/- 2134.71, N = 5SE +/- 970.36, N = 5SE +/- 833.23, N = 6SE +/- 1156.62, N = 3SE +/- 463.08, N = 3SE +/- 145.30, N = 3SE +/- 100.00, N = 3SE +/- 1550.63, N = 3SE +/- 1174.07, N = 6SE +/- 338.30, N = 3SE +/- 2193.13, N = 6SE +/- 1209.41, N = 6SE +/- 14548.04, N = 5SE +/- 617.34, N = 3SE +/- 1772.32, N = 6191006894063683651675166758467583008363337800397674353344050402007378040833419331. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB70140210280350SE +/- 9.06, N = 6SE +/- 4.16, N = 5SE +/- 3.43, N = 6SE +/- 4.58, N = 3SE +/- 1.76, N = 3SE +/- 0.67, N = 3SE +/- 6.23, N = 3SE +/- 4.53, N = 6SE +/- 0.01, N = 3SE +/- 1.33, N = 3SE +/- 8.50, N = 6SE +/- 4.61, N = 6SE +/- 64.58, N = 6SE +/- 2.33, N = 3SE +/- 7.05, N = 668.33270.00250.00255.00202.00228.00227.00327.00148.00155.001.54170.00172.00158.00243.22160.00165.001. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB60120180240300SE +/- 0.01, N = 3SE +/- 5.07, N = 6SE +/- 3.76, N = 3SE +/- 3.91, N = 6SE +/- 1.73, N = 3SE +/- 4.01, N = 6SE +/- 3.34, N = 6SE +/- 0.04, N = 6SE +/- 0.06, N = 6SE +/- 16.80, N = 6SE +/- 0.02, N = 6SE +/- 3.18, N = 6SE +/- 4.07, N = 6SE +/- 3.37, N = 6SE +/- 3.37, N = 6SE +/- 9.25, N = 61.01207.00217.00197.00273.00282.00275.0073.281.171.0323.020.428.489.409.008.2417.461. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

AIO-Stress

Test: Random Write

OpenBenchmarking.orgMB/s, More Is BetterAIO-Stress 0.21Test: Random WriteXenServer 7.4 Adaptec 6805 Raid 1 PVVirtio ZFS HDD Raid 10 WBUVirtio ZFS HDD Raid 10Virtio ZFS HDD Raid 0 2TR150 SSD: XFSTR150 SSD: F2FSTR150 SSD: EXT4TR150 SSD: BtrfsSeagate HDD: XFSSeagate HDD: EXT4Seagate HDD: BtrfsProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-nativeProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB6001200180024003000SE +/- 0.84, N = 6SE +/- 163.17, N = 6SE +/- 33.30, N = 6SE +/- 47.09, N = 6SE +/- 26.35, N = 3SE +/- 50.73, N = 3SE +/- 45.90, N = 3SE +/- 38.20, N = 3SE +/- 22.47, N = 3SE +/- 40.49, N = 3SE +/- 38.74, N = 3SE +/- 4.25, N = 6SE +/- 4.55, N = 3SE +/- 7.43, N = 6SE +/- 39.69, N = 6SE +/- 13.22, N = 6SE +/- 18.55, N = 647.81748.841376.591341.612971.423017.612301.322936.172979.142410.292953.9287.08267.77275.85216.53256.50333.491. (CC) gcc options: -pthread -laio


Phoronix Test Suite v10.8.5