Linux 4.16 File-System Tests

HDD and SSD file-system tests on Linux 4.16 for a future article on Phoronix.

HTML result view exported from: https://openbenchmarking.org/result/1804167-FO-1804132FO53&sro&grw.
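These runs were generated with the Phoronix Test Suite. A matching comparison run can be started locally against this public result by passing the OpenBenchmarking.org result ID from the URL above; a minimal sketch (the test selection itself is chosen interactively when prompted):

    phoronix-test-suite benchmark 1804167-FO-1804132FO53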

Tested configurations (24 in total): TR150 SSD: EXT4, TR150 SSD: F2FS, TR150 SSD: Btrfs, TR150 SSD: XFS, Seagate HDD: XFS, Seagate HDD: Btrfs, Seagate HDD: EXT4, Virtio ZFS HDD Raid 0, Virtio ZFS HDD Raid 0 2, Virtio ZFS HDD Raid 10, Virtio ZFS HDD Raid 10 WBU, XenServer 7.4 Adaptec 6805 Raid 1 PV, Proxmox ZFS Raid 1 WT, Proxmox ZFS Raid 1 WB, Proxmox ZFS Raid 1 WB metadata, Proxmox ZFS Raid 1 WB metadata throughput, Proxmox ZFS Raid 1 WB 2, Proxmox ZFS Raid 1 WB ZFS 0.7.6, Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native, Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap, Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc, Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2, Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc and Proxmox ZFS Raid 1 LXC 3.

The bare-metal TR150 SSD and Seagate HDD runs were done on 2 x Intel Xeon Gold 6138 @ 3.70GHz (40 Cores / 80 Threads), TYAN S7106 (V1.00 BIOS), Intel Sky Lake-E DMI3 Registers chipset, 12 x 8192 MB DDR4-2666MT/s Micron 9ASF1G72PZ-2G6B1, 256GB Samsung SSD 850 + 2000GB Seagate ST2000DM006-2DM1 + 2 x 120GB TOSHIBA-TR150, llvmpipe graphics (95360MB), VE228 monitor and Intel I210 Gigabit Connection, running Ubuntu 18.04 with Linux 4.16.0-999-generic (x86_64) 20180323, GNOME Shell 3.28.0, X Server 1.19.6, OpenGL 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 256 bits), GCC 7.3.0 and a 1920x1080 screen resolution. The file system under test (ext4, f2fs, btrfs or xfs) is the one named by the configuration.

The Virtio ZFS HDD Raid and Proxmox ZFS Raid 1 KVM runs were QEMU/KVM guests ("Common KVM" vCPUs at 2.20GHz or 3.91GHz with 1, 2 or 4 cores depending on the run) on a QEMU Standard PC (i440FX + PIIX, 1996) machine with the rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS, 2048MB of RAM, 15-34GB QEMU HDD virtual disks (34GB QEMU HDD + 30GB 2115 for the Virtio runs), bochsdrmfb graphics at 1024x768, running Debian 9.4 with Linux 4.9.0-6-amd64 (x86_64), GCC 6.3.0 20170516 and ext4, with qemu as the system layer.

The XenServer 7.4 Adaptec 6805 Raid 1 PV configuration was a paravirtualized guest reporting the host's AMD Turion II Neo N54L @ 2.20GHz (2 Cores), 4096MB of RAM and a 15GB virtual disk (vm-other), running the same Debian 9.4 / Linux 4.9.0-6-amd64 / GCC 6.3.0 stack on ext4 with the Xen 4.7.4-4.1 Hypervisor as the system layer.

The Proxmox ZFS Raid 1 LXC 3 configuration ran in an LXC container on an HP ProLiant MicroServer (O41 BIOS) with an AMD Turion II Neo N54L @ 2.20GHz (2 Cores), 2 x 500GB SAMSUNG HD502HI + 62GB Ultra Fit + 1000GB SAMSUNG HD103SI, astdrmfb graphics, running Debian GNU/Linux 9 with Linux 4.13.16-1-pve (x86_64) and GCC 6.3.0 on zfs, with lxc as the system layer.

Compiler Details

All configurations used the stock distribution GCC. On the Ubuntu 18.04 host runs, GCC 7.3.0 was configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v

On the Debian 9 guest and container runs, GCC 6.3.0 was configured with: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v

Disk Details

I/O scheduler / mount options per configuration:
TR150 SSD: EXT4 - CFQ / data=ordered,relatime,rw
TR150 SSD: F2FS - CFQ / acl,active_logs=6,background_gc=on,extent_cache,flush_merge,inline_data,inline_dentry,inline_xattr,lazytime,mode=adaptive,no_heap,relatime,rw,user_xattr
TR150 SSD: Btrfs - CFQ / relatime,rw,space_cache,ssd,subvol=/,subvolid=5
TR150 SSD: XFS and Seagate HDD: XFS - CFQ / attr2,inode64,noquota,relatime,rw
Seagate HDD: Btrfs - CFQ / relatime,rw,space_cache,subvol=/,subvolid=5
Seagate HDD: EXT4 - CFQ / data=ordered,relatime,rw
Virtio ZFS HDD Raid 0, Raid 0 2, Raid 10 and Raid 10 WBU - CFQ / data=ordered,discard,noatime,rw
XenServer 7.4 Adaptec 6805 Raid 1 PV and all Proxmox ZFS Raid 1 KVM configurations - none / data=ordered,discard,noatime,rw

Processor Details

The bare-metal TR150 SSD and Seagate HDD runs used the intel_pstate powersave scaling governor; Proxmox ZFS Raid 1 LXC 3 used acpi-cpufreq performance.

Python Details

Python 2.7.14+ and Python 3.6.5rc1 on the bare-metal runs; Python 2.7.13 and Python 3.5.3 on all virtualized and container runs.

Security Details

The bare-metal and Virtio ZFS HDD runs report KPTI + __user pointer sanitization + Full generic retpoline Protection. The Proxmox ZFS Raid 1 KVM guests report __user pointer sanitization + Full generic retpoline Protection, and the XenServer 7.4 Adaptec 6805 Raid 1 PV guest reports __user pointer sanitization + Full AMD retpoline Protection. Proxmox ZFS Raid 1 LXC 3 reports OSB (observable speculation barrier, Intel v6) + Full AMD retpoline Protection.

Disk Scheduler Details

Proxmox ZFS Raid 1 LXC 3: NOOP.
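The Proxmox and Virtio configuration labels appear to encode the guest disk settings (WT = writethrough cache, WB = writeback cache, N / nodiskwc = caching disabled, plus iothread/native AIO and swap toggles); that reading is an assumption based on the names alone. As a hedged illustration only, a Proxmox VM disk line carrying those knobs might look like the following, where the storage name, VM ID and size are placeholders rather than values taken from these results:

    # /etc/pve/qemu-server/<vmid>.conf - illustrative disk line only
    virtio0: local-zfs:vm-100-disk-1,cache=writeback,iothread=1,aio=native,discard=on,size=34G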

Benchmarks included in this comparison: Compile Bench (Compile, Initial Create, Read Compiled Tree), Dbench (6 clients), Flexible IO Tester (random read, random write, sequential read and sequential write; Linux AIO engine, unbuffered, direct, 4KB block size, default test directory), IOzone (4Kb records, 8GB file, write performance), Unpacking The Linux Kernel (linux-4.15.tar.xz), BlogBench (read and write), SQLite (timed insertions), Git (time to complete common Git commands) and AIO-Stress (random write). Per-test results for each configuration follow.

Compile Bench

Test: Compile

Compile Bench 0.6 - Test: Compile - MB/s, more is better:

Proxmox ZFS Raid 1 LXC 3 - 234.53
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 102.48
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 100.66
Proxmox ZFS Raid 1 WB - 119.87
Proxmox ZFS Raid 1 WB 2 - 121.69
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 122.37
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 136.28
Proxmox ZFS Raid 1 WB metadata - 129.16
Proxmox ZFS Raid 1 WB metadata throughput - 127.59
Proxmox ZFS Raid 1 WT - 39.55
Seagate HDD: Btrfs - 2177.85
Seagate HDD: EXT4 - 1678.76
Seagate HDD: XFS - 2092.61
TR150 SSD: Btrfs - 1371.80
TR150 SSD: EXT4 - 1686.54
TR150 SSD: F2FS - 2271.93
TR150 SSD: XFS - 2101.38
Virtio ZFS HDD Raid 0 2 - 538.22
Virtio ZFS HDD Raid 10 - 495.46
Virtio ZFS HDD Raid 10 WBU - 300.82
XenServer 7.4 Adaptec 6805 Raid 1 PV - 75.83
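Compile Bench can also be invoked standalone for a quick sanity check; a minimal sketch, where the target directory and iteration counts are illustrative rather than the exact parameters used by the Phoronix Test Suite profile:

    # age a file-system tree the way repeated kernel compiles would
    ./compilebench -D /mnt/test -i 10 -r 30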

Compile Bench

Test: Initial Create

Compile Bench 0.6 - Test: Initial Create - MB/s, more is better:

Proxmox ZFS Raid 1 LXC 3 - 86.20
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 82.72
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 70.16
Proxmox ZFS Raid 1 WB - 83.03
Proxmox ZFS Raid 1 WB 2 - 83.29
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 73.81
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 83.55
Proxmox ZFS Raid 1 WB metadata - 81.49
Proxmox ZFS Raid 1 WB metadata throughput - 72.65
Proxmox ZFS Raid 1 WT - 51.18
Seagate HDD: Btrfs - 262.66
Seagate HDD: EXT4 - 494.62
Seagate HDD: XFS - 402.86
TR150 SSD: Btrfs - 109.66
TR150 SSD: EXT4 - 505.73
TR150 SSD: F2FS - 550.41
TR150 SSD: XFS - 395.06
Virtio ZFS HDD Raid 0 2 - 255.10
Virtio ZFS HDD Raid 10 - 241.59
Virtio ZFS HDD Raid 10 WBU - 212.93
XenServer 7.4 Adaptec 6805 Raid 1 PV - 68.28

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6 - Test: Read Compiled Tree - MB/s, more is better:

Proxmox ZFS Raid 1 LXC 3 - 73.35
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 314.06
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 318.96
Proxmox ZFS Raid 1 WB - 315.73
Proxmox ZFS Raid 1 WB 2 - 313.49
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 301.25
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 305.55
Proxmox ZFS Raid 1 WB metadata - 313.92
Proxmox ZFS Raid 1 WB metadata throughput - 320.41
Proxmox ZFS Raid 1 WT - 326.85
Seagate HDD: Btrfs - 804.92
Seagate HDD: EXT4 - 801.75
Seagate HDD: XFS - 833.32
TR150 SSD: Btrfs - 881.31
TR150 SSD: EXT4 - 837.97
TR150 SSD: F2FS - 878.02
TR150 SSD: XFS - 892.28
Virtio ZFS HDD Raid 0 2 - 807.12
Virtio ZFS HDD Raid 10 - 887.85
Virtio ZFS HDD Raid 10 WBU - 746.27
XenServer 7.4 Adaptec 6805 Raid 1 PV - 230.68

Dbench

Client Count: 6

Dbench 4.0 - Client Count: 6 - MB/s, more is better. (CC) gcc options: -lpopt -O2

Proxmox ZFS Raid 1 LXC 3 - 91.57
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 48.86
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 41.25
Proxmox ZFS Raid 1 WB - 48.41
Proxmox ZFS Raid 1 WB 2 - 49.10
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 48.44
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 49.69
Proxmox ZFS Raid 1 WB metadata - 41.18
Proxmox ZFS Raid 1 WB metadata throughput - 41.46
Proxmox ZFS Raid 1 WT - 50.29
Seagate HDD: Btrfs - 45.74
Seagate HDD: EXT4 - 25.73
Seagate HDD: XFS - 20.21
TR150 SSD: Btrfs - 247.49
TR150 SSD: EXT4 - 369.21
TR150 SSD: F2FS - 276.31
TR150 SSD: XFS - 442.69
Virtio ZFS HDD Raid 0 2 - 72.29
Virtio ZFS HDD Raid 10 - 56.88
Virtio ZFS HDD Raid 10 WBU - 1553.66
XenServer 7.4 Adaptec 6805 Raid 1 PV - 212.75
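Dbench generates a file-server-style metadata and I/O load; a rough standalone equivalent of the 6-client run above, with an illustrative target directory and runtime:

    # 6 clients hammering /mnt/test for 60 seconds
    dbench -D /mnt/test -t 60 6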

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - Random Read - MB/s, more is better. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 1.17
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 1.07
Proxmox ZFS Raid 1 WB - 1.17
Proxmox ZFS Raid 1 WB 2 - 1.54
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 1.18
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 1.50
Proxmox ZFS Raid 1 WB metadata - 1.09
Proxmox ZFS Raid 1 WB metadata throughput - 1.10
Proxmox ZFS Raid 1 WT - 144.00
Seagate HDD: Btrfs - 1.52
Seagate HDD: EXT4 - 1.45
Seagate HDD: XFS - 1.53
TR150 SSD: Btrfs - 212.00
TR150 SSD: EXT4 - 230.00
TR150 SSD: F2FS - 212.00
TR150 SSD: XFS - 212.00
Virtio ZFS HDD Raid 0 2 - 258.00
Virtio ZFS HDD Raid 10 - 2.98
Virtio ZFS HDD Raid 10 WBU - 2.83
XenServer 7.4 Adaptec 6805 Raid 1 PV - 0.64
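The fio profile above maps directly onto common fio options; a hedged sketch of the random-read case, where the job name, file size and runtime are illustrative and the other fio results below differ only in the --rw argument:

    fio --name=randread --directory=/mnt/test \
        --ioengine=libaio --direct=1 --buffered=0 \
        --rw=randread --bs=4k --size=1g --runtime=60 --time_based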

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - Random Write - MB/s, more is better:

Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 1.15
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 1.25
Proxmox ZFS Raid 1 WB - 17.46
Proxmox ZFS Raid 1 WB 2 - 8.24
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 9.00
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 10.81
Proxmox ZFS Raid 1 WB metadata - 9.40
Proxmox ZFS Raid 1 WB metadata throughput - 8.48
Proxmox ZFS Raid 1 WT - 0.42
Seagate HDD: Btrfs - 23.02
Seagate HDD: EXT4 - 1.03
Seagate HDD: XFS - 1.17
TR150 SSD: Btrfs - 73.28
TR150 SSD: EXT4 - 275.00
TR150 SSD: F2FS - 282.00
TR150 SSD: XFS - 273.00
Virtio ZFS HDD Raid 0 2 - 197.00
Virtio ZFS HDD Raid 10 - 217.00
Virtio ZFS HDD Raid 10 WBU - 207.00
XenServer 7.4 Adaptec 6805 Raid 1 PV - 1.01

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - Sequential Write - MB/s, more is better:

Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 17.40
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 13.99
Proxmox ZFS Raid 1 WB - 21.60
Proxmox ZFS Raid 1 WB 2 - 5.58
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 23.50
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 15.97
Proxmox ZFS Raid 1 WB metadata - 4.04
Proxmox ZFS Raid 1 WB metadata throughput - 2.65
Proxmox ZFS Raid 1 WT - 0.54
Seagate HDD: Btrfs - 4.04
Seagate HDD: EXT4 - 149.00
Seagate HDD: XFS - 145.00
TR150 SSD: Btrfs - 83.68
TR150 SSD: EXT4 - 413.00
TR150 SSD: F2FS - 416.00
TR150 SSD: XFS - 395.00
Virtio ZFS HDD Raid 0 2 - 197.00
Virtio ZFS HDD Raid 10 - 219.00
Virtio ZFS HDD Raid 10 WBU - 228.00
XenServer 7.4 Adaptec 6805 Raid 1 PV - 95.57

IOzone

Record Size: 4Kb - File Size: 8GB - Disk Test: Write Performance

IOzone 3.465 - 4Kb records, 8GB file, Write Performance - MB/s, more is better. (CC) gcc options: -O3

Proxmox ZFS Raid 1 LXC 3 - 238.21
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 78.67
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 75.92
Proxmox ZFS Raid 1 WB - 66.76
Proxmox ZFS Raid 1 WB 2 - 68.57
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 57.09
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 83.20
Proxmox ZFS Raid 1 WB metadata - 66.20
Proxmox ZFS Raid 1 WB metadata throughput - 67.43
Proxmox ZFS Raid 1 WT - 38.57
Seagate HDD: Btrfs - 161.01
Seagate HDD: EXT4 - 155.45
Seagate HDD: XFS - 153.08
TR150 SSD: Btrfs - 93.20
TR150 SSD: EXT4 - 103.49
TR150 SSD: F2FS - 88.99
TR150 SSD: XFS - 96.10
Virtio ZFS HDD Raid 0 2 - 101.53
Virtio ZFS HDD Raid 10 - 109.03
Virtio ZFS HDD Raid 10 WBU - 152.17
XenServer 7.4 Adaptec 6805 Raid 1 PV - 79.12
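The IOzone configuration corresponds to a 4KB-record write pass over an 8GB file; roughly, with an illustrative output path:

    # -i 0 selects the write/re-write test, -r the record size, -s the file size
    iozone -i 0 -r 4k -s 8g -f /mnt/test/iozone.tmp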

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel - linux-4.15.tar.xz - Seconds, fewer is better:

Proxmox ZFS Raid 1 LXC 3 - 34.12
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 30.53
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 30.06
Proxmox ZFS Raid 1 WB - 30.03
Proxmox ZFS Raid 1 WB 2 - 31.56
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 31.63
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 31.08
Proxmox ZFS Raid 1 WB metadata - 30.83
Proxmox ZFS Raid 1 WB metadata throughput - 25.42
Proxmox ZFS Raid 1 WT - 29.63
Seagate HDD: Btrfs - 7.25
Seagate HDD: EXT4 - 6.67
Seagate HDD: XFS - 7.65
TR150 SSD: Btrfs - 9.44
TR150 SSD: EXT4 - 6.40
TR150 SSD: F2FS - 6.67
TR150 SSD: XFS - 6.82
Virtio ZFS HDD Raid 0 2 - 15.97
Virtio ZFS HDD Raid 10 - 14.52
Virtio ZFS HDD Raid 10 WBU - 10.65
XenServer 7.4 Adaptec 6805 Raid 1 PV - 15.93
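The kernel-unpack test is essentially a timed tar extraction of the xz-compressed source tree; roughly, with illustrative paths:

    time tar -xJf linux-4.15.tar.xz -C /mnt/test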

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - Sequential Read - MB/s, more is better:

Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 35.83
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 37.40
Proxmox ZFS Raid 1 WB - 165.00
Proxmox ZFS Raid 1 WB 2 - 160.00
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 243.22
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 290.53
Proxmox ZFS Raid 1 WB metadata - 158.00
Proxmox ZFS Raid 1 WB metadata throughput - 172.00
Proxmox ZFS Raid 1 WT - 170.00
Seagate HDD: Btrfs - 1.54
Seagate HDD: EXT4 - 155.00
Seagate HDD: XFS - 148.00
TR150 SSD: Btrfs - 327.00
TR150 SSD: EXT4 - 227.00
TR150 SSD: F2FS - 228.00
TR150 SSD: XFS - 202.00
Virtio ZFS HDD Raid 0 2 - 255.00
Virtio ZFS HDD Raid 10 - 250.00
Virtio ZFS HDD Raid 10 WBU - 270.00
XenServer 7.4 Adaptec 6805 Raid 1 PV - 68.33

BlogBench

Test: Read

[Chart: BlogBench 1.0 - Test: Read - Final Score, more is better - per-configuration scores. (CC) gcc options: -O2 -pthread]
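BlogBench simulates a busy blog-style file server with concurrent reader and writer threads; a typical standalone invocation against an empty directory (path illustrative) is:

    blogbench -d /mnt/test/blogbench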

BlogBench

Test: Write

[Chart: BlogBench 1.0 - Test: Write - Final Score, more is better - per-configuration scores.]

SQLite

Timed SQLite Insertions

SQLite 3.22 - Timed SQLite Insertions - Seconds, fewer is better. (CC) gcc options: -O2 -ldl -lpthread

Proxmox ZFS Raid 1 LXC 3 - 270.05
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 399.65
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 449.30
Proxmox ZFS Raid 1 WB - 400.77
Proxmox ZFS Raid 1 WB 2 - 396.77
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 391.26
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 388.95
Proxmox ZFS Raid 1 WB metadata - 489.60
Proxmox ZFS Raid 1 WB metadata throughput - 436.47
Proxmox ZFS Raid 1 WT - 480.57
Seagate HDD: Btrfs - 1012.38
Seagate HDD: EXT4 - 576.02
Seagate HDD: XFS - 417.70
TR150 SSD: Btrfs - 99.08
TR150 SSD: EXT4 - 41.27
TR150 SSD: F2FS - 34.89
TR150 SSD: XFS - 36.76
Virtio ZFS HDD Raid 0 2 - 246.03
Virtio ZFS HDD Raid 10 - 329.31
Virtio ZFS HDD Raid 10 WBU - 3.34
XenServer 7.4 Adaptec 6805 Raid 1 PV - 23.65
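The SQLite insertion test is dominated by how quickly the file system can commit many small synchronous transactions; a small hedged sketch of that kind of workload (schema, row count and path are illustrative, not the Phoronix Test Suite script):

    # bash: create the table once, then time 1000 single-row transactions,
    # each of which forces the journal to be synced to disk
    DB=/mnt/test/bench.db
    sqlite3 "$DB" "CREATE TABLE t(id INTEGER, payload TEXT);"
    time for i in $(seq 1 1000); do
        sqlite3 "$DB" "INSERT INTO t VALUES($i, 'x');"
    done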

Git

Time To Complete Common Git Commands

Git - Time To Complete Common Git Commands - Seconds, fewer is better. Git version 2.15.1 on the Ubuntu 18.04 host runs; 2.11.0 on all guest and container runs.

Proxmox ZFS Raid 1 LXC 3 - 79.74
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 22.97
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 20.67
Proxmox ZFS Raid 1 WB - 20.08
Proxmox ZFS Raid 1 WB 2 - 20.39
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 20.57
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 21.14
Proxmox ZFS Raid 1 WB metadata - 20.56
Proxmox ZFS Raid 1 WB metadata throughput - 20.64
Proxmox ZFS Raid 1 WT - 25.70
Seagate HDD: Btrfs - 6.59
Seagate HDD: EXT4 - 6.57
Seagate HDD: XFS - 6.76
TR150 SSD: Btrfs - 6.36
TR150 SSD: EXT4 - 6.31
TR150 SSD: F2FS - 6.40
TR150 SSD: XFS - 6.65
Virtio ZFS HDD Raid 0 2 - 6.67
Virtio ZFS HDD Raid 10 - 7.15
Virtio ZFS HDD Raid 10 WBU - 7.57
XenServer 7.4 Adaptec 6805 Raid 1 PV - 17.81
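The Git test times a series of everyday repository operations on a large working tree; an illustrative approximation against a pre-existing local kernel checkout (the repository path and the exact command mix are assumptions, not the PTS test definition):

    cd /mnt/test/linux
    time sh -c 'git status > /dev/null &&
                git log --oneline -1000 > /dev/null &&
                git gc --quiet'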

AIO-Stress

Test: Random Write

AIO-Stress 0.21 - Test: Random Write - MB/s, more is better. (CC) gcc options: -pthread -laio

Proxmox ZFS Raid 1 LXC 3 - 182.65
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc - 196.56
Proxmox ZFS Raid 1 N ZFS 0.7.6 iothread-native noswap nodiskwc 2 - 205.72
Proxmox ZFS Raid 1 N metadata ZFS 0.7.6 iothread-native noswap nodiskwc - 145.93
Proxmox ZFS Raid 1 WB - 333.49
Proxmox ZFS Raid 1 WB 2 - 256.50
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native - 216.53
Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native moswap - 300.01
Proxmox ZFS Raid 1 WB metadata - 275.85
Proxmox ZFS Raid 1 WB metadata throughput - 267.77
Proxmox ZFS Raid 1 WT - 87.08
Seagate HDD: Btrfs - 2953.92
Seagate HDD: EXT4 - 2410.29
Seagate HDD: XFS - 2979.14
TR150 SSD: Btrfs - 2936.17
TR150 SSD: EXT4 - 2301.32
TR150 SSD: F2FS - 3017.61
TR150 SSD: XFS - 2971.42
Virtio ZFS HDD Raid 0 2 - 1341.61
Virtio ZFS HDD Raid 10 - 1376.59
Virtio ZFS HDD Raid 10 WBU - 748.84
XenServer 7.4 Adaptec 6805 Raid 1 PV - 47.81
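AIO-Stress exercises the kernel-native asynchronous I/O path (io_submit/io_getevents through libaio); a hedged standalone random-write invocation might look like this, with sizes, queue depth and path all illustrative:

    # -O: open with O_DIRECT, -o 2: random writes,
    # -s: file size in MB, -r: record size in KB, -d: I/O depth
    aio-stress -O -o 2 -s 2048 -r 64 -d 64 /mnt/test/aio-testfile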

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

[Chart: Flexible IO Tester 3.1 - Random Read - IOPS, more is better - IOPS view of the random read results above for Proxmox ZFS Raid 1 WT, the TR150 SSD configurations and Virtio ZFS HDD Raid 0 2.]

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

[Chart: Flexible IO Tester 3.1 - Random Write - IOPS, more is better - IOPS view of the random write results above for a subset of the configurations.]

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

[Chart: Flexible IO Tester 3.1 - Sequential Read - IOPS, more is better - IOPS view of the sequential read results above for a subset of the configurations.]

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

[Chart: Flexible IO Tester 3.1 - Sequential Write - IOPS, more is better - IOPS view of the sequential write results above for a subset of the configurations.]


Phoronix Test Suite v10.8.4