Linux 4.16 File-System Tests

HDD and SSD file-system tests on Linux 4.16 for a future article on Phoronix.

HTML result view exported from: https://openbenchmarking.org/result/1804012-FO-1803294FO33.

Linux 4.16 File-System TestsProcessorMotherboardChipsetMemoryDiskGraphicsMonitorNetworkOSKernelDesktopDisplay ServerOpenGLCompilerFile-SystemScreen ResolutionSystem LayerTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native2 x Intel Xeon Gold 6138 @ 3.70GHz (40 Cores / 80 Threads)TYAN S7106 (V1.00 BIOS)Intel Sky Lake-E DMI3 Registers12 x 8192 MB DDR4-2666MT/s Micron 9ASF1G72PZ-2G6B1256GB Samsung SSD 850 + 2000GB Seagate ST2000DM006-2DM1 + 2 x 120GB TOSHIBA-TR150llvmpipe 95360MBVE228Intel I210 Gigabit ConnectionUbuntu 18.044.16.0-999-generic (x86_64) 20180323GNOME Shell 3.28.0X Server 1.19.63.3 Mesa 18.0.0-rc5 (LLVM 6.0 256 bits)GCC 7.3.0ext41920x1080f2fsbtrfsxfsbtrfsext4Common KVM @ 3.91GHz (2 Cores)QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)2048MB34GB QEMU HDD + 30GB 2115bochsdrmfbDebian 9.44.9.0-6-amd64 (x86_64)GCC 6.3.0 201705161024x768qemuCommon KVM @ 3.91GHz (4 Cores)34GB QEMU HDDAMD Turion II Neo N54L @ 2.20GHz (2 Cores)4096MB15GBvm-other Xen 4.7.4-4.1 HypervisorCommon KVM @ 2.20GHz (2 Cores)QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)bochsdrmfb1024x768qemuCommon KVM @ 2.20GHz (1 Core)OpenBenchmarking.orgCompiler Details- TR150 SSD: EXT4: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- TR150 SSD: F2FS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- TR150 SSD: Btrfs: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- TR150 SSD: XFS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Seagate HDD: XFS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Seagate HDD: Btrfs: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Seagate HDD: EXT4: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v- Virtio ZFS HDD Raid 0: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Virtio ZFS HDD Raid 0 2: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Virtio ZFS HDD Raid 10: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Virtio ZFS HDD Raid 10 WBU: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- XenServer 7.4 Adaptec 6805 Raid 1 PV: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WT: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB metadata: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB metadata throughput: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB 2: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB ZFS 0.7.6: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -vDisk Details- TR150 SSD: EXT4: CFQ / data=ordered,relatime,rw- TR150 SSD: F2FS: CFQ / acl,active_logs=6,background_gc=on,extent_cache,flush_merge,inline_data,inline_dentry,inline_xattr,lazytime,mode=adaptive,no_heap,relatime,rw,user_xattr- TR150 SSD: Btrfs: CFQ / relatime,rw,space_cache,ssd,subvol=/,subvolid=5- TR150 SSD: XFS: CFQ / attr2,inode64,noquota,relatime,rw- Seagate HDD: XFS: CFQ / attr2,inode64,noquota,relatime,rw- Seagate HDD: Btrfs: CFQ / relatime,rw,space_cache,subvol=/,subvolid=5- Seagate HDD: EXT4: CFQ / data=ordered,relatime,rw- Virtio ZFS HDD Raid 0: CFQ / data=ordered,discard,noatime,rw- Virtio ZFS HDD Raid 0 2: CFQ / data=ordered,discard,noatime,rw- Virtio ZFS HDD Raid 10: CFQ / data=ordered,discard,noatime,rw- Virtio ZFS HDD Raid 10 WBU: CFQ / data=ordered,discard,noatime,rw- XenServer 7.4 Adaptec 6805 Raid 1 PV: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WT: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB metadata: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB metadata throughput: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB 2: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB ZFS 0.7.6: none / data=ordered,discard,noatime,rw- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: none / data=ordered,discard,noatime,rwProcessor Details- TR150 SSD: EXT4, TR150 SSD: F2FS, TR150 SSD: Btrfs, TR150 SSD: XFS, Seagate HDD: XFS, Seagate HDD: Btrfs, Seagate HDD: EXT4: Scaling Governor: intel_pstate powersavePython Details- TR150 SSD: EXT4: Python 2.7.14+ + Python 3.6.5rc1- TR150 SSD: F2FS: Python 2.7.14+ + Python 3.6.5rc1- TR150 SSD: Btrfs: Python 2.7.14+ + Python 3.6.5rc1- TR150 SSD: XFS: Python 2.7.14+ + Python 3.6.5rc1- Seagate HDD: XFS: Python 2.7.14+ + Python 3.6.5rc1- Seagate HDD: Btrfs: Python 2.7.14+ + Python 3.6.5rc1- Seagate HDD: EXT4: Python 2.7.14+ + Python 3.6.5rc1- Virtio ZFS HDD Raid 0: Python 2.7.13 + Python 3.5.3- Virtio ZFS HDD Raid 0 2: Python 2.7.13 + Python 3.5.3- Virtio ZFS HDD Raid 10: Python 2.7.13 + Python 3.5.3- Virtio ZFS HDD Raid 10 WBU: Python 2.7.13 + Python 3.5.3- XenServer 7.4 Adaptec 6805 Raid 1 PV: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WT: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB metadata: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB metadata throughput: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB 2: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB ZFS 0.7.6: Python 2.7.13 + Python 3.5.3- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: Python 2.7.13 + Python 3.5.3Security Details- TR150 SSD: EXT4: KPTI + __user pointer sanitization + Full generic retpoline Protection- TR150 SSD: F2FS: KPTI + __user pointer sanitization + Full generic retpoline Protection- TR150 SSD: Btrfs: KPTI + __user pointer sanitization + Full generic retpoline Protection- TR150 SSD: XFS: KPTI + __user pointer sanitization + Full generic retpoline Protection- Seagate HDD: XFS: KPTI + __user pointer sanitization + Full generic retpoline Protection- Seagate HDD: Btrfs: KPTI + __user pointer sanitization + Full generic retpoline Protection- Seagate HDD: EXT4: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 0: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 0 2: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 10: KPTI + __user pointer sanitization + Full generic retpoline Protection- Virtio ZFS HDD Raid 10 WBU: KPTI + __user pointer sanitization + Full generic retpoline Protection- XenServer 7.4 Adaptec 6805 Raid 1 PV: __user pointer sanitization + Full AMD retpoline Protection- Proxmox ZFS Raid 1 WT: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB metadata: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB metadata throughput: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB 2: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB ZFS 0.7.6: __user pointer sanitization + Full generic retpoline Protection- Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: __user pointer sanitization + Full generic retpoline Protection

Linux 4.16 File-System Testsaio-stress: Rand Writesqlite: Timed SQLite Insertionsfio: Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directoryfio: Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directoryfio: Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directoryfio: Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directoryblogbench: Readblogbench: Writedbench: 6iozone: 4Kb - 8GB - Write Performancecompilebench: Compilecompilebench: Initial Createcompilebench: Read Compiled Treeunpack-linux: linux-4.15.tar.xzgit: Time To Complete Common Git CommandsTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native2301.3241.27230275227413209718010325369.21103.491686.54505.73837.976.406.313017.6134.8921228222841620549329156276.3188.992271.93550.41878.026.676.402936.1799.0821273.2832783.6822697503458247.4993.201371.80109.66881.319.446.362971.4236.7621227320239520843764039442.6996.102101.38395.06892.286.826.652979.14417.701.531.171481452197243212620.21153.082092.61402.86833.327.656.762953.921012.381.5223.021.544.042293433237045.74161.012177.85262.66804.927.256.592410.29576.021.451.031551492415373647325.73155.451678.76494.62801.756.676.571341.61246.03258197255197214407193072.29101.53538.22255.10807.1215.976.671376.59329.312.98217250219195626186456.88109.03495.46241.59887.8514.527.15748.843.342.8320727022822839419661553.66152.17300.82212.93746.2710.657.5747.8123.650.641.0168.3395.57183352721212.7579.1275.8368.28230.6815.9317.8187.08480.571440.421700.5425711755050.2938.5739.5551.18326.8529.6325.70333.49400.771.1717.4616521.6024329367848.4166.76119.8783.03315.7330.0320.08275.85489.601.099.401584.0423504173941.1866.20129.1681.49313.9230.8320.56267.77436.471.108.481722.6523311675441.4667.43127.5972.65320.4125.4220.64256.50396.771.548.241605.5824109673449.1068.57121.6983.29313.4931.5620.39216.53391.261.189.00243.2223.5021089243348.4457.09122.3773.81301.2531.6320.57OpenBenchmarking.org

AIO-Stress

Test: Random Write

OpenBenchmarking.orgMB/s, More Is BetterAIO-Stress 0.21Test: Random WriteTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native6001200180024003000SE +/- 45.90, N = 3SE +/- 50.73, N = 3SE +/- 38.20, N = 3SE +/- 26.35, N = 3SE +/- 22.47, N = 3SE +/- 38.74, N = 3SE +/- 40.49, N = 3SE +/- 47.09, N = 6SE +/- 33.30, N = 6SE +/- 163.17, N = 6SE +/- 0.84, N = 6SE +/- 4.25, N = 6SE +/- 18.55, N = 6SE +/- 7.43, N = 6SE +/- 4.55, N = 3SE +/- 13.22, N = 6SE +/- 39.69, N = 62301.323017.612936.172971.422979.142953.922410.291341.611376.59748.8447.8187.08333.49275.85267.77256.50216.531. (CC) gcc options: -pthread -laio

SQLite

Timed SQLite Insertions

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.22Timed SQLite InsertionsTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native2004006008001000SE +/- 0.61, N = 5SE +/- 0.53, N = 5SE +/- 1.94, N = 3SE +/- 0.51, N = 6SE +/- 0.96, N = 3SE +/- 2.51, N = 3SE +/- 9.59, N = 4SE +/- 3.85, N = 3SE +/- 6.15, N = 6SE +/- 0.05, N = 6SE +/- 0.55, N = 6SE +/- 13.85, N = 6SE +/- 5.53, N = 6SE +/- 12.09, N = 6SE +/- 6.63, N = 3SE +/- 13.56, N = 6SE +/- 9.84, N = 641.2734.8999.0836.76417.701012.38576.02246.03329.313.3423.65480.57400.77489.60436.47396.77391.261. (CC) gcc options: -O2 -ldl -lpthread

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native60120180240300SE +/- 1.00, N = 3SE +/- 1.00, N = 3SE +/- 0.67, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 3.18, N = 3SE +/- 0.16, N = 6SE +/- 0.24, N = 6SE +/- 0.02, N = 6SE +/- 1.76, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.09, N = 6SE +/- 0.03, N = 6230.00212.00212.00212.001.531.521.45258.002.982.830.64144.001.171.091.101.541.181. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSVirtio ZFS HDD Raid 0 2Proxmox ZFS Raid 1 WT14K28K42K56K70KSE +/- 233.33, N = 3SE +/- 200.00, N = 3SE +/- 133.33, N = 3SE +/- 1017.08, N = 3SE +/- 392.99, N = 35896754200543675440065633367331. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native60120180240300SE +/- 4.01, N = 6SE +/- 1.73, N = 3SE +/- 3.34, N = 6SE +/- 3.91, N = 6SE +/- 0.04, N = 6SE +/- 16.80, N = 6SE +/- 0.06, N = 6SE +/- 3.76, N = 3SE +/- 5.07, N = 6SE +/- 0.01, N = 3SE +/- 0.02, N = 6SE +/- 9.25, N = 6SE +/- 4.07, N = 6SE +/- 3.18, N = 6SE +/- 3.37, N = 6SE +/- 3.37, N = 6275.00282.0073.28273.001.1723.021.03197.00217.00207.001.010.4217.469.408.488.249.001. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: BtrfsVirtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUProxmox ZFS Raid 1 WB15K30K45K60K75KSE +/- 1010.53, N = 6SE +/- 466.67, N = 3SE +/- 853.91, N = 6SE +/- 1005.21, N = 6SE +/- 896.29, N = 3SE +/- 405.52, N = 3SE +/- 1325.64, N = 67025072167187506996727400505005556753000161001. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native70140210280350SE +/- 0.67, N = 3SE +/- 6.23, N = 3SE +/- 1.76, N = 3SE +/- 0.01, N = 3SE +/- 4.53, N = 6SE +/- 4.58, N = 3SE +/- 3.43, N = 6SE +/- 4.16, N = 5SE +/- 9.06, N = 6SE +/- 1.33, N = 3SE +/- 7.05, N = 6SE +/- 4.61, N = 6SE +/- 8.50, N = 6SE +/- 2.33, N = 3SE +/- 64.58, N = 6227.00228.00327.00202.00148.001.54155.00255.00250.00270.0068.33170.00165.00158.00172.00160.00243.221. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native20K40K60K80K100KSE +/- 100.00, N = 3SE +/- 145.30, N = 3SE +/- 1550.63, N = 3SE +/- 463.08, N = 3SE +/- 1174.07, N = 6SE +/- 1156.62, N = 3SE +/- 833.23, N = 6SE +/- 970.36, N = 5SE +/- 2134.71, N = 5SE +/- 338.30, N = 3SE +/- 1772.32, N = 6SE +/- 1209.41, N = 6SE +/- 2193.13, N = 6SE +/- 617.34, N = 3SE +/- 14548.04, N = 5583005846783633516673780039767651676368368940191004353341933402004405040833737801. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgMB/s, More Is BetterFlexible IO Tester 3.1Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native90180270360450SE +/- 3.67, N = 3SE +/- 1.67, N = 3SE +/- 4.96, N = 6SE +/- 20.47, N = 6SE +/- 2.50, N = 4SE +/- 0.11, N = 6SE +/- 3.00, N = 3SE +/- 6.45, N = 6SE +/- 1.67, N = 3SE +/- 8.92, N = 6SE +/- 1.55, N = 3SE +/- 0.02, N = 6SE +/- 4.64, N = 6SE +/- 0.51, N = 6SE +/- 0.52, N = 6SE +/- 0.99, N = 6SE +/- 4.59, N = 6413.00416.0083.68395.00145.004.04149.00197.00219.00228.0095.570.5421.604.042.655.5823.501. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.orgIOPS, More Is BetterFlexible IO Tester 3.1Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test DirectoryTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PV20K40K60K80K100KSE +/- 1000.00, N = 3SE +/- 333.33, N = 3SE +/- 1238.28, N = 6SE +/- 5437.01, N = 6SE +/- 617.12, N = 4SE +/- 783.87, N = 3SE +/- 1815.44, N = 6SE +/- 351.19, N = 3SE +/- 2293.38, N = 6SE +/- 384.42, N = 3106000106333215001011673705038233501505600058317244331. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

BlogBench

Test: Read

OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.0Test: ReadTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native500K1000K1500K2000K2500KSE +/- 5745.23, N = 3SE +/- 146937.12, N = 6SE +/- 12906.26, N = 3SE +/- 37384.19, N = 6SE +/- 39188.34, N = 3SE +/- 1570.58, N = 3SE +/- 16397.30, N = 3SE +/- 26762.45, N = 6SE +/- 9034.98, N = 6SE +/- 11329.23, N = 6SE +/- 1813.14, N = 3SE +/- 378.62, N = 3SE +/- 1753.55, N = 3SE +/- 2308.83, N = 3SE +/- 2712.85, N = 3SE +/- 3022.99, N = 3SE +/- 3692.20, N = 320971802054932226975020843762197243229343324153732144071956262283941833522571172432932350412331162410962108921. (CC) gcc options: -O2 -pthread

BlogBench

Test: Write

OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.0Test: WriteTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native2K4K6K8K10KSE +/- 88.82, N = 3SE +/- 83.15, N = 3SE +/- 25.15, N = 3SE +/- 92.81, N = 3SE +/- 40.70, N = 3SE +/- 46.68, N = 3SE +/- 23.54, N = 3SE +/- 190.23, N = 3SE +/- 52.35, N = 3SE +/- 97.59, N = 3SE +/- 30.51, N = 3SE +/- 100.50, N = 3SE +/- 41.65, N = 3SE +/- 18.93, N = 3SE +/- 15.30, N = 3SE +/- 28.82, N = 3SE +/- 48.38, N = 3103259156345840392126237064731930186419667215506787397547344331. (CC) gcc options: -O2 -pthread

Dbench

Client Count: 6

OpenBenchmarking.orgMB/s, More Is BetterDbench 4.0Client Count: 6TR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native30060090012001500SE +/- 8.35, N = 6SE +/- 0.15, N = 3SE +/- 2.85, N = 3SE +/- 1.98, N = 3SE +/- 0.02, N = 3SE +/- 0.59, N = 6SE +/- 0.67, N = 6SE +/- 0.05, N = 3SE +/- 0.41, N = 3SE +/- 2.57, N = 3SE +/- 0.88, N = 3SE +/- 0.17, N = 3SE +/- 0.25, N = 3SE +/- 0.88, N = 6SE +/- 0.42, N = 3SE +/- 0.32, N = 3SE +/- 0.20, N = 3369.21276.31247.49442.6920.2145.7425.7372.2956.881553.66212.7550.2948.4141.1841.4649.1048.441. (CC) gcc options: -lpopt -O2

IOzone

Record Size: 4Kb - File Size: 8GB - Disk Test: Write Performance

OpenBenchmarking.orgMB/s, More Is BetterIOzone 3.465Record Size: 4Kb - File Size: 8GB - Disk Test: Write PerformanceTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native4080120160200SE +/- 1.81, N = 6SE +/- 2.07, N = 6SE +/- 3.44, N = 6SE +/- 4.92, N = 6SE +/- 1.89, N = 3SE +/- 6.56, N = 6SE +/- 3.66, N = 6SE +/- 13.18, N = 6SE +/- 12.21, N = 6SE +/- 19.27, N = 6SE +/- 5.09, N = 6SE +/- 0.71, N = 6SE +/- 3.27, N = 6SE +/- 2.80, N = 6SE +/- 3.85, N = 6SE +/- 2.93, N = 6SE +/- 6.70, N = 6103.4988.9993.2096.10153.08161.01155.45101.53109.03152.1779.1238.5766.7666.2067.4368.5757.091. (CC) gcc options: -O3

Compile Bench

Test: Compile

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: CompileTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native5001000150020002500SE +/- 13.09, N = 3SE +/- 54.64, N = 6SE +/- 24.66, N = 6SE +/- 4.55, N = 3SE +/- 2.15, N = 3SE +/- 35.99, N = 3SE +/- 9.62, N = 3SE +/- 56.60, N = 6SE +/- 37.60, N = 6SE +/- 47.76, N = 6SE +/- 1.07, N = 6SE +/- 0.55, N = 3SE +/- 2.12, N = 6SE +/- 2.28, N = 6SE +/- 3.34, N = 6SE +/- 1.93, N = 6SE +/- 1.37, N = 31686.542271.931371.802101.382092.612177.851678.76538.22495.46300.8275.8339.55119.87129.16127.59121.69122.37

Compile Bench

Test: Initial Create

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: Initial CreateTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native120240360480600SE +/- 8.16, N = 3SE +/- 9.94, N = 3SE +/- 14.76, N = 3SE +/- 4.83, N = 3SE +/- 9.05, N = 3SE +/- 6.83, N = 3SE +/- 11.14, N = 3SE +/- 15.11, N = 3SE +/- 4.61, N = 3SE +/- 4.80, N = 3SE +/- 0.68, N = 3SE +/- 1.21, N = 3SE +/- 4.47, N = 3SE +/- 4.88, N = 3SE +/- 8.35, N = 3SE +/- 6.20, N = 3SE +/- 7.58, N = 3505.73550.41109.66395.06402.86262.66494.62255.10241.59212.9368.2851.1883.0381.4972.6583.2973.81

Compile Bench

Test: Read Compiled Tree

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: Read Compiled TreeTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native2004006008001000SE +/- 44.68, N = 3SE +/- 6.75, N = 3SE +/- 11.29, N = 3SE +/- 6.60, N = 3SE +/- 12.74, N = 3SE +/- 7.06, N = 3SE +/- 13.33, N = 3SE +/- 77.29, N = 3SE +/- 71.49, N = 3SE +/- 38.31, N = 3SE +/- 1.31, N = 3SE +/- 6.19, N = 3SE +/- 4.38, N = 3SE +/- 3.86, N = 3SE +/- 3.01, N = 3SE +/- 3.38, N = 3SE +/- 1.76, N = 3837.97878.02881.31892.28833.32804.92801.75807.12887.85746.27230.68326.85315.73313.92320.41313.49301.25

Unpacking The Linux Kernel

linux-4.15.tar.xz

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernellinux-4.15.tar.xzTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native714212835SE +/- 0.04, N = 4SE +/- 0.08, N = 8SE +/- 0.16, N = 4SE +/- 0.08, N = 8SE +/- 0.16, N = 8SE +/- 0.16, N = 8SE +/- 0.08, N = 8SE +/- 0.97, N = 8SE +/- 1.36, N = 8SE +/- 2.27, N = 8SE +/- 1.09, N = 8SE +/- 2.46, N = 8SE +/- 2.11, N = 8SE +/- 2.71, N = 8SE +/- 0.38, N = 4SE +/- 3.11, N = 8SE +/- 1.75, N = 86.406.679.446.827.657.256.6715.9714.5210.6515.9329.6330.0330.8325.4231.5631.63

Git

Time To Complete Common Git Commands

OpenBenchmarking.orgSeconds, Fewer Is BetterGitTime To Complete Common Git CommandsTR150 SSD: EXT4TR150 SSD: F2FSTR150 SSD: BtrfsTR150 SSD: XFSSeagate HDD: XFSSeagate HDD: BtrfsSeagate HDD: EXT4Virtio ZFS HDD Raid 0 2Virtio ZFS HDD Raid 10Virtio ZFS HDD Raid 10 WBUXenServer 7.4 Adaptec 6805 Raid 1 PVProxmox ZFS Raid 1 WTProxmox ZFS Raid 1 WBProxmox ZFS Raid 1 WB metadataProxmox ZFS Raid 1 WB metadata throughputProxmox ZFS Raid 1 WB 2Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native612182430SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.12, N = 3SE +/- 0.10, N = 3SE +/- 0.11, N = 4SE +/- 0.17, N = 6SE +/- 0.10, N = 5SE +/- 0.27, N = 6SE +/- 1.23, N = 6SE +/- 0.27, N = 6SE +/- 1.57, N = 6SE +/- 0.72, N = 6SE +/- 0.81, N = 6SE +/- 0.84, N = 6SE +/- 0.95, N = 6SE +/- 0.76, N = 66.316.406.366.656.766.596.576.677.157.5717.8125.7020.0820.5620.6420.3920.571. TR150 SSD: EXT4: git version 2.15.12. TR150 SSD: F2FS: git version 2.15.13. TR150 SSD: Btrfs: git version 2.15.14. TR150 SSD: XFS: git version 2.15.15. Seagate HDD: XFS: git version 2.15.16. Seagate HDD: Btrfs: git version 2.15.17. Seagate HDD: EXT4: git version 2.15.18. Virtio ZFS HDD Raid 0 2: git version 2.11.09. Virtio ZFS HDD Raid 10: git version 2.11.010. Virtio ZFS HDD Raid 10 WBU: git version 2.11.011. XenServer 7.4 Adaptec 6805 Raid 1 PV: git version 2.11.012. Proxmox ZFS Raid 1 WT: git version 2.11.013. Proxmox ZFS Raid 1 WB: git version 2.11.014. Proxmox ZFS Raid 1 WB metadata: git version 2.11.015. Proxmox ZFS Raid 1 WB metadata throughput: git version 2.11.016. Proxmox ZFS Raid 1 WB 2: git version 2.11.017. Proxmox ZFS Raid 1 WB ZFS 0.7.6 iothread-native: git version 2.11.0


Phoronix Test Suite v10.8.4