Linux 4.16 File-System Tests

HDD and SSD file-system tests on Linux 4.16 for a future article on Phoronix.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1803302-FO-1803294FO72

Test Runs

- TR150 SSD: EXT4 (March 24 2018)
- TR150 SSD: F2FS (March 24 2018)
- TR150 SSD: Btrfs (March 25 2018)
- TR150 SSD: XFS (March 25 2018)
- Seagate HDD: XFS (March 25 2018)
- Seagate HDD: Btrfs (March 25 2018)
- Seagate HDD: EXT4 (March 25 2018)
- Virtio ZFS HDD Raid 0 (March 26 2018)
- Virtio ZFS HDD Raid 0 2 (March 26 2018)
- Virtio ZFS HDD Raid 10 (March 29 2018)
- Virtio ZFS HDD Raid 10 WBU (March 29 2018)
- XenServer 7.4 Adaptec 6805 Raid 1 PV (March 30 2018)
- Proxmox ZFS Raid 1 WT (March 30 2018)
- Proxmox ZFS Raid 1 WB (March 30 2018)

HTML result view exported from: https://openbenchmarking.org/result/1803302-FO-1803294FO72&gru&sro.

System Details

TR150 SSD and Seagate HDD runs (bare metal):
  Processor: 2 x Intel Xeon Gold 6138 @ 3.70GHz (40 Cores / 80 Threads)
  Motherboard: TYAN S7106 (V1.00 BIOS); Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 12 x 8192 MB DDR4-2666MT/s Micron 9ASF1G72PZ-2G6B1
  Disk: 256GB Samsung SSD 850 + 2000GB Seagate ST2000DM006-2DM1 + 2 x 120GB TOSHIBA-TR150
  Graphics: llvmpipe 95360MB; Monitor: VE228; Network: Intel I210 Gigabit Connection
  OS: Ubuntu 18.04; Kernel: 4.16.0-999-generic (x86_64) 20180323
  Desktop: GNOME Shell 3.28.0; Display Server: X Server 1.19.6; OpenGL: 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 256 bits)
  Compiler: GCC 7.3.0; Screen Resolution: 1920x1080
  File-System: ext4, f2fs, btrfs, or xfs, per the run identifier

Virtio ZFS HDD Raid runs:
  Processor: Common KVM @ 3.91GHz (2 Cores on some runs, 4 Cores on others; the export does not attribute these per run)
  Motherboard: QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)
  Memory: 2048MB; Disk: 34GB QEMU HDD (plus 30GB 2115 on some runs); Graphics: bochsdrmfb
  OS: Debian 9.4; Kernel: 4.9.0-6-amd64 (x86_64); Compiler: GCC 6.3.0 20170516
  File-System: ext4; Screen Resolution: 1024x768; System Layer: qemu

XenServer 7.4 Adaptec 6805 Raid 1 PV:
  Processor: AMD Turion II Neo N54L @ 2.20GHz (2 Cores); Memory: 4096MB; Disk: 15GB; Graphics: vm-other
  OS: Debian 9.4; Kernel: 4.9.0-6-amd64 (x86_64); Compiler: GCC 6.3.0 20170516
  File-System: ext4; System Layer: Xen 4.7.4-4.1 Hypervisor

Proxmox ZFS Raid 1 WT / WB:
  Processor: Common KVM @ 2.20GHz (2 Cores)
  Motherboard: QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS)
  Graphics: bochsdrmfb; OS: Debian 9.4; Kernel: 4.9.0-6-amd64 (x86_64); Compiler: GCC 6.3.0 20170516
  File-System: ext4; Screen Resolution: 1024x768; System Layer: qemu

Compiler Details

All TR150 SSD and Seagate HDD runs used an identically configured GCC 7.3.0: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v

All Virtio ZFS, XenServer, and Proxmox runs used an identically configured GCC 6.3.0: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v

Disk Details (scheduler / mount options)

- TR150 SSD: EXT4: CFQ / data=ordered,relatime,rw
- TR150 SSD: F2FS: CFQ / acl,active_logs=6,background_gc=on,extent_cache,flush_merge,inline_data,inline_dentry,inline_xattr,lazytime,mode=adaptive,no_heap,relatime,rw,user_xattr
- TR150 SSD: Btrfs: CFQ / relatime,rw,space_cache,ssd,subvol=/,subvolid=5
- TR150 SSD: XFS: CFQ / attr2,inode64,noquota,relatime,rw
- Seagate HDD: XFS: CFQ / attr2,inode64,noquota,relatime,rw
- Seagate HDD: Btrfs: CFQ / relatime,rw,space_cache,subvol=/,subvolid=5
- Seagate HDD: EXT4: CFQ / data=ordered,relatime,rw
- Virtio ZFS HDD Raid 0 / Raid 0 2 / Raid 10 / Raid 10 WBU: CFQ / data=ordered,discard,noatime,rw
- XenServer 7.4 Adaptec 6805 Raid 1 PV, Proxmox ZFS Raid 1 WT, Proxmox ZFS Raid 1 WB: none / data=ordered,discard,noatime,rw

Processor Details

- TR150 SSD and Seagate HDD runs: Scaling Governor: intel_pstate powersave

Python Details

- TR150 SSD and Seagate HDD runs: Python 2.7.14+ + Python 3.6.5rc1
- Virtio ZFS, XenServer, and Proxmox runs: Python 2.7.13 + Python 3.5.3

Security Details

- TR150 SSD, Seagate HDD, and Virtio ZFS runs: KPTI + __user pointer sanitization + Full generic retpoline Protection
- XenServer 7.4 Adaptec 6805 Raid 1 PV: __user pointer sanitization + Full AMD retpoline Protection
- Proxmox ZFS Raid 1 WT / WB: __user pointer sanitization + Full generic retpoline Protection

Benchmarks Run

BlogBench 1.0 (Read, Write), AIO-Stress 0.21 (Random Write), Flexible IO Tester 3.1 (Random Read, Random Write, Sequential Read, Sequential Write; Linux AIO engine, unbuffered, direct, 4KB blocks, default test directory), Dbench 4.0 (6 clients), IOzone 3.465 (4Kb records, 8GB file, write performance), Compile Bench 0.6 (Compile, Initial Create, Read Compiled Tree), SQLite 3.22 (Timed Insertions), Unpacking The Linux Kernel (linux-4.15.tar.xz), and Git (Time To Complete Common Git Commands). Per-run results are broken out in the sections that follow.

BlogBench

Test: Read

BlogBench 1.0 - Final Score (more is better):
  Proxmox ZFS Raid 1 WB: 243293 (SE +/- 1753.55, N = 3)
  Proxmox ZFS Raid 1 WT: 257117 (SE +/- 378.62, N = 3)
  Seagate HDD: Btrfs: 2293433 (SE +/- 1570.58, N = 3)
  Seagate HDD: EXT4: 2415373 (SE +/- 16397.30, N = 3)
  Seagate HDD: XFS: 2197243 (SE +/- 39188.34, N = 3)
  TR150 SSD: Btrfs: 2269750 (SE +/- 12906.26, N = 3)
  TR150 SSD: EXT4: 2097180 (SE +/- 5745.23, N = 3)
  TR150 SSD: F2FS: 2054932 (SE +/- 146937.12, N = 6)
  TR150 SSD: XFS: 2084376 (SE +/- 37384.19, N = 6)
  Virtio ZFS HDD Raid 0 2: 214407 (SE +/- 26762.45, N = 6)
  Virtio ZFS HDD Raid 10: 195626 (SE +/- 9034.98, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 228394 (SE +/- 11329.23, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 183352 (SE +/- 1813.14, N = 3)
1. (CC) gcc options: -O2 -pthread

BlogBench

Test: Write

BlogBench 1.0 - Final Score (more is better):
  Proxmox ZFS Raid 1 WB: 678 (SE +/- 41.65, N = 3)
  Proxmox ZFS Raid 1 WT: 550 (SE +/- 100.50, N = 3)
  Seagate HDD: Btrfs: 2370 (SE +/- 46.68, N = 3)
  Seagate HDD: EXT4: 6473 (SE +/- 23.54, N = 3)
  Seagate HDD: XFS: 2126 (SE +/- 40.70, N = 3)
  TR150 SSD: Btrfs: 3458 (SE +/- 25.15, N = 3)
  TR150 SSD: EXT4: 10325 (SE +/- 88.82, N = 3)
  TR150 SSD: F2FS: 9156 (SE +/- 83.15, N = 3)
  TR150 SSD: XFS: 4039 (SE +/- 92.81, N = 3)
  Virtio ZFS HDD Raid 0 2: 1930 (SE +/- 190.23, N = 3)
  Virtio ZFS HDD Raid 10: 1864 (SE +/- 52.35, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 1966 (SE +/- 97.59, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 721 (SE +/- 30.51, N = 3)
1. (CC) gcc options: -O2 -pthread

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - IOPS (more is better):
  Proxmox ZFS Raid 1 WT: 36733
  TR150 SSD: Btrfs: 54367
  TR150 SSD: EXT4: 58967
  TR150 SSD: F2FS: 54200
  TR150 SSD: XFS: 54400
  Virtio ZFS HDD Raid 0 2: 65633
The export lists only five standard errors for these six results, so they cannot be attributed per result: SE +/- 392.99 (N = 3), 133.33 (N = 3), 233.33 (N = 3), 200.00 (N = 3), 1017.08 (N = 3).
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
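The fio parameters named in the heading above translate into a job file along these lines. This is a sketch built only from the listed parameters, not the actual Phoronix Test Suite fio profile; the directory, file size, and runtime are placeholders:

```ini
; 4KB random-read job mirroring the listed parameters.
; ioengine/direct/bs come from the heading; directory, size, and
; runtime are assumptions not stated in this export.
[global]
ioengine=libaio       ; "IO Engine: Linux AIO"
direct=1              ; "Direct: Yes" (unbuffered, "Buffered: No")
bs=4k                 ; "Block Size: 4KB"
directory=/mnt/test   ; placeholder for the default test directory
size=1g               ; placeholder file size
runtime=60
time_based=1

[randread]
rw=randread
```

Changing `rw=` to `randwrite`, `read`, or `write` covers the other three fio tests in this file.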

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - IOPS (more is better):
  Proxmox ZFS Raid 1 WB: 16100
  Seagate HDD: Btrfs: 27400
  TR150 SSD: Btrfs: 18750
  TR150 SSD: EXT4: 70250
  TR150 SSD: F2FS: 72167
  TR150 SSD: XFS: 69967
  Virtio ZFS HDD Raid 0 2: 50500
  Virtio ZFS HDD Raid 10: 55567
  Virtio ZFS HDD Raid 10 WBU: 53000
The export lists only seven standard errors for these nine results, so they cannot be attributed per result: SE +/- 853.91 (N = 6), 1010.53 (N = 6), 466.67 (N = 3), 1005.21 (N = 6), 896.29 (N = 3), 405.52 (N = 3), 1325.64 (N = 6).
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - IOPS (more is better):
  Proxmox ZFS Raid 1 WB: 41933
  Proxmox ZFS Raid 1 WT: 43533
  Seagate HDD: EXT4: 39767
  Seagate HDD: XFS: 37800
  TR150 SSD: Btrfs: 83633
  TR150 SSD: EXT4: 58300
  TR150 SSD: F2FS: 58467
  TR150 SSD: XFS: 51667
  Virtio ZFS HDD Raid 0 2: 65167
  Virtio ZFS HDD Raid 10: 63683
  Virtio ZFS HDD Raid 10 WBU: 68940
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 19100
The export lists only eleven standard errors for these twelve results, so they cannot be attributed per result: SE +/- 1772.32 (N = 6), 338.30 (N = 3), 1174.07 (N = 6), 1550.63 (N = 3), 100.00 (N = 3), 145.30 (N = 3), 463.08 (N = 3), 1156.62 (N = 3), 833.23 (N = 6), 970.36 (N = 5), 2134.71 (N = 5).
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - IOPS (more is better):
  Seagate HDD: EXT4: 38233 (SE +/- 783.87, N = 3)
  Seagate HDD: XFS: 37050 (SE +/- 617.12, N = 4)
  TR150 SSD: Btrfs: 21500 (SE +/- 1238.28, N = 6)
  TR150 SSD: EXT4: 106000 (SE +/- 1000.00, N = 3)
  TR150 SSD: F2FS: 106333 (SE +/- 333.33, N = 3)
  TR150 SSD: XFS: 101167 (SE +/- 5437.01, N = 6)
  Virtio ZFS HDD Raid 0 2: 50150 (SE +/- 1815.44, N = 6)
  Virtio ZFS HDD Raid 10: 56000 (SE +/- 351.19, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 58317 (SE +/- 2293.38, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 24433 (SE +/- 384.42, N = 3)
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

AIO-Stress

Test: Random Write

AIO-Stress 0.21 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 333.49 (SE +/- 18.55, N = 6)
  Proxmox ZFS Raid 1 WT: 87.08 (SE +/- 4.25, N = 6)
  Seagate HDD: Btrfs: 2953.92 (SE +/- 38.74, N = 3)
  Seagate HDD: EXT4: 2410.29 (SE +/- 40.49, N = 3)
  Seagate HDD: XFS: 2979.14 (SE +/- 22.47, N = 3)
  TR150 SSD: Btrfs: 2936.17 (SE +/- 38.20, N = 3)
  TR150 SSD: EXT4: 2301.32 (SE +/- 45.90, N = 3)
  TR150 SSD: F2FS: 3017.61 (SE +/- 50.73, N = 3)
  TR150 SSD: XFS: 2971.42 (SE +/- 26.35, N = 3)
  Virtio ZFS HDD Raid 0 2: 1341.61 (SE +/- 47.09, N = 6)
  Virtio ZFS HDD Raid 10: 1376.59 (SE +/- 33.30, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 748.84 (SE +/- 163.17, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 47.81 (SE +/- 0.84, N = 6)
1. (CC) gcc options: -pthread -laio

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 1.17
  Proxmox ZFS Raid 1 WT: 144.00
  Seagate HDD: Btrfs: 1.52
  Seagate HDD: EXT4: 1.45
  Seagate HDD: XFS: 1.53
  TR150 SSD: Btrfs: 212.00
  TR150 SSD: EXT4: 230.00
  TR150 SSD: F2FS: 212.00
  TR150 SSD: XFS: 212.00
  Virtio ZFS HDD Raid 0 2: 258.00
  Virtio ZFS HDD Raid 10: 2.98
  Virtio ZFS HDD Raid 10 WBU: 2.83
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 0.64
The export lists only twelve standard errors for these thirteen results, so they cannot be attributed per result: SE +/- 0.02 (N = 3), 1.76 (N = 3), 0.00 (N = 3), 0.01 (N = 3), 0.00 (N = 3), 0.67 (N = 3), 1.00 (N = 3), 1.00 (N = 3), 3.18 (N = 3), 0.16 (N = 6), 0.24 (N = 6), 0.02 (N = 6).
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
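These MB/s figures are the same measurements as the IOPS figures in the earlier random-read table, just scaled by the 4KB block size; multiplying IOPS by 4 KiB and expressing the result in binary megabytes reproduces the throughput to within rounding. A quick check against the TR150 SSD: EXT4 numbers from this export:

```python
# Convert the fio random-read IOPS figure to throughput at the 4KB
# block size used by every fio test in this result file.
iops = 58967            # TR150 SSD: EXT4 random read, from the IOPS table
block_kib = 4           # 4KB blocks
throughput = iops * block_kib / 1024   # KiB/s -> MiB/s
print(f"{throughput:.1f} MB/s")        # ~230.3, matching the 230 MB/s here
```

Small mismatches between the two tables for other runs (e.g. the Virtio results) are consistent with the averages coming from different trial counts.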

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 17.46
  Proxmox ZFS Raid 1 WT: 0.42
  Seagate HDD: Btrfs: 23.02
  Seagate HDD: EXT4: 1.03
  Seagate HDD: XFS: 1.17
  TR150 SSD: Btrfs: 73.28
  TR150 SSD: EXT4: 275.00
  TR150 SSD: F2FS: 282.00
  TR150 SSD: XFS: 273.00
  Virtio ZFS HDD Raid 0 2: 197.00
  Virtio ZFS HDD Raid 10: 217.00
  Virtio ZFS HDD Raid 10 WBU: 207.00
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 1.01
The export lists only twelve standard errors for these thirteen results, so they cannot be attributed per result: SE +/- 9.25 (N = 6), 0.02 (N = 6), 16.80 (N = 6), 0.06 (N = 6), 0.04 (N = 6), 3.34 (N = 6), 4.01 (N = 6), 1.73 (N = 3), 3.91 (N = 6), 3.76 (N = 3), 5.07 (N = 6), 0.01 (N = 3).
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 165.00
  Proxmox ZFS Raid 1 WT: 170.00
  Seagate HDD: Btrfs: 1.54
  Seagate HDD: EXT4: 155.00
  Seagate HDD: XFS: 148.00
  TR150 SSD: Btrfs: 327.00
  TR150 SSD: EXT4: 227.00
  TR150 SSD: F2FS: 228.00
  TR150 SSD: XFS: 202.00
  Virtio ZFS HDD Raid 0 2: 255.00
  Virtio ZFS HDD Raid 10: 250.00
  Virtio ZFS HDD Raid 10 WBU: 270.00
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 68.33
The export lists only eleven standard errors for these thirteen results, so they cannot be attributed per result: SE +/- 7.05 (N = 6), 1.33 (N = 3), 0.01 (N = 3), 4.53 (N = 6), 6.23 (N = 3), 0.67 (N = 3), 1.76 (N = 3), 4.58 (N = 3), 3.43 (N = 6), 4.16 (N = 5), 9.06 (N = 6).
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 21.60 (SE +/- 4.64, N = 6)
  Proxmox ZFS Raid 1 WT: 0.54 (SE +/- 0.02, N = 6)
  Seagate HDD: Btrfs: 4.04 (SE +/- 0.11, N = 6)
  Seagate HDD: EXT4: 149.00 (SE +/- 3.00, N = 3)
  Seagate HDD: XFS: 145.00 (SE +/- 2.50, N = 4)
  TR150 SSD: Btrfs: 83.68 (SE +/- 4.96, N = 6)
  TR150 SSD: EXT4: 413.00 (SE +/- 3.67, N = 3)
  TR150 SSD: F2FS: 416.00 (SE +/- 1.67, N = 3)
  TR150 SSD: XFS: 395.00 (SE +/- 20.47, N = 6)
  Virtio ZFS HDD Raid 0 2: 197.00 (SE +/- 6.45, N = 6)
  Virtio ZFS HDD Raid 10: 219.00 (SE +/- 1.67, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 228.00 (SE +/- 8.92, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 95.57 (SE +/- 1.55, N = 3)
1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Dbench

Client Count: 6

Dbench 4.0 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 48.41 (SE +/- 0.25, N = 3)
  Proxmox ZFS Raid 1 WT: 50.29 (SE +/- 0.17, N = 3)
  Seagate HDD: Btrfs: 45.74 (SE +/- 0.59, N = 6)
  Seagate HDD: EXT4: 25.73 (SE +/- 0.67, N = 6)
  Seagate HDD: XFS: 20.21 (SE +/- 0.02, N = 3)
  TR150 SSD: Btrfs: 247.49 (SE +/- 2.85, N = 3)
  TR150 SSD: EXT4: 369.21 (SE +/- 8.35, N = 6)
  TR150 SSD: F2FS: 276.31 (SE +/- 0.15, N = 3)
  TR150 SSD: XFS: 442.69 (SE +/- 1.98, N = 3)
  Virtio ZFS HDD Raid 0 2: 72.29 (SE +/- 0.05, N = 3)
  Virtio ZFS HDD Raid 10: 56.88 (SE +/- 0.41, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 1553.66 (SE +/- 2.57, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 212.75 (SE +/- 0.88, N = 3)
1. (CC) gcc options: -lpopt -O2

IOzone

Record Size: 4Kb - File Size: 8GB - Disk Test: Write Performance

IOzone 3.465 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 66.76 (SE +/- 3.27, N = 6)
  Proxmox ZFS Raid 1 WT: 38.57 (SE +/- 0.71, N = 6)
  Seagate HDD: Btrfs: 161.01 (SE +/- 6.56, N = 6)
  Seagate HDD: EXT4: 155.45 (SE +/- 3.66, N = 6)
  Seagate HDD: XFS: 153.08 (SE +/- 1.89, N = 3)
  TR150 SSD: Btrfs: 93.20 (SE +/- 3.44, N = 6)
  TR150 SSD: EXT4: 103.49 (SE +/- 1.81, N = 6)
  TR150 SSD: F2FS: 88.99 (SE +/- 2.07, N = 6)
  TR150 SSD: XFS: 96.10 (SE +/- 4.92, N = 6)
  Virtio ZFS HDD Raid 0 2: 101.53 (SE +/- 13.18, N = 6)
  Virtio ZFS HDD Raid 10: 109.03 (SE +/- 12.21, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 152.17 (SE +/- 19.27, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 79.12 (SE +/- 5.09, N = 6)
1. (CC) gcc options: -O3

Compile Bench

Test: Compile

Compile Bench 0.6 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 119.87 (SE +/- 2.12, N = 6)
  Proxmox ZFS Raid 1 WT: 39.55 (SE +/- 0.55, N = 3)
  Seagate HDD: Btrfs: 2177.85 (SE +/- 35.99, N = 3)
  Seagate HDD: EXT4: 1678.76 (SE +/- 9.62, N = 3)
  Seagate HDD: XFS: 2092.61 (SE +/- 2.15, N = 3)
  TR150 SSD: Btrfs: 1371.80 (SE +/- 24.66, N = 6)
  TR150 SSD: EXT4: 1686.54 (SE +/- 13.09, N = 3)
  TR150 SSD: F2FS: 2271.93 (SE +/- 54.64, N = 6)
  TR150 SSD: XFS: 2101.38 (SE +/- 4.55, N = 3)
  Virtio ZFS HDD Raid 0 2: 538.22 (SE +/- 56.60, N = 6)
  Virtio ZFS HDD Raid 10: 495.46 (SE +/- 37.60, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 300.82 (SE +/- 47.76, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 75.83 (SE +/- 1.07, N = 6)

Compile Bench

Test: Initial Create

Compile Bench 0.6 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 83.03 (SE +/- 4.47, N = 3)
  Proxmox ZFS Raid 1 WT: 51.18 (SE +/- 1.21, N = 3)
  Seagate HDD: Btrfs: 262.66 (SE +/- 6.83, N = 3)
  Seagate HDD: EXT4: 494.62 (SE +/- 11.14, N = 3)
  Seagate HDD: XFS: 402.86 (SE +/- 9.05, N = 3)
  TR150 SSD: Btrfs: 109.66 (SE +/- 14.76, N = 3)
  TR150 SSD: EXT4: 505.73 (SE +/- 8.16, N = 3)
  TR150 SSD: F2FS: 550.41 (SE +/- 9.94, N = 3)
  TR150 SSD: XFS: 395.06 (SE +/- 4.83, N = 3)
  Virtio ZFS HDD Raid 0 2: 255.10 (SE +/- 15.11, N = 3)
  Virtio ZFS HDD Raid 10: 241.59 (SE +/- 4.61, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 212.93 (SE +/- 4.80, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 68.28 (SE +/- 0.68, N = 3)

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6 - MB/s (more is better):
  Proxmox ZFS Raid 1 WB: 315.73 (SE +/- 4.38, N = 3)
  Proxmox ZFS Raid 1 WT: 326.85 (SE +/- 6.19, N = 3)
  Seagate HDD: Btrfs: 804.92 (SE +/- 7.06, N = 3)
  Seagate HDD: EXT4: 801.75 (SE +/- 13.33, N = 3)
  Seagate HDD: XFS: 833.32 (SE +/- 12.74, N = 3)
  TR150 SSD: Btrfs: 881.31 (SE +/- 11.29, N = 3)
  TR150 SSD: EXT4: 837.97 (SE +/- 44.68, N = 3)
  TR150 SSD: F2FS: 878.02 (SE +/- 6.75, N = 3)
  TR150 SSD: XFS: 892.28 (SE +/- 6.60, N = 3)
  Virtio ZFS HDD Raid 0 2: 807.12 (SE +/- 77.29, N = 3)
  Virtio ZFS HDD Raid 10: 887.85 (SE +/- 71.49, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 746.27 (SE +/- 38.31, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 230.68 (SE +/- 1.31, N = 3)

SQLite

Timed SQLite Insertions

SQLite 3.22 - Seconds (fewer is better):
  Proxmox ZFS Raid 1 WB: 400.77 (SE +/- 5.53, N = 6)
  Proxmox ZFS Raid 1 WT: 480.57 (SE +/- 13.85, N = 6)
  Seagate HDD: Btrfs: 1012.38 (SE +/- 2.51, N = 3)
  Seagate HDD: EXT4: 576.02 (SE +/- 9.59, N = 4)
  Seagate HDD: XFS: 417.70 (SE +/- 0.96, N = 3)
  TR150 SSD: Btrfs: 99.08 (SE +/- 1.94, N = 3)
  TR150 SSD: EXT4: 41.27 (SE +/- 0.61, N = 5)
  TR150 SSD: F2FS: 34.89 (SE +/- 0.53, N = 5)
  TR150 SSD: XFS: 36.76 (SE +/- 0.51, N = 6)
  Virtio ZFS HDD Raid 0 2: 246.03 (SE +/- 3.85, N = 3)
  Virtio ZFS HDD Raid 10: 329.31 (SE +/- 6.15, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 3.34 (SE +/- 0.05, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 23.65 (SE +/- 0.55, N = 6)
1. (CC) gcc options: -O2 -ldl -lpthread
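The numbers above time a fixed batch of insertions against a database on the file-system under test. As a rough illustration of that workload shape (this is not the PTS test profile; the schema, row count, and in-memory database here are illustrative assumptions), a timed batch insert looks like:

```python
import sqlite3
import time

# Time a batch of inserts. The real benchmark targets a database file on
# the file-system under test; ":memory:" is used here only to keep the
# sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
with conn:  # single transaction; per-row commits would stress fsync far more
    conn.executemany(
        "INSERT INTO t (payload) VALUES (?)",
        (("x" * 64,) for _ in range(100_000)),
    )
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"inserted {count} rows in {elapsed:.3f}s")
```

Whether inserts are batched into one transaction or committed per row dominates this kind of test, which is why write-back caching (the WBU and WB runs) swings the results so dramatically.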

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel - Seconds (fewer is better):
  Proxmox ZFS Raid 1 WB: 30.03 (SE +/- 2.11, N = 8)
  Proxmox ZFS Raid 1 WT: 29.63 (SE +/- 2.46, N = 8)
  Seagate HDD: Btrfs: 7.25 (SE +/- 0.16, N = 8)
  Seagate HDD: EXT4: 6.67 (SE +/- 0.08, N = 8)
  Seagate HDD: XFS: 7.65 (SE +/- 0.16, N = 8)
  TR150 SSD: Btrfs: 9.44 (SE +/- 0.16, N = 4)
  TR150 SSD: EXT4: 6.40 (SE +/- 0.04, N = 4)
  TR150 SSD: F2FS: 6.67 (SE +/- 0.08, N = 8)
  TR150 SSD: XFS: 6.82 (SE +/- 0.08, N = 8)
  Virtio ZFS HDD Raid 0 2: 15.97 (SE +/- 0.97, N = 8)
  Virtio ZFS HDD Raid 10: 14.52 (SE +/- 1.36, N = 8)
  Virtio ZFS HDD Raid 10 WBU: 10.65 (SE +/- 2.27, N = 8)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 15.93 (SE +/- 1.09, N = 8)

Git

Time To Complete Common Git Commands

Git - Seconds (fewer is better):
  Proxmox ZFS Raid 1 WB: 20.08 (SE +/- 0.72, N = 6)
  Proxmox ZFS Raid 1 WT: 25.70 (SE +/- 1.57, N = 6)
  Seagate HDD: Btrfs: 6.59 (SE +/- 0.11, N = 4)
  Seagate HDD: EXT4: 6.57 (SE +/- 0.17, N = 6)
  Seagate HDD: XFS: 6.76 (SE +/- 0.10, N = 3)
  TR150 SSD: Btrfs: 6.36 (SE +/- 0.07, N = 3)
  TR150 SSD: EXT4: 6.31 (SE +/- 0.03, N = 3)
  TR150 SSD: F2FS: 6.40 (SE +/- 0.05, N = 3)
  TR150 SSD: XFS: 6.65 (SE +/- 0.12, N = 3)
  Virtio ZFS HDD Raid 0 2: 6.67 (SE +/- 0.10, N = 5)
  Virtio ZFS HDD Raid 10: 7.15 (SE +/- 0.27, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 7.57 (SE +/- 1.23, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 17.81 (SE +/- 0.27, N = 6)
1. git version 2.15.1 on the TR150 SSD and Seagate HDD runs; git version 2.11.0 on all Virtio ZFS, XenServer, and Proxmox runs.


Phoronix Test Suite v10.8.4