Linux 4.16 File-System Tests

HDD and SSD file-system tests on Linux 4.16 for a future article on Phoronix.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1803308-FO-1803305FO34
Test Runs

TR150 SSD: EXT4 - March 24 2018
TR150 SSD: F2FS - March 24 2018
TR150 SSD: Btrfs - March 25 2018
TR150 SSD: XFS - March 25 2018
Seagate HDD: XFS - March 25 2018
Seagate HDD: Btrfs - March 25 2018
Seagate HDD: EXT4 - March 25 2018
Virtio ZFS HDD Raid 0 - March 26 2018
Virtio ZFS HDD Raid 0 2 - March 26 2018
Virtio ZFS HDD Raid 10 - March 29 2018
Virtio ZFS HDD Raid 10 WBU - March 29 2018
XenServer 7.4 Adaptec 6805 Raid 1 PV - March 30 2018
Proxmox ZFS Raid 1 WT - March 30 2018
XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV - March 30 2018

HTML result view exported from: https://openbenchmarking.org/result/1803308-FO-1803305FO34&sor&grw.

System Details

Runs sharing a configuration are grouped together, following the merged columns of the original table.

- TR150 SSD and Seagate HDD runs (EXT4, F2FS, Btrfs, XFS): 2 x Intel Xeon Gold 6138 @ 3.70GHz (40 Cores / 80 Threads), TYAN S7106 (V1.00 BIOS), Intel Sky Lake-E DMI3 Registers, 12 x 8192 MB DDR4-2666MT/s Micron 9ASF1G72PZ-2G6B1, 256GB Samsung SSD 850 + 2000GB Seagate ST2000DM006-2DM1 + 2 x 120GB TOSHIBA-TR150, llvmpipe 95360MB graphics, VE228 monitor, Intel I210 Gigabit Connection, Ubuntu 18.04, 4.16.0-999-generic (x86_64) 20180323, GNOME Shell 3.28.0, X Server 1.19.6, OpenGL 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 256 bits), GCC 7.3.0, 1920x1080.
- Virtio ZFS HDD Raid 0: Common KVM @ 3.91GHz (2 Cores), QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS), 2048MB RAM, 34GB QEMU HDD + 30GB 2115, bochsdrmfb, Debian 9.4, 4.9.0-6-amd64 (x86_64), GCC 6.3.0 20170516, 1024x768, qemu system layer.
- Virtio ZFS HDD Raid 0 2 / Raid 10 / Raid 10 WBU: as above, but Common KVM @ 3.91GHz (4 Cores) and 34GB QEMU HDD.
- XenServer 7.4 Adaptec 6805 Raid 1 PV: AMD Turion II Neo N54L @ 2.20GHz (2 Cores), 4096MB RAM, 15GB disk, vm-other Xen 4.7.4-4.1 Hypervisor.
- Proxmox ZFS Raid 1 WT: Common KVM @ 2.20GHz (2 Cores), QEMU Standard PC (i440FX + PIIX 1996) (rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org BIOS), bochsdrmfb, 1024x768, qemu system layer.
- XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 2 x AMD Turion II Neo N54L @ 2.20GHz (2 Cores), Xen HVM domU (4.7.4-4.1 BIOS), cirrusdrmfb, Xen HVM domU 4.7.4-4.1.

Compiler Details

The configure options are identical within each OS group, so each flag string is listed once.

- All TR150 SSD and Seagate HDD runs (GCC 7.3.0): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- All virtualized runs (Virtio ZFS, XenServer, Proxmox; GCC 6.3.0): --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic -v

Disk Details (I/O Scheduler / Mount Options)

- TR150 SSD: EXT4: CFQ / data=ordered,relatime,rw
- TR150 SSD: F2FS: CFQ / acl,active_logs=6,background_gc=on,extent_cache,flush_merge,inline_data,inline_dentry,inline_xattr,lazytime,mode=adaptive,no_heap,relatime,rw,user_xattr
- TR150 SSD: Btrfs: CFQ / relatime,rw,space_cache,ssd,subvol=/,subvolid=5
- TR150 SSD: XFS: CFQ / attr2,inode64,noquota,relatime,rw
- Seagate HDD: XFS: CFQ / attr2,inode64,noquota,relatime,rw
- Seagate HDD: Btrfs: CFQ / relatime,rw,space_cache,subvol=/,subvolid=5
- Seagate HDD: EXT4: CFQ / data=ordered,relatime,rw
- Virtio ZFS HDD Raid 0 / Raid 0 2 / Raid 10 / Raid 10 WBU: CFQ / data=ordered,discard,noatime,rw
- XenServer 7.4 Adaptec 6805 Raid 1 PV and HVM-PV, Proxmox ZFS Raid 1 WT: none / data=ordered,discard,noatime,rw

Processor Details

- All TR150 SSD and Seagate HDD runs: Scaling Governor: intel_pstate powersave

Python Details

- All TR150 SSD and Seagate HDD runs: Python 2.7.14+ + Python 3.6.5rc1
- All virtualized runs: Python 2.7.13 + Python 3.5.3

Security Details

- All TR150 SSD, Seagate HDD, and Virtio ZFS runs: KPTI + __user pointer sanitization + Full generic retpoline Protection
- XenServer 7.4 Adaptec 6805 Raid 1 PV and HVM-PV: __user pointer sanitization + Full AMD retpoline Protection
- Proxmox ZFS Raid 1 WT: __user pointer sanitization + Full generic retpoline Protection

Benchmarks included in this comparison (per-run results appear in the tables below):

- Compile Bench 0.6: Compile, Initial Create, Read Compiled Tree
- Dbench 4.0: Client Count 6
- Flexible IO Tester 3.1: Random Read, Random Write, Sequential Read, Sequential Write (Linux AIO, unbuffered, direct, 4KB blocks, default test directory)
- IOzone 3.465: 4Kb records, 8GB file, Write Performance
- Unpacking The Linux Kernel: linux-4.15.tar.xz
- BlogBench 1.0: Read, Write
- SQLite 3.22: Timed SQLite Insertions
- Git: Time To Complete Common Git Commands
- AIO-Stress 0.21: Random Write

Compile Bench

Test: Compile

Compile Bench 0.6 - Test: Compile (MB/s, More Is Better)

  TR150 SSD: F2FS: 2271.93 (SE +/- 54.64, N = 6)
  Seagate HDD: Btrfs: 2177.85 (SE +/- 35.99, N = 3)
  TR150 SSD: XFS: 2101.38 (SE +/- 4.55, N = 3)
  Seagate HDD: XFS: 2092.61 (SE +/- 2.15, N = 3)
  TR150 SSD: EXT4: 1686.54 (SE +/- 13.09, N = 3)
  Seagate HDD: EXT4: 1678.76 (SE +/- 9.62, N = 3)
  TR150 SSD: Btrfs: 1371.80 (SE +/- 24.66, N = 6)
  Virtio ZFS HDD Raid 0 2: 538.22 (SE +/- 56.60, N = 6)
  Virtio ZFS HDD Raid 10: 495.46 (SE +/- 37.60, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 300.82 (SE +/- 47.76, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 75.83 (SE +/- 1.07, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 70.50 (SE +/- 1.98, N = 6)
  Proxmox ZFS Raid 1 WT: 39.55 (SE +/- 0.55, N = 3)
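Each result in these tables carries an "SE +/- x, N = y" annotation: the standard error of the mean over y trial runs, i.e. the sample standard deviation divided by sqrt(N). A minimal sketch of that calculation (the three sample values below are made up for illustration, not taken from this result file):

```python
import math

def mean_and_se(samples):
    """Mean and standard error of the mean (stdev / sqrt(N)),
    matching the 'SE +/- x, N = y' annotations in these tables."""
    n = len(samples)
    mean = sum(samples) / n
    # sample variance with Bessel's correction (N - 1)
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# three hypothetical Compile Bench trial runs (MB/s)
mean, se = mean_and_se([2210.0, 2280.0, 2325.0])
```

A small SE relative to the mean (as in most rows above) indicates the run-to-run spread is tight and the ordering between file-systems is meaningful.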

Compile Bench

Test: Initial Create

Compile Bench 0.6 - Test: Initial Create (MB/s, More Is Better)

  TR150 SSD: F2FS: 550.41 (SE +/- 9.94, N = 3)
  TR150 SSD: EXT4: 505.73 (SE +/- 8.16, N = 3)
  Seagate HDD: EXT4: 494.62 (SE +/- 11.14, N = 3)
  Seagate HDD: XFS: 402.86 (SE +/- 9.05, N = 3)
  TR150 SSD: XFS: 395.06 (SE +/- 4.83, N = 3)
  Seagate HDD: Btrfs: 262.66 (SE +/- 6.83, N = 3)
  Virtio ZFS HDD Raid 0 2: 255.10 (SE +/- 15.11, N = 3)
  Virtio ZFS HDD Raid 10: 241.59 (SE +/- 4.61, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 212.93 (SE +/- 4.80, N = 3)
  TR150 SSD: Btrfs: 109.66 (SE +/- 14.76, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 69.18 (SE +/- 0.99, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 68.28 (SE +/- 0.68, N = 3)
  Proxmox ZFS Raid 1 WT: 51.18 (SE +/- 1.21, N = 3)

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, More Is Better)

  TR150 SSD: XFS: 892.28 (SE +/- 6.60, N = 3)
  Virtio ZFS HDD Raid 10: 887.85 (SE +/- 71.49, N = 3)
  TR150 SSD: Btrfs: 881.31 (SE +/- 11.29, N = 3)
  TR150 SSD: F2FS: 878.02 (SE +/- 6.75, N = 3)
  TR150 SSD: EXT4: 837.97 (SE +/- 44.68, N = 3)
  Seagate HDD: XFS: 833.32 (SE +/- 12.74, N = 3)
  Virtio ZFS HDD Raid 0 2: 807.12 (SE +/- 77.29, N = 3)
  Seagate HDD: Btrfs: 804.92 (SE +/- 7.06, N = 3)
  Seagate HDD: EXT4: 801.75 (SE +/- 13.33, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 746.27 (SE +/- 38.31, N = 3)
  Proxmox ZFS Raid 1 WT: 326.85 (SE +/- 6.19, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 324.00 (SE +/- 2.93, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 230.68 (SE +/- 1.31, N = 3)

Dbench

Client Count: 6

Dbench 4.0 - Client Count: 6 (MB/s, More Is Better)

  Virtio ZFS HDD Raid 10 WBU: 1553.66 (SE +/- 2.57, N = 3)
  TR150 SSD: XFS: 442.69 (SE +/- 1.98, N = 3)
  TR150 SSD: EXT4: 369.21 (SE +/- 8.35, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 347.71 (SE +/- 0.63, N = 3)
  TR150 SSD: F2FS: 276.31 (SE +/- 0.15, N = 3)
  TR150 SSD: Btrfs: 247.49 (SE +/- 2.85, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 212.75 (SE +/- 0.88, N = 3)
  Virtio ZFS HDD Raid 0 2: 72.29 (SE +/- 0.05, N = 3)
  Virtio ZFS HDD Raid 10: 56.88 (SE +/- 0.41, N = 3)
  Proxmox ZFS Raid 1 WT: 50.29 (SE +/- 0.17, N = 3)
  Seagate HDD: Btrfs: 45.74 (SE +/- 0.59, N = 6)
  Seagate HDD: EXT4: 25.73 (SE +/- 0.67, N = 6)
  Seagate HDD: XFS: 20.21 (SE +/- 0.02, N = 3)

  1. (CC) gcc options: -lpopt -O2

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 (MB/s, More Is Better)

  Virtio ZFS HDD Raid 0 2: 258.00
  TR150 SSD: EXT4: 230.00
  TR150 SSD: XFS: 212.00
  TR150 SSD: Btrfs: 212.00
  TR150 SSD: F2FS: 212.00
  Proxmox ZFS Raid 1 WT: 144.00
  Virtio ZFS HDD Raid 10: 2.98
  Virtio ZFS HDD Raid 10 WBU: 2.83
  Seagate HDD: XFS: 1.53
  Seagate HDD: Btrfs: 1.52
  Seagate HDD: EXT4: 1.45
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 0.94
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 0.64

  (SE annotations are omitted here: the export lists 12 SE entries for 13 results, so they cannot be paired unambiguously.)
  1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
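For readers who want to approximate this workload outside the Phoronix Test Suite, here is a hypothetical fio job file matching the parameters in the section title (random read, Linux AIO engine, unbuffered, O_DIRECT, 4KB blocks). The PTS test profile's actual job options may differ, and the directory path is a placeholder:

```ini
; sketch of the random-read case; size/runtime are illustrative choices
[pts-randread-4k]
rw=randread
ioengine=libaio
direct=1
buffered=0
bs=4k
size=1g
directory=/path/to/test/dir
runtime=60
time_based=1
```

Swapping `rw=randread` for `randwrite`, `read`, or `write` covers the other three FIO sections in this file.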

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 (MB/s, More Is Better)

  TR150 SSD: F2FS: 282.00
  TR150 SSD: EXT4: 275.00
  TR150 SSD: XFS: 273.00
  Virtio ZFS HDD Raid 10: 217.00
  Virtio ZFS HDD Raid 10 WBU: 207.00
  Virtio ZFS HDD Raid 0 2: 197.00
  TR150 SSD: Btrfs: 73.28
  Seagate HDD: Btrfs: 23.02
  Seagate HDD: XFS: 1.17
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 1.05
  Seagate HDD: EXT4: 1.03
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 1.01
  Proxmox ZFS Raid 1 WT: 0.42

  (SE annotations are omitted here: the export lists 12 SE entries for 13 results, so they cannot be paired unambiguously.)
  1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 (MB/s, More Is Better)

  TR150 SSD: F2FS: 416.00 (SE +/- 1.67, N = 3)
  TR150 SSD: EXT4: 413.00 (SE +/- 3.67, N = 3)
  TR150 SSD: XFS: 395.00 (SE +/- 20.47, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 228.00 (SE +/- 8.92, N = 6)
  Virtio ZFS HDD Raid 10: 219.00 (SE +/- 1.67, N = 3)
  Virtio ZFS HDD Raid 0 2: 197.00 (SE +/- 6.45, N = 6)
  Seagate HDD: EXT4: 149.00 (SE +/- 3.00, N = 3)
  Seagate HDD: XFS: 145.00 (SE +/- 2.50, N = 4)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 97.17 (SE +/- 0.32, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 95.57 (SE +/- 1.55, N = 3)
  TR150 SSD: Btrfs: 83.68 (SE +/- 4.96, N = 6)
  Seagate HDD: Btrfs: 4.04 (SE +/- 0.11, N = 6)
  Proxmox ZFS Raid 1 WT: 0.54 (SE +/- 0.02, N = 6)

  1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

IOzone

Record Size: 4Kb - File Size: 8GB - Disk Test: Write Performance

IOzone 3.465 - Record Size: 4Kb - File Size: 8GB - Disk Test: Write Performance (MB/s, More Is Better)

  Seagate HDD: Btrfs: 161.01 (SE +/- 6.56, N = 6)
  Seagate HDD: EXT4: 155.45 (SE +/- 3.66, N = 6)
  Seagate HDD: XFS: 153.08 (SE +/- 1.89, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 152.17 (SE +/- 19.27, N = 6)
  Virtio ZFS HDD Raid 10: 109.03 (SE +/- 12.21, N = 6)
  TR150 SSD: EXT4: 103.49 (SE +/- 1.81, N = 6)
  Virtio ZFS HDD Raid 0 2: 101.53 (SE +/- 13.18, N = 6)
  TR150 SSD: XFS: 96.10 (SE +/- 4.92, N = 6)
  TR150 SSD: Btrfs: 93.20 (SE +/- 3.44, N = 6)
  TR150 SSD: F2FS: 88.99 (SE +/- 2.07, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 84.24 (SE +/- 1.03, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 79.12 (SE +/- 5.09, N = 6)
  Proxmox ZFS Raid 1 WT: 38.57 (SE +/- 0.71, N = 6)

  1. (CC) gcc options: -O3

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel - linux-4.15.tar.xz (Seconds, Fewer Is Better)

  TR150 SSD: EXT4: 6.40 (SE +/- 0.04, N = 4)
  TR150 SSD: F2FS: 6.67 (SE +/- 0.08, N = 8)
  Seagate HDD: EXT4: 6.67 (SE +/- 0.08, N = 8)
  TR150 SSD: XFS: 6.82 (SE +/- 0.08, N = 8)
  Seagate HDD: Btrfs: 7.25 (SE +/- 0.16, N = 8)
  Seagate HDD: XFS: 7.65 (SE +/- 0.16, N = 8)
  TR150 SSD: Btrfs: 9.44 (SE +/- 0.16, N = 4)
  Virtio ZFS HDD Raid 10 WBU: 10.65 (SE +/- 2.27, N = 8)
  Virtio ZFS HDD Raid 10: 14.52 (SE +/- 1.36, N = 8)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 15.93 (SE +/- 1.09, N = 8)
  Virtio ZFS HDD Raid 0 2: 15.97 (SE +/- 0.97, N = 8)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 16.50 (SE +/- 1.02, N = 8)
  Proxmox ZFS Raid 1 WT: 29.63 (SE +/- 2.46, N = 8)
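This test times extraction of the kernel tarball, a metadata-heavy small-file workload. A minimal shell sketch of the same methodology on a small stand-in archive (the real test unpacks linux-4.15.tar.xz and uses PTS's own timing; the paths here are arbitrary demo choices):

```shell
# build a small .tar.xz stand-in, then time its extraction the same way
demo=/tmp/unpack-demo
rm -rf "$demo" && mkdir -p "$demo/src"
for i in 1 2 3; do head -c 4096 /dev/urandom > "$demo/src/file$i.bin"; done
tar -C "$demo" -cJf "$demo/demo.tar.xz" src
rm -rf "$demo/src"
# the benchmark's measured step: wall-clock time to unpack the archive
time tar -C "$demo" -xJf "$demo/demo.tar.xz"
```

With many small files, the result tracks file-creation latency more than raw throughput, which is why the HDD runs land so close to the SSD runs here.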

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

Flexible IO Tester 3.1 (MB/s, More Is Better)

  TR150 SSD: Btrfs: 327.00
  Virtio ZFS HDD Raid 10 WBU: 270.00
  Virtio ZFS HDD Raid 0 2: 255.00
  Virtio ZFS HDD Raid 10: 250.00
  TR150 SSD: F2FS: 228.00
  TR150 SSD: EXT4: 227.00
  TR150 SSD: XFS: 202.00
  Proxmox ZFS Raid 1 WT: 170.00
  Seagate HDD: EXT4: 155.00
  Seagate HDD: XFS: 148.00
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 90.67
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 68.33
  Seagate HDD: Btrfs: 1.54

  (SE annotations are omitted here: the export lists 11 SE entries for 13 results, so they cannot be paired unambiguously.)
  1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

BlogBench

Test: Read

BlogBench 1.0 - Test: Read (Final Score, More Is Better)

  Seagate HDD: EXT4: 2415373 (SE +/- 16397.30, N = 3)
  Seagate HDD: Btrfs: 2293433 (SE +/- 1570.58, N = 3)
  TR150 SSD: Btrfs: 2269750 (SE +/- 12906.26, N = 3)
  Seagate HDD: XFS: 2197243 (SE +/- 39188.34, N = 3)
  TR150 SSD: EXT4: 2097180 (SE +/- 5745.23, N = 3)
  TR150 SSD: XFS: 2084376 (SE +/- 37384.19, N = 6)
  TR150 SSD: F2FS: 2054932 (SE +/- 146937.12, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 301153 (SE +/- 2565.03, N = 3)
  Proxmox ZFS Raid 1 WT: 257117 (SE +/- 378.62, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 228394 (SE +/- 11329.23, N = 6)
  Virtio ZFS HDD Raid 0 2: 214407 (SE +/- 26762.45, N = 6)
  Virtio ZFS HDD Raid 10: 195626 (SE +/- 9034.98, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 183352 (SE +/- 1813.14, N = 3)

  1. (CC) gcc options: -O2 -pthread

BlogBench

Test: Write

BlogBench 1.0 - Test: Write (Final Score, More Is Better)

  TR150 SSD: EXT4: 10325 (SE +/- 88.82, N = 3)
  TR150 SSD: F2FS: 9156 (SE +/- 83.15, N = 3)
  Seagate HDD: EXT4: 6473 (SE +/- 23.54, N = 3)
  TR150 SSD: XFS: 4039 (SE +/- 92.81, N = 3)
  TR150 SSD: Btrfs: 3458 (SE +/- 25.15, N = 3)
  Seagate HDD: Btrfs: 2370 (SE +/- 46.68, N = 3)
  Seagate HDD: XFS: 2126 (SE +/- 40.70, N = 3)
  Virtio ZFS HDD Raid 10 WBU: 1966 (SE +/- 97.59, N = 3)
  Virtio ZFS HDD Raid 0 2: 1930 (SE +/- 190.23, N = 3)
  Virtio ZFS HDD Raid 10: 1864 (SE +/- 52.35, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 721 (SE +/- 30.51, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 629 (SE +/- 8.65, N = 3)
  Proxmox ZFS Raid 1 WT: 550 (SE +/- 100.50, N = 3)

  1. (CC) gcc options: -O2 -pthread

SQLite

Timed SQLite Insertions

SQLite 3.22 - Timed SQLite Insertions (Seconds, Fewer Is Better)

  Virtio ZFS HDD Raid 10 WBU: 3.34 (SE +/- 0.05, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 22.65 (SE +/- 0.39, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 23.65 (SE +/- 0.55, N = 6)
  TR150 SSD: F2FS: 34.89 (SE +/- 0.53, N = 5)
  TR150 SSD: XFS: 36.76 (SE +/- 0.51, N = 6)
  TR150 SSD: EXT4: 41.27 (SE +/- 0.61, N = 5)
  TR150 SSD: Btrfs: 99.08 (SE +/- 1.94, N = 3)
  Virtio ZFS HDD Raid 0 2: 246.03 (SE +/- 3.85, N = 3)
  Virtio ZFS HDD Raid 10: 329.31 (SE +/- 6.15, N = 6)
  Seagate HDD: XFS: 417.70 (SE +/- 0.96, N = 3)
  Proxmox ZFS Raid 1 WT: 480.57 (SE +/- 13.85, N = 6)
  Seagate HDD: EXT4: 576.02 (SE +/- 9.59, N = 4)
  Seagate HDD: Btrfs: 1012.38 (SE +/- 2.51, N = 3)

  1. (CC) gcc options: -O2 -ldl -lpthread
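The huge spread here (3.34s to over 1000s) comes from commit behaviour: each committed insert goes through the journal and fsync path, so the test measures synchronous small-write latency rather than throughput. A minimal, illustrative sketch of a timed-insertions loop (this is not the actual PTS test profile, which differs in scale and schema):

```python
import os
import sqlite3
import tempfile
import time

# temporary database on whatever file-system backs the temp directory
path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
for i in range(200):
    # one transaction per insert: every commit exercises journal + fsync
    with conn:
        conn.execute("INSERT INTO t (payload) VALUES (?)", ("x" * 64,))
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
conn.close()
```

This also explains the Raid 10 WBU result: a write-back cache can acknowledge each commit without waiting on the disks, collapsing the per-transaction latency.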

Git

Time To Complete Common Git Commands

Git - Time To Complete Common Git Commands (Seconds, Fewer Is Better)

  TR150 SSD: EXT4: 6.31 (SE +/- 0.03, N = 3)
  TR150 SSD: Btrfs: 6.36 (SE +/- 0.07, N = 3)
  TR150 SSD: F2FS: 6.40 (SE +/- 0.05, N = 3)
  Seagate HDD: EXT4: 6.57 (SE +/- 0.17, N = 6)
  Seagate HDD: Btrfs: 6.59 (SE +/- 0.11, N = 4)
  TR150 SSD: XFS: 6.65 (SE +/- 0.12, N = 3)
  Virtio ZFS HDD Raid 0 2: 6.67 (SE +/- 0.10, N = 5)
  Seagate HDD: XFS: 6.76 (SE +/- 0.10, N = 3)
  Virtio ZFS HDD Raid 10: 7.15 (SE +/- 0.27, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 7.57 (SE +/- 1.23, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 16.09 (SE +/- 0.17, N = 3)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 17.81 (SE +/- 0.27, N = 6)
  Proxmox ZFS Raid 1 WT: 25.70 (SE +/- 1.57, N = 6)

  Git versions: 2.15.1 on all TR150 SSD and Seagate HDD runs; 2.11.0 on all virtualized runs.

AIO-Stress

Test: Random Write

AIO-Stress 0.21 - Test: Random Write (MB/s, More Is Better)

  TR150 SSD: F2FS: 3017.61 (SE +/- 50.73, N = 3)
  Seagate HDD: XFS: 2979.14 (SE +/- 22.47, N = 3)
  TR150 SSD: XFS: 2971.42 (SE +/- 26.35, N = 3)
  Seagate HDD: Btrfs: 2953.92 (SE +/- 38.74, N = 3)
  TR150 SSD: Btrfs: 2936.17 (SE +/- 38.20, N = 3)
  Seagate HDD: EXT4: 2410.29 (SE +/- 40.49, N = 3)
  TR150 SSD: EXT4: 2301.32 (SE +/- 45.90, N = 3)
  Virtio ZFS HDD Raid 10: 1376.59 (SE +/- 33.30, N = 6)
  Virtio ZFS HDD Raid 0 2: 1341.61 (SE +/- 47.09, N = 6)
  Virtio ZFS HDD Raid 10 WBU: 748.84 (SE +/- 163.17, N = 6)
  Proxmox ZFS Raid 1 WT: 87.08 (SE +/- 4.25, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV: 49.62 (SE +/- 0.68, N = 6)
  XenServer 7.4 Adaptec 6805 Raid 1 PV: 47.81 (SE +/- 0.84, N = 6)

  1. (CC) gcc options: -pthread -laio

Flexible IO Tester

Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.org - IOPS, More Is Better

    Configuration                                 IOPS   Std. Error
    Virtio ZFS HDD Raid 0 2                      65633   SE +/- 1017.08, N = 3
    TR150 SSD: EXT4                              58967   SE +/- 233.33, N = 3
    TR150 SSD: XFS                               54400   SE +/- 133.33, N = 3
    TR150 SSD: Btrfs                             54367   SE +/- 200.00, N = 3
    TR150 SSD: F2FS                              54200   SE +/- 392.99, N = 3
    Proxmox ZFS Raid 1 WT                        36733

1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
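The parameters in the test title (IO engine, buffering, direct I/O, block size) map directly onto a fio job file. A minimal sketch approximating this random-read case follows; the job name, file size, runtime, and target directory are assumptions for illustration, not the actual PTS test-profile values:

```shell
# Approximation of the random-read test above. ioengine/direct/rw/bs
# come from the test title; size/runtime/directory are assumed values.
cat > randread-4k.fio <<'EOF'
[randread-4k]
ioengine=libaio
direct=1
rw=randread
bs=4k
size=1g
runtime=30
time_based=1
directory=/mnt/test
EOF
fio randread-4k.fio
```

`direct=1` bypasses the page cache (the "Buffered: No - Direct: Yes" combination in the title), so the results reflect the filesystem and device rather than cached reads.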

Flexible IO Tester

Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.org - IOPS, More Is Better

    Configuration                                 IOPS   Std. Error
    TR150 SSD: F2FS                              72167   SE +/- 466.67, N = 3
    TR150 SSD: EXT4                              70250   SE +/- 1010.53, N = 6
    TR150 SSD: XFS                               69967   SE +/- 1005.21, N = 6
    Virtio ZFS HDD Raid 10                       55567   SE +/- 405.52, N = 3
    Virtio ZFS HDD Raid 10 WBU                   53000   SE +/- 1325.64, N = 6
    Virtio ZFS HDD Raid 0 2                      50500   SE +/- 896.29, N = 3
    Seagate HDD: Btrfs                           27400   SE +/- 853.91, N = 6
    TR150 SSD: Btrfs                             18750

1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.org - IOPS, More Is Better

    Configuration                                 IOPS   Std. Error
    TR150 SSD: Btrfs                             83633   SE +/- 1550.63, N = 3
    Virtio ZFS HDD Raid 10 WBU                   68940   SE +/- 970.36, N = 5
    Virtio ZFS HDD Raid 0 2                      65167   SE +/- 1156.62, N = 3
    Virtio ZFS HDD Raid 10                       63683   SE +/- 833.23, N = 6
    TR150 SSD: F2FS                              58467   SE +/- 145.30, N = 3
    TR150 SSD: EXT4                              58300   SE +/- 100.00, N = 3
    TR150 SSD: XFS                               51667   SE +/- 463.08, N = 3
    Proxmox ZFS Raid 1 WT                        43533   SE +/- 338.30, N = 3
    Seagate HDD: EXT4                            39767   SE +/- 1174.07, N = 6
    Seagate HDD: XFS                             37800   SE +/- 208.17, N = 3
    XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV     23200   SE +/- 2134.71, N = 5
    XenServer 7.4 Adaptec 6805 Raid 1 PV         19100

1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl

Flexible IO Tester

Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory

OpenBenchmarking.org - IOPS, More Is Better

    Configuration                                 IOPS   Std. Error
    TR150 SSD: F2FS                             106333   SE +/- 333.33, N = 3
    TR150 SSD: EXT4                             106000   SE +/- 1000.00, N = 3
    TR150 SSD: XFS                              101167   SE +/- 5437.01, N = 6
    Virtio ZFS HDD Raid 10 WBU                   58317   SE +/- 2293.38, N = 6
    Virtio ZFS HDD Raid 10                       56000   SE +/- 351.19, N = 3
    Virtio ZFS HDD Raid 0 2                      50150   SE +/- 1815.44, N = 6
    Seagate HDD: EXT4                            38233   SE +/- 783.87, N = 3
    Seagate HDD: XFS                             37050   SE +/- 617.12, N = 4
    XenServer 7.4 Adaptec 6805 Raid 1 HVM-PV     24867   SE +/- 66.67, N = 3
    XenServer 7.4 Adaptec 6805 Raid 1 PV         24433   SE +/- 384.42, N = 3
    TR150 SSD: Btrfs                             21500   SE +/- 1238.28, N = 6

1. (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -U_FORTIFY_SOURCE -lrt -laio -lm -lpthread -ldl
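The four FIO tests above differ only in access pattern, which in fio is selected by the `rw` option: `randread`, `randwrite`, `read` (sequential read), and `write` (sequential write). As a command-line sketch of the sequential-write case (the job name, size, runtime, and directory are assumed values, not the test profile's actual settings):

```shell
# Sequential-write sketch; swap --rw for randread/randwrite/read to
# cover the other three tests. size/runtime/directory are assumptions.
fio --name=seqwrite-4k --ioengine=libaio --direct=1 --bs=4k \
    --rw=write --size=1g --runtime=30 --time_based \
    --directory=/mnt/test
```

Running the same job with only `--rw` changed is what makes the four result tables directly comparable per filesystem.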


Phoronix Test Suite v10.8.4