drbd overhead

Oracle VMware testing on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102266-HA-DRBDOVERH13
Result Identifier                   Date Triggered   Test Duration
2c-50-768m_drbd.xfs                 February 26      45 Minutes
2c-50-768m_sd.xfs                   February 26      29 Minutes
2c-75-768m_drbd.xfs                 February 26      35 Minutes
2c-75-768m_sd.xfs                   February 26      26 Minutes
2c-100-768m_drbd.xfs                February 26      32 Minutes
2c-100-768m_sd.xfs                  February 26      36 Minutes
2c-100-768m_drbd.xfs_broken-sync    February 26      24 Minutes
4c-100-768m_drbd.xfs_broken-sync    February 26      28 Minutes
4c-100-768m_sd.xfs                  February 26      39 Minutes



drbd overhead - System Details (via OpenBenchmarking.org; the 2c/4c split below follows the run identifiers)

  Processor:          AMD Ryzen 5 3600XT 6-Core (2 Cores; 4 Cores for the 4c-* runs)
  Motherboard:        Oracle VirtualBox v1.2
  Chipset:            Intel 440FX 82441FX PMC
  Memory:             729MB (4c-* runs: 728MB)
  Disk:               21GB VBOX HDD + 2 x 11GB VBOX HDD
  Graphics:           VMware SVGA II
  Audio:              Intel 82801AA AC 97 Audio
  Network:            Intel 82540EM
  OS:                 Ubuntu 20.04
  Kernel:             5.4.0-66-generic (x86_64)
  Compiler:           GCC 9.3.0
  File-System:        xfs
  Screen Resolution:  2048x2048
  System Layer:       Oracle VMware

Kernel Details: Transparent Huge Pages: madvise
Processor Details: CPU Microcode: 0x6000626
Disk Details: MQ-DEADLINE / relatime,rw / Block Size: 4096
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Logarithmic result overview graph omitted - Phoronix Test Suite 10.4.0m1]

drbd overhead - Result Summary (averages; per-run detail in the tables below)

Key: RR/RW = fio 3.25 Random Read/Write, Linux AIO engine, Buffered: Yes, Direct: No, Default Test Directory (MB/s and IOPS); FS-Mark 3.3 in Files/s; Sysbench in Events/s. More is better throughout.

Identifier                         RR 4KB  RR 4KB  RW 4KB  RW 4KB  RR 128K  RR 128K  RW 128K  RW 128K  FS-Mark  FS-Mark    Sysbench  Sysbench
                                     MB/s    IOPS    MB/s    IOPS     MB/s     IOPS     MB/s     IOPS  1000x1MB   NoSync      Memory       CPU
2c-50-768m_drbd.xfs                  35.8    9235    37.3    9547      586     4681    116.6      930      79.7    144.6  4684365.56   1770.23
2c-50-768m_sd.xfs                    36.7    9381   105.0   26767      594     4754    569.0     4543     381.1   1114.0  4500949.66   1789.07
2c-75-768m_drbd.xfs                  38.2    9775    41.3   10495      631     5045    107.0      853      95.2    133.5  5571462.29   2734.84
2c-75-768m_sd.xfs                    37.9    9704   118.9   30358      659     5265    756.0     6046     386.8   1467.6  5185705.68   2762.10
2c-100-768m_drbd.xfs                 40.0   10333    46.4   11867      627     5012    121.0      968      98.3    150.7  5416008.17   4147.82
2c-100-768m_sd.xfs                   40.9   10467   133.0   33900      658     5260    847.0     6772     387.7   1568.1  5352567.53   4202.28
2c-100-768m_drbd.xfs_broken-sync     43.3   11075   146.0   37492      691     5519    889.0     7112     398.0   1382.8  5334350.08   4224.03
4c-100-768m_drbd.xfs_broken-sync     41.2   10600   133.0   33992      682     5450    933.0     7461     388.8   1483.2  7993172.47   8379.47
4c-100-768m_sd.xfs                   42.6   10900   119.0   30313      692     5533    986.0     7887     378.7   1487.3  8149845.29   8309.36
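The pattern in the summary (reads nearly unaffected, sync-heavy writes far slower under DRBD) can be quantified directly from the averages. A minimal shell/awk sketch using the 2c-100-768m rows; the "overhead = 1 - drbd/sd" framing is my own, not part of the PTS output:

```shell
# DRBD overhead relative to the plain-disk ("sd") runs at the 2c-100-768m
# size; every metric here is more-is-better, so overhead = 1 - drbd/sd.
awk 'function pct(drbd, sd) { return sprintf("%.1f%%", (1 - drbd / sd) * 100) }
BEGIN {
  print "fio rand read 4KB:    " pct(40.0, 40.9)
  print "fio rand write 4KB:   " pct(46.4, 133.0)
  print "fio rand read 128KB:  " pct(627, 658)
  print "fio rand write 128KB: " pct(121, 847)
  print "fs-mark 1000x1MB:     " pct(98.3, 387.7)
}'
```

Writes funneled through DRBD's replication path lose roughly 65-86% of their throughput in these runs, while random reads, which are served locally, lose only a few percent.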

Flexible IO Tester

FIO, the Flexible I/O Tester, is an advanced Linux disk benchmark supporting multiple I/O engines and a wealth of options. It was written by Jens Axboe for testing the Linux I/O subsystem and schedulers. Learn more via the OpenBenchmarking.org test page.
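For reference, the test parameters named in the graphs below map onto a small fio job file along these lines. This is a sketch, not the actual PTS test profile; the directory, queue depth, size, and runtime values are illustrative:

```ini
; Approximation of "Rand Read - Linux AIO - Buffered: Yes - Direct: No - 4KB"
[randread-4k]
directory=/mnt/test   ; illustrative stand-in for "Default Test Directory"
ioengine=libaio       ; "Engine: Linux AIO"
rw=randread           ; randwrite for the write variants
bs=4k                 ; 128k for the large-block variants
direct=0              ; "Buffered: Yes - Direct: No"
iodepth=64            ; illustrative; the PTS profile sets its own depth
size=1g
runtime=60
time_based
```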

Flexible IO Tester 3.25 - Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory (MB/s, More Is Better)

  Identifier                            Avg      SE    N      Min     Max
  2c-50-768m_drbd.xfs                 35.77    0.46    3     35      36.6
  2c-50-768m_sd.xfs                   36.67    0.07    3     36.6    36.8
  2c-75-768m_drbd.xfs                 38.17    0.31    9     36.5    39.4
  2c-75-768m_sd.xfs                   37.93    0.09    3     37.8    38.1
  2c-100-768m_drbd.xfs                40.00    0.12    3     39.8    40.2
  2c-100-768m_sd.xfs                  40.87    0.48    3     39.9    41.4
  2c-100-768m_drbd.xfs_broken-sync    43.30    0.45    4     42.3    44.5
  4c-100-768m_drbd.xfs_broken-sync    41.21    0.39    7     40      43.2
  4c-100-768m_sd.xfs                  42.63    0.03    3     42.6    42.7

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native

Flexible IO Tester 3.25 - Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory (IOPS, More Is Better)

  Identifier                             Avg       SE    N      Min      Max
  2c-50-768m_drbd.xfs                9234.67    63.20    3     9141     9355
  2c-50-768m_sd.xfs                  9380.67    18.26    3     9355     9416
  2c-75-768m_drbd.xfs                9774.89    80.96    9     9349    10100
  2c-75-768m_sd.xfs                  9704.33    17.37    3     9685     9739
  2c-100-768m_drbd.xfs              10333.33    88.19    3    10200    10500
  2c-100-768m_sd.xfs                10466.67   133.33    3    10200    10600
  2c-100-768m_drbd.xfs_broken-sync  11075.00   125.00    4    10800    11400
  4c-100-768m_drbd.xfs_broken-sync  10600.00    89.97    7    10400    11100
  4c-100-768m_sd.xfs                10900.00        -    -        -        -

(The source reports no SE/min/max for 4c-100-768m_sd.xfs in this graph.)

Flexible IO Tester 3.25 - Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory (MB/s, More Is Better)

  Identifier                            Avg      SE    N      Min     Max
  2c-50-768m_drbd.xfs                 37.33    2.13   15     22.1    46.9
  2c-50-768m_sd.xfs                  105.00       -    -        -       -
  2c-75-768m_drbd.xfs                 41.32    1.98   12     31.3    49.4
  2c-75-768m_sd.xfs                  118.89    4.95   12     89.3   144
  2c-100-768m_drbd.xfs                46.37    0.09    3     46.2    46.5
  2c-100-768m_sd.xfs                 133.00    4.66   15    103     170
  2c-100-768m_drbd.xfs_broken-sync   146.42    4.38   12    129     179
  4c-100-768m_drbd.xfs_broken-sync   132.83    2.34   12    116     148
  4c-100-768m_sd.xfs                 119.04    6.47   15     80.2   169

(The source reports no SE/min/max for 2c-50-768m_sd.xfs in this graph.)

Flexible IO Tester 3.25 - Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory (IOPS, More Is Better)

  Identifier                             Avg        SE    N      Min      Max
  2c-50-768m_drbd.xfs                9547.33    546.57   15     5644    12000
  2c-50-768m_sd.xfs                 26766.67     33.33    3    26700    26800
  2c-75-768m_drbd.xfs               10495.33    495.11   12     8003    12700
  2c-75-768m_sd.xfs                 30358.33   1289.44   12    22900    36800
  2c-100-768m_drbd.xfs              11866.67     33.33    3    11800    11900
  2c-100-768m_sd.xfs                33900.00   1214.24   15    26400    43400
  2c-100-768m_drbd.xfs_broken-sync  37491.67   1119.49   12    33100    45900
  4c-100-768m_drbd.xfs_broken-sync  33991.67    598.92   12    29700    37900
  4c-100-768m_sd.xfs                30313.33   1633.89   15    20500    43200

Flexible IO Tester 3.25 - Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory (MB/s, More Is Better)

  Identifier                            Avg      SE    N     Min    Max
  2c-50-768m_drbd.xfs                585.67    2.33    3     581    588
  2c-50-768m_sd.xfs                  594.33    6.01    3     586    606
  2c-75-768m_drbd.xfs                630.67    5.93    3     622    642
  2c-75-768m_sd.xfs                  658.67    6.36    3     648    670
  2c-100-768m_drbd.xfs               627.00    4.04    3     619    632
  2c-100-768m_sd.xfs                 658.00    5.03    3     648    664
  2c-100-768m_drbd.xfs_broken-sync   690.67    7.69    3     676    702
  4c-100-768m_drbd.xfs_broken-sync   681.67    4.63    3     674    690
  4c-100-768m_sd.xfs                 692.00    3.00    3     686    695

Flexible IO Tester 3.25 - Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory (IOPS, More Is Better)

  Identifier                            Avg      SE    N     Min     Max
  2c-50-768m_drbd.xfs               4680.67   19.37    3    4642    4702
  2c-50-768m_sd.xfs                 4753.67   47.66    3    4687    4846
  2c-75-768m_drbd.xfs               5044.67   47.16    3    4976    5135
  2c-75-768m_sd.xfs                 5265.00   50.64    3    5181    5356
  2c-100-768m_drbd.xfs              5011.67   32.84    3    4947    5054
  2c-100-768m_sd.xfs                5259.67   39.49    3    5181    5305
  2c-100-768m_drbd.xfs_broken-sync  5519.00   62.22    3    5400    5610
  4c-100-768m_drbd.xfs_broken-sync  5450.33   37.47    3    5390    5519
  4c-100-768m_sd.xfs                5533.00   25.51    3    5482    5560

Flexible IO Tester 3.25 - Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory (MB/s, More Is Better)

  Identifier                            Avg      SE    N      Min     Max
  2c-50-768m_drbd.xfs                116.64    4.05   13     71.3    128
  2c-50-768m_sd.xfs                  568.53    6.68   15    536      625
  2c-75-768m_drbd.xfs                107.00    1.53    3    104      109
  2c-75-768m_sd.xfs                  756.00    6.81    3    746      769
  2c-100-768m_drbd.xfs               121.47    1.43   15    114      131
  2c-100-768m_sd.xfs                 847.00   25.39   12    702      975
  2c-100-768m_drbd.xfs_broken-sync   889.33    8.84    3    880      907
  4c-100-768m_drbd.xfs_broken-sync   933.00    5.51    3    922      939
  4c-100-768m_sd.xfs                 986.20    8.10   15    945     1061

Flexible IO Tester 3.25 - Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory (IOPS, More Is Better)

  Identifier                            Avg       SE    N     Min     Max
  2c-50-768m_drbd.xfs                929.62    32.32   13     568    1022
  2c-50-768m_sd.xfs                 4543.07    53.43   15    4281    4993
  2c-75-768m_drbd.xfs                853.33    11.61    3     831     870
  2c-75-768m_sd.xfs                 6046.00    53.11    3    5967    6147
  2c-100-768m_drbd.xfs               967.73    11.40   15     906    1045
  2c-100-768m_sd.xfs                6771.83   203.38   12    5614    7797
  2c-100-768m_drbd.xfs_broken-sync  7112.33    71.42    3    7035    7255
  4c-100-768m_drbd.xfs_broken-sync  7461.33    42.67    3    7376    7505
  4c-100-768m_sd.xfs                7887.07    64.67   15    7559    8484

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.
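As a rough command-line equivalent of the two profiles used here (a sketch based on fs_mark's usual flags; the target directory is illustrative and not from the PTS profile):

```shell
# 1000 files of 1MB each, with fs_mark's default sync/fsync behaviour:
fs_mark -d /mnt/test/fsmark -n 1000 -s 1048576

# "No Sync/FSync" variant: -S 0 disables the sync/fsync calls entirely,
# which is why DRBD's penalty largely disappears in that graph.
fs_mark -d /mnt/test/fsmark -n 1000 -s 1048576 -S 0
```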

FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s, More Is Better)

  Identifier                            Avg      SE    N      Min     Max
  2c-50-768m_drbd.xfs                 79.69    1.56   15     71.1    90.9
  2c-50-768m_sd.xfs                  381.08    3.17   15    357.2   393.4
  2c-75-768m_drbd.xfs                 95.20    0.96    3     94      97.1
  2c-75-768m_sd.xfs                  386.77    4.26   15    349     409.5
  2c-100-768m_drbd.xfs                98.29    1.03   15     89.6   104.1
  2c-100-768m_sd.xfs                 387.68    3.16    9    371.1   400.6
  2c-100-768m_drbd.xfs_broken-sync   398.03    4.44    3    390.8   406.1
  4c-100-768m_drbd.xfs_broken-sync   388.77    5.30    3    380.1   398.4
  4c-100-768m_sd.xfs                 378.73    3.97    4    370.7   389.7

1. (CC) gcc options: -static

FS-Mark 3.3 - Test: 1000 Files, 1MB Size, No Sync/FSync (Files/s, More Is Better)

  Identifier                            Avg      SE    N       Min      Max
  2c-50-768m_drbd.xfs                144.57    1.35   15     135.2    151.4
  2c-50-768m_sd.xfs                 1113.95   28.52   15     860.8   1255.6
  2c-75-768m_drbd.xfs                133.53    0.33    3     133.2    134.2
  2c-75-768m_sd.xfs                 1467.65   25.36   15    1171.1   1580.6
  2c-100-768m_drbd.xfs               150.67    0.65    3     149.7    151.9
  2c-100-768m_sd.xfs                1568.06   19.73   14    1410.5   1705.4
  2c-100-768m_drbd.xfs_broken-sync  1382.77   34.63   15    1149.1   1683.2
  4c-100-768m_drbd.xfs_broken-sync  1483.19   55.31   15    1166.5   1975.5
  4c-100-768m_sd.xfs                1487.29   34.21   15    1263     1697

Sysbench

This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.
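Both sub-tests can be approximated with a stock sysbench 1.x install. A sketch only: PTS builds its own sysbench snapshot (2018-07-28), so option defaults may differ slightly:

```shell
# CPU sub-test: prime-number computation loop, reported in events/s.
sysbench cpu --threads="$(nproc)" run

# Memory sub-test: sequential memory writes, reported in ops/s.
sysbench memory --threads="$(nproc)" run
```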

Sysbench 2018-07-28 - Test: Memory (Events Per Second, More Is Better; Per Core = Events Per Second Per Core, detected core count in parentheses)

  Identifier                              Avg          SE    N           Min          Max      Per Core
  2c-50-768m_drbd.xfs               4684365.56   226749.04   12    3498203.75   5644063.41    2342182.78 (2)
  2c-50-768m_sd.xfs                 4500949.66   246434.83   15    2872119.22   5818614.12    2250474.83 (2)
  2c-75-768m_drbd.xfs               5571462.29   187210.72   15    4286104.14   6758271.18    2785731.14 (2)
  2c-75-768m_sd.xfs                 5185705.68   165423.75   15    4262450.08   6504509.71    2592852.84 (2)
  2c-100-768m_drbd.xfs              5416008.17    59805.54   15    5059242.32   5913165.08    2708004.09 (2)
  2c-100-768m_sd.xfs                5352567.53    62488.89    3    5265082.62   5473604.07    2676283.76 (2)
  2c-100-768m_drbd.xfs_broken-sync  5334350.08    18712.23    3    5303386.25   5368035.66    2667175.04 (2)
  4c-100-768m_drbd.xfs_broken-sync  7993172.47     9483.96    3    7978678.72   8011015.81    1998293.12 (4)
  4c-100-768m_sd.xfs                8149845.29    30257.04    3    8094394.12   8198555.21    2037461.32 (4)

1. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm

Sysbench 2018-07-28 - Test: CPU (Events Per Second, More Is Better; Per Core = Events Per Second Per Core, detected core count in parentheses)

  Identifier                            Avg      SE    N        Min       Max      Per Core
  2c-50-768m_drbd.xfs               1770.23   15.38    3    1751.48   1800.73     885.12 (2)
  2c-50-768m_sd.xfs                 1789.07    2.05    3    1784.97   1791.23     894.53 (2)
  2c-75-768m_drbd.xfs               2734.84   21.08    3    2692.91   2759.65    1367.42 (2)
  2c-75-768m_sd.xfs                 2762.10   13.35    3    2735.40   2776.23    1381.05 (2)
  2c-100-768m_drbd.xfs              4147.82    7.61    3    4134.01   4160.26    2073.91 (2)
  2c-100-768m_sd.xfs                4202.28    4.56    3    4195.43   4210.93    2101.14 (2)
  2c-100-768m_drbd.xfs_broken-sync  4224.03    2.65    3    4218.79   4227.33    2112.02 (2)
  4c-100-768m_drbd.xfs_broken-sync  8379.47   10.07    3    8360.65   8395.12    2094.87 (4)
  4c-100-768m_sd.xfs                8309.36    4.97    3    8299.44   8314.87    2077.34 (4)