drbd overhead: Oracle VMware testing on Ubuntu 20.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2102266-HA-DRBDOVERH13

Tested configurations: 2c-50-768m_drbd.xfs, 2c-50-768m_sd.xfs, 2c-75-768m_drbd.xfs, 2c-75-768m_sd.xfs, 2c-100-768m_drbd.xfs, 2c-100-768m_sd.xfs, 2c-100-768m_drbd.xfs_broken-sync

Processor: AMD Ryzen 5 3600XT 6-Core (2 Cores)
Motherboard: Oracle VirtualBox v1.2
Chipset: Intel 440FX 82441FX PMC
Memory: 729MB
Disk: 21GB VBOX HDD + 2 x 11GB VBOX HDD
Graphics: VMware SVGA II
Audio: Intel 82801AA AC 97 Audio
Network: Intel 82540EM
OS: Ubuntu 20.04
Kernel: 5.4.0-66-generic (x86_64)
Compiler: GCC 9.3.0
File-System: xfs
Screen Resolution: 2048x2048
System Layer: Oracle VMware

Additional configurations 4c-100-768m_drbd.xfs_broken-sync and 4c-100-768m_sd.xfs: Processor changed to AMD Ryzen 5 3600XT 6-Core (4 Cores); Memory changed to 728MB.

Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: MQ-DEADLINE / relatime,rw / Block Size: 4096
Processor Notes: CPU Microcode: 0x6000626
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Logarithmic Result Overview (Phoronix Test Suite 10.4.0m1): composite chart comparing all nine configurations across the Flexible IO Tester, FS-Mark, and Sysbench tests. The individual results follow.
drbd overhead - result summary

Flexible IO Tester 3.25 (Linux AIO engine, Buffered: Yes, Direct: No, Default Test Directory; RR = Random Read, RW = Random Write):

Configuration                      RR 4KB MB/s  RR 4KB IOPS  RW 4KB MB/s  RW 4KB IOPS  RR 128KB MB/s  RR 128KB IOPS  RW 128KB MB/s  RW 128KB IOPS
2c-50-768m_drbd.xfs                35.8         9235         37.3         9547         586            4681           116.6          930
2c-50-768m_sd.xfs                  36.7         9381         105          26767        594            4754           569            4543
2c-75-768m_drbd.xfs                38.2         9775         41.3         10495        631            5045           107            853
2c-75-768m_sd.xfs                  37.9         9704         118.9        30358        659            5265           756            6046
2c-100-768m_drbd.xfs               40.0         10333        46.4         11867        627            5012           121            968
2c-100-768m_sd.xfs                 40.9         10467        133          33900        658            5260           847            6772
2c-100-768m_drbd.xfs_broken-sync   43.3         11075        146          37492        691            5519           889            7112
4c-100-768m_drbd.xfs_broken-sync   41.2         10600        133          33992        682            5450           933            7461
4c-100-768m_sd.xfs                 42.6         10900        119.0        30313        692            5533           986            7887

FS-Mark 3.3 (Files/s) and Sysbench 2018-07-28 (Events/s):

Configuration                      1000 Files/1MB  1000 Files/1MB No Sync/FSync  Sysbench Memory  Sysbench CPU
2c-50-768m_drbd.xfs                79.7            144.6                         4684365.5617     1770.2327
2c-50-768m_sd.xfs                  381.1           1114.0                        4500949.6631     1789.0697
2c-75-768m_drbd.xfs                95.2            133.5                         5571462.2897     2734.8449
2c-75-768m_sd.xfs                  386.8           1467.6                        5185705.6791     2762.0951
2c-100-768m_drbd.xfs               98.3            150.7                         5416008.1710     4147.8206
2c-100-768m_sd.xfs                 387.7           1568.1                        5352567.5257     4202.2833
2c-100-768m_drbd.xfs_broken-sync   398.0           1382.8                        5334350.0769     4224.0307
4c-100-768m_drbd.xfs_broken-sync   388.8           1483.2                        7993172.4703     8379.4702
4c-100-768m_sd.xfs                 378.7           1487.3                        8149845.2871     8309.3583
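To put the DRBD-vs-single-disk gap into percentages, here is a minimal back-of-the-envelope sketch using the random-write 4KB figures transcribed from the summary above:

```python
# Random write 4KB throughput (MB/s) from the summary table above.
# Each pair compares the DRBD-backed XFS volume against the plain
# single-disk ("sd") XFS volume at the same VM settings.
pairs = {
    "2c-50-768m":  {"drbd": 37.3, "sd": 105.0},
    "2c-75-768m":  {"drbd": 41.3, "sd": 118.9},
    "2c-100-768m": {"drbd": 46.4, "sd": 133.0},
}

for name, p in pairs.items():
    overhead_pct = (1 - p["drbd"] / p["sd"]) * 100
    print(f"{name}: DRBD is {overhead_pct:.1f}% slower on random 4KB writes")
```

Across all three settings the replicated volume lands roughly 64-65% below the single disk on this workload.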
Flexible IO Tester
FIO, the Flexible I/O Tester, is an advanced Linux disk benchmark supporting multiple I/O engines and a wealth of options. FIO was written by Jens Axboe to test the Linux I/O subsystem and schedulers. Learn more via the OpenBenchmarking.org test page.
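The result titles below encode the FIO parameters used. As a rough illustration only, a standalone fio job file with equivalent settings might look like the following sketch; the job name, size, and runtime are assumptions, and PTS generates its own job definitions:

```ini
; Hypothetical fio job approximating the charted parameters below.
[rand-read-4k]
ioengine=libaio   ; Engine: Linux AIO
rw=randread       ; Type: Random Read
bs=4k             ; Block Size: 4KB
buffered=1        ; Buffered: Yes
direct=0          ; Direct: No
directory=.       ; Disk Target: Default Test Directory
size=1g           ; assumed working-set size
runtime=30        ; assumed duration
time_based=1
```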
Flexible IO Tester 3.25 (MB/s, more is better)
Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory

Configuration                      MB/s   SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                35.8   0.46    3   35 / 35.77 / 36.6
2c-50-768m_sd.xfs                  36.7   0.07    3   36.6 / 36.67 / 36.8
2c-75-768m_drbd.xfs                38.2   0.31    9   36.5 / 38.17 / 39.4
2c-75-768m_sd.xfs                  37.9   0.09    3   37.8 / 37.93 / 38.1
2c-100-768m_drbd.xfs               40.0   0.12    3   39.8 / 40 / 40.2
2c-100-768m_sd.xfs                 40.9   0.48    3   39.9 / 40.87 / 41.4
2c-100-768m_drbd.xfs_broken-sync   43.3   0.45    4   42.3 / 43.3 / 44.5
4c-100-768m_drbd.xfs_broken-sync   41.2   0.39    7   40 / 41.21 / 43.2
4c-100-768m_sd.xfs                 42.6   0.03    3   42.6 / 42.63 / 42.7

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25 (IOPS, more is better)
Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory

Configuration                      IOPS    SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                9235    63.20   3   9141 / 9234.67 / 9355
2c-50-768m_sd.xfs                  9381    18.26   3   9355 / 9380.67 / 9416
2c-75-768m_drbd.xfs                9775    80.96   9   9349 / 9774.89 / 10100
2c-75-768m_sd.xfs                  9704    17.37   3   9685 / 9704.33 / 9739
2c-100-768m_drbd.xfs               10333   88.19   3   10200 / 10333.33 / 10500
2c-100-768m_sd.xfs                 10467   133.33  3   10200 / 10466.67 / 10600
2c-100-768m_drbd.xfs_broken-sync   11075   125.00  4   10800 / 11075 / 11400
4c-100-768m_drbd.xfs_broken-sync   10600   89.97   7   10400 / 10600 / 11100
4c-100-768m_sd.xfs                 10900   -       -   -

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25 (MB/s, more is better)
Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory

Configuration                      MB/s    SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                37.3    2.13    15  22.1 / 37.33 / 46.9
2c-50-768m_sd.xfs                  105.0   -       -   -
2c-75-768m_drbd.xfs                41.3    1.98    12  31.3 / 41.32 / 49.4
2c-75-768m_sd.xfs                  118.9   4.95    12  89.3 / 118.89 / 144
2c-100-768m_drbd.xfs               46.4    0.09    3   46.2 / 46.37 / 46.5
2c-100-768m_sd.xfs                 133.0   4.66    15  103 / 133 / 170
2c-100-768m_drbd.xfs_broken-sync   146.0   4.38    12  129 / 146.42 / 179
4c-100-768m_drbd.xfs_broken-sync   133.0   2.34    12  116 / 132.83 / 148
4c-100-768m_sd.xfs                 119.0   6.47    15  80.2 / 119.04 / 169

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25 (IOPS, more is better)
Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 4KB - Disk Target: Default Test Directory

Configuration                      IOPS    SE +/-   N   Min / Avg / Max
2c-50-768m_drbd.xfs                9547    546.57   15  5644 / 9547.33 / 12000
2c-50-768m_sd.xfs                  26767   33.33    3   26700 / 26766.67 / 26800
2c-75-768m_drbd.xfs                10495   495.11   12  8003 / 10495.33 / 12700
2c-75-768m_sd.xfs                  30358   1289.44  12  22900 / 30358.33 / 36800
2c-100-768m_drbd.xfs               11867   33.33    3   11800 / 11866.67 / 11900
2c-100-768m_sd.xfs                 33900   1214.24  15  26400 / 33900 / 43400
2c-100-768m_drbd.xfs_broken-sync   37492   1119.49  12  33100 / 37491.67 / 45900
4c-100-768m_drbd.xfs_broken-sync   33992   598.92   12  29700 / 33991.67 / 37900
4c-100-768m_sd.xfs                 30313   1633.89  15  20500 / 30313.33 / 43200

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25 (MB/s, more is better)
Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory

Configuration                      MB/s  SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                586   2.33    3   581 / 585.67 / 588
2c-50-768m_sd.xfs                  594   6.01    3   586 / 594.33 / 606
2c-75-768m_drbd.xfs                631   5.93    3   622 / 630.67 / 642
2c-75-768m_sd.xfs                  659   6.36    3   648 / 658.67 / 670
2c-100-768m_drbd.xfs               627   4.04    3   619 / 627 / 632
2c-100-768m_sd.xfs                 658   5.03    3   648 / 658 / 664
2c-100-768m_drbd.xfs_broken-sync   691   7.69    3   676 / 690.67 / 702
4c-100-768m_drbd.xfs_broken-sync   682   4.63    3   674 / 681.67 / 690
4c-100-768m_sd.xfs                 692   3.00    3   686 / 692 / 695

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25 (IOPS, more is better)
Type: Random Read - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory

Configuration                      IOPS   SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                4681   19.37   3   4642 / 4680.67 / 4702
2c-50-768m_sd.xfs                  4754   47.66   3   4687 / 4753.67 / 4846
2c-75-768m_drbd.xfs                5045   47.16   3   4976 / 5044.67 / 5135
2c-75-768m_sd.xfs                  5265   50.64   3   5181 / 5265 / 5356
2c-100-768m_drbd.xfs               5012   32.84   3   4947 / 5011.67 / 5054
2c-100-768m_sd.xfs                 5260   39.49   3   5181 / 5259.67 / 5305
2c-100-768m_drbd.xfs_broken-sync   5519   62.22   3   5400 / 5519 / 5610
4c-100-768m_drbd.xfs_broken-sync   5450   37.47   3   5390 / 5450.33 / 5519
4c-100-768m_sd.xfs                 5533   25.51   3   5482 / 5533 / 5560

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25 (MB/s, more is better)
Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory

Configuration                      MB/s    SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                116.6   4.05    13  71.3 / 116.64 / 128
2c-50-768m_sd.xfs                  569.0   6.68    15  536 / 568.53 / 625
2c-75-768m_drbd.xfs                107.0   1.53    3   104 / 107 / 109
2c-75-768m_sd.xfs                  756.0   6.81    3   746 / 756 / 769
2c-100-768m_drbd.xfs               121.0   1.43    15  114 / 121.47 / 131
2c-100-768m_sd.xfs                 847.0   25.39   12  702 / 847 / 975
2c-100-768m_drbd.xfs_broken-sync   889.0   8.84    3   880 / 889.33 / 907
4c-100-768m_drbd.xfs_broken-sync   933.0   5.51    3   922 / 933 / 939
4c-100-768m_sd.xfs                 986.0   8.10    15  945 / 986.2 / 1061

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25 (IOPS, more is better)
Type: Random Write - Engine: Linux AIO - Buffered: Yes - Direct: No - Block Size: 128KB - Disk Target: Default Test Directory

Configuration                      IOPS   SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                930    32.32   13  568 / 929.62 / 1022
2c-50-768m_sd.xfs                  4543   53.43   15  4281 / 4543.07 / 4993
2c-75-768m_drbd.xfs                853    11.61   3   831 / 853.33 / 870
2c-75-768m_sd.xfs                  6046   53.11   3   5967 / 6046 / 6147
2c-100-768m_drbd.xfs               968    11.40   15  906 / 967.73 / 1045
2c-100-768m_sd.xfs                 6772   203.38  12  5614 / 6771.83 / 7797
2c-100-768m_drbd.xfs_broken-sync   7112   71.42   3   7035 / 7112.33 / 7255
4c-100-768m_drbd.xfs_broken-sync   7461   42.67   3   7376 / 7461.33 / 7505
4c-100-768m_sd.xfs                 7887   64.67   15  7559 / 7887.07 / 8484

1. (CC) gcc options: -rdynamic -ll -lrt -lz -lpthread -lm -ldl -laio -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
FS-Mark
FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.
FS-Mark 3.3 (Files/s, more is better)
Test: 1000 Files, 1MB Size

Configuration                      Files/s  SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                79.7     1.56    15  71.1 / 79.69 / 90.9
2c-50-768m_sd.xfs                  381.1    3.17    15  357.2 / 381.08 / 393.4
2c-75-768m_drbd.xfs                95.2     0.96    3   94 / 95.2 / 97.1
2c-75-768m_sd.xfs                  386.8    4.26    15  349 / 386.77 / 409.5
2c-100-768m_drbd.xfs               98.3     1.03    15  89.6 / 98.29 / 104.1
2c-100-768m_sd.xfs                 387.7    3.16    9   371.1 / 387.68 / 400.6
2c-100-768m_drbd.xfs_broken-sync   398.0    4.44    3   390.8 / 398.03 / 406.1
4c-100-768m_drbd.xfs_broken-sync   388.8    5.30    3   380.1 / 388.77 / 398.4
4c-100-768m_sd.xfs                 378.7    3.97    4   370.7 / 378.73 / 389.7

1. (CC) gcc options: -static
FS-Mark 3.3 (Files/s, more is better)
Test: 1000 Files, 1MB Size, No Sync/FSync

Configuration                      Files/s  SE +/-  N   Min / Avg / Max
2c-50-768m_drbd.xfs                144.6    1.35    15  135.2 / 144.57 / 151.4
2c-50-768m_sd.xfs                  1114.0   28.52   15  860.8 / 1113.95 / 1255.6
2c-75-768m_drbd.xfs                133.5    0.33    3   133.2 / 133.53 / 134.2
2c-75-768m_sd.xfs                  1467.6   25.36   15  1171.1 / 1467.65 / 1580.6
2c-100-768m_drbd.xfs               150.7    0.65    3   149.7 / 150.67 / 151.9
2c-100-768m_sd.xfs                 1568.1   19.73   14  1410.5 / 1568.06 / 1705.4
2c-100-768m_drbd.xfs_broken-sync   1382.8   34.63   15  1149.1 / 1382.77 / 1683.2
4c-100-768m_drbd.xfs_broken-sync   1483.2   55.31   15  1166.5 / 1483.19 / 1975.5
4c-100-768m_sd.xfs                 1487.3   34.21   15  1263 / 1487.29 / 1697

1. (CC) gcc options: -static
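The two FS-Mark variants make the cost of fsync visible; a small sketch comparing them, using the 2c-100-768m values from the FS-Mark results above:

```python
# FS-Mark Files/s at 2c-100-768m: with fsync per file vs. no sync/fsync.
results = {
    "drbd": {"sync": 98.3,  "nosync": 150.7},
    "sd":   {"sync": 387.7, "nosync": 1568.1},
}

for name, r in results.items():
    speedup = r["nosync"] / r["sync"]
    print(f"{name}: skipping sync/fsync is {speedup:.2f}x faster")
```

Skipping sync helps the single disk far more (about 4x) than the DRBD volume (about 1.5x), which suggests replication, rather than fsync latency alone, dominates DRBD's cost here.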
Sysbench
This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.
Sysbench 2018-07-28 (Events Per Second, more is better)
Test: Memory

Configuration                      Events/s     SE +/-     N   Min / Avg / Max                        Per Core (cores)
2c-50-768m_drbd.xfs                4684365.56   226749.04  12  3498203.75 / 4684365.56 / 5644063.41   2342182.78 (2)
2c-50-768m_sd.xfs                  4500949.66   246434.83  15  2872119.22 / 4500949.66 / 5818614.12   2250474.83 (2)
2c-75-768m_drbd.xfs                5571462.29   187210.72  15  4286104.14 / 5571462.29 / 6758271.18   2785731.14 (2)
2c-75-768m_sd.xfs                  5185705.68   165423.75  15  4262450.08 / 5185705.68 / 6504509.71   2592852.84 (2)
2c-100-768m_drbd.xfs               5416008.17   59805.54   15  5059242.32 / 5416008.17 / 5913165.08   2708004.09 (2)
2c-100-768m_sd.xfs                 5352567.53   62488.89   3   5265082.62 / 5352567.53 / 5473604.07   2676283.76 (2)
2c-100-768m_drbd.xfs_broken-sync   5334350.08   18712.23   3   5303386.25 / 5334350.08 / 5368035.66   2667175.04 (2)
4c-100-768m_drbd.xfs_broken-sync   7993172.47   9483.96    3   7978678.72 / 7993172.47 / 8011015.81   1998293.12 (4)
4c-100-768m_sd.xfs                 8149845.29   30257.04   3   8094394.12 / 8149845.29 / 8198555.2    2037461.32 (4)

1. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm
Sysbench 2018-07-28 (Events Per Second, more is better)
Test: CPU

Configuration                      Events/s  SE +/-  N   Min / Avg / Max              Per Core (cores)
2c-50-768m_drbd.xfs                1770.23   15.38   3   1751.48 / 1770.23 / 1800.73  885.12 (2)
2c-50-768m_sd.xfs                  1789.07   2.05    3   1784.97 / 1789.07 / 1791.23  894.53 (2)
2c-75-768m_drbd.xfs                2734.84   21.08   3   2692.91 / 2734.84 / 2759.65  1367.42 (2)
2c-75-768m_sd.xfs                  2762.10   13.35   3   2735.4 / 2762.1 / 2776.23    1381.05 (2)
2c-100-768m_drbd.xfs               4147.82   7.61    3   4134.01 / 4147.82 / 4160.26  2073.91 (2)
2c-100-768m_sd.xfs                 4202.28   4.56    3   4195.43 / 4202.28 / 4210.93  2101.14 (2)
2c-100-768m_drbd.xfs_broken-sync   4224.03   2.65    3   4218.79 / 4224.03 / 4227.33  2112.02 (2)
4c-100-768m_drbd.xfs_broken-sync   8379.47   10.07   3   8360.65 / 8379.47 / 8395.12  2094.87 (4)
4c-100-768m_sd.xfs                 8309.36   4.97    3   8299.44 / 8309.36 / 8314.87  2077.34 (4)

1. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm
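The "Performance Per Core" figures in the Sysbench results are simply the headline event rate divided by the detected core count; a quick sanity check using the CPU-test numbers above:

```python
# Sysbench CPU events/s and detected core counts from the results above.
runs = [
    ("2c-100-768m_drbd.xfs", 4147.82, 2),
    ("4c-100-768m_drbd.xfs_broken-sync", 8379.47, 4),
]

for name, events, cores in runs:
    print(f"{name}: {events / cores:.2f} events/s per core")
```

Doubling the cores roughly doubles total CPU throughput while per-core throughput stays near 2,100 events/s, as expected for a CPU-bound test.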