ocdcephbenchmarks

Running disk benchmarks against various CEPH versions and configurations

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&sro&gru.
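The same test selection can be re-run locally and compared against this uploaded result with the Phoronix Test Suite. A minimal sketch, using the result ID from the URL above (hardware and setup will of course differ):

    # fetch the published result and run the same tests for a side-by-side comparison
    phoronix-test-suite benchmark 1805308-FO-OCDCEPHBE08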

Common system configuration (all runs inside a KVM QEMU guest):
- Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
- Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS)
- Memory: 2 x 16384 MB RAM
- Disk: 28GB, 1024GB or 1788GB depending on the configuration
- Graphics: cirrusdrmfb
- OS: CentOS Linux 7
- Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64)
- Compiler: GCC 4.8.5 20150623
- File-System: xfs
- Screen Resolution: 1024x768
- System Layer: KVM QEMU

Compiler Details:
- --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

System Details (tested configurations):
- local filesystem: the root filesystem of the VM; QCOW on XFS on LVM on MD-RAID RAID 1 over two Micron 5100 MAX 240GB SSDs
- CEPH Jewel 3 OSDs replica 1: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
- Direct SSD io=native cache=none: Direct SSD, Micron 5100 MAX 1.9 TB
- CEPH Jewel 1 OSD w/ external Journal: CEPH, Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
- CEPH Jewel 1 OSD: CEPH, Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
- CEPH Jewel 3 OSDs replica 3: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
- CEPH luminous bluestore 3 OSDs replica 3: CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB
- CEPH luminous bluestore 3 OSDs replica 1: CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB

Disk Mount Options Details:
- attr2,inode64,noquota,relatime,rw,seclabel

Python Details:
- Python 2.7.5

Security Details:
- SELinux + KPTI + Load fences + Retpoline without IBPB Protection
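The replica 1 and replica 3 cases above differ only in the pool's replication size, and the Direct SSD case maps to a QEMU drive using native AIO with host page caching disabled. A minimal sketch of the corresponding settings, assuming a hypothetical pool name and device path (neither is taken from the result file):

    # Ceph side: replication factor is the only change between the replica 1
    # and replica 3 pools ("bench" is a made-up pool name)
    ceph osd pool create bench 128 128
    ceph osd pool set bench size 1        # use "size 3" for the replica 3 runs

    # QEMU side: raw virtio drive with host caching off and Linux-native AIO,
    # as in the "io=native cache=none" configuration (device path is a placeholder)
    qemu-system-x86_64 ... \
        -drive file=/dev/sdX,format=raw,if=virtio,cache=none,aio=native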

Results overview (full per-test numbers in the sections below). Tests run: FS-Mark (1000 Files, 1MB Size), AIO-Stress (Random Write), Dbench (1, 12, 48 and 128 clients), Threaded I/O Tester (64MB random read and random write, 32 threads), Compile Bench (Compile, Initial Create, Read Compiled Tree), Apache Benchmark (Static Web Page Serving), PostMark (Disk Transaction Performance), PostgreSQL pgbench (On-Disk, Normal Load, Read Write), SQLite (Timed SQLite Insertions), Unpacking The Linux Kernel (linux-4.15.tar.xz) and Gzip Compression (Linux Source Tree Archiving To .tar.gz).

FS-Mark

1000 Files, 1MB Size

FS-Mark 3.3 (Files/s, more is better):
- CEPH Jewel 1 OSD: 83.53 (SE +/- 0.80, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 95.50 (SE +/- 1.35, N = 6)
- CEPH Jewel 3 OSDs replica 1: 87.98 (SE +/- 1.36, N = 5)
- CEPH Jewel 3 OSDs replica 3: 61.93 (SE +/- 0.29, N = 3)
- CEPH luminous bluestore 1 OSD: 83.60 (SE +/- 0.46, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 82.60 (SE +/- 1.25, N = 4)
- CEPH luminous bluestore 3 OSDs replica 3: 66.07 (SE +/- 0.64, N = 3)
- Direct SSD io=native cache=none: 159.03 (SE +/- 1.29, N = 3)
- local filesystem: 152.13 (SE +/- 4.84, N = 6)
(CC) gcc options: -static

AIO-Stress

Random Write

AIO-Stress 0.21 (MB/s, more is better):
- CEPH Jewel 1 OSD: 1340.55 (SE +/- 13.54, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1822.87 (SE +/- 109.84, N = 6)
- CEPH Jewel 3 OSDs replica 1: 1721.82 (SE +/- 25.85, N = 3)
- CEPH Jewel 3 OSDs replica 3: 1818.64 (SE +/- 24.90, N = 3)
- CEPH luminous bluestore 1 OSD: 1773.67 (SE +/- 68.62, N = 6)
- CEPH luminous bluestore 3 OSDs replica 1: 1754.61 (SE +/- 96.24, N = 6)
- CEPH luminous bluestore 3 OSDs replica 3: 1690.54 (SE +/- 25.18, N = 6)
- Direct SSD io=native cache=none: 1802.66 (SE +/- 55.02, N = 6)
- local filesystem: 1478.38 (SE +/- 73.02, N = 6)
(CC) gcc options: -pthread -laio

Dbench

12 Clients

Dbench 4.0 (MB/s, more is better):
- CEPH Jewel 1 OSD: 691.87 (SE +/- 2.75, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 773.20 (SE +/- 2.72, N = 3)
- CEPH Jewel 3 OSDs replica 1: 683.49 (SE +/- 1.93, N = 3)
- CEPH Jewel 3 OSDs replica 3: 417.51 (SE +/- 0.66, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 480.21 (SE +/- 1.35, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 344.71 (SE +/- 3.55, N = 3)
- Direct SSD io=native cache=none: 800.77 (SE +/- 7.08, N = 3)
- local filesystem: 1285.75 (SE +/- 4.49, N = 3)
(CC) gcc options: -lpopt -O2

Dbench

48 Clients

Dbench 4.0 (MB/s, more is better):
- CEPH Jewel 1 OSD: 938.32 (SE +/- 8.61, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1055.65 (SE +/- 2.21, N = 3)
- CEPH Jewel 3 OSDs replica 1: 968.60 (SE +/- 7.36, N = 3)
- CEPH Jewel 3 OSDs replica 3: 712.22 (SE +/- 1.19, N = 3)
- CEPH luminous bluestore 1 OSD: 842.77 (SE +/- 12.96, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 768.75 (SE +/- 3.97, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 679.26 (SE +/- 2.02, N = 3)
- Direct SSD io=native cache=none: 1220.01 (SE +/- 2.82, N = 3)
- local filesystem: 812.43 (SE +/- 98.18, N = 6)
(CC) gcc options: -lpopt -O2

Dbench

128 Clients

Dbench 4.0 (MB/s, more is better):
- CEPH Jewel 1 OSD: 970.31 (SE +/- 2.85, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1055.64 (SE +/- 11.99, N = 3)
- CEPH Jewel 3 OSDs replica 1: 965.17 (SE +/- 11.71, N = 3)
- CEPH Jewel 3 OSDs replica 3: 779.86 (SE +/- 5.18, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 754.90 (SE +/- 6.81, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 771.70 (SE +/- 3.58, N = 3)
- Direct SSD io=native cache=none: 1336.95 (SE +/- 6.02, N = 3)
- local filesystem: 959.00 (SE +/- 10.43, N = 3)
(CC) gcc options: -lpopt -O2

Dbench

1 Client

Dbench 4.0 (MB/s, more is better):
- CEPH Jewel 1 OSD: 98.67 (SE +/- 1.69, N = 4)
- CEPH Jewel 1 OSD w/ external Journal: 101.27 (SE +/- 1.69, N = 3)
- CEPH Jewel 3 OSDs replica 1: 82.85 (SE +/- 0.76, N = 3)
- CEPH Jewel 3 OSDs replica 3: 56.05 (SE +/- 2.01, N = 6)
- CEPH luminous bluestore 1 OSD: 73.94 (SE +/- 0.56, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 67.09 (SE +/- 0.30, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 54.67 (SE +/- 0.39, N = 3)
- Direct SSD io=native cache=none: 197.93 (SE +/- 1.04, N = 3)
- local filesystem: 179.02 (SE +/- 2.91, N = 3)
(CC) gcc options: -lpopt -O2

Threaded I/O Tester

64MB Random Read - 32 Threads

Threaded I/O Tester 20170503 (MB/s, more is better):
- CEPH Jewel 1 OSD: 102558.87 (SE +/- 2213.07, N = 6)
- CEPH Jewel 1 OSD w/ external Journal: 100449.37 (SE +/- 2822.07, N = 6)
- CEPH Jewel 3 OSDs replica 1: 107041.83 (SE +/- 1303.43, N = 3)
- CEPH Jewel 3 OSDs replica 3: 84936.34 (SE +/- 9550.32, N = 6)
- CEPH luminous bluestore 1 OSD: 105283.54 (SE +/- 885.93, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 108942.81 (SE +/- 7596.75, N = 6)
- CEPH luminous bluestore 3 OSDs replica 3: 100973.58 (SE +/- 2403.25, N = 6)
- Direct SSD io=native cache=none: 115753.71 (SE +/- 1990.78, N = 6)
- local filesystem: 60691.53 (SE +/- 3323.01, N = 6)
(CC) gcc options: -O2

Threaded I/O Tester

64MB Random Write - 32 Threads

Threaded I/O Tester 20170503 (MB/s, more is better):
- CEPH Jewel 1 OSD: 299.60 (SE +/- 5.80, N = 6)
- CEPH Jewel 1 OSD w/ external Journal: 300.23 (SE +/- 1.00, N = 3)
- CEPH Jewel 3 OSDs replica 1: 337.00 (SE +/- 5.79, N = 3)
- CEPH Jewel 3 OSDs replica 3: 214.01 (SE +/- 2.37, N = 3)
- CEPH luminous bluestore 1 OSD: 229.61 (SE +/- 3.19, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 255.32 (SE +/- 3.91, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 151.00 (SE +/- 9.35, N = 6)
- Direct SSD io=native cache=none: 555.54 (SE +/- 10.53, N = 3)
- local filesystem: 958.96 (SE +/- 27.51, N = 6)
(CC) gcc options: -O2

Compile Bench

Test: Compile

Compile Bench 0.6 (MB/s, more is better):
- CEPH Jewel 1 OSD: 1148.88 (SE +/- 15.77, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1028.88 (SE +/- 22.78, N = 6)
- CEPH Jewel 3 OSDs replica 3: 1112.43 (SE +/- 4.80, N = 3)
- CEPH luminous bluestore 1 OSD: 1168.81 (SE +/- 14.27, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 916.83 (SE +/- 28.08, N = 6)
- CEPH luminous bluestore 3 OSDs replica 3: 1025.83 (SE +/- 19.45, N = 6)

Compile Bench

Test: Initial Create

Compile Bench 0.6 (MB/s, more is better):
- CEPH Jewel 1 OSD: 144.53 (SE +/- 2.37, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 135.49 (SE +/- 4.01, N = 3)
- CEPH Jewel 3 OSDs replica 3: 136.01 (SE +/- 1.49, N = 3)
- CEPH luminous bluestore 1 OSD: 145.29 (SE +/- 2.37, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 139.52 (SE +/- 1.96, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 134.45 (SE +/- 1.69, N = 3)

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6 (MB/s, more is better):
- CEPH Jewel 1 OSD: 260.08 (SE +/- 0.73, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 260.32 (SE +/- 2.94, N = 3)
- CEPH Jewel 3 OSDs replica 3: 250.96 (SE +/- 5.76, N = 3)
- CEPH luminous bluestore 1 OSD: 259.65 (SE +/- 7.99, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 239.00 (SE +/- 1.50, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 236.52 (SE +/- 5.63, N = 3)

Apache Benchmark

Static Web Page Serving

Apache Benchmark 2.4.29 (Requests Per Second, more is better):
- CEPH Jewel 1 OSD: 8550.11 (SE +/- 49.14, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 7162.87 (SE +/- 197.05, N = 6)
- CEPH Jewel 3 OSDs replica 1: 7336.67 (SE +/- 99.52, N = 6)
- CEPH Jewel 3 OSDs replica 3: 7961.19 (SE +/- 128.93, N = 6)
- CEPH luminous bluestore 3 OSDs replica 1: 7729.99 (SE +/- 88.00, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 6755.53 (SE +/- 80.76, N = 3)
- Direct SSD io=native cache=none: 7272.34 (SE +/- 125.92, N = 4)
- local filesystem: 7307.72 (SE +/- 37.64, N = 3)
(CC) gcc options: -shared -fPIC -O2 -pthread

PostMark

Disk Transaction Performance

PostMark 1.51 (TPS, more is better):
- CEPH Jewel 1 OSD: 2443
- CEPH Jewel 1 OSD w/ external Journal: 2206
- CEPH Jewel 3 OSDs replica 1: 2149
- CEPH Jewel 3 OSDs replica 3: 2273
- CEPH luminous bluestore 3 OSDs replica 1: 2434
- CEPH luminous bluestore 3 OSDs replica 3: 2066
- Direct SSD io=native cache=none: 2299
- local filesystem: 2409
Standard errors reported for seven of the eight configurations (in chart order): +/- 21.11 (N = 3), 34.53 (N = 3), 16.19 (N = 3), 31.10 (N = 3), 15.67 (N = 3), 35.31 (N = 5), 53.62 (N = 6)
(CC) gcc options: -O3

PostgreSQL pgbench

Scaling: On-Disk - Test: Normal Load - Mode: Read Write

PostgreSQL pgbench 10.3 (TPS, more is better):
- CEPH Jewel 1 OSD w/ external Journal: 1824.70 (SE +/- 63.93, N = 3)
- Direct SSD io=native cache=none: 3642.91 (SE +/- 14.08, N = 3)
(CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

SQLite

Timed SQLite Insertions

SQLite 3.22 (Seconds, fewer is better):
- CEPH Jewel 1 OSD: 46.21 (SE +/- 0.10, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 45.10 (SE +/- 0.34, N = 3)
- CEPH Jewel 3 OSDs replica 1: 52.75 (SE +/- 0.77, N = 4)
- CEPH Jewel 3 OSDs replica 3: 98.30 (SE +/- 0.38, N = 3)
- CEPH luminous bluestore 1 OSD: 65.14 (SE +/- 0.78, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 69.95 (SE +/- 1.07, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 109.48 (SE +/- 0.93, N = 3)
- Direct SSD io=native cache=none: 17.29 (SE +/- 0.28, N = 6)
- local filesystem: 20.61 (SE +/- 0.06, N = 3)
(CC) gcc options: -O2 -ldl -lpthread

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel (Seconds, fewer is better):
- CEPH Jewel 1 OSD: 14.53 (SE +/- 0.14, N = 4)
- CEPH Jewel 1 OSD w/ external Journal: 14.77 (SE +/- 0.42, N = 8)
- CEPH Jewel 3 OSDs replica 1: 15.41 (SE +/- 0.19, N = 8)
- CEPH Jewel 3 OSDs replica 3: 15.68 (SE +/- 0.24, N = 5)
- CEPH luminous bluestore 3 OSDs replica 1: 15.33 (SE +/- 0.33, N = 8)
- CEPH luminous bluestore 3 OSDs replica 3: 16.30 (SE +/- 0.27, N = 4)
- Direct SSD io=native cache=none: 14.71 (SE +/- 0.19, N = 7)
- local filesystem: 14.45 (SE +/- 0.07, N = 4)

Gzip Compression

Linux Source Tree Archiving To .tar.gz

Gzip Compression (Seconds, fewer is better):
- CEPH Jewel 1 OSD: 71.74 (SE +/- 2.16, N = 6)
- CEPH Jewel 1 OSD w/ external Journal: 67.57 (SE +/- 1.35, N = 3)
- CEPH Jewel 3 OSDs replica 1: 73.37 (SE +/- 1.77, N = 6)
- CEPH Jewel 3 OSDs replica 3: 70.37 (SE +/- 2.62, N = 6)
- CEPH luminous bluestore 3 OSDs replica 1: 66.58 (SE +/- 0.70, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 74.62 (SE +/- 1.47, N = 3)
- Direct SSD io=native cache=none: 69.39 (SE +/- 2.46, N = 6)
- local filesystem: 71.33 (SE +/- 2.64, N = 6)


Phoronix Test Suite v10.8.4