ocdcephbenchmarks

Running disk benchmarks against various CEPH versions and configurations.

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&grr&sor

Test system (identical KVM/QEMU virtual machine for all configurations):

  Processor:          8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard:        Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory:             2 x 16384 MB RAM
  Disk:               28GB, 1024GB or 1788GB (varies by configuration)
  Graphics:           cirrusdrmfb
  OS:                 CentOS Linux 7
  Kernel:             3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler:           GCC 4.8.5 20150623
  File-System:        xfs
  Screen Resolution:  1024x768
  System Layer:       KVM QEMU

Compiler details: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

Tested configurations:

  - local filesystem: the root filesystem of the VM; QCOW on XFS on LVM on MD-RAID RAID 1 over two Micron 5100 MAX 240GB SSDs
  - Direct SSD io=native cache=none: direct SSD, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 1 OSD: CEPH Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 1 OSD w/ external Journal: CEPH Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 3 OSDs replica 1: CEPH Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 3 OSDs replica 3: CEPH Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
  - CEPH luminous bluestore 3 OSDs replica 1: CEPH Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH luminous bluestore 3 OSDs replica 3: CEPH Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB

A ninth configuration, CEPH luminous bluestore 1 OSD, appears in the results but has no description in the export.

Disk mount options: attr2,inode64,noquota,relatime,rw,seclabel
Python: 2.7.5
Security: SELinux + KPTI + Load fences + Retpoline without IBPB Protection

Benchmarks run (detailed per-configuration results follow below): Compile Bench (Read Compiled Tree, Initial Create, Compile), PostgreSQL pgbench (On-Disk, Normal Load, Read Write), Apache Benchmark (Static Web Page Serving), Gzip Compression (Linux Source Tree Archiving To .tar.gz), PostMark (Disk Transaction Performance), Unpacking The Linux Kernel (linux-4.15.tar.xz), Threaded I/O Tester (64MB Random Write and Random Read, 32 Threads), Dbench (1, 12, 48 and 128 Clients), FS-Mark (1000 Files, 1MB Size), SQLite (Timed Insertions) and AIO-Stress (Random Write).

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6 (MB/s, More Is Better):

  CEPH Jewel 1 OSD w/ external Journal        260.32  (SE +/- 2.94, N = 3)
  CEPH Jewel 1 OSD                            260.08  (SE +/- 0.73, N = 3)
  CEPH luminous bluestore 1 OSD               259.65  (SE +/- 7.99, N = 3)
  CEPH Jewel 3 OSDs replica 3                 250.96  (SE +/- 5.76, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1    239.00  (SE +/- 1.50, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3    236.52  (SE +/- 5.63, N = 3)

Compile Bench

Test: Initial Create

Compile Bench 0.6 (MB/s, More Is Better):

  CEPH luminous bluestore 1 OSD               145.29  (SE +/- 2.37, N = 3)
  CEPH Jewel 1 OSD                            144.53  (SE +/- 2.37, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1    139.52  (SE +/- 1.96, N = 3)
  CEPH Jewel 3 OSDs replica 3                 136.01  (SE +/- 1.49, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        135.49  (SE +/- 4.01, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3    134.45  (SE +/- 1.69, N = 3)

Compile Bench

Test: Compile

Compile Bench 0.6 (MB/s, More Is Better):

  CEPH luminous bluestore 1 OSD               1168.81  (SE +/- 14.27, N = 3)
  CEPH Jewel 1 OSD                            1148.88  (SE +/- 15.77, N = 3)
  CEPH Jewel 3 OSDs replica 3                 1112.43  (SE +/- 4.80, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        1028.88  (SE +/- 22.78, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3    1025.83  (SE +/- 19.45, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1     916.83  (SE +/- 28.08, N = 6)

PostgreSQL pgbench

Scaling: On-Disk - Test: Normal Load - Mode: Read Write

PostgreSQL pgbench 10.3 (TPS, More Is Better):

  Direct SSD io=native cache=none             3642.91  (SE +/- 14.08, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        1824.70  (SE +/- 63.93, N = 3)

  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Apache Benchmark

Static Web Page Serving

Apache Benchmark 2.4.29 (Requests Per Second, More Is Better):

  CEPH Jewel 1 OSD                            8550.11  (SE +/- 49.14, N = 3)
  CEPH Jewel 3 OSDs replica 3                 7961.19  (SE +/- 128.93, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1    7729.99  (SE +/- 88.00, N = 3)
  CEPH Jewel 3 OSDs replica 1                 7336.67  (SE +/- 99.52, N = 6)
  local filesystem                            7307.72  (SE +/- 37.64, N = 3)
  Direct SSD io=native cache=none             7272.34  (SE +/- 125.92, N = 4)
  CEPH Jewel 1 OSD w/ external Journal        7162.87  (SE +/- 197.05, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3    6755.53  (SE +/- 80.76, N = 3)

  1. (CC) gcc options: -shared -fPIC -O2 -pthread

Gzip Compression

Linux Source Tree Archiving To .tar.gz

Gzip Compression (Seconds, Fewer Is Better):

  CEPH luminous bluestore 3 OSDs replica 1    66.58  (SE +/- 0.70, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        67.57  (SE +/- 1.35, N = 3)
  Direct SSD io=native cache=none             69.39  (SE +/- 2.46, N = 6)
  CEPH Jewel 3 OSDs replica 3                 70.37  (SE +/- 2.62, N = 6)
  local filesystem                            71.33  (SE +/- 2.64, N = 6)
  CEPH Jewel 1 OSD                            71.74  (SE +/- 2.16, N = 6)
  CEPH Jewel 3 OSDs replica 1                 73.37  (SE +/- 1.77, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3    74.62  (SE +/- 1.47, N = 3)

PostMark

Disk Transaction Performance

PostMark 1.51 (TPS, More Is Better):

  CEPH Jewel 1 OSD                            2443
  CEPH luminous bluestore 3 OSDs replica 1    2434
  local filesystem                            2409
  Direct SSD io=native cache=none             2299
  CEPH Jewel 3 OSDs replica 3                 2273
  CEPH Jewel 1 OSD w/ external Journal        2206
  CEPH Jewel 3 OSDs replica 1                 2149
  CEPH luminous bluestore 3 OSDs replica 3    2066

  Reported standard errors (the export lists seven values for the eight configurations, without an unambiguous mapping): SE +/- 21.11 (N = 3), 15.67 (N = 3), 53.62 (N = 6), 35.31 (N = 5), 31.10 (N = 3), 34.53 (N = 3), 16.19 (N = 3)

  1. (CC) gcc options: -O3

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel (Seconds, Fewer Is Better):

  local filesystem                            14.45  (SE +/- 0.07, N = 4)
  CEPH Jewel 1 OSD                            14.53  (SE +/- 0.14, N = 4)
  Direct SSD io=native cache=none             14.71  (SE +/- 0.19, N = 7)
  CEPH Jewel 1 OSD w/ external Journal        14.77  (SE +/- 0.42, N = 8)
  CEPH luminous bluestore 3 OSDs replica 1    15.33  (SE +/- 0.33, N = 8)
  CEPH Jewel 3 OSDs replica 1                 15.41  (SE +/- 0.19, N = 8)
  CEPH Jewel 3 OSDs replica 3                 15.68  (SE +/- 0.24, N = 5)
  CEPH luminous bluestore 3 OSDs replica 3    16.30  (SE +/- 0.27, N = 4)

Threaded I/O Tester

64MB Random Write - 32 Threads

Threaded I/O Tester 20170503 (MB/s, More Is Better):

  local filesystem                            958.96  (SE +/- 27.51, N = 6)
  Direct SSD io=native cache=none             555.54  (SE +/- 10.53, N = 3)
  CEPH Jewel 3 OSDs replica 1                 337.00  (SE +/- 5.79, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        300.23  (SE +/- 1.00, N = 3)
  CEPH Jewel 1 OSD                            299.60  (SE +/- 5.80, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1    255.32  (SE +/- 3.91, N = 3)
  CEPH luminous bluestore 1 OSD               229.61  (SE +/- 3.19, N = 3)
  CEPH Jewel 3 OSDs replica 3                 214.01  (SE +/- 2.37, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3    151.00  (SE +/- 9.35, N = 6)

  1. (CC) gcc options: -O2

Threaded I/O Tester

64MB Random Read - 32 Threads

Threaded I/O Tester 20170503 (MB/s, More Is Better):

  Direct SSD io=native cache=none             115753.71  (SE +/- 1990.78, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1    108942.81  (SE +/- 7596.75, N = 6)
  CEPH Jewel 3 OSDs replica 1                 107041.83  (SE +/- 1303.43, N = 3)
  CEPH luminous bluestore 1 OSD               105283.54  (SE +/- 885.93, N = 3)
  CEPH Jewel 1 OSD                            102558.87  (SE +/- 2213.07, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3    100973.58  (SE +/- 2403.25, N = 6)
  CEPH Jewel 1 OSD w/ external Journal       100449.37  (SE +/- 2822.07, N = 6)
  CEPH Jewel 3 OSDs replica 3                  84936.34  (SE +/- 9550.32, N = 6)
  local filesystem                             60691.53  (SE +/- 3323.01, N = 6)

  1. (CC) gcc options: -O2

Dbench

1 Clients

Dbench 4.0 (MB/s, More Is Better):

  Direct SSD io=native cache=none             197.93  (SE +/- 1.04, N = 3)
  local filesystem                            179.02  (SE +/- 2.91, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        101.27  (SE +/- 1.69, N = 3)
  CEPH Jewel 1 OSD                             98.67  (SE +/- 1.69, N = 4)
  CEPH Jewel 3 OSDs replica 1                  82.85  (SE +/- 0.76, N = 3)
  CEPH luminous bluestore 1 OSD                73.94  (SE +/- 0.56, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1     67.09  (SE +/- 0.30, N = 3)
  CEPH Jewel 3 OSDs replica 3                  56.05  (SE +/- 2.01, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3     54.67  (SE +/- 0.39, N = 3)

  1. (CC) gcc options: -lpopt -O2

Dbench

128 Clients

Dbench 4.0 (MB/s, More Is Better):

  Direct SSD io=native cache=none             1336.95  (SE +/- 6.02, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        1055.64  (SE +/- 11.99, N = 3)
  CEPH Jewel 1 OSD                             970.31  (SE +/- 2.85, N = 3)
  CEPH Jewel 3 OSDs replica 1                  965.17  (SE +/- 11.71, N = 3)
  local filesystem                             959.00  (SE +/- 10.43, N = 3)
  CEPH Jewel 3 OSDs replica 3                  779.86  (SE +/- 5.18, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3     771.70  (SE +/- 3.58, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1     754.90  (SE +/- 6.81, N = 3)

  1. (CC) gcc options: -lpopt -O2

Dbench

48 Clients

Dbench 4.0 (MB/s, More Is Better):

  Direct SSD io=native cache=none             1220.01  (SE +/- 2.82, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        1055.65  (SE +/- 2.21, N = 3)
  CEPH Jewel 3 OSDs replica 1                  968.60  (SE +/- 7.36, N = 3)
  CEPH Jewel 1 OSD                             938.32  (SE +/- 8.61, N = 3)
  CEPH luminous bluestore 1 OSD                842.77  (SE +/- 12.96, N = 3)
  local filesystem                             812.43  (SE +/- 98.18, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1     768.75  (SE +/- 3.97, N = 3)
  CEPH Jewel 3 OSDs replica 3                  712.22  (SE +/- 1.19, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3     679.26  (SE +/- 2.02, N = 3)

  1. (CC) gcc options: -lpopt -O2

Dbench

12 Clients

Dbench 4.0 (MB/s, More Is Better):

  local filesystem                            1285.75  (SE +/- 4.49, N = 3)
  Direct SSD io=native cache=none              800.77  (SE +/- 7.08, N = 3)
  CEPH Jewel 1 OSD w/ external Journal         773.20  (SE +/- 2.72, N = 3)
  CEPH Jewel 1 OSD                             691.87  (SE +/- 2.75, N = 3)
  CEPH Jewel 3 OSDs replica 1                  683.49  (SE +/- 1.93, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1     480.21  (SE +/- 1.35, N = 3)
  CEPH Jewel 3 OSDs replica 3                  417.51  (SE +/- 0.66, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3     344.71  (SE +/- 3.55, N = 3)

  1. (CC) gcc options: -lpopt -O2

FS-Mark

1000 Files, 1MB Size

FS-Mark 3.3 (Files/s, More Is Better):

  Direct SSD io=native cache=none             159.03  (SE +/- 1.29, N = 3)
  local filesystem                            152.13  (SE +/- 4.84, N = 6)
  CEPH Jewel 1 OSD w/ external Journal         95.50  (SE +/- 1.35, N = 6)
  CEPH Jewel 3 OSDs replica 1                  87.98  (SE +/- 1.36, N = 5)
  CEPH luminous bluestore 1 OSD                83.60  (SE +/- 0.46, N = 3)
  CEPH Jewel 1 OSD                             83.53  (SE +/- 0.80, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1     82.60  (SE +/- 1.25, N = 4)
  CEPH luminous bluestore 3 OSDs replica 3     66.07  (SE +/- 0.64, N = 3)
  CEPH Jewel 3 OSDs replica 3                  61.93  (SE +/- 0.29, N = 3)

  1. (CC) gcc options: -static

SQLite

Timed SQLite Insertions

SQLite 3.22 (Seconds, Fewer Is Better):

  Direct SSD io=native cache=none              17.29  (SE +/- 0.28, N = 6)
  local filesystem                             20.61  (SE +/- 0.06, N = 3)
  CEPH Jewel 1 OSD w/ external Journal         45.10  (SE +/- 0.34, N = 3)
  CEPH Jewel 1 OSD                             46.21  (SE +/- 0.10, N = 3)
  CEPH Jewel 3 OSDs replica 1                  52.75  (SE +/- 0.77, N = 4)
  CEPH luminous bluestore 1 OSD                65.14  (SE +/- 0.78, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1     69.95  (SE +/- 1.07, N = 3)
  CEPH Jewel 3 OSDs replica 3                  98.30  (SE +/- 0.38, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3    109.48  (SE +/- 0.93, N = 3)

  1. (CC) gcc options: -O2 -ldl -lpthread
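A quick way to compare configurations is to normalize each timing against the fastest run. The sketch below uses the mean SQLite insertion times reported above (ignoring standard error); the dictionary keys are the configuration names from this report:

```python
# Mean SQLite insertion times in seconds (lower is better),
# copied from the table above.
results = {
    "Direct SSD io=native cache=none": 17.29,
    "local filesystem": 20.61,
    "CEPH Jewel 1 OSD w/ external Journal": 45.10,
    "CEPH Jewel 1 OSD": 46.21,
    "CEPH Jewel 3 OSDs replica 1": 52.75,
    "CEPH luminous bluestore 1 OSD": 65.14,
    "CEPH luminous bluestore 3 OSDs replica 1": 69.95,
    "CEPH Jewel 3 OSDs replica 3": 98.30,
    "CEPH luminous bluestore 3 OSDs replica 3": 109.48,
}

# Normalize against the fastest configuration and print slowdown factors.
baseline = min(results.values())
for name, seconds in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: {seconds / baseline:.2f}x baseline")
```

By this measure the slowest CEPH configuration in this test (Luminous Bluestore, 3 OSDs, replica 3) is roughly 6.3x slower than the direct SSD baseline.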

AIO-Stress

Random Write

AIO-Stress 0.21 (MB/s, More Is Better):

  CEPH Jewel 1 OSD w/ external Journal        1822.87  (SE +/- 109.84, N = 6)
  CEPH Jewel 3 OSDs replica 3                 1818.64  (SE +/- 24.90, N = 3)
  Direct SSD io=native cache=none             1802.66  (SE +/- 55.02, N = 6)
  CEPH luminous bluestore 1 OSD               1773.67  (SE +/- 68.62, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1    1754.61  (SE +/- 96.24, N = 6)
  CEPH Jewel 3 OSDs replica 1                 1721.82  (SE +/- 25.85, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3    1690.54  (SE +/- 25.18, N = 6)
  local filesystem                            1478.38  (SE +/- 73.02, N = 6)
  CEPH Jewel 1 OSD                            1340.55  (SE +/- 13.54, N = 3)

  1. (CC) gcc options: -pthread -laio


Phoronix Test Suite v10.8.4