ocdcephbenchmarks

Disk benchmarks run against various CEPH versions and configurations.

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08.
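A result ID like this can be re-run and compared locally with the Phoronix Test Suite. A minimal sketch, assuming phoronix-test-suite is installed and openbenchmarking.org is reachable:

  # Run the same test selection and compare the local numbers against this result
  phoronix-test-suite benchmark 1805308-FO-OCDCEPHBE08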

System Configuration

Common to all configurations:

  Processor:          8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard:        Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory:             2 x 16384 MB RAM
  Disk:               28GB, 1024GB, or 1788GB depending on the configuration (see System Details below)
  Graphics:           cirrusdrmfb
  OS:                 CentOS Linux 7
  Kernel:             3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler:           GCC 4.8.5 20150623
  File-System:        xfs
  Screen Resolution:  1024x768
  System Layer:       KVM QEMU

Configurations tested:

  - local filesystem
  - CEPH Jewel 3 OSDs replica 1
  - Direct SSD io=native cache=none
  - CEPH Jewel 1 OSD w/ external Journal
  - CEPH Jewel 1 OSD
  - CEPH Jewel 3 OSDs replica 3
  - CEPH luminous bluestore 3 OSDs replica 3
  - CEPH luminous bluestore 3 OSDs replica 1
  - CEPH luminous bluestore 1 OSD

Compiler Details
  --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

System Details
  - local filesystem: The root filesystem of the VM. QCOW on XFS on LVM on MD-RAID RAID 1 over two SSDs Micron 5100 MAX 240GB
  - CEPH Jewel 3 OSDs replica 1: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  - Direct SSD io=native cache=none: Direct SSD, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 1 OSD w/ external Journal: CEPH, Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 1 OSD: CEPH, Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 3 OSDs replica 3: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
  - CEPH luminous bluestore 3 OSDs replica 3: CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB
  - CEPH luminous bluestore 3 OSDs replica 1: CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB

Disk Mount Options Details
  attr2,inode64,noquota,relatime,rw,seclabel

Python Details
  Python 2.7.5

Security Details
  SELinux + KPTI + Load fences + Retpoline without IBPB Protection
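The "Direct SSD io=native cache=none" label maps directly to QEMU block-device options, and the CEPH configurations attach an RBD image to the guest. The exact VM definitions are not part of this export, so the following -drive options are only a sketch of what such attachments typically look like; the device path, pool/image name, and Ceph credentials are placeholders, not values from this run:

  # Direct SSD: raw block device passed to the guest, native AIO, no host page cache
  # (/dev/sdb is a placeholder device path)
  qemu-kvm ... -drive file=/dev/sdb,if=virtio,format=raw,cache=none,aio=native

  # CEPH RBD image attached via librbd (assumes a reachable cluster and a valid
  # /etc/ceph/ceph.conf plus keyring on the hypervisor; bench/vm-disk is a placeholder)
  qemu-kvm ... -drive file=rbd:bench/vm-disk,if=virtio,format=raw,cache=none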

Benchmarks run (per-configuration results for each test are listed in the sections below):

  - AIO-Stress: Random Write
  - SQLite: Timed SQLite Insertions
  - FS-Mark: 1000 Files, 1MB Size
  - Dbench: 12 Clients
  - Dbench: 48 Clients
  - Dbench: 128 Clients
  - Dbench: 1 Clients
  - Threaded I/O Tester: 64MB Random Read - 32 Threads
  - Threaded I/O Tester: 64MB Random Write - 32 Threads
  - Unpacking The Linux Kernel: linux-4.15.tar.xz
  - PostMark: Disk Transaction Performance
  - Gzip Compression: Linux Source Tree Archiving To .tar.gz
  - Apache Benchmark: Static Web Page Serving
  - PostgreSQL pgbench: On-Disk - Normal Load - Read Write
  - Compile Bench: Compile
  - Compile Bench: Initial Create
  - Compile Bench: Read Compiled Tree
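The configurations differ mainly in Ceph release (Jewel Filestore vs. Luminous Bluestore), OSD count, and replica count. Replica count is a per-pool setting; the commands below are a hedged sketch of how a replica-1 vs. replica-3 RBD pool is typically prepared. The pool and image names are placeholders and the PG count is illustrative, not taken from this run:

  # Create a pool for the benchmark RBD image (PG count is illustrative)
  ceph osd pool create bench 128 128
  ceph osd pool application enable bench rbd   # Luminous and later

  # Replica 1 (min_size lowered accordingly) ...
  ceph osd pool set bench size 1
  ceph osd pool set bench min_size 1
  # ... or replica 3
  ceph osd pool set bench size 3

  # Create the RBD image the VM attaches to (1024GB, matching the Disk entry above)
  rbd create bench/vm-disk --size 1024G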

AIO-Stress

Random Write

MB/s, More Is Better - AIO-Stress 0.21 - Random Write

  - local filesystem: 1478.38 (SE +/- 73.02, N = 6)
  - CEPH Jewel 3 OSDs replica 1: 1721.82 (SE +/- 25.85, N = 3)
  - Direct SSD io=native cache=none: 1802.66 (SE +/- 55.02, N = 6)
  - CEPH Jewel 1 OSD w/ external Journal: 1822.87 (SE +/- 109.84, N = 6)
  - CEPH Jewel 1 OSD: 1340.55 (SE +/- 13.54, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 1818.64 (SE +/- 24.90, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 1690.54 (SE +/- 25.18, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 1: 1754.61 (SE +/- 96.24, N = 6)
  - CEPH luminous bluestore 1 OSD: 1773.67 (SE +/- 68.62, N = 6)

1. (CC) gcc options: -pthread -laio

SQLite

Timed SQLite Insertions

Seconds, Fewer Is Better - SQLite 3.22 - Timed SQLite Insertions

  - local filesystem: 20.61 (SE +/- 0.06, N = 3)
  - CEPH Jewel 3 OSDs replica 1: 52.75 (SE +/- 0.77, N = 4)
  - Direct SSD io=native cache=none: 17.29 (SE +/- 0.28, N = 6)
  - CEPH Jewel 1 OSD w/ external Journal: 45.10 (SE +/- 0.34, N = 3)
  - CEPH Jewel 1 OSD: 46.21 (SE +/- 0.10, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 98.30 (SE +/- 0.38, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 109.48 (SE +/- 0.93, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 69.95 (SE +/- 1.07, N = 3)
  - CEPH luminous bluestore 1 OSD: 65.14 (SE +/- 0.78, N = 3)

1. (CC) gcc options: -O2 -ldl -lpthread

FS-Mark

1000 Files, 1MB Size

Files/s, More Is Better - FS-Mark 3.3 - 1000 Files, 1MB Size

  - local filesystem: 152.13 (SE +/- 4.84, N = 6)
  - CEPH Jewel 3 OSDs replica 1: 87.98 (SE +/- 1.36, N = 5)
  - Direct SSD io=native cache=none: 159.03 (SE +/- 1.29, N = 3)
  - CEPH Jewel 1 OSD w/ external Journal: 95.50 (SE +/- 1.35, N = 6)
  - CEPH Jewel 1 OSD: 83.53 (SE +/- 0.80, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 61.93 (SE +/- 0.29, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 66.07 (SE +/- 0.64, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 82.60 (SE +/- 1.25, N = 4)
  - CEPH luminous bluestore 1 OSD: 83.60 (SE +/- 0.46, N = 3)

1. (CC) gcc options: -static

Dbench

12 Clients

MB/s, More Is Better - Dbench 4.0 - 12 Clients

  - local filesystem: 1285.75 (SE +/- 4.49, N = 3)
  - CEPH Jewel 3 OSDs replica 1: 683.49 (SE +/- 1.93, N = 3)
  - Direct SSD io=native cache=none: 800.77 (SE +/- 7.08, N = 3)
  - CEPH Jewel 1 OSD w/ external Journal: 773.20 (SE +/- 2.72, N = 3)
  - CEPH Jewel 1 OSD: 691.87 (SE +/- 2.75, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 417.51 (SE +/- 0.66, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 344.71 (SE +/- 3.55, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 480.21 (SE +/- 1.35, N = 3)

1. (CC) gcc options: -lpopt -O2

Dbench

48 Clients

MB/s, More Is Better - Dbench 4.0 - 48 Clients

  - local filesystem: 812.43 (SE +/- 98.18, N = 6)
  - CEPH Jewel 3 OSDs replica 1: 968.60 (SE +/- 7.36, N = 3)
  - Direct SSD io=native cache=none: 1220.01 (SE +/- 2.82, N = 3)
  - CEPH Jewel 1 OSD w/ external Journal: 1055.65 (SE +/- 2.21, N = 3)
  - CEPH Jewel 1 OSD: 938.32 (SE +/- 8.61, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 712.22 (SE +/- 1.19, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 679.26 (SE +/- 2.02, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 768.75 (SE +/- 3.97, N = 3)
  - CEPH luminous bluestore 1 OSD: 842.77 (SE +/- 12.96, N = 3)

1. (CC) gcc options: -lpopt -O2

Dbench

128 Clients

MB/s, More Is Better - Dbench 4.0 - 128 Clients

  - local filesystem: 959.00 (SE +/- 10.43, N = 3)
  - CEPH Jewel 3 OSDs replica 1: 965.17 (SE +/- 11.71, N = 3)
  - Direct SSD io=native cache=none: 1336.95 (SE +/- 6.02, N = 3)
  - CEPH Jewel 1 OSD w/ external Journal: 1055.64 (SE +/- 11.99, N = 3)
  - CEPH Jewel 1 OSD: 970.31 (SE +/- 2.85, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 779.86 (SE +/- 5.18, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 771.70 (SE +/- 3.58, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 754.90 (SE +/- 6.81, N = 3)

1. (CC) gcc options: -lpopt -O2

Dbench

1 Clients

MB/s, More Is Better - Dbench 4.0 - 1 Clients

  - local filesystem: 179.02 (SE +/- 2.91, N = 3)
  - CEPH Jewel 3 OSDs replica 1: 82.85 (SE +/- 0.76, N = 3)
  - Direct SSD io=native cache=none: 197.93 (SE +/- 1.04, N = 3)
  - CEPH Jewel 1 OSD w/ external Journal: 101.27 (SE +/- 1.69, N = 3)
  - CEPH Jewel 1 OSD: 98.67 (SE +/- 1.69, N = 4)
  - CEPH Jewel 3 OSDs replica 3: 56.05 (SE +/- 2.01, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 3: 54.67 (SE +/- 0.39, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 67.09 (SE +/- 0.30, N = 3)
  - CEPH luminous bluestore 1 OSD: 73.94 (SE +/- 0.56, N = 3)

1. (CC) gcc options: -lpopt -O2

Threaded I/O Tester

64MB Random Read - 32 Threads

MB/s, More Is Better - Threaded I/O Tester 20170503 - 64MB Random Read - 32 Threads

  - local filesystem: 60691.53 (SE +/- 3323.01, N = 6)
  - CEPH Jewel 3 OSDs replica 1: 107041.83 (SE +/- 1303.43, N = 3)
  - Direct SSD io=native cache=none: 115753.71 (SE +/- 1990.78, N = 6)
  - CEPH Jewel 1 OSD w/ external Journal: 100449.37 (SE +/- 2822.07, N = 6)
  - CEPH Jewel 1 OSD: 102558.87 (SE +/- 2213.07, N = 6)
  - CEPH Jewel 3 OSDs replica 3: 84936.34 (SE +/- 9550.32, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 3: 100973.58 (SE +/- 2403.25, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 1: 108942.81 (SE +/- 7596.75, N = 6)
  - CEPH luminous bluestore 1 OSD: 105283.54 (SE +/- 885.93, N = 3)

1. (CC) gcc options: -O2

Threaded I/O Tester

64MB Random Write - 32 Threads

MB/s, More Is Better - Threaded I/O Tester 20170503 - 64MB Random Write - 32 Threads

  - local filesystem: 958.96 (SE +/- 27.51, N = 6)
  - CEPH Jewel 3 OSDs replica 1: 337.00 (SE +/- 5.79, N = 3)
  - Direct SSD io=native cache=none: 555.54 (SE +/- 10.53, N = 3)
  - CEPH Jewel 1 OSD w/ external Journal: 300.23 (SE +/- 1.00, N = 3)
  - CEPH Jewel 1 OSD: 299.60 (SE +/- 5.80, N = 6)
  - CEPH Jewel 3 OSDs replica 3: 214.01 (SE +/- 2.37, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 151.00 (SE +/- 9.35, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 1: 255.32 (SE +/- 3.91, N = 3)
  - CEPH luminous bluestore 1 OSD: 229.61 (SE +/- 3.19, N = 3)

1. (CC) gcc options: -O2

Unpacking The Linux Kernel

linux-4.15.tar.xz

Seconds, Fewer Is Better - Unpacking The Linux Kernel - linux-4.15.tar.xz

  - local filesystem: 14.45 (SE +/- 0.07, N = 4)
  - CEPH Jewel 3 OSDs replica 1: 15.41 (SE +/- 0.19, N = 8)
  - Direct SSD io=native cache=none: 14.71 (SE +/- 0.19, N = 7)
  - CEPH Jewel 1 OSD w/ external Journal: 14.77 (SE +/- 0.42, N = 8)
  - CEPH Jewel 1 OSD: 14.53 (SE +/- 0.14, N = 4)
  - CEPH Jewel 3 OSDs replica 3: 15.68 (SE +/- 0.24, N = 5)
  - CEPH luminous bluestore 3 OSDs replica 3: 16.30 (SE +/- 0.27, N = 4)
  - CEPH luminous bluestore 3 OSDs replica 1: 15.33 (SE +/- 0.33, N = 8)

PostMark

Disk Transaction Performance

TPS, More Is Better - PostMark 1.51 - Disk Transaction Performance

  - local filesystem: 2409
  - CEPH Jewel 3 OSDs replica 1: 2149
  - Direct SSD io=native cache=none: 2299
  - CEPH Jewel 1 OSD w/ external Journal: 2206
  - CEPH Jewel 1 OSD: 2443
  - CEPH Jewel 3 OSDs replica 3: 2273
  - CEPH luminous bluestore 3 OSDs replica 3: 2066
  - CEPH luminous bluestore 3 OSDs replica 1: 2434

Reported standard errors (the export lists only seven for the eight configurations, so the per-configuration mapping is not recoverable): SE +/- 53.62 (N = 6), 16.19 (N = 3), 35.31 (N = 5), 34.53 (N = 3), 21.11 (N = 3), 31.10 (N = 3), 15.67 (N = 3)

1. (CC) gcc options: -O3

Gzip Compression

Linux Source Tree Archiving To .tar.gz

Seconds, Fewer Is Better - Gzip Compression - Linux Source Tree Archiving To .tar.gz

  - local filesystem: 71.33 (SE +/- 2.64, N = 6)
  - CEPH Jewel 3 OSDs replica 1: 73.37 (SE +/- 1.77, N = 6)
  - Direct SSD io=native cache=none: 69.39 (SE +/- 2.46, N = 6)
  - CEPH Jewel 1 OSD w/ external Journal: 67.57 (SE +/- 1.35, N = 3)
  - CEPH Jewel 1 OSD: 71.74 (SE +/- 2.16, N = 6)
  - CEPH Jewel 3 OSDs replica 3: 70.37 (SE +/- 2.62, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 3: 74.62 (SE +/- 1.47, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 66.58 (SE +/- 0.70, N = 3)

Apache Benchmark

Static Web Page Serving

Requests Per Second, More Is Better - Apache Benchmark 2.4.29 - Static Web Page Serving

  - local filesystem: 7307.72 (SE +/- 37.64, N = 3)
  - CEPH Jewel 3 OSDs replica 1: 7336.67 (SE +/- 99.52, N = 6)
  - Direct SSD io=native cache=none: 7272.34 (SE +/- 125.92, N = 4)
  - CEPH Jewel 1 OSD w/ external Journal: 7162.87 (SE +/- 197.05, N = 6)
  - CEPH Jewel 1 OSD: 8550.11 (SE +/- 49.14, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 7961.19 (SE +/- 128.93, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 3: 6755.53 (SE +/- 80.76, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 7729.99 (SE +/- 88.00, N = 3)

1. (CC) gcc options: -shared -fPIC -O2 -pthread

PostgreSQL pgbench

Scaling: On-Disk - Test: Normal Load - Mode: Read Write

TPS, More Is Better - PostgreSQL pgbench 10.3 - Scaling: On-Disk - Test: Normal Load - Mode: Read Write

  - Direct SSD io=native cache=none: 3642.91 (SE +/- 14.08, N = 3)
  - CEPH Jewel 1 OSD w/ external Journal: 1824.70 (SE +/- 63.93, N = 3)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Compile Bench

Test: Compile

MB/s, More Is Better - Compile Bench 0.6 - Test: Compile

  - CEPH Jewel 1 OSD w/ external Journal: 1028.88 (SE +/- 22.78, N = 6)
  - CEPH Jewel 1 OSD: 1148.88 (SE +/- 15.77, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 1112.43 (SE +/- 4.80, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 1025.83 (SE +/- 19.45, N = 6)
  - CEPH luminous bluestore 3 OSDs replica 1: 916.83 (SE +/- 28.08, N = 6)
  - CEPH luminous bluestore 1 OSD: 1168.81 (SE +/- 14.27, N = 3)

Compile Bench

Test: Initial Create

MB/s, More Is Better - Compile Bench 0.6 - Test: Initial Create

  - CEPH Jewel 1 OSD w/ external Journal: 135.49 (SE +/- 4.01, N = 3)
  - CEPH Jewel 1 OSD: 144.53 (SE +/- 2.37, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 136.01 (SE +/- 1.49, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 134.45 (SE +/- 1.69, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 139.52 (SE +/- 1.96, N = 3)
  - CEPH luminous bluestore 1 OSD: 145.29 (SE +/- 2.37, N = 3)

Compile Bench

Test: Read Compiled Tree

MB/s, More Is Better - Compile Bench 0.6 - Test: Read Compiled Tree

  - CEPH Jewel 1 OSD w/ external Journal: 260.32 (SE +/- 2.94, N = 3)
  - CEPH Jewel 1 OSD: 260.08 (SE +/- 0.73, N = 3)
  - CEPH Jewel 3 OSDs replica 3: 250.96 (SE +/- 5.76, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 3: 236.52 (SE +/- 5.63, N = 3)
  - CEPH luminous bluestore 3 OSDs replica 1: 239.00 (SE +/- 1.50, N = 3)
  - CEPH luminous bluestore 1 OSD: 259.65 (SE +/- 7.99, N = 3)


Phoronix Test Suite v10.8.4