ocdcephbenchmarks

Disk benchmarks run against various Ceph versions and configurations.

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&grr&sro&rro.

System Details

  Processor:          8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard:        Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory:             2 x 16384 MB RAM
  Disk:               28GB, 1024GB, or 1788GB (varies by configuration)
  Graphics:           cirrusdrmfb
  OS:                 CentOS Linux 7
  Kernel:             3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler:           GCC 4.8.5 20150623
  File-System:        xfs
  Screen Resolution:  1024x768
  System Layer:       KVM QEMU

Compiler Details
  --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

Configuration Details
  - local filesystem: the root filesystem of the VM; QCOW on XFS on LVM on MD-RAID RAID 1 over two Micron 5100 MAX 240GB SSDs
  - Direct SSD io=native cache=none: direct SSD, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 1 OSD: CEPH, Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 1 OSD w/ external Journal: CEPH, Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 3 OSDs replica 1: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH Jewel 3 OSDs replica 3: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
  - CEPH luminous bluestore 3 OSDs replica 1: CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB
  - CEPH luminous bluestore 3 OSDs replica 3: CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB

Disk Mount Options Details
  attr2,inode64,noquota,relatime,rw,seclabel

Python Details
  Python 2.7.5

Security Details
  SELinux + KPTI + Load fences + Retpoline without IBPB Protection
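
Every result below is reported as a mean together with a standard error over repeated runs (the "SE +/- x, N = y" annotations). As a minimal sketch of that statistic (not the Phoronix Test Suite's actual implementation, and with made-up sample values), it can be reproduced like this:

```python
import math
from statistics import stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return stdev(samples) / math.sqrt(len(samples))

# Three hypothetical runs of one benchmark (MB/s):
runs = [236.0, 231.0, 242.5]
print(f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")  # SE +/- 3.33, N = 3
```

A small SE relative to the mean (as in most tables below) means the run-to-run spread is low and differences between configurations are likely real.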

Overview: combined results for all 17 tests (compilebench, pgbench, apache, compress-gzip, postmark, unpack-linux, tiobench, dbench, fs-mark, sqlite, aio-stress) across the nine system configurations. The per-test results follow below.

Compile Bench

Test: Read Compiled Tree

MB/s, More Is Better (Compile Bench 0.6, Test: Read Compiled Tree)

  CEPH luminous bluestore 3 OSDs replica 3    236.52   SE +/- 5.63, N = 3
  CEPH luminous bluestore 3 OSDs replica 1    239.00   SE +/- 1.50, N = 3
  CEPH luminous bluestore 1 OSD               259.65   SE +/- 7.99, N = 3
  CEPH Jewel 3 OSDs replica 3                 250.96   SE +/- 5.76, N = 3
  CEPH Jewel 1 OSD w/ external Journal        260.32   SE +/- 2.94, N = 3
  CEPH Jewel 1 OSD                            260.08   SE +/- 0.73, N = 3

Compile Bench

Test: Initial Create

MB/s, More Is Better (Compile Bench 0.6, Test: Initial Create)

  CEPH luminous bluestore 3 OSDs replica 3    134.45   SE +/- 1.69, N = 3
  CEPH luminous bluestore 3 OSDs replica 1    139.52   SE +/- 1.96, N = 3
  CEPH luminous bluestore 1 OSD               145.29   SE +/- 2.37, N = 3
  CEPH Jewel 3 OSDs replica 3                 136.01   SE +/- 1.49, N = 3
  CEPH Jewel 1 OSD w/ external Journal        135.49   SE +/- 4.01, N = 3
  CEPH Jewel 1 OSD                            144.53   SE +/- 2.37, N = 3

Compile Bench

Test: Compile

MB/s, More Is Better (Compile Bench 0.6, Test: Compile)

  CEPH luminous bluestore 3 OSDs replica 3    1025.83   SE +/- 19.45, N = 6
  CEPH luminous bluestore 3 OSDs replica 1     916.83   SE +/- 28.08, N = 6
  CEPH luminous bluestore 1 OSD               1168.81   SE +/- 14.27, N = 3
  CEPH Jewel 3 OSDs replica 3                 1112.43   SE +/- 4.80, N = 3
  CEPH Jewel 1 OSD w/ external Journal        1028.88   SE +/- 22.78, N = 6
  CEPH Jewel 1 OSD                            1148.88   SE +/- 15.77, N = 3

PostgreSQL pgbench

Scaling: On-Disk - Test: Normal Load - Mode: Read Write

TPS, More Is Better (PostgreSQL pgbench 10.3, Scaling: On-Disk, Test: Normal Load, Mode: Read Write)

  Direct SSD io=native cache=none             3642.91   SE +/- 14.08, N = 3
  CEPH Jewel 1 OSD w/ external Journal        1824.70   SE +/- 63.93, N = 3

  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
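
Only two configurations were run through pgbench. A quick calculation over the numbers above puts the gap in relative terms (the variable names here are just for illustration):

```python
# pgbench TPS results from the table above.
direct_ssd = 3642.91
ceph_jewel_external_journal = 1824.70

# The Ceph-backed disk sustains roughly half the transaction rate of the
# directly attached SSD on this commit-heavy read/write workload.
ratio = ceph_jewel_external_journal / direct_ssd
print(f"{ratio:.1%} of direct-SSD throughput")  # 50.1% of direct-SSD throughput
```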

Apache Benchmark

Static Web Page Serving

Requests Per Second, More Is Better (Apache Benchmark 2.4.29, Static Web Page Serving)

  local filesystem                            7307.72   SE +/- 37.64, N = 3
  Direct SSD io=native cache=none             7272.34   SE +/- 125.92, N = 4
  CEPH luminous bluestore 3 OSDs replica 3    6755.53   SE +/- 80.76, N = 3
  CEPH luminous bluestore 3 OSDs replica 1    7729.99   SE +/- 88.00, N = 3
  CEPH Jewel 3 OSDs replica 3                 7961.19   SE +/- 128.93, N = 6
  CEPH Jewel 3 OSDs replica 1                 7336.67   SE +/- 99.52, N = 6
  CEPH Jewel 1 OSD w/ external Journal        7162.87   SE +/- 197.05, N = 6
  CEPH Jewel 1 OSD                            8550.11   SE +/- 49.14, N = 3

  1. (CC) gcc options: -shared -fPIC -O2 -pthread

Gzip Compression

Linux Source Tree Archiving To .tar.gz

Seconds, Fewer Is Better (Gzip Compression, Linux Source Tree Archiving To .tar.gz)

  local filesystem                            71.33   SE +/- 2.64, N = 6
  Direct SSD io=native cache=none             69.39   SE +/- 2.46, N = 6
  CEPH luminous bluestore 3 OSDs replica 3    74.62   SE +/- 1.47, N = 3
  CEPH luminous bluestore 3 OSDs replica 1    66.58   SE +/- 0.70, N = 3
  CEPH Jewel 3 OSDs replica 3                 70.37   SE +/- 2.62, N = 6
  CEPH Jewel 3 OSDs replica 1                 73.37   SE +/- 1.77, N = 6
  CEPH Jewel 1 OSD w/ external Journal        67.57   SE +/- 1.35, N = 3
  CEPH Jewel 1 OSD                            71.74   SE +/- 2.16, N = 6

PostMark

Disk Transaction Performance

TPS, More Is Better (PostMark 1.51, Disk Transaction Performance)

  local filesystem                            2409
  Direct SSD io=native cache=none             2299
  CEPH luminous bluestore 3 OSDs replica 3    2066
  CEPH luminous bluestore 3 OSDs replica 1    2434
  CEPH Jewel 3 OSDs replica 3                 2273
  CEPH Jewel 3 OSDs replica 1                 2149
  CEPH Jewel 1 OSD w/ external Journal        2206
  CEPH Jewel 1 OSD                            2443

  Reported standard errors (seven values for eight results; the per-result mapping was lost in export): SE +/- 53.62 (N = 6), 35.31 (N = 5), 15.67 (N = 3), 31.10 (N = 3), 16.19 (N = 3), 34.53 (N = 3), 21.11 (N = 3)
  1. (CC) gcc options: -O3

Unpacking The Linux Kernel

linux-4.15.tar.xz

Seconds, Fewer Is Better (Unpacking The Linux Kernel, linux-4.15.tar.xz)

  local filesystem                            14.45   SE +/- 0.07, N = 4
  Direct SSD io=native cache=none             14.71   SE +/- 0.19, N = 7
  CEPH luminous bluestore 3 OSDs replica 3    16.30   SE +/- 0.27, N = 4
  CEPH luminous bluestore 3 OSDs replica 1    15.33   SE +/- 0.33, N = 8
  CEPH Jewel 3 OSDs replica 3                 15.68   SE +/- 0.24, N = 5
  CEPH Jewel 3 OSDs replica 1                 15.41   SE +/- 0.19, N = 8
  CEPH Jewel 1 OSD w/ external Journal        14.77   SE +/- 0.42, N = 8
  CEPH Jewel 1 OSD                            14.53   SE +/- 0.14, N = 4

Threaded I/O Tester

64MB Random Write - 32 Threads

MB/s, More Is Better (Threaded I/O Tester 20170503, 64MB Random Write - 32 Threads)

  local filesystem                            958.96   SE +/- 27.51, N = 6
  Direct SSD io=native cache=none             555.54   SE +/- 10.53, N = 3
  CEPH luminous bluestore 3 OSDs replica 3    151.00   SE +/- 9.35, N = 6
  CEPH luminous bluestore 3 OSDs replica 1    255.32   SE +/- 3.91, N = 3
  CEPH luminous bluestore 1 OSD               229.61   SE +/- 3.19, N = 3
  CEPH Jewel 3 OSDs replica 3                 214.01   SE +/- 2.37, N = 3
  CEPH Jewel 3 OSDs replica 1                 337.00   SE +/- 5.79, N = 3
  CEPH Jewel 1 OSD w/ external Journal        300.23   SE +/- 1.00, N = 3
  CEPH Jewel 1 OSD                            299.60   SE +/- 5.80, N = 6

  1. (CC) gcc options: -O2
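
The random-write numbers above also quantify the cost of replication, since every logical write must be persisted once per replica. A small sketch using the values from this table:

```python
# 64MB random-write throughput (MB/s) from the table above.
writes = {
    "CEPH Jewel 3 OSDs replica 1": 337.00,
    "CEPH Jewel 3 OSDs replica 3": 214.01,
    "CEPH luminous bluestore 3 OSDs replica 1": 255.32,
    "CEPH luminous bluestore 3 OSDs replica 3": 151.00,
}

def replication_cost(store):
    """Fraction of write throughput lost going from replica 1 to replica 3."""
    return 1 - writes[f"{store} replica 3"] / writes[f"{store} replica 1"]

print(f"Jewel filestore:    {replication_cost('CEPH Jewel 3 OSDs'):.0%}")
print(f"Luminous bluestore: {replication_cost('CEPH luminous bluestore 3 OSDs'):.0%}")
```

Both store types lose roughly 35-40% of their random-write throughput when moving from one replica to three.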

Threaded I/O Tester

64MB Random Read - 32 Threads

MB/s, More Is Better (Threaded I/O Tester 20170503, 64MB Random Read - 32 Threads)

  local filesystem                             60691.53   SE +/- 3323.01, N = 6
  Direct SSD io=native cache=none             115753.71   SE +/- 1990.78, N = 6
  CEPH luminous bluestore 3 OSDs replica 3    100973.58   SE +/- 2403.25, N = 6
  CEPH luminous bluestore 3 OSDs replica 1    108942.81   SE +/- 7596.75, N = 6
  CEPH luminous bluestore 1 OSD               105283.54   SE +/- 885.93, N = 3
  CEPH Jewel 3 OSDs replica 3                  84936.34   SE +/- 9550.32, N = 6
  CEPH Jewel 3 OSDs replica 1                 107041.83   SE +/- 1303.43, N = 3
  CEPH Jewel 1 OSD w/ external Journal        100449.37   SE +/- 2822.07, N = 6
  CEPH Jewel 1 OSD                            102558.87   SE +/- 2213.07, N = 6

  1. (CC) gcc options: -O2

Dbench

1 Clients

MB/s, More Is Better (Dbench 4.0, 1 Clients)

  local filesystem                            179.02   SE +/- 2.91, N = 3
  Direct SSD io=native cache=none             197.93   SE +/- 1.04, N = 3
  CEPH luminous bluestore 3 OSDs replica 3     54.67   SE +/- 0.39, N = 3
  CEPH luminous bluestore 3 OSDs replica 1     67.09   SE +/- 0.30, N = 3
  CEPH luminous bluestore 1 OSD                73.94   SE +/- 0.56, N = 3
  CEPH Jewel 3 OSDs replica 3                  56.05   SE +/- 2.01, N = 6
  CEPH Jewel 3 OSDs replica 1                  82.85   SE +/- 0.76, N = 3
  CEPH Jewel 1 OSD w/ external Journal        101.27   SE +/- 1.69, N = 3
  CEPH Jewel 1 OSD                             98.67   SE +/- 1.69, N = 4

  1. (CC) gcc options: -lpopt -O2

Dbench

128 Clients

MB/s, More Is Better (Dbench 4.0, 128 Clients)

  local filesystem                             959.00   SE +/- 10.43, N = 3
  Direct SSD io=native cache=none             1336.95   SE +/- 6.02, N = 3
  CEPH luminous bluestore 3 OSDs replica 3     771.70   SE +/- 3.58, N = 3
  CEPH luminous bluestore 3 OSDs replica 1     754.90   SE +/- 6.81, N = 3
  CEPH Jewel 3 OSDs replica 3                  779.86   SE +/- 5.18, N = 3
  CEPH Jewel 3 OSDs replica 1                  965.17   SE +/- 11.71, N = 3
  CEPH Jewel 1 OSD w/ external Journal        1055.64   SE +/- 11.99, N = 3
  CEPH Jewel 1 OSD                             970.31   SE +/- 2.85, N = 3

  1. (CC) gcc options: -lpopt -O2

Dbench

48 Clients

MB/s, More Is Better (Dbench 4.0, 48 Clients)

  local filesystem                             812.43   SE +/- 98.18, N = 6
  Direct SSD io=native cache=none             1220.01   SE +/- 2.82, N = 3
  CEPH luminous bluestore 3 OSDs replica 3     679.26   SE +/- 2.02, N = 3
  CEPH luminous bluestore 3 OSDs replica 1     768.75   SE +/- 3.97, N = 3
  CEPH luminous bluestore 1 OSD                842.77   SE +/- 12.96, N = 3
  CEPH Jewel 3 OSDs replica 3                  712.22   SE +/- 1.19, N = 3
  CEPH Jewel 3 OSDs replica 1                  968.60   SE +/- 7.36, N = 3
  CEPH Jewel 1 OSD w/ external Journal        1055.65   SE +/- 2.21, N = 3
  CEPH Jewel 1 OSD                             938.32   SE +/- 8.61, N = 3

  1. (CC) gcc options: -lpopt -O2

Dbench

12 Clients

MB/s, More Is Better (Dbench 4.0, 12 Clients)

  local filesystem                            1285.75   SE +/- 4.49, N = 3
  Direct SSD io=native cache=none              800.77   SE +/- 7.08, N = 3
  CEPH luminous bluestore 3 OSDs replica 3     344.71   SE +/- 3.55, N = 3
  CEPH luminous bluestore 3 OSDs replica 1     480.21   SE +/- 1.35, N = 3
  CEPH Jewel 3 OSDs replica 3                  417.51   SE +/- 0.66, N = 3
  CEPH Jewel 3 OSDs replica 1                  683.49   SE +/- 1.93, N = 3
  CEPH Jewel 1 OSD w/ external Journal         773.20   SE +/- 2.72, N = 3
  CEPH Jewel 1 OSD                             691.87   SE +/- 2.75, N = 3

  1. (CC) gcc options: -lpopt -O2

FS-Mark

1000 Files, 1MB Size

Files/s, More Is Better (FS-Mark 3.3, 1000 Files, 1MB Size)

  local filesystem                            152.13   SE +/- 4.84, N = 6
  Direct SSD io=native cache=none             159.03   SE +/- 1.29, N = 3
  CEPH luminous bluestore 3 OSDs replica 3     66.07   SE +/- 0.64, N = 3
  CEPH luminous bluestore 3 OSDs replica 1     82.60   SE +/- 1.25, N = 4
  CEPH luminous bluestore 1 OSD                83.60   SE +/- 0.46, N = 3
  CEPH Jewel 3 OSDs replica 3                  61.93   SE +/- 0.29, N = 3
  CEPH Jewel 3 OSDs replica 1                  87.98   SE +/- 1.36, N = 5
  CEPH Jewel 1 OSD w/ external Journal         95.50   SE +/- 1.35, N = 6
  CEPH Jewel 1 OSD                             83.53   SE +/- 0.80, N = 3

  1. (CC) gcc options: -static

SQLite

Timed SQLite Insertions

Seconds, Fewer Is Better (SQLite 3.22, Timed SQLite Insertions)

  local filesystem                             20.61   SE +/- 0.06, N = 3
  Direct SSD io=native cache=none              17.29   SE +/- 0.28, N = 6
  CEPH luminous bluestore 3 OSDs replica 3    109.48   SE +/- 0.93, N = 3
  CEPH luminous bluestore 3 OSDs replica 1     69.95   SE +/- 1.07, N = 3
  CEPH luminous bluestore 1 OSD                65.14   SE +/- 0.78, N = 3
  CEPH Jewel 3 OSDs replica 3                  98.30   SE +/- 0.38, N = 3
  CEPH Jewel 3 OSDs replica 1                  52.75   SE +/- 0.77, N = 4
  CEPH Jewel 1 OSD w/ external Journal         45.10   SE +/- 0.34, N = 3
  CEPH Jewel 1 OSD                             46.21   SE +/- 0.10, N = 3

  1. (CC) gcc options: -O2 -ldl -lpthread
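
SQLite's insertion loop is largely serial and sync-bound, so it magnifies per-commit latency more than the throughput tests above. A quick calculation on the values from this table:

```python
# Timed SQLite insertion results (seconds, fewer is better) from the table above.
times = {
    "local filesystem": 20.61,
    "Direct SSD io=native cache=none": 17.29,
    "CEPH luminous bluestore 3 OSDs replica 3": 109.48,
}

# Worst Ceph configuration versus the plain local filesystem:
slowdown = times["CEPH luminous bluestore 3 OSDs replica 3"] / times["local filesystem"]
print(f"{slowdown:.1f}x slower")  # 5.3x slower
```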

AIO-Stress

Random Write

MB/s, More Is Better (AIO-Stress 0.21, Random Write)

  local filesystem                            1478.38   SE +/- 73.02, N = 6
  Direct SSD io=native cache=none             1802.66   SE +/- 55.02, N = 6
  CEPH luminous bluestore 3 OSDs replica 3    1690.54   SE +/- 25.18, N = 6
  CEPH luminous bluestore 3 OSDs replica 1    1754.61   SE +/- 96.24, N = 6
  CEPH luminous bluestore 1 OSD               1773.67   SE +/- 68.62, N = 6
  CEPH Jewel 3 OSDs replica 3                 1818.64   SE +/- 24.90, N = 3
  CEPH Jewel 3 OSDs replica 1                 1721.82   SE +/- 25.85, N = 3
  CEPH Jewel 1 OSD w/ external Journal        1822.87   SE +/- 109.84, N = 6
  CEPH Jewel 1 OSD                            1340.55   SE +/- 13.54, N = 3

  1. (CC) gcc options: -pthread -laio


Phoronix Test Suite v10.8.5