ocdcephbenchmarks

Running disk benchmarks against various CEPH versions and configurations.

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&grt&rdt&rro.

System details (common to all configurations):

  Processor:         8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard:       Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory:            2 x 16384 MB RAM
  Disk:              28GB, 1024GB, or 1788GB (varies with the configuration under test)
  Graphics:          cirrusdrmfb
  OS:                CentOS Linux 7
  Kernel:            3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler:          GCC 4.8.5 20150623
  File-System:       xfs
  Screen Resolution: 1024x768
  System Layer:      KVM QEMU

Compiler details:
  --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

Tested configurations:
  local filesystem:                          The root filesystem of the VM: QCOW on XFS on LVM on MD-RAID RAID 1 over two Micron 5100 MAX 240GB SSDs
  CEPH Jewel 3 OSDs replica 1:               CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  Direct SSD io=native cache=none:           Direct SSD, Micron 5100 MAX 1.9 TB
  CEPH Jewel 1 OSD w/ external Journal:      CEPH, Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
  CEPH Jewel 1 OSD:                          CEPH, Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  CEPH Jewel 3 OSDs replica 3:               CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 3 OSDs replica 3:  CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 3 OSDs replica 1:  CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 1 OSD:             CEPH, Luminous, 1 OSD, Bluestore

Disk mount options: attr2,inode64,noquota,relatime,rw,seclabel
Python:             2.7.5
Security:           SELinux + KPTI + Load fences + Retpoline without IBPB Protection
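
The replica 1 and replica 3 cases above differ only in the pool replication size. As a rough, hypothetical sketch (not taken from this result export), pools with the corresponding replication factor could be set up through the stock ceph CLI; the pool names, placement-group count, and the Python/subprocess wrapper are illustrative assumptions.

```python
# Hypothetical sketch only: create a replicated benchmark pool matching the
# "replica 1" / "replica 3" configurations described above. Pool names and
# PG count are made up; requires the ceph CLI with admin access.
import subprocess


def create_bench_pool(name="bench", pg_num=128, replicas=3):
    """Create a replicated pool and set its replication factor ("size")."""
    subprocess.run(["ceph", "osd", "pool", "create", name, str(pg_num)], check=True)
    # "size" is the number of object copies; the replica 1 runs keep a single copy.
    subprocess.run(["ceph", "osd", "pool", "set", name, "size", str(replicas)], check=True)
    subprocess.run(["ceph", "osd", "pool", "set", name, "min_size", "1"], check=True)


if __name__ == "__main__":
    create_bench_pool(name="bench-r1", replicas=1)  # e.g. the replica 1 configurations
    create_bench_pool(name="bench-r3", replicas=3)  # e.g. the replica 3 configurations
```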

Benchmarks run (full results in the per-test sections below): AIO-Stress (Random Write), Apache Benchmark (Static Web Page Serving), Compile Bench (Compile, Initial Create, Read Compiled Tree), Dbench (1, 12, 48, and 128 Clients), FS-Mark (1000 Files, 1MB Size), Gzip Compression (Linux Source Tree Archiving To .tar.gz), PostgreSQL pgbench (On-Disk, Normal Load, Read Write), PostMark (Disk Transaction Performance), SQLite (Timed SQLite Insertions), Threaded I/O Tester (64MB Random Read/Write, 32 Threads), and Unpacking The Linux Kernel (linux-4.15.tar.xz).

AIO-Stress

Random Write

AIO-Stress 0.21 (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             1773.67 (SE +/- 68.62, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1:  1754.61 (SE +/- 96.24, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3:  1690.54 (SE +/- 25.18, N = 6)
  CEPH Jewel 3 OSDs replica 3:               1818.64 (SE +/- 24.90, N = 3)
  CEPH Jewel 1 OSD:                          1340.55 (SE +/- 13.54, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      1822.87 (SE +/- 109.84, N = 6)
  Direct SSD io=native cache=none:           1802.66 (SE +/- 55.02, N = 6)
  CEPH Jewel 3 OSDs replica 1:               1721.82 (SE +/- 25.85, N = 3)
  local filesystem:                          1478.38 (SE +/- 73.02, N = 6)
  (CC) gcc options: -pthread -laio
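
Each result in these tables is a mean over N repeated runs together with its standard error ("SE +/- x, N = y"). A minimal sketch (Python 3) of how such a figure is obtained, assuming the conventional definition SE = s / sqrt(N); the per-run throughputs below are invented for illustration, not taken from this export.

```python
# Illustration only: reduce N benchmark runs to the "mean (SE +/- x, N = y)"
# form used in the result tables above. The per-run values are made up.
from math import sqrt
from statistics import mean, stdev

runs_mb_s = [1745.2, 1801.9, 1773.6, 1812.4, 1688.1, 1820.8]  # hypothetical N = 6 runs

n = len(runs_mb_s)
avg = mean(runs_mb_s)
se = stdev(runs_mb_s) / sqrt(n)  # sample standard deviation divided by sqrt(N)

print(f"{avg:.2f} MB/s (SE +/- {se:.2f}, N = {n})")
```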

Apache Benchmark

Static Web Page Serving

Apache Benchmark 2.4.29 (Requests Per Second, More Is Better)
  CEPH luminous bluestore 3 OSDs replica 1:  7729.99 (SE +/- 88.00, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  6755.53 (SE +/- 80.76, N = 3)
  CEPH Jewel 3 OSDs replica 3:               7961.19 (SE +/- 128.93, N = 6)
  CEPH Jewel 1 OSD:                          8550.11 (SE +/- 49.14, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      7162.87 (SE +/- 197.05, N = 6)
  Direct SSD io=native cache=none:           7272.34 (SE +/- 125.92, N = 4)
  CEPH Jewel 3 OSDs replica 1:               7336.67 (SE +/- 99.52, N = 6)
  local filesystem:                          7307.72 (SE +/- 37.64, N = 3)
  (CC) gcc options: -shared -fPIC -O2 -pthread

Compile Bench

Test: Compile

Compile Bench 0.6, Test: Compile (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             1168.81 (SE +/- 14.27, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  916.83 (SE +/- 28.08, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3:  1025.83 (SE +/- 19.45, N = 6)
  CEPH Jewel 3 OSDs replica 3:               1112.43 (SE +/- 4.80, N = 3)
  CEPH Jewel 1 OSD:                          1148.88 (SE +/- 15.77, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      1028.88 (SE +/- 22.78, N = 6)

Compile Bench

Test: Initial Create

Compile Bench 0.6, Test: Initial Create (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             145.29 (SE +/- 2.37, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  139.52 (SE +/- 1.96, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  134.45 (SE +/- 1.69, N = 3)
  CEPH Jewel 3 OSDs replica 3:               136.01 (SE +/- 1.49, N = 3)
  CEPH Jewel 1 OSD:                          144.53 (SE +/- 2.37, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      135.49 (SE +/- 4.01, N = 3)

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6, Test: Read Compiled Tree (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             259.65 (SE +/- 7.99, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  239.00 (SE +/- 1.50, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  236.52 (SE +/- 5.63, N = 3)
  CEPH Jewel 3 OSDs replica 3:               250.96 (SE +/- 5.76, N = 3)
  CEPH Jewel 1 OSD:                          260.08 (SE +/- 0.73, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      260.32 (SE +/- 2.94, N = 3)

Dbench

12 Clients

Dbench 4.0, 12 Clients (MB/s, More Is Better)
  CEPH luminous bluestore 3 OSDs replica 1:  480.21 (SE +/- 1.35, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  344.71 (SE +/- 3.55, N = 3)
  CEPH Jewel 3 OSDs replica 3:               417.51 (SE +/- 0.66, N = 3)
  CEPH Jewel 1 OSD:                          691.87 (SE +/- 2.75, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      773.20 (SE +/- 2.72, N = 3)
  Direct SSD io=native cache=none:           800.77 (SE +/- 7.08, N = 3)
  CEPH Jewel 3 OSDs replica 1:               683.49 (SE +/- 1.93, N = 3)
  local filesystem:                          1285.75 (SE +/- 4.49, N = 3)
  (CC) gcc options: -lpopt -O2

Dbench

48 Clients

Dbench 4.0, 48 Clients (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             842.77 (SE +/- 12.96, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  768.75 (SE +/- 3.97, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  679.26 (SE +/- 2.02, N = 3)
  CEPH Jewel 3 OSDs replica 3:               712.22 (SE +/- 1.19, N = 3)
  CEPH Jewel 1 OSD:                          938.32 (SE +/- 8.61, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      1055.65 (SE +/- 2.21, N = 3)
  Direct SSD io=native cache=none:           1220.01 (SE +/- 2.82, N = 3)
  CEPH Jewel 3 OSDs replica 1:               968.60 (SE +/- 7.36, N = 3)
  local filesystem:                          812.43 (SE +/- 98.18, N = 6)
  (CC) gcc options: -lpopt -O2

Dbench

128 Clients

Dbench 4.0, 128 Clients (MB/s, More Is Better)
  CEPH luminous bluestore 3 OSDs replica 1:  754.90 (SE +/- 6.81, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  771.70 (SE +/- 3.58, N = 3)
  CEPH Jewel 3 OSDs replica 3:               779.86 (SE +/- 5.18, N = 3)
  CEPH Jewel 1 OSD:                          970.31 (SE +/- 2.85, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      1055.64 (SE +/- 11.99, N = 3)
  Direct SSD io=native cache=none:           1336.95 (SE +/- 6.02, N = 3)
  CEPH Jewel 3 OSDs replica 1:               965.17 (SE +/- 11.71, N = 3)
  local filesystem:                          959.00 (SE +/- 10.43, N = 3)
  (CC) gcc options: -lpopt -O2

Dbench

1 Clients

Dbench 4.0, 1 Clients (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             73.94 (SE +/- 0.56, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  67.09 (SE +/- 0.30, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  54.67 (SE +/- 0.39, N = 3)
  CEPH Jewel 3 OSDs replica 3:               56.05 (SE +/- 2.01, N = 6)
  CEPH Jewel 1 OSD:                          98.67 (SE +/- 1.69, N = 4)
  CEPH Jewel 1 OSD w/ external Journal:      101.27 (SE +/- 1.69, N = 3)
  Direct SSD io=native cache=none:           197.93 (SE +/- 1.04, N = 3)
  CEPH Jewel 3 OSDs replica 1:               82.85 (SE +/- 0.76, N = 3)
  local filesystem:                          179.02 (SE +/- 2.91, N = 3)
  (CC) gcc options: -lpopt -O2

FS-Mark

1000 Files, 1MB Size

FS-Mark 3.3, 1000 Files, 1MB Size (Files/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             83.60 (SE +/- 0.46, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  82.60 (SE +/- 1.25, N = 4)
  CEPH luminous bluestore 3 OSDs replica 3:  66.07 (SE +/- 0.64, N = 3)
  CEPH Jewel 3 OSDs replica 3:               61.93 (SE +/- 0.29, N = 3)
  CEPH Jewel 1 OSD:                          83.53 (SE +/- 0.80, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      95.50 (SE +/- 1.35, N = 6)
  Direct SSD io=native cache=none:           159.03 (SE +/- 1.29, N = 3)
  CEPH Jewel 3 OSDs replica 1:               87.98 (SE +/- 1.36, N = 5)
  local filesystem:                          152.13 (SE +/- 4.84, N = 6)
  (CC) gcc options: -static

Gzip Compression

Linux Source Tree Archiving To .tar.gz

Gzip Compression, Linux Source Tree Archiving To .tar.gz (Seconds, Fewer Is Better)
  CEPH luminous bluestore 3 OSDs replica 1:  66.58 (SE +/- 0.70, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  74.62 (SE +/- 1.47, N = 3)
  CEPH Jewel 3 OSDs replica 3:               70.37 (SE +/- 2.62, N = 6)
  CEPH Jewel 1 OSD:                          71.74 (SE +/- 2.16, N = 6)
  CEPH Jewel 1 OSD w/ external Journal:      67.57 (SE +/- 1.35, N = 3)
  Direct SSD io=native cache=none:           69.39 (SE +/- 2.46, N = 6)
  CEPH Jewel 3 OSDs replica 1:               73.37 (SE +/- 1.77, N = 6)
  local filesystem:                          71.33 (SE +/- 2.64, N = 6)

PostgreSQL pgbench

Scaling: On-Disk - Test: Normal Load - Mode: Read Write

PostgreSQL pgbench 10.3, Scaling: On-Disk - Test: Normal Load - Mode: Read Write (TPS, More Is Better)
  CEPH Jewel 1 OSD w/ external Journal:      1824.70 (SE +/- 63.93, N = 3)
  Direct SSD io=native cache=none:           3642.91 (SE +/- 14.08, N = 3)
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

PostMark

Disk Transaction Performance

PostMark 1.51, Disk Transaction Performance (TPS, More Is Better)
  CEPH luminous bluestore 3 OSDs replica 1:  2434
  CEPH luminous bluestore 3 OSDs replica 3:  2066
  CEPH Jewel 3 OSDs replica 3:               2273
  CEPH Jewel 1 OSD:                          2443
  CEPH Jewel 1 OSD w/ external Journal:      2206
  Direct SSD io=native cache=none:           2299
  CEPH Jewel 3 OSDs replica 1:               2149
  local filesystem:                          2409
  The export lists only seven SE/N annotations for these eight results, so they are left unattributed: SE +/- 15.67, N = 3; 31.10, N = 3; 21.11, N = 3; 34.53, N = 3; 35.31, N = 5; 16.19, N = 3; 53.62, N = 6.
  (CC) gcc options: -O3

SQLite

Timed SQLite Insertions

SQLite 3.22, Timed SQLite Insertions (Seconds, Fewer Is Better)
  CEPH luminous bluestore 1 OSD:             65.14 (SE +/- 0.78, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  69.95 (SE +/- 1.07, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  109.48 (SE +/- 0.93, N = 3)
  CEPH Jewel 3 OSDs replica 3:               98.30 (SE +/- 0.38, N = 3)
  CEPH Jewel 1 OSD:                          46.21 (SE +/- 0.10, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:      45.10 (SE +/- 0.34, N = 3)
  Direct SSD io=native cache=none:           17.29 (SE +/- 0.28, N = 6)
  CEPH Jewel 3 OSDs replica 1:               52.75 (SE +/- 0.77, N = 4)
  local filesystem:                          20.61 (SE +/- 0.06, N = 3)
  (CC) gcc options: -O2 -ldl -lpthread

Threaded I/O Tester

64MB Random Read - 32 Threads

Threaded I/O Tester 20170503, 64MB Random Read - 32 Threads (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             105283.54 (SE +/- 885.93, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  108942.81 (SE +/- 7596.75, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3:  100973.58 (SE +/- 2403.25, N = 6)
  CEPH Jewel 3 OSDs replica 3:               84936.34 (SE +/- 9550.32, N = 6)
  CEPH Jewel 1 OSD:                          102558.87 (SE +/- 2213.07, N = 6)
  CEPH Jewel 1 OSD w/ external Journal:      100449.37 (SE +/- 2822.07, N = 6)
  Direct SSD io=native cache=none:           115753.71 (SE +/- 1990.78, N = 6)
  CEPH Jewel 3 OSDs replica 1:               107041.83 (SE +/- 1303.43, N = 3)
  local filesystem:                          60691.53 (SE +/- 3323.01, N = 6)
  (CC) gcc options: -O2

Threaded I/O Tester

64MB Random Write - 32 Threads

Threaded I/O Tester 20170503, 64MB Random Write - 32 Threads (MB/s, More Is Better)
  CEPH luminous bluestore 1 OSD:             229.61 (SE +/- 3.19, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:  255.32 (SE +/- 3.91, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:  151.00 (SE +/- 9.35, N = 6)
  CEPH Jewel 3 OSDs replica 3:               214.01 (SE +/- 2.37, N = 3)
  CEPH Jewel 1 OSD:                          299.60 (SE +/- 5.80, N = 6)
  CEPH Jewel 1 OSD w/ external Journal:      300.23 (SE +/- 1.00, N = 3)
  Direct SSD io=native cache=none:           555.54 (SE +/- 10.53, N = 3)
  CEPH Jewel 3 OSDs replica 1:               337.00 (SE +/- 5.79, N = 3)
  local filesystem:                          958.96 (SE +/- 27.51, N = 6)
  (CC) gcc options: -O2

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel, linux-4.15.tar.xz (Seconds, Fewer Is Better)
  CEPH luminous bluestore 3 OSDs replica 1:  15.33 (SE +/- 0.33, N = 8)
  CEPH luminous bluestore 3 OSDs replica 3:  16.30 (SE +/- 0.27, N = 4)
  CEPH Jewel 3 OSDs replica 3:               15.68 (SE +/- 0.24, N = 5)
  CEPH Jewel 1 OSD:                          14.53 (SE +/- 0.14, N = 4)
  CEPH Jewel 1 OSD w/ external Journal:      14.77 (SE +/- 0.42, N = 8)
  Direct SSD io=native cache=none:           14.71 (SE +/- 0.19, N = 7)
  CEPH Jewel 3 OSDs replica 1:               15.41 (SE +/- 0.19, N = 8)
  local filesystem:                          14.45 (SE +/- 0.07, N = 4)


Phoronix Test Suite v10.8.5