ocdcephbenchmarks

Running disk benchmarks against various CEPH versions and configurations.

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&grr.
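
The comparison behind this export can be reproduced locally with the Phoronix Test Suite by pointing it at the result ID from the URL above. The following is a minimal sketch, assuming phoronix-test-suite is installed and on the PATH; it only wraps the standard "benchmark <result-id>" subcommand.

  #!/usr/bin/env python3
  """Re-run the openbenchmarking.org comparison locally (sketch).

  Assumes the Phoronix Test Suite is installed; the result ID is taken
  from the openbenchmarking.org URL above.
  """
  import subprocess

  RESULT_ID = "1805308-FO-OCDCEPHBE08"

  def rerun_comparison(result_id: str) -> None:
      # 'phoronix-test-suite benchmark <result-id>' fetches the test profiles
      # used in the referenced result and runs them for a side-by-side comparison.
      subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)

  if __name__ == "__main__":
      rerun_comparison(RESULT_ID)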

System Overview (common to all configurations)

  Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory: 2 x 16384 MB RAM
  Disk: 28GB / 1024GB / 1788GB (varies by configuration)
  Graphics: cirrusdrmfb
  OS: CentOS Linux 7
  Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler: GCC 4.8.5 20150623
  File-System: xfs
  Screen Resolution: 1024x768
  System Layer: KVM QEMU

Compiler Details

  --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

System Details (per configuration)

  local filesystem: the root filesystem of the VM; QCOW on XFS on LVM on MD-RAID (RAID 1) over two Micron 5100 MAX 240GB SSDs
  CEPH Jewel 3 OSDs replica 1: CEPH Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  Direct SSD io=native cache=none: direct SSD, Micron 5100 MAX 1.9 TB
  CEPH Jewel 1 OSD w/ external Journal: CEPH Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
  CEPH Jewel 1 OSD: CEPH Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  CEPH Jewel 3 OSDs replica 3: CEPH Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 3 OSDs replica 3: CEPH Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 3 OSDs replica 1: CEPH Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 1 OSD: CEPH Luminous, 1 OSD, Bluestore

Disk Mount Options Details

  attr2,inode64,noquota,relatime,rw,seclabel

Python Details

  Python 2.7.5

Security Details

  SELinux + KPTI + Load fences + Retpoline without IBPB Protection
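
The CEPH configurations above differ mainly in release (Jewel with Filestore vs. Luminous with Bluestore), OSD count, journal placement and pool replication factor (replica 1 vs. replica 3). The deployment steps themselves are not part of this export; the sketch below only illustrates, with hypothetical pool names and a hypothetical placement-group count, how pools with those two replication factors are typically created with the standard ceph CLI of that era.

  #!/usr/bin/env python3
  """Illustrative sketch: create RBD pools with the two replication factors
  compared in this result (replica 1 vs. replica 3).

  Pool names and the placement-group count are hypothetical; the ceph
  commands are the standard upstream CLI (Jewel/Luminous era).
  """
  import subprocess

  PG_NUM = "128"  # hypothetical placement-group count

  def create_pool(name: str, replica_size: int) -> None:
      # Create the pool, then set its replication factor ("size") and a
      # matching "min_size" so I/O is permitted with that many copies.
      subprocess.run(["ceph", "osd", "pool", "create", name, PG_NUM], check=True)
      subprocess.run(["ceph", "osd", "pool", "set", name, "size",
                      str(replica_size)], check=True)
      subprocess.run(["ceph", "osd", "pool", "set", name, "min_size",
                      str(min(replica_size, 2))], check=True)

  if __name__ == "__main__":
      create_pool("bench-replica1", 1)  # hypothetical pool for the replica-1 runs
      create_pool("bench-replica3", 3)  # hypothetical pool for the replica-3 runs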

Results Summary

Seventeen tests were run across the nine configurations listed above: Compile Bench (Read Compiled Tree, Initial Create, Compile), PostgreSQL pgbench (On-Disk, Normal Load, Read Write), Apache Benchmark (Static Web Page Serving), Gzip Compression (Linux Source Tree Archiving To .tar.gz), PostMark (Disk Transaction Performance), Unpacking The Linux Kernel (linux-4.15.tar.xz), Threaded I/O Tester (64MB Random Write and 64MB Random Read, 32 Threads), Dbench (1, 12, 48 and 128 Clients), FS-Mark (1000 Files, 1MB Size), SQLite (Timed SQLite Insertions) and AIO-Stress (Random Write). Not every test was run on every configuration; per-test results for each configuration follow.

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6 - MB/s, more is better

  CEPH Jewel 1 OSD w/ external Journal: 260.32 (SE +/- 2.94, N = 3)
  CEPH Jewel 1 OSD: 260.08 (SE +/- 0.73, N = 3)
  CEPH Jewel 3 OSDs replica 3: 250.96 (SE +/- 5.76, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 236.52 (SE +/- 5.63, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 239.00 (SE +/- 1.50, N = 3)
  CEPH luminous bluestore 1 OSD: 259.65 (SE +/- 7.99, N = 3)

Compile Bench

Test: Initial Create

Compile Bench 0.6 - MB/s, more is better

  CEPH Jewel 1 OSD w/ external Journal: 135.49 (SE +/- 4.01, N = 3)
  CEPH Jewel 1 OSD: 144.53 (SE +/- 2.37, N = 3)
  CEPH Jewel 3 OSDs replica 3: 136.01 (SE +/- 1.49, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 134.45 (SE +/- 1.69, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 139.52 (SE +/- 1.96, N = 3)
  CEPH luminous bluestore 1 OSD: 145.29 (SE +/- 2.37, N = 3)

Compile Bench

Test: Compile

Compile Bench 0.6 - MB/s, more is better

  CEPH Jewel 1 OSD w/ external Journal: 1028.88 (SE +/- 22.78, N = 6)
  CEPH Jewel 1 OSD: 1148.88 (SE +/- 15.77, N = 3)
  CEPH Jewel 3 OSDs replica 3: 1112.43 (SE +/- 4.80, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 1025.83 (SE +/- 19.45, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1: 916.83 (SE +/- 28.08, N = 6)
  CEPH luminous bluestore 1 OSD: 1168.81 (SE +/- 14.27, N = 3)

PostgreSQL pgbench

Scaling: On-Disk - Test: Normal Load - Mode: Read Write

PostgreSQL pgbench 10.3 - TPS, more is better

  Direct SSD io=native cache=none: 3642.91 (SE +/- 14.08, N = 3)
  CEPH Jewel 1 OSD w/ external Journal: 1824.70 (SE +/- 63.93, N = 3)

  (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Apache Benchmark

Static Web Page Serving

Apache Benchmark 2.4.29 - Requests Per Second, more is better

  local filesystem: 7307.72 (SE +/- 37.64, N = 3)
  CEPH Jewel 3 OSDs replica 1: 7336.67 (SE +/- 99.52, N = 6)
  Direct SSD io=native cache=none: 7272.34 (SE +/- 125.92, N = 4)
  CEPH Jewel 1 OSD w/ external Journal: 7162.87 (SE +/- 197.05, N = 6)
  CEPH Jewel 1 OSD: 8550.11 (SE +/- 49.14, N = 3)
  CEPH Jewel 3 OSDs replica 3: 7961.19 (SE +/- 128.93, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3: 6755.53 (SE +/- 80.76, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 7729.99 (SE +/- 88.00, N = 3)

  (CC) gcc options: -shared -fPIC -O2 -pthread

Gzip Compression

Linux Source Tree Archiving To .tar.gz

Gzip Compression - Seconds, fewer is better

  local filesystem: 71.33 (SE +/- 2.64, N = 6)
  CEPH Jewel 3 OSDs replica 1: 73.37 (SE +/- 1.77, N = 6)
  Direct SSD io=native cache=none: 69.39 (SE +/- 2.46, N = 6)
  CEPH Jewel 1 OSD w/ external Journal: 67.57 (SE +/- 1.35, N = 3)
  CEPH Jewel 1 OSD: 71.74 (SE +/- 2.16, N = 6)
  CEPH Jewel 3 OSDs replica 3: 70.37 (SE +/- 2.62, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3: 74.62 (SE +/- 1.47, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 66.58 (SE +/- 0.70, N = 3)

PostMark

Disk Transaction Performance

PostMark 1.51 - TPS, more is better

  local filesystem: 2409
  CEPH Jewel 3 OSDs replica 1: 2149
  Direct SSD io=native cache=none: 2299
  CEPH Jewel 1 OSD w/ external Journal: 2206
  CEPH Jewel 1 OSD: 2443
  CEPH Jewel 3 OSDs replica 3: 2273
  CEPH luminous bluestore 3 OSDs replica 3: 2066
  CEPH luminous bluestore 3 OSDs replica 1: 2434

  Per-run standard errors (seven values reported, not mapped to individual configurations in the export): SE +/- 53.62 (N = 6), 16.19 (N = 3), 35.31 (N = 5), 34.53 (N = 3), 21.11 (N = 3), 31.10 (N = 3), 15.67 (N = 3)
  (CC) gcc options: -O3

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel - Seconds, fewer is better

  local filesystem: 14.45 (SE +/- 0.07, N = 4)
  CEPH Jewel 3 OSDs replica 1: 15.41 (SE +/- 0.19, N = 8)
  Direct SSD io=native cache=none: 14.71 (SE +/- 0.19, N = 7)
  CEPH Jewel 1 OSD w/ external Journal: 14.77 (SE +/- 0.42, N = 8)
  CEPH Jewel 1 OSD: 14.53 (SE +/- 0.14, N = 4)
  CEPH Jewel 3 OSDs replica 3: 15.68 (SE +/- 0.24, N = 5)
  CEPH luminous bluestore 3 OSDs replica 3: 16.30 (SE +/- 0.27, N = 4)
  CEPH luminous bluestore 3 OSDs replica 1: 15.33 (SE +/- 0.33, N = 8)

Threaded I/O Tester

64MB Random Write - 32 Threads

Threaded I/O Tester 20170503 - MB/s, more is better

  local filesystem: 958.96 (SE +/- 27.51, N = 6)
  CEPH Jewel 3 OSDs replica 1: 337.00 (SE +/- 5.79, N = 3)
  Direct SSD io=native cache=none: 555.54 (SE +/- 10.53, N = 3)
  CEPH Jewel 1 OSD w/ external Journal: 300.23 (SE +/- 1.00, N = 3)
  CEPH Jewel 1 OSD: 299.60 (SE +/- 5.80, N = 6)
  CEPH Jewel 3 OSDs replica 3: 214.01 (SE +/- 2.37, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 151.00 (SE +/- 9.35, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1: 255.32 (SE +/- 3.91, N = 3)
  CEPH luminous bluestore 1 OSD: 229.61 (SE +/- 3.19, N = 3)

  (CC) gcc options: -O2

Threaded I/O Tester

64MB Random Read - 32 Threads

Threaded I/O Tester 20170503 - MB/s, more is better

  local filesystem: 60691.53 (SE +/- 3323.01, N = 6)
  CEPH Jewel 3 OSDs replica 1: 107041.83 (SE +/- 1303.43, N = 3)
  Direct SSD io=native cache=none: 115753.71 (SE +/- 1990.78, N = 6)
  CEPH Jewel 1 OSD w/ external Journal: 100449.37 (SE +/- 2822.07, N = 6)
  CEPH Jewel 1 OSD: 102558.87 (SE +/- 2213.07, N = 6)
  CEPH Jewel 3 OSDs replica 3: 84936.34 (SE +/- 9550.32, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3: 100973.58 (SE +/- 2403.25, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1: 108942.81 (SE +/- 7596.75, N = 6)
  CEPH luminous bluestore 1 OSD: 105283.54 (SE +/- 885.93, N = 3)

  (CC) gcc options: -O2

Dbench

1 Clients

Dbench 4.0 - MB/s, more is better

  local filesystem: 179.02 (SE +/- 2.91, N = 3)
  CEPH Jewel 3 OSDs replica 1: 82.85 (SE +/- 0.76, N = 3)
  Direct SSD io=native cache=none: 197.93 (SE +/- 1.04, N = 3)
  CEPH Jewel 1 OSD w/ external Journal: 101.27 (SE +/- 1.69, N = 3)
  CEPH Jewel 1 OSD: 98.67 (SE +/- 1.69, N = 4)
  CEPH Jewel 3 OSDs replica 3: 56.05 (SE +/- 2.01, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3: 54.67 (SE +/- 0.39, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 67.09 (SE +/- 0.30, N = 3)
  CEPH luminous bluestore 1 OSD: 73.94 (SE +/- 0.56, N = 3)

  (CC) gcc options: -lpopt -O2

Dbench

128 Clients

Dbench 4.0 - MB/s, more is better

  local filesystem: 959.00 (SE +/- 10.43, N = 3)
  CEPH Jewel 3 OSDs replica 1: 965.17 (SE +/- 11.71, N = 3)
  Direct SSD io=native cache=none: 1336.95 (SE +/- 6.02, N = 3)
  CEPH Jewel 1 OSD w/ external Journal: 1055.64 (SE +/- 11.99, N = 3)
  CEPH Jewel 1 OSD: 970.31 (SE +/- 2.85, N = 3)
  CEPH Jewel 3 OSDs replica 3: 779.86 (SE +/- 5.18, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 771.70 (SE +/- 3.58, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 754.90 (SE +/- 6.81, N = 3)

  (CC) gcc options: -lpopt -O2

Dbench

48 Clients

Dbench 4.0 - MB/s, more is better

  local filesystem: 812.43 (SE +/- 98.18, N = 6)
  CEPH Jewel 3 OSDs replica 1: 968.60 (SE +/- 7.36, N = 3)
  Direct SSD io=native cache=none: 1220.01 (SE +/- 2.82, N = 3)
  CEPH Jewel 1 OSD w/ external Journal: 1055.65 (SE +/- 2.21, N = 3)
  CEPH Jewel 1 OSD: 938.32 (SE +/- 8.61, N = 3)
  CEPH Jewel 3 OSDs replica 3: 712.22 (SE +/- 1.19, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 679.26 (SE +/- 2.02, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 768.75 (SE +/- 3.97, N = 3)
  CEPH luminous bluestore 1 OSD: 842.77 (SE +/- 12.96, N = 3)

  (CC) gcc options: -lpopt -O2

Dbench

12 Clients

Dbench 4.0 - MB/s, more is better

  local filesystem: 1285.75 (SE +/- 4.49, N = 3)
  CEPH Jewel 3 OSDs replica 1: 683.49 (SE +/- 1.93, N = 3)
  Direct SSD io=native cache=none: 800.77 (SE +/- 7.08, N = 3)
  CEPH Jewel 1 OSD w/ external Journal: 773.20 (SE +/- 2.72, N = 3)
  CEPH Jewel 1 OSD: 691.87 (SE +/- 2.75, N = 3)
  CEPH Jewel 3 OSDs replica 3: 417.51 (SE +/- 0.66, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 344.71 (SE +/- 3.55, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 480.21 (SE +/- 1.35, N = 3)

  (CC) gcc options: -lpopt -O2
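
The Dbench results above were collected at 1, 12, 48 and 128 clients through the Phoronix Test Suite. Outside that harness, a comparable client-count sweep can be driven with the dbench CLI directly; the sketch below is only an illustration of that idea, with an assumed runtime, loadfile path and target directory.

  #!/usr/bin/env python3
  """Illustrative sketch: sweep dbench over the client counts used above.

  Not the harness used for this result (the Phoronix Test Suite drove the
  actual runs); runtime, loadfile path and target directory are assumptions.
  """
  import subprocess

  CLIENT_COUNTS = [1, 12, 48, 128]
  RUNTIME_SECONDS = "60"                      # assumed per-run duration
  LOADFILE = "/usr/share/dbench/client.txt"   # assumed loadfile location
  TARGET_DIR = "/mnt/bench"                   # assumed filesystem under test

  def run_sweep() -> None:
      for clients in CLIENT_COUNTS:
          # dbench reports an average throughput (MB/s) figure for each run.
          subprocess.run(["dbench", "-t", RUNTIME_SECONDS, "-c", LOADFILE,
                          "-D", TARGET_DIR, str(clients)], check=True)

  if __name__ == "__main__":
      run_sweep()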

FS-Mark

1000 Files, 1MB Size

FS-Mark 3.3 - Files/s, more is better

  local filesystem: 152.13 (SE +/- 4.84, N = 6)
  CEPH Jewel 3 OSDs replica 1: 87.98 (SE +/- 1.36, N = 5)
  Direct SSD io=native cache=none: 159.03 (SE +/- 1.29, N = 3)
  CEPH Jewel 1 OSD w/ external Journal: 95.50 (SE +/- 1.35, N = 6)
  CEPH Jewel 1 OSD: 83.53 (SE +/- 0.80, N = 3)
  CEPH Jewel 3 OSDs replica 3: 61.93 (SE +/- 0.29, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 66.07 (SE +/- 0.64, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 82.60 (SE +/- 1.25, N = 4)
  CEPH luminous bluestore 1 OSD: 83.60 (SE +/- 0.46, N = 3)

  (CC) gcc options: -static

SQLite

Timed SQLite Insertions

SQLite 3.22 - Seconds, fewer is better

  local filesystem: 20.61 (SE +/- 0.06, N = 3)
  CEPH Jewel 3 OSDs replica 1: 52.75 (SE +/- 0.77, N = 4)
  Direct SSD io=native cache=none: 17.29 (SE +/- 0.28, N = 6)
  CEPH Jewel 1 OSD w/ external Journal: 45.10 (SE +/- 0.34, N = 3)
  CEPH Jewel 1 OSD: 46.21 (SE +/- 0.10, N = 3)
  CEPH Jewel 3 OSDs replica 3: 98.30 (SE +/- 0.38, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 109.48 (SE +/- 0.93, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1: 69.95 (SE +/- 1.07, N = 3)
  CEPH luminous bluestore 1 OSD: 65.14 (SE +/- 0.78, N = 3)

  (CC) gcc options: -O2 -ldl -lpthread

AIO-Stress

Random Write

AIO-Stress 0.21 - MB/s, more is better

  local filesystem: 1478.38 (SE +/- 73.02, N = 6)
  CEPH Jewel 3 OSDs replica 1: 1721.82 (SE +/- 25.85, N = 3)
  Direct SSD io=native cache=none: 1802.66 (SE +/- 55.02, N = 6)
  CEPH Jewel 1 OSD w/ external Journal: 1822.87 (SE +/- 109.84, N = 6)
  CEPH Jewel 1 OSD: 1340.55 (SE +/- 13.54, N = 3)
  CEPH Jewel 3 OSDs replica 3: 1818.64 (SE +/- 24.90, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3: 1690.54 (SE +/- 25.18, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1: 1754.61 (SE +/- 96.24, N = 6)
  CEPH luminous bluestore 1 OSD: 1773.67 (SE +/- 68.62, N = 6)

  (CC) gcc options: -pthread -laio


Phoronix Test Suite v10.8.4