ocdcephbenchmarks

Disk benchmarks run against various CEPH versions and configurations.

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&grs&sro.

Test System (common to all configurations)

Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS)
Memory: 2 x 16384 MB RAM
Disk: 28GB, 1024GB, or 1788GB (varies by configuration; see System Details)
Graphics: cirrusdrmfb
OS: CentOS Linux 7
Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64)
Compiler: GCC 4.8.5 20150623
File-System: xfs
Screen Resolution: 1024x768
System Layer: KVM QEMU

Compiler Details
- --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

System Details
- local filesystem: The root filesystem of the VM; QCOW on XFS on LVM on MD-RAID RAID 1 over two SSDs, Micron 5100 MAX 240GB
- CEPH Jewel 3 OSDs replica 1: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
- Direct SSD io=native cache=none: Direct SSD, Micron 5100 MAX 1.9 TB
- CEPH Jewel 1 OSD w/ external Journal: CEPH, Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
- CEPH Jewel 1 OSD: CEPH, Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
- CEPH Jewel 3 OSDs replica 3: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
- CEPH luminous bluestore 3 OSDs replica 3: CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB
- CEPH luminous bluestore 3 OSDs replica 1: CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB

Disk Mount Options Details
- attr2,inode64,noquota,relatime,rw,seclabel

Python Details
- Python 2.7.5

Security Details
- SELinux + KPTI + Load fences + Retpoline without IBPB Protection

Overview of tests run (per-test results follow below):

- Threaded I/O Tester: 64MB Rand Write - 32 Threads; 64MB Rand Read - 32 Threads
- SQLite: Timed SQLite Insertions
- Dbench: 1, 12, 48, and 128 Clients
- FS-Mark: 1000 Files, 1MB Size
- Compile Bench: Compile; Read Compiled Tree; Initial Create
- Apache Benchmark: Static Web Page Serving
- PostMark: Disk Transaction Performance
- Unpacking The Linux Kernel: linux-4.15.tar.xz
- PostgreSQL pgbench: On-Disk - Normal Load - Read Write
- Gzip Compression: Linux Source Tree Archiving To .tar.gz
- AIO-Stress: Rand Write

Threaded I/O Tester

64MB Random Write - 32 Threads

Threaded I/O Tester 20170503 (MB/s, more is better)

- CEPH Jewel 1 OSD: 299.60 (SE +/- 5.80, N = 6)
- CEPH Jewel 1 OSD w/ external Journal: 300.23 (SE +/- 1.00, N = 3)
- CEPH Jewel 3 OSDs replica 1: 337.00 (SE +/- 5.79, N = 3)
- CEPH Jewel 3 OSDs replica 3: 214.01 (SE +/- 2.37, N = 3)
- CEPH luminous bluestore 1 OSD: 229.61 (SE +/- 3.19, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 255.32 (SE +/- 3.91, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 151.00 (SE +/- 9.35, N = 6)
- Direct SSD io=native cache=none: 555.54 (SE +/- 10.53, N = 3)
- local filesystem: 958.96 (SE +/- 27.51, N = 6)

(CC) gcc options: -O2
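Each result in these tables is a mean over N runs with its standard error. As a quick illustration of where the "SE +/-, N =" figures come from (the per-run throughputs below are hypothetical, since the export does not include raw runs), the SE is the sample standard deviation divided by sqrt(N):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-run throughputs in MB/s; the raw runs behind the
# published means are not part of this export.
runs = [951.2, 948.7, 976.9]

n = len(runs)
se = stdev(runs) / sqrt(n)  # standard error of the mean
print(f"{mean(runs):.2f} MB/s, SE +/- {se:.2f}, N = {n}")
# -> 958.93 MB/s, SE +/- 9.01, N = 3
```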

SQLite

Timed SQLite Insertions

SQLite 3.22 (Seconds, fewer is better)

- CEPH Jewel 1 OSD: 46.21 (SE +/- 0.10, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 45.10 (SE +/- 0.34, N = 3)
- CEPH Jewel 3 OSDs replica 1: 52.75 (SE +/- 0.77, N = 4)
- CEPH Jewel 3 OSDs replica 3: 98.30 (SE +/- 0.38, N = 3)
- CEPH luminous bluestore 1 OSD: 65.14 (SE +/- 0.78, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 69.95 (SE +/- 1.07, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 109.48 (SE +/- 0.93, N = 3)
- Direct SSD io=native cache=none: 17.29 (SE +/- 0.28, N = 6)
- local filesystem: 20.61 (SE +/- 0.06, N = 3)

(CC) gcc options: -O2 -ldl -lpthread

Dbench

12 Clients

Dbench 4.0 (MB/s, more is better)

- CEPH Jewel 1 OSD: 691.87 (SE +/- 2.75, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 773.20 (SE +/- 2.72, N = 3)
- CEPH Jewel 3 OSDs replica 1: 683.49 (SE +/- 1.93, N = 3)
- CEPH Jewel 3 OSDs replica 3: 417.51 (SE +/- 0.66, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 480.21 (SE +/- 1.35, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 344.71 (SE +/- 3.55, N = 3)
- Direct SSD io=native cache=none: 800.77 (SE +/- 7.08, N = 3)
- local filesystem: 1285.75 (SE +/- 4.49, N = 3)

(CC) gcc options: -lpopt -O2

Dbench

1 Client

Dbench 4.0 (MB/s, more is better)

- CEPH Jewel 1 OSD: 98.67 (SE +/- 1.69, N = 4)
- CEPH Jewel 1 OSD w/ external Journal: 101.27 (SE +/- 1.69, N = 3)
- CEPH Jewel 3 OSDs replica 1: 82.85 (SE +/- 0.76, N = 3)
- CEPH Jewel 3 OSDs replica 3: 56.05 (SE +/- 2.01, N = 6)
- CEPH luminous bluestore 1 OSD: 73.94 (SE +/- 0.56, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 67.09 (SE +/- 0.30, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 54.67 (SE +/- 0.39, N = 3)
- Direct SSD io=native cache=none: 197.93 (SE +/- 1.04, N = 3)
- local filesystem: 179.02 (SE +/- 2.91, N = 3)

(CC) gcc options: -lpopt -O2

FS-Mark

1000 Files, 1MB Size

FS-Mark 3.3 (Files/s, more is better)

- CEPH Jewel 1 OSD: 83.53 (SE +/- 0.80, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 95.50 (SE +/- 1.35, N = 6)
- CEPH Jewel 3 OSDs replica 1: 87.98 (SE +/- 1.36, N = 5)
- CEPH Jewel 3 OSDs replica 3: 61.93 (SE +/- 0.29, N = 3)
- CEPH luminous bluestore 1 OSD: 83.60 (SE +/- 0.46, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 82.60 (SE +/- 1.25, N = 4)
- CEPH luminous bluestore 3 OSDs replica 3: 66.07 (SE +/- 0.64, N = 3)
- Direct SSD io=native cache=none: 159.03 (SE +/- 1.29, N = 3)
- local filesystem: 152.13 (SE +/- 4.84, N = 6)

(CC) gcc options: -static

Dbench

48 Clients

Dbench 4.0 (MB/s, more is better)

- CEPH Jewel 1 OSD: 938.32 (SE +/- 8.61, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1055.65 (SE +/- 2.21, N = 3)
- CEPH Jewel 3 OSDs replica 1: 968.60 (SE +/- 7.36, N = 3)
- CEPH Jewel 3 OSDs replica 3: 712.22 (SE +/- 1.19, N = 3)
- CEPH luminous bluestore 1 OSD: 842.77 (SE +/- 12.96, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 768.75 (SE +/- 3.97, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 679.26 (SE +/- 2.02, N = 3)
- Direct SSD io=native cache=none: 1220.01 (SE +/- 2.82, N = 3)
- local filesystem: 812.43 (SE +/- 98.18, N = 6)

(CC) gcc options: -lpopt -O2

Dbench

128 Clients

Dbench 4.0 (MB/s, more is better)

- CEPH Jewel 1 OSD: 970.31 (SE +/- 2.85, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1055.64 (SE +/- 11.99, N = 3)
- CEPH Jewel 3 OSDs replica 1: 965.17 (SE +/- 11.71, N = 3)
- CEPH Jewel 3 OSDs replica 3: 779.86 (SE +/- 5.18, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 754.90 (SE +/- 6.81, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 771.70 (SE +/- 3.58, N = 3)
- Direct SSD io=native cache=none: 1336.95 (SE +/- 6.02, N = 3)
- local filesystem: 959.00 (SE +/- 10.43, N = 3)

(CC) gcc options: -lpopt -O2

Compile Bench

Test: Compile

Compile Bench 0.6 (MB/s, more is better)

- CEPH Jewel 1 OSD: 1148.88 (SE +/- 15.77, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1028.88 (SE +/- 22.78, N = 6)
- CEPH Jewel 3 OSDs replica 3: 1112.43 (SE +/- 4.80, N = 3)
- CEPH luminous bluestore 1 OSD: 1168.81 (SE +/- 14.27, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 916.83 (SE +/- 28.08, N = 6)
- CEPH luminous bluestore 3 OSDs replica 3: 1025.83 (SE +/- 19.45, N = 6)

Apache Benchmark

Static Web Page Serving

Apache Benchmark 2.4.29 (Requests Per Second, more is better)

- CEPH Jewel 1 OSD: 8550.11 (SE +/- 49.14, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 7162.87 (SE +/- 197.05, N = 6)
- CEPH Jewel 3 OSDs replica 1: 7336.67 (SE +/- 99.52, N = 6)
- CEPH Jewel 3 OSDs replica 3: 7961.19 (SE +/- 128.93, N = 6)
- CEPH luminous bluestore 3 OSDs replica 1: 7729.99 (SE +/- 88.00, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 6755.53 (SE +/- 80.76, N = 3)
- Direct SSD io=native cache=none: 7272.34 (SE +/- 125.92, N = 4)
- local filesystem: 7307.72 (SE +/- 37.64, N = 3)

(CC) gcc options: -shared -fPIC -O2 -pthread

PostMark

Disk Transaction Performance

PostMark 1.51 (TPS, more is better)

- CEPH Jewel 1 OSD: 2443
- CEPH Jewel 1 OSD w/ external Journal: 2206
- CEPH Jewel 3 OSDs replica 1: 2149
- CEPH Jewel 3 OSDs replica 3: 2273
- CEPH luminous bluestore 3 OSDs replica 1: 2434
- CEPH luminous bluestore 3 OSDs replica 3: 2066
- Direct SSD io=native cache=none: 2299
- local filesystem: 2409

Standard errors as reported (the export lists seven SE figures for these eight results, so the per-configuration pairing is not recoverable): SE +/- 21.11, N = 3; 34.53, N = 3; 16.19, N = 3; 31.10, N = 3; 15.67, N = 3; 35.31, N = 5; 53.62, N = 6

(CC) gcc options: -O3

Unpacking The Linux Kernel

linux-4.15.tar.xz

Unpacking The Linux Kernel (Seconds, fewer is better)

- CEPH Jewel 1 OSD: 14.53 (SE +/- 0.14, N = 4)
- CEPH Jewel 1 OSD w/ external Journal: 14.77 (SE +/- 0.42, N = 8)
- CEPH Jewel 3 OSDs replica 1: 15.41 (SE +/- 0.19, N = 8)
- CEPH Jewel 3 OSDs replica 3: 15.68 (SE +/- 0.24, N = 5)
- CEPH luminous bluestore 3 OSDs replica 1: 15.33 (SE +/- 0.33, N = 8)
- CEPH luminous bluestore 3 OSDs replica 3: 16.30 (SE +/- 0.27, N = 4)
- Direct SSD io=native cache=none: 14.71 (SE +/- 0.19, N = 7)
- local filesystem: 14.45 (SE +/- 0.07, N = 4)

Compile Bench

Test: Read Compiled Tree

Compile Bench 0.6 (MB/s, more is better)

- CEPH Jewel 1 OSD: 260.08 (SE +/- 0.73, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 260.32 (SE +/- 2.94, N = 3)
- CEPH Jewel 3 OSDs replica 3: 250.96 (SE +/- 5.76, N = 3)
- CEPH luminous bluestore 1 OSD: 259.65 (SE +/- 7.99, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 239.00 (SE +/- 1.50, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 236.52 (SE +/- 5.63, N = 3)

Compile Bench

Test: Initial Create

Compile Bench 0.6 (MB/s, more is better)

- CEPH Jewel 1 OSD: 144.53 (SE +/- 2.37, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 135.49 (SE +/- 4.01, N = 3)
- CEPH Jewel 3 OSDs replica 3: 136.01 (SE +/- 1.49, N = 3)
- CEPH luminous bluestore 1 OSD: 145.29 (SE +/- 2.37, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 139.52 (SE +/- 1.96, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 134.45 (SE +/- 1.69, N = 3)

PostgreSQL pgbench

Scaling: On-Disk - Test: Normal Load - Mode: Read Write

PostgreSQL pgbench 10.3 (TPS, more is better)

- CEPH Jewel 1 OSD w/ external Journal: 1824.70 (SE +/- 63.93, N = 3)
- Direct SSD io=native cache=none: 3642.91 (SE +/- 14.08, N = 3)

(CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
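To put the two pgbench numbers in relation, dividing the Ceph figure by the direct-SSD figure (both taken from the results above) shows the Jewel configuration with an external journal reaching roughly half the direct-SSD transaction rate:

```python
# TPS values from the pgbench results above (more is better).
direct_ssd = 3642.91
ceph_jewel_ext_journal = 1824.70

ratio = ceph_jewel_ext_journal / direct_ssd
print(f"{ratio:.1%} of direct-SSD TPS")  # -> 50.1% of direct-SSD TPS
```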

Gzip Compression

Linux Source Tree Archiving To .tar.gz

Gzip Compression (Seconds, fewer is better)

- CEPH Jewel 1 OSD: 71.74 (SE +/- 2.16, N = 6)
- CEPH Jewel 1 OSD w/ external Journal: 67.57 (SE +/- 1.35, N = 3)
- CEPH Jewel 3 OSDs replica 1: 73.37 (SE +/- 1.77, N = 6)
- CEPH Jewel 3 OSDs replica 3: 70.37 (SE +/- 2.62, N = 6)
- CEPH luminous bluestore 3 OSDs replica 1: 66.58 (SE +/- 0.70, N = 3)
- CEPH luminous bluestore 3 OSDs replica 3: 74.62 (SE +/- 1.47, N = 3)
- Direct SSD io=native cache=none: 69.39 (SE +/- 2.46, N = 6)
- local filesystem: 71.33 (SE +/- 2.64, N = 6)

Threaded I/O Tester

64MB Random Read - 32 Threads

Threaded I/O Tester 20170503 (MB/s, more is better)

- CEPH Jewel 1 OSD: 102558.87 (SE +/- 2213.07, N = 6)
- CEPH Jewel 1 OSD w/ external Journal: 100449.37 (SE +/- 2822.07, N = 6)
- CEPH Jewel 3 OSDs replica 1: 107041.83 (SE +/- 1303.43, N = 3)
- CEPH Jewel 3 OSDs replica 3: 84936.34 (SE +/- 9550.32, N = 6)
- CEPH luminous bluestore 1 OSD: 105283.54 (SE +/- 885.93, N = 3)
- CEPH luminous bluestore 3 OSDs replica 1: 108942.81 (SE +/- 7596.75, N = 6)
- CEPH luminous bluestore 3 OSDs replica 3: 100973.58 (SE +/- 2403.25, N = 6)
- Direct SSD io=native cache=none: 115753.71 (SE +/- 1990.78, N = 6)
- local filesystem: 60691.53 (SE +/- 3323.01, N = 6)

(CC) gcc options: -O2

AIO-Stress

Random Write

AIO-Stress 0.21 (MB/s, more is better)

- CEPH Jewel 1 OSD: 1340.55 (SE +/- 13.54, N = 3)
- CEPH Jewel 1 OSD w/ external Journal: 1822.87 (SE +/- 109.84, N = 6)
- CEPH Jewel 3 OSDs replica 1: 1721.82 (SE +/- 25.85, N = 3)
- CEPH Jewel 3 OSDs replica 3: 1818.64 (SE +/- 24.90, N = 3)
- CEPH luminous bluestore 1 OSD: 1773.67 (SE +/- 68.62, N = 6)
- CEPH luminous bluestore 3 OSDs replica 1: 1754.61 (SE +/- 96.24, N = 6)
- CEPH luminous bluestore 3 OSDs replica 3: 1690.54 (SE +/- 25.18, N = 6)
- Direct SSD io=native cache=none: 1802.66 (SE +/- 55.02, N = 6)
- local filesystem: 1478.38 (SE +/- 73.02, N = 6)

(CC) gcc options: -pthread -laio
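Composite rankings across heterogeneous tests like these are typically built on a geometric mean of per-test ratios (result divided by a baseline) rather than an arithmetic average, so that no single test dominates. A minimal sketch, using made-up ratios rather than values from the tables above:

```python
from math import prod

# Hypothetical per-test ratios (configuration result / baseline result);
# illustrative only, not taken from the tables in this report.
ratios = [0.54, 0.31, 0.42]

geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"geometric mean: {geo_mean:.3f}")  # -> geometric mean: 0.413
```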


Phoronix Test Suite v10.8.4