ocdcephbenchmarks
Running disk benchmark against various CEPH versions and configurations

local filesystem:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 3 OSDs replica 1:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1024GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

Direct SSD io=native cache=none:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1788GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 1 OSD w/ external Journal:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 1 OSD:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 3 OSDs replica 3:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH luminous bluestore 3 OSDs replica 3:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1024GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH luminous bluestore 3 OSDs replica 1:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1024GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH luminous bluestore 1 OSD:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1024GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

AIO-Stress 0.21
Random Write
MB/s > Higher Is Better
local filesystem ......................... 1478.38 |======================
CEPH Jewel 3 OSDs replica 1 .............. 1721.82 |==========================
Direct SSD io=native cache=none .......... 1802.66 |===========================
CEPH Jewel 1 OSD w/ external Journal ..... 1822.87 |===========================
CEPH Jewel 1 OSD ......................... 1340.55 |====================
CEPH Jewel 3 OSDs replica 3 .............. 1818.64 |===========================
CEPH luminous bluestore 3 OSDs replica 3 . 1690.54 |=========================
CEPH luminous bluestore 3 OSDs replica 1 . 1754.61 |==========================
CEPH luminous bluestore 1 OSD ............ 1773.67 |==========================
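The replica 1 and replica 3 configurations above differ only in the pool replication factor. For reference, a minimal sketch of how such pools might be prepared with the ceph CLI; the pool name "rbd", the PG count, and the device path in the QEMU note are assumptions for illustration, not values taken from these results:

    # create the benchmark pool (pool name and PG count are assumptions)
    ceph osd pool create rbd 128 128
    # "replica 1" configurations: a single copy of each object
    ceph osd pool set rbd size 1
    ceph osd pool set rbd min_size 1
    # "replica 3" configurations: three copies of each object
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
    # the "Direct SSD io=native cache=none" baseline corresponds to
    # QEMU drive flags along the lines of (device path assumed):
    #   -drive file=/dev/sdb,format=raw,cache=none,aio=native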
SQLite 3.22
Timed SQLite Insertions
Seconds < Lower Is Better
local filesystem ......................... 20.61 |=====
CEPH Jewel 3 OSDs replica 1 .............. 52.75 |=============
Direct SSD io=native cache=none .......... 17.29 |====
CEPH Jewel 1 OSD w/ external Journal ..... 45.10 |============
CEPH Jewel 1 OSD ......................... 46.21 |============
CEPH Jewel 3 OSDs replica 3 .............. 98.30 |=========================
CEPH luminous bluestore 3 OSDs replica 3 . 109.48 |============================
CEPH luminous bluestore 3 OSDs replica 1 . 69.95 |==================
CEPH luminous bluestore 1 OSD ............ 65.14 |=================

FS-Mark 3.3
1000 Files, 1MB Size
Files/s > Higher Is Better
local filesystem ......................... 152.13 |===========================
CEPH Jewel 3 OSDs replica 1 .............. 87.98 |===============
Direct SSD io=native cache=none .......... 159.03 |============================
CEPH Jewel 1 OSD w/ external Journal ..... 95.50 |=================
CEPH Jewel 1 OSD ......................... 83.53 |===============
CEPH Jewel 3 OSDs replica 3 .............. 61.93 |===========
CEPH luminous bluestore 3 OSDs replica 3 . 66.07 |============
CEPH luminous bluestore 3 OSDs replica 1 . 82.60 |===============
CEPH luminous bluestore 1 OSD ............ 83.60 |===============

Dbench 4.0
12 Clients
MB/s > Higher Is Better
local filesystem ......................... 1285.75 |===========================
CEPH Jewel 3 OSDs replica 1 .............. 683.49 |==============
Direct SSD io=native cache=none .......... 800.77 |=================
CEPH Jewel 1 OSD w/ external Journal ..... 773.20 |================
CEPH Jewel 1 OSD ......................... 691.87 |===============
CEPH Jewel 3 OSDs replica 3 .............. 417.51 |=========
CEPH luminous bluestore 3 OSDs replica 3 . 344.71 |=======
CEPH luminous bluestore 3 OSDs replica 1 . 480.21 |==========

Dbench 4.0
48 Clients
MB/s > Higher Is Better
local filesystem ......................... 812.43 |==================
CEPH Jewel 3 OSDs replica 1 .............. 968.60 |=====================
Direct SSD io=native cache=none .......... 1220.01 |===========================
CEPH Jewel 1 OSD w/ external Journal ..... 1055.65 |=======================
CEPH Jewel 1 OSD ......................... 938.32 |=====================
CEPH Jewel 3 OSDs replica 3 .............. 712.22 |================
CEPH luminous bluestore 3 OSDs replica 3 . 679.26 |===============
CEPH luminous bluestore 3 OSDs replica 1 . 768.75 |=================
CEPH luminous bluestore 1 OSD ............ 842.77 |===================

Dbench 4.0
128 Clients
MB/s > Higher Is Better
local filesystem ......................... 959.00 |===================
CEPH Jewel 3 OSDs replica 1 .............. 965.17 |===================
Direct SSD io=native cache=none .......... 1336.95 |===========================
CEPH Jewel 1 OSD w/ external Journal ..... 1055.64 |=====================
CEPH Jewel 1 OSD ......................... 970.31 |====================
CEPH Jewel 3 OSDs replica 3 .............. 779.86 |================
CEPH luminous bluestore 3 OSDs replica 3 . 771.70 |================
CEPH luminous bluestore 3 OSDs replica 1 . 754.90 |===============

Dbench 4.0
1 Clients
MB/s > Higher Is Better
local filesystem ......................... 179.02 |=========================
CEPH Jewel 3 OSDs replica 1 .............. 82.85 |============
Direct SSD io=native cache=none .......... 197.93 |============================
CEPH Jewel 1 OSD w/ external Journal ..... 101.27 |==============
CEPH Jewel 1 OSD ......................... 98.67 |==============
CEPH Jewel 3 OSDs replica 3 .............. 56.05 |========
CEPH luminous bluestore 3 OSDs replica 3 . 54.67 |========
CEPH luminous bluestore 3 OSDs replica 1 . 67.09 |=========
CEPH luminous bluestore 1 OSD ............ 73.94 |==========
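The Dbench client counts above (1, 12, 48 and 128) map directly to dbench's process-count argument. A sketch of an equivalent standalone run outside the benchmark harness; the mount point and 60-second runtime are assumptions:

    # 12 simulated clients for 60 seconds against the filesystem under test
    dbench -t 60 -D /mnt/ceph-test 12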
Threaded I/O Tester 20170503
64MB Random Read - 32 Threads
MB/s > Higher Is Better
local filesystem ......................... 60691.53 |=============
CEPH Jewel 3 OSDs replica 1 .............. 107041.83 |=======================
Direct SSD io=native cache=none .......... 115753.71 |=========================
CEPH Jewel 1 OSD w/ external Journal ..... 100449.37 |======================
CEPH Jewel 1 OSD ......................... 102558.87 |======================
CEPH Jewel 3 OSDs replica 3 .............. 84936.34 |==================
CEPH luminous bluestore 3 OSDs replica 3 . 100973.58 |======================
CEPH luminous bluestore 3 OSDs replica 1 . 108942.81 |========================
CEPH luminous bluestore 1 OSD ............ 105283.54 |=======================

Threaded I/O Tester 20170503
64MB Random Write - 32 Threads
MB/s > Higher Is Better
local filesystem ......................... 958.96 |============================
CEPH Jewel 3 OSDs replica 1 .............. 337.00 |==========
Direct SSD io=native cache=none .......... 555.54 |================
CEPH Jewel 1 OSD w/ external Journal ..... 300.23 |=========
CEPH Jewel 1 OSD ......................... 299.60 |=========
CEPH Jewel 3 OSDs replica 3 .............. 214.01 |======
CEPH luminous bluestore 3 OSDs replica 3 . 151.00 |====
CEPH luminous bluestore 3 OSDs replica 1 . 255.32 |=======
CEPH luminous bluestore 1 OSD ............ 229.61 |=======

Unpacking The Linux Kernel
linux-4.15.tar.xz
Seconds < Lower Is Better
local filesystem ......................... 14.45 |==========================
CEPH Jewel 3 OSDs replica 1 .............. 15.41 |===========================
Direct SSD io=native cache=none .......... 14.71 |==========================
CEPH Jewel 1 OSD w/ external Journal ..... 14.77 |==========================
CEPH Jewel 1 OSD ......................... 14.53 |==========================
CEPH Jewel 3 OSDs replica 3 .............. 15.68 |============================
CEPH luminous bluestore 3 OSDs replica 3 . 16.30 |=============================
CEPH luminous bluestore 3 OSDs replica 1 . 15.33 |===========================

PostMark 1.51
Disk Transaction Performance
TPS > Higher Is Better
local filesystem ......................... 2409 |==============================
CEPH Jewel 3 OSDs replica 1 .............. 2149 |==========================
Direct SSD io=native cache=none .......... 2299 |============================
CEPH Jewel 1 OSD w/ external Journal ..... 2206 |===========================
CEPH Jewel 1 OSD ......................... 2443 |==============================
CEPH Jewel 3 OSDs replica 3 .............. 2273 |============================
CEPH luminous bluestore 3 OSDs replica 3 . 2066 |=========================
CEPH luminous bluestore 3 OSDs replica 1 . 2434 |==============================
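PostMark is driven by a short command script rather than flags. A sketch of a comparable standalone invocation; the working directory, file count, and transaction count are illustrative assumptions, not the exact parameters of the test profile:

    postmark <<'EOF'
    set location /mnt/ceph-test
    set number 500
    set transactions 25000
    run
    quit
    EOF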
Gzip Compression
Linux Source Tree Archiving To .tar.gz
Seconds < Lower Is Better
local filesystem ......................... 71.33 |============================
CEPH Jewel 3 OSDs replica 1 .............. 73.37 |=============================
Direct SSD io=native cache=none .......... 69.39 |===========================
CEPH Jewel 1 OSD w/ external Journal ..... 67.57 |==========================
CEPH Jewel 1 OSD ......................... 71.74 |============================
CEPH Jewel 3 OSDs replica 3 .............. 70.37 |===========================
CEPH luminous bluestore 3 OSDs replica 3 . 74.62 |=============================
CEPH luminous bluestore 3 OSDs replica 1 . 66.58 |==========================

Apache Benchmark 2.4.29
Static Web Page Serving
Requests Per Second > Higher Is Better
local filesystem ......................... 7307.72 |=======================
CEPH Jewel 3 OSDs replica 1 .............. 7336.67 |=======================
Direct SSD io=native cache=none .......... 7272.34 |=======================
CEPH Jewel 1 OSD w/ external Journal ..... 7162.87 |=======================
CEPH Jewel 1 OSD ......................... 8550.11 |===========================
CEPH Jewel 3 OSDs replica 3 .............. 7961.19 |=========================
CEPH luminous bluestore 3 OSDs replica 3 . 6755.53 |=====================
CEPH luminous bluestore 3 OSDs replica 1 . 7729.99 |========================

PostgreSQL pgbench 10.3
Scaling: On-Disk - Test: Normal Load - Mode: Read Write
TPS > Higher Is Better
Direct SSD io=native cache=none ...... 3642.91 |===============================
CEPH Jewel 1 OSD w/ external Journal . 1824.70 |================

Compile Bench 0.6
Test: Compile
MB/s > Higher Is Better
CEPH Jewel 1 OSD w/ external Journal ..... 1028.88 |========================
CEPH Jewel 1 OSD ......................... 1148.88 |===========================
CEPH Jewel 3 OSDs replica 3 .............. 1112.43 |==========================
CEPH luminous bluestore 3 OSDs replica 3 . 1025.83 |========================
CEPH luminous bluestore 3 OSDs replica 1 . 916.83 |=====================
CEPH luminous bluestore 1 OSD ............ 1168.81 |===========================

Compile Bench 0.6
Test: Initial Create
MB/s > Higher Is Better
CEPH Jewel 1 OSD w/ external Journal ..... 135.49 |==========================
CEPH Jewel 1 OSD ......................... 144.53 |============================
CEPH Jewel 3 OSDs replica 3 .............. 136.01 |==========================
CEPH luminous bluestore 3 OSDs replica 3 . 134.45 |==========================
CEPH luminous bluestore 3 OSDs replica 1 . 139.52 |===========================
CEPH luminous bluestore 1 OSD ............ 145.29 |============================

Compile Bench 0.6
Test: Read Compiled Tree
MB/s > Higher Is Better
CEPH Jewel 1 OSD w/ external Journal ..... 260.32 |============================
CEPH Jewel 1 OSD ......................... 260.08 |============================
CEPH Jewel 3 OSDs replica 3 .............. 250.96 |===========================
CEPH luminous bluestore 3 OSDs replica 3 . 236.52 |=========================
CEPH luminous bluestore 3 OSDs replica 1 . 239.00 |==========================
CEPH luminous bluestore 1 OSD ............ 259.65 |============================
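All of the charts above follow the Phoronix Test Suite text-report format, so a comparable run can be scripted against the same test profiles. A sketch; the pts/ profile names are assumptions following the usual naming convention and should be verified against the installed suite:

    # run the same battery of disk tests against the current filesystem
    phoronix-test-suite benchmark pts/aio-stress pts/sqlite pts/fs-mark \
        pts/dbench pts/tiobench pts/unpack-linux pts/postmark \
        pts/compress-gzip pts/apache pts/pgbench pts/compilebench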