ocdcephbenchmarks
CEPH Jewel with 3 SSDs, each its own OSD, no external journal, replica 3, Micron 5100 MAX 1.92TB

local filesystem:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 3 OSDs:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1024GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

Direct SSD io=native cache=none:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1788GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 1 OSD w/ external Journal:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 1 OSD:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH jewel 3 OSDs replica 3:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
    OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

AIO-Stress 0.21
Random Write
MB/s > Higher Is Better
local filesystem ..................... 1478.38 |=========================
CEPH Jewel 3 OSDs .................... 1721.82 |=============================
Direct SSD io=native cache=none ...... 1802.66 |===============================
CEPH Jewel 1 OSD w/ external Journal . 1822.87 |===============================
CEPH Jewel 1 OSD ..................... 1340.55 |=======================
CEPH jewel 3 OSDs replica 3 .......... 1818.64 |===============================

SQLite 3.22
Timed SQLite Insertions
Seconds < Lower Is Better
local filesystem ..................... 20.61 |=======
CEPH Jewel 3 OSDs .................... 52.75 |==================
Direct SSD io=native cache=none ...... 17.29 |======
CEPH Jewel 1 OSD w/ external Journal . 45.10 |===============
CEPH Jewel 1 OSD ..................... 46.21 |================
CEPH jewel 3 OSDs replica 3 .......... 98.30 |=================================
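The replica 3 pool roughly doubles the SQLite insertion time of the otherwise identical 3 OSD run (98.30 vs. 52.75 seconds), which is the expected cost of each small synchronous write being acknowledged by additional OSDs. For reference, a minimal sketch of how a pool's replication factor is inspected and set on Ceph Jewel; the pool name "rbd" is an assumption, not a detail recorded in these results:

    # Inspect and set the replication factor of a pool (pool name "rbd"
    # is assumed; substitute whichever pool backs the guest images).
    ceph osd pool get rbd size          # current replica count
    ceph osd pool set rbd size 3        # store 3 copies of every object
    ceph osd pool set rbd min_size 2    # minimum copies required to serve I/O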
FS-Mark 3.3
1000 Files, 1MB Size
Files/s > Higher Is Better
local filesystem ..................... 152.13 |===============================
CEPH Jewel 3 OSDs .................... 87.98 |==================
Direct SSD io=native cache=none ...... 159.03 |================================
CEPH Jewel 1 OSD w/ external Journal . 95.50 |===================
CEPH Jewel 1 OSD ..................... 83.53 |=================
CEPH jewel 3 OSDs replica 3 .......... 61.93 |============

Dbench 4.0
12 Clients
MB/s > Higher Is Better
local filesystem ..................... 1285.75 |===============================
CEPH Jewel 3 OSDs .................... 683.49 |================
Direct SSD io=native cache=none ...... 800.77 |===================
CEPH Jewel 1 OSD w/ external Journal . 773.20 |===================
CEPH Jewel 1 OSD ..................... 691.87 |=================
CEPH jewel 3 OSDs replica 3 .......... 417.51 |==========

Dbench 4.0
48 Clients
MB/s > Higher Is Better
local filesystem ..................... 812.43 |=====================
CEPH Jewel 3 OSDs .................... 968.60 |=========================
Direct SSD io=native cache=none ...... 1220.01 |===============================
CEPH Jewel 1 OSD w/ external Journal . 1055.65 |===========================
CEPH Jewel 1 OSD ..................... 938.32 |========================
CEPH jewel 3 OSDs replica 3 .......... 712.22 |==================

Dbench 4.0
128 Clients
MB/s > Higher Is Better
local filesystem ..................... 959.00 |======================
CEPH Jewel 3 OSDs .................... 965.17 |======================
Direct SSD io=native cache=none ...... 1336.95 |===============================
CEPH Jewel 1 OSD w/ external Journal . 1055.64 |========================
CEPH Jewel 1 OSD ..................... 970.31 |======================
CEPH jewel 3 OSDs replica 3 .......... 779.86 |==================

Dbench 4.0
1 Clients
MB/s > Higher Is Better
local filesystem ..................... 179.02 |=============================
CEPH Jewel 3 OSDs .................... 82.85 |=============
Direct SSD io=native cache=none ...... 197.93 |================================
CEPH Jewel 1 OSD w/ external Journal . 101.27 |================
CEPH Jewel 1 OSD ..................... 98.67 |================
CEPH jewel 3 OSDs replica 3 .......... 56.05 |=========

Threaded I/O Tester 20170503
64MB Random Read - 32 Threads
MB/s > Higher Is Better
local filesystem ..................... 60691.53 |===============
CEPH Jewel 3 OSDs .................... 107041.83 |===========================
Direct SSD io=native cache=none ...... 115753.71 |=============================
CEPH Jewel 1 OSD w/ external Journal . 100449.37 |=========================
CEPH Jewel 1 OSD ..................... 102558.87 |==========================
CEPH jewel 3 OSDs replica 3 .......... 84936.34 |=====================

Threaded I/O Tester 20170503
64MB Random Write - 32 Threads
MB/s > Higher Is Better
local filesystem ..................... 958.96 |================================
CEPH Jewel 3 OSDs .................... 337.00 |===========
Direct SSD io=native cache=none ...... 555.54 |===================
CEPH Jewel 1 OSD w/ external Journal . 300.23 |==========
CEPH Jewel 1 OSD ..................... 299.60 |==========
CEPH jewel 3 OSDs replica 3 .......... 214.01 |=======

Unpacking The Linux Kernel
linux-4.15.tar.xz
Seconds < Lower Is Better
local filesystem ..................... 14.45 |==============================
CEPH Jewel 3 OSDs .................... 15.41 |================================
Direct SSD io=native cache=none ...... 14.71 |===============================
CEPH Jewel 1 OSD w/ external Journal . 14.77 |===============================
CEPH Jewel 1 OSD ..................... 14.53 |===============================
CEPH jewel 3 OSDs replica 3 .......... 15.68 |=================================
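The "Direct SSD io=native cache=none" system leads the heavier Dbench and random-read runs above, which matches its disk configuration: the host page cache is bypassed (cache=none, i.e. O_DIRECT) and Linux native AIO is used. A minimal sketch of the equivalent QEMU drive options; the device path /dev/sdb and the virtio bus are assumptions, not details taken from these results:

    # Pass a raw host block device to the guest with the host page cache
    # bypassed (cache=none) and Linux native AIO (aio=native).
    qemu-kvm -drive file=/dev/sdb,format=raw,if=virtio,cache=none,aio=native

In a libvirt domain XML the same settings appear as <driver name='qemu' type='raw' cache='none' io='native'/>.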
PostMark 1.51
Disk Transaction Performance
TPS > Higher Is Better
local filesystem ..................... 2409 |==================================
CEPH Jewel 3 OSDs .................... 2149 |==============================
Direct SSD io=native cache=none ...... 2299 |================================
CEPH Jewel 1 OSD w/ external Journal . 2206 |===============================
CEPH Jewel 1 OSD ..................... 2443 |==================================
CEPH jewel 3 OSDs replica 3 .......... 2273 |================================

Gzip Compression
Linux Source Tree Archiving To .tar.gz
Seconds < Lower Is Better
local filesystem ..................... 71.33 |================================
CEPH Jewel 3 OSDs .................... 73.37 |=================================
Direct SSD io=native cache=none ...... 69.39 |===============================
CEPH Jewel 1 OSD w/ external Journal . 67.57 |==============================
CEPH Jewel 1 OSD ..................... 71.74 |================================
CEPH jewel 3 OSDs replica 3 .......... 70.37 |================================

Apache Benchmark 2.4.29
Static Web Page Serving
Requests Per Second > Higher Is Better
local filesystem ..................... 7307.72 |==========================
CEPH Jewel 3 OSDs .................... 7336.67 |===========================
Direct SSD io=native cache=none ...... 7272.34 |==========================
CEPH Jewel 1 OSD w/ external Journal . 7162.87 |==========================
CEPH Jewel 1 OSD ..................... 8550.11 |===============================
CEPH jewel 3 OSDs replica 3 .......... 7961.19 |=============================

PostgreSQL pgbench 10.3
Scaling: On-Disk - Test: Normal Load - Mode: Read Write
TPS > Higher Is Better
Direct SSD io=native cache=none ...... 3642.91 |===============================
CEPH Jewel 1 OSD w/ external Journal . 1824.70 |================

Compile Bench 0.6
Test: Compile
MB/s > Higher Is Better
CEPH Jewel 1 OSD w/ external Journal . 1028.88 |============================
CEPH Jewel 1 OSD ..................... 1148.88 |===============================
CEPH jewel 3 OSDs replica 3 .......... 1112.43 |==============================

Compile Bench 0.6
Test: Initial Create
MB/s > Higher Is Better
CEPH Jewel 1 OSD w/ external Journal . 135.49 |==============================
CEPH Jewel 1 OSD ..................... 144.53 |================================
CEPH jewel 3 OSDs replica 3 .......... 136.01 |==============================

Compile Bench 0.6
Test: Read Compiled Tree
MB/s > Higher Is Better
CEPH Jewel 1 OSD w/ external Journal . 260.32 |================================
CEPH Jewel 1 OSD ..................... 260.08 |================================
CEPH jewel 3 OSDs replica 3 .......... 250.96 |===============================
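These tables follow the Phoronix Test Suite text-export layout, so a comparable run can be assembled from the standard profiles. A minimal sketch, assuming current OpenBenchmarking profile names, which may differ from the exact test versions recorded above:

    # Run the same disk-oriented profiles back to back; PTS prompts for a
    # result file name and merges each test into it.
    phoronix-test-suite benchmark aio-stress sqlite fs-mark dbench \
        tiobench unpack-linux postmark compress-gzip apache pgbench \
        compilebench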