Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:

    phoronix-test-suite benchmark 1805308-FO-OCDCEPHBE08

ocdcephbenchmarks - Phoronix Test Suite

Running disk benchmarks against various CEPH versions and configurations.
HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&rdt&grs
System Details

  Processor:          8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard:        Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory:             2 x 16384 MB RAM
  Disk:               28GB / 1024GB / 1788GB (varies by configuration)
  Graphics:           cirrusdrmfb
  OS:                 CentOS Linux 7
  Kernel:             3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler:           GCC 4.8.5 20150623
  File-System:        xfs
  Screen Resolution:  1024x768
  System Layer:       KVM QEMU

Compiler Details: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

Configurations Tested

  local filesystem:                          The root filesystem of the VM: QCOW on XFS on LVM on MD-RAID (RAID 1) over two SSDs, Micron 5100 MAX 240GB
  CEPH Jewel 3 OSDs replica 1:               CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  Direct SSD io=native cache=none:           Direct SSD, Micron 5100 MAX 1.9 TB
  CEPH Jewel 1 OSD w/ external Journal:      CEPH, Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
  CEPH Jewel 1 OSD:                          CEPH, Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
  CEPH Jewel 3 OSDs replica 3:               CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 3 OSDs replica 3:  CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 3 OSDs replica 1:  CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB
  CEPH luminous bluestore 1 OSD

  Disk Mount Options:  attr2,inode64,noquota,relatime,rw,seclabel
  Python:              Python 2.7.5
  Security:            SELinux + KPTI + Load fences + Retpoline without IBPB Protection
Result Overview

Columns: A = local filesystem; B = CEPH Jewel 3 OSDs replica 1; C = Direct SSD io=native cache=none;
D = CEPH Jewel 1 OSD w/ external Journal; E = CEPH Jewel 1 OSD; F = CEPH Jewel 3 OSDs replica 3;
G = CEPH luminous bluestore 3 OSDs replica 3; H = CEPH luminous bluestore 3 OSDs replica 1;
I = CEPH luminous bluestore 1 OSD. A dash means the test was not run on that configuration.

  Test (unit)                                              A          B          C          D          E          F          G          H          I
  tiobench: 64MB Rand Write - 32 Threads (MB/s)       958.96     337.00     555.54     300.23     299.60     214.01     151.00     255.32     229.61
  sqlite: Timed SQLite Insertions (sec)                20.61      52.75      17.29      45.10      46.21      98.30     109.48      69.95      65.14
  dbench: 12 Clients (MB/s)                          1285.75     683.49     800.77     773.20     691.87     417.51     344.71     480.21          -
  dbench: 1 Clients (MB/s)                            179.02      82.85     197.93     101.27      98.67      56.05      54.67      67.09      73.94
  fs-mark: 1000 Files, 1MB Size (Files/s)             152.13      87.98     159.03      95.50      83.53      61.93      66.07      82.60      83.60
  dbench: 48 Clients (MB/s)                           812.43     968.60    1220.01    1055.65     938.32     712.22     679.26     768.75     842.77
  dbench: 128 Clients (MB/s)                          959.00     965.17    1336.95    1055.64     970.31     779.86     771.70     754.90          -
  compilebench: Compile (MB/s)                             -          -          -    1028.88    1148.88    1112.43    1025.83     916.83    1168.81
  apache: Static Web Page Serving (Req/s)            7307.72    7336.67    7272.34    7162.87    8550.11    7961.19    6755.53    7729.99          -
  postmark: Disk Transaction Performance (TPS)          2409       2149       2299       2206       2443       2273       2066       2434          -
  unpack-linux: linux-4.15.tar.xz (sec)                14.45      15.41      14.71      14.77      14.53      15.68      16.30      15.33          -
  compilebench: Read Compiled Tree (MB/s)                  -          -          -     260.32     260.08     250.96     236.52     239.00     259.65
  compilebench: Initial Create (MB/s)                      -          -          -     135.49     144.53     136.01     134.45     139.52     145.29
  pgbench: On-Disk - Normal Load - Read Write (TPS)        -          -    3642.91    1824.70          -          -          -          -          -
  compress-gzip: Linux Source Tree To .tar.gz (sec)    71.33      73.37      69.39      67.57      71.74      70.37      74.62      66.58          -
  tiobench: 64MB Rand Read - 32 Threads (MB/s)      60691.53  107041.83  115753.71  100449.37  102558.87   84936.34  100973.58  108942.81  105283.54
  aio-stress: Rand Write (MB/s)                      1478.38    1721.82    1802.66    1822.87    1340.55    1818.64    1690.54    1754.61    1773.67
Threaded I/O Tester 20170503 - 64MB Random Write - 32 Threads (MB/s, more is better)

  local filesystem                           958.96  (SE +/- 27.51, N = 6)
  CEPH Jewel 3 OSDs replica 1                337.00  (SE +/- 5.79, N = 3)
  Direct SSD io=native cache=none            555.54  (SE +/- 10.53, N = 3)
  CEPH Jewel 1 OSD w/ external Journal       300.23  (SE +/- 1.00, N = 3)
  CEPH Jewel 1 OSD                           299.60  (SE +/- 5.80, N = 6)
  CEPH Jewel 3 OSDs replica 3                214.01  (SE +/- 2.37, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3   151.00  (SE +/- 9.35, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1   255.32  (SE +/- 3.91, N = 3)
  CEPH luminous bluestore 1 OSD              229.61  (SE +/- 3.19, N = 3)
  1. (CC) gcc options: -O2
SQLite 3.22 - Timed SQLite Insertions (seconds, fewer is better)

  local filesystem                            20.61  (SE +/- 0.06, N = 3)
  CEPH Jewel 3 OSDs replica 1                 52.75  (SE +/- 0.77, N = 4)
  Direct SSD io=native cache=none             17.29  (SE +/- 0.28, N = 6)
  CEPH Jewel 1 OSD w/ external Journal        45.10  (SE +/- 0.34, N = 3)
  CEPH Jewel 1 OSD                            46.21  (SE +/- 0.10, N = 3)
  CEPH Jewel 3 OSDs replica 3                 98.30  (SE +/- 0.38, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3   109.48  (SE +/- 0.93, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1    69.95  (SE +/- 1.07, N = 3)
  CEPH luminous bluestore 1 OSD               65.14  (SE +/- 0.78, N = 3)
  1. (CC) gcc options: -O2 -ldl -lpthread
Dbench 4.0 - 12 Clients (MB/s, more is better)

  local filesystem                          1285.75  (SE +/- 4.49, N = 3)
  CEPH Jewel 3 OSDs replica 1                683.49  (SE +/- 1.93, N = 3)
  Direct SSD io=native cache=none            800.77  (SE +/- 7.08, N = 3)
  CEPH Jewel 1 OSD w/ external Journal       773.20  (SE +/- 2.72, N = 3)
  CEPH Jewel 1 OSD                           691.87  (SE +/- 2.75, N = 3)
  CEPH Jewel 3 OSDs replica 3                417.51  (SE +/- 0.66, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3   344.71  (SE +/- 3.55, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1   480.21  (SE +/- 1.35, N = 3)
  1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 1 Clients (MB/s, more is better)

  local filesystem                           179.02  (SE +/- 2.91, N = 3)
  CEPH Jewel 3 OSDs replica 1                 82.85  (SE +/- 0.76, N = 3)
  Direct SSD io=native cache=none            197.93  (SE +/- 1.04, N = 3)
  CEPH Jewel 1 OSD w/ external Journal       101.27  (SE +/- 1.69, N = 3)
  CEPH Jewel 1 OSD                            98.67  (SE +/- 1.69, N = 4)
  CEPH Jewel 3 OSDs replica 3                 56.05  (SE +/- 2.01, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3    54.67  (SE +/- 0.39, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1    67.09  (SE +/- 0.30, N = 3)
  CEPH luminous bluestore 1 OSD               73.94  (SE +/- 0.56, N = 3)
  1. (CC) gcc options: -lpopt -O2
FS-Mark 3.3 - 1000 Files, 1MB Size (Files/s, more is better)

  local filesystem                           152.13  (SE +/- 4.84, N = 6)
  CEPH Jewel 3 OSDs replica 1                 87.98  (SE +/- 1.36, N = 5)
  Direct SSD io=native cache=none            159.03  (SE +/- 1.29, N = 3)
  CEPH Jewel 1 OSD w/ external Journal        95.50  (SE +/- 1.35, N = 6)
  CEPH Jewel 1 OSD                            83.53  (SE +/- 0.80, N = 3)
  CEPH Jewel 3 OSDs replica 3                 61.93  (SE +/- 0.29, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3    66.07  (SE +/- 0.64, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1    82.60  (SE +/- 1.25, N = 4)
  CEPH luminous bluestore 1 OSD               83.60  (SE +/- 0.46, N = 3)
  1. (CC) gcc options: -static
Dbench 4.0 - 48 Clients (MB/s, more is better)

  local filesystem                           812.43  (SE +/- 98.18, N = 6)
  CEPH Jewel 3 OSDs replica 1                968.60  (SE +/- 7.36, N = 3)
  Direct SSD io=native cache=none           1220.01  (SE +/- 2.82, N = 3)
  CEPH Jewel 1 OSD w/ external Journal      1055.65  (SE +/- 2.21, N = 3)
  CEPH Jewel 1 OSD                           938.32  (SE +/- 8.61, N = 3)
  CEPH Jewel 3 OSDs replica 3                712.22  (SE +/- 1.19, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3   679.26  (SE +/- 2.02, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1   768.75  (SE +/- 3.97, N = 3)
  CEPH luminous bluestore 1 OSD              842.77  (SE +/- 12.96, N = 3)
  1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 128 Clients (MB/s, more is better)

  local filesystem                           959.00  (SE +/- 10.43, N = 3)
  CEPH Jewel 3 OSDs replica 1                965.17  (SE +/- 11.71, N = 3)
  Direct SSD io=native cache=none           1336.95  (SE +/- 6.02, N = 3)
  CEPH Jewel 1 OSD w/ external Journal      1055.64  (SE +/- 11.99, N = 3)
  CEPH Jewel 1 OSD                           970.31  (SE +/- 2.85, N = 3)
  CEPH Jewel 3 OSDs replica 3                779.86  (SE +/- 5.18, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3   771.70  (SE +/- 3.58, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1   754.90  (SE +/- 6.81, N = 3)
  1. (CC) gcc options: -lpopt -O2
Compile Bench 0.6 - Test: Compile (MB/s, more is better)

  CEPH Jewel 1 OSD w/ external Journal      1028.88  (SE +/- 22.78, N = 6)
  CEPH Jewel 1 OSD                          1148.88  (SE +/- 15.77, N = 3)
  CEPH Jewel 3 OSDs replica 3               1112.43  (SE +/- 4.80, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3  1025.83  (SE +/- 19.45, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1   916.83  (SE +/- 28.08, N = 6)
  CEPH luminous bluestore 1 OSD             1168.81  (SE +/- 14.27, N = 3)
Apache Benchmark 2.4.29 - Static Web Page Serving (Requests Per Second, more is better)

  local filesystem                          7307.72  (SE +/- 37.64, N = 3)
  CEPH Jewel 3 OSDs replica 1               7336.67  (SE +/- 99.52, N = 6)
  Direct SSD io=native cache=none           7272.34  (SE +/- 125.92, N = 4)
  CEPH Jewel 1 OSD w/ external Journal      7162.87  (SE +/- 197.05, N = 6)
  CEPH Jewel 1 OSD                          8550.11  (SE +/- 49.14, N = 3)
  CEPH Jewel 3 OSDs replica 3               7961.19  (SE +/- 128.93, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3  6755.53  (SE +/- 80.76, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1  7729.99  (SE +/- 88.00, N = 3)
  1. (CC) gcc options: -shared -fPIC -O2 -pthread
PostMark 1.51 - Disk Transaction Performance (TPS, more is better)

  local filesystem                          2409
  CEPH Jewel 3 OSDs replica 1               2149
  Direct SSD io=native cache=none           2299
  CEPH Jewel 1 OSD w/ external Journal      2206
  CEPH Jewel 1 OSD                          2443
  CEPH Jewel 3 OSDs replica 3               2273
  CEPH luminous bluestore 3 OSDs replica 3  2066
  CEPH luminous bluestore 3 OSDs replica 1  2434
  SE values as exported (seven values for eight results): +/- 53.62 (N = 6), 16.19 (N = 3), 35.31 (N = 5), 34.53 (N = 3), 21.11 (N = 3), 31.10 (N = 3), 15.67 (N = 3)
  1. (CC) gcc options: -O3
Unpacking The Linux Kernel - linux-4.15.tar.xz (seconds, fewer is better)

  local filesystem                            14.45  (SE +/- 0.07, N = 4)
  CEPH Jewel 3 OSDs replica 1                 15.41  (SE +/- 0.19, N = 8)
  Direct SSD io=native cache=none             14.71  (SE +/- 0.19, N = 7)
  CEPH Jewel 1 OSD w/ external Journal        14.77  (SE +/- 0.42, N = 8)
  CEPH Jewel 1 OSD                            14.53  (SE +/- 0.14, N = 4)
  CEPH Jewel 3 OSDs replica 3                 15.68  (SE +/- 0.24, N = 5)
  CEPH luminous bluestore 3 OSDs replica 3    16.30  (SE +/- 0.27, N = 4)
  CEPH luminous bluestore 3 OSDs replica 1    15.33  (SE +/- 0.33, N = 8)
Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, more is better)

  CEPH Jewel 1 OSD w/ external Journal       260.32  (SE +/- 2.94, N = 3)
  CEPH Jewel 1 OSD                           260.08  (SE +/- 0.73, N = 3)
  CEPH Jewel 3 OSDs replica 3                250.96  (SE +/- 5.76, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3   236.52  (SE +/- 5.63, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1   239.00  (SE +/- 1.50, N = 3)
  CEPH luminous bluestore 1 OSD              259.65  (SE +/- 7.99, N = 3)
Compile Bench 0.6 - Test: Initial Create (MB/s, more is better)

  CEPH Jewel 1 OSD w/ external Journal       135.49  (SE +/- 4.01, N = 3)
  CEPH Jewel 1 OSD                           144.53  (SE +/- 2.37, N = 3)
  CEPH Jewel 3 OSDs replica 3                136.01  (SE +/- 1.49, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3   134.45  (SE +/- 1.69, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1   139.52  (SE +/- 1.96, N = 3)
  CEPH luminous bluestore 1 OSD              145.29  (SE +/- 2.37, N = 3)
PostgreSQL pgbench 10.3 - Scaling: On-Disk - Test: Normal Load - Mode: Read Write (TPS, more is better)

  Direct SSD io=native cache=none           3642.91  (SE +/- 14.08, N = 3)
  CEPH Jewel 1 OSD w/ external Journal      1824.70  (SE +/- 63.93, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Gzip Compression - Linux Source Tree Archiving To .tar.gz (seconds, fewer is better)

  local filesystem                            71.33  (SE +/- 2.64, N = 6)
  CEPH Jewel 3 OSDs replica 1                 73.37  (SE +/- 1.77, N = 6)
  Direct SSD io=native cache=none             69.39  (SE +/- 2.46, N = 6)
  CEPH Jewel 1 OSD w/ external Journal        67.57  (SE +/- 1.35, N = 3)
  CEPH Jewel 1 OSD                            71.74  (SE +/- 2.16, N = 6)
  CEPH Jewel 3 OSDs replica 3                 70.37  (SE +/- 2.62, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3    74.62  (SE +/- 1.47, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1    66.58  (SE +/- 0.70, N = 3)
Threaded I/O Tester 20170503 - 64MB Random Read - 32 Threads (MB/s, more is better)

  local filesystem                          60691.53   (SE +/- 3323.01, N = 6)
  CEPH Jewel 3 OSDs replica 1               107041.83  (SE +/- 1303.43, N = 3)
  Direct SSD io=native cache=none           115753.71  (SE +/- 1990.78, N = 6)
  CEPH Jewel 1 OSD w/ external Journal      100449.37  (SE +/- 2822.07, N = 6)
  CEPH Jewel 1 OSD                          102558.87  (SE +/- 2213.07, N = 6)
  CEPH Jewel 3 OSDs replica 3               84936.34   (SE +/- 9550.32, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3  100973.58  (SE +/- 2403.25, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1  108942.81  (SE +/- 7596.75, N = 6)
  CEPH luminous bluestore 1 OSD             105283.54  (SE +/- 885.93, N = 3)
  1. (CC) gcc options: -O2
AIO-Stress 0.21 - Random Write (MB/s, more is better)

  local filesystem                          1478.38  (SE +/- 73.02, N = 6)
  CEPH Jewel 3 OSDs replica 1               1721.82  (SE +/- 25.85, N = 3)
  Direct SSD io=native cache=none           1802.66  (SE +/- 55.02, N = 6)
  CEPH Jewel 1 OSD w/ external Journal      1822.87  (SE +/- 109.84, N = 6)
  CEPH Jewel 1 OSD                          1340.55  (SE +/- 13.54, N = 3)
  CEPH Jewel 3 OSDs replica 3               1818.64  (SE +/- 24.90, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3  1690.54  (SE +/- 25.18, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1  1754.61  (SE +/- 96.24, N = 6)
  CEPH luminous bluestore 1 OSD             1773.67  (SE +/- 68.62, N = 6)
  1. (CC) gcc options: -pthread -laio
Phoronix Test Suite v10.8.4