Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
    phoronix-test-suite benchmark 1805308-FO-OCDCEPHBE08

ocdcephbenchmarks - Phoronix Test Suite

ocdcephbenchmarks: Running disk benchmark against various CEPH versions and configurations.

HTML result view exported from: https://openbenchmarking.org/result/1805308-FO-OCDCEPHBE08&grt&rdt
Test configurations:
- local filesystem
- CEPH Jewel 3 OSDs replica 1
- Direct SSD io=native cache=none
- CEPH Jewel 1 OSD w/ external Journal
- CEPH Jewel 1 OSD
- CEPH Jewel 3 OSDs replica 3
- CEPH luminous bluestore 3 OSDs replica 3
- CEPH luminous bluestore 3 OSDs replica 1
- CEPH luminous bluestore 1 OSD

System under test:
  Processor:         8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard:       Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory:            2 x 16384 MB RAM
  Disk:              28GB (other values across configurations, as exported: 1024GB, 1788GB, 28GB, 1024GB)
  Graphics:          cirrusdrmfb
  OS:                CentOS Linux 7
  Kernel:            3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler:          GCC 4.8.5 20150623
  File-System:       xfs
  Screen Resolution: 1024x768
  System Layer:      KVM QEMU

Compiler Details: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic

System Details:
- local filesystem: The root filesystem of the VM; QCOW on XFS on LVM on MD-RAID RAID 1 over two SSDs (Micron 5100 MAX 240GB).
- CEPH Jewel 3 OSDs replica 1: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB.
- Direct SSD io=native cache=none: Direct SSD, Micron 5100 MAX 1.9 TB.
- CEPH Jewel 1 OSD w/ external Journal: CEPH, Jewel, 1 OSD, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB.
- CEPH Jewel 1 OSD: CEPH, Jewel, 1 OSD, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB.
- CEPH Jewel 3 OSDs replica 3: CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB.
- CEPH luminous bluestore 3 OSDs replica 3: CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB.
- CEPH luminous bluestore 3 OSDs replica 1: CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB.

Disk Mount Options Details: attr2,inode64,noquota,relatime,rw,seclabel
Python Details: Python 2.7.5
Security Details: SELinux + KPTI + Load fences + Retpoline without IBPB Protection
Tests run (per-configuration results with standard errors follow below):
- aio-stress: Rand Write
- apache: Static Web Page Serving
- compilebench: Compile / Initial Create / Read Compiled Tree
- dbench: 1 / 12 / 48 / 128 Clients
- fs-mark: 1000 Files, 1MB Size
- compress-gzip: Linux Source Tree Archiving To .tar.gz
- pgbench: On-Disk - Normal Load - Read Write
- postmark: Disk Transaction Performance
- sqlite: Timed SQLite Insertions
- tiobench: 64MB Rand Read - 32 Threads / 64MB Rand Write - 32 Threads
- unpack-linux: linux-4.15.tar.xz
AIO-Stress 0.21 - Random Write (MB/s, more is better)

    local filesystem                           1478.38   (SE +/- 73.02, N = 6)
    CEPH Jewel 3 OSDs replica 1                1721.82   (SE +/- 25.85, N = 3)
    Direct SSD io=native cache=none            1802.66   (SE +/- 55.02, N = 6)
    CEPH Jewel 1 OSD w/ external Journal       1822.87   (SE +/- 109.84, N = 6)
    CEPH Jewel 1 OSD                           1340.55   (SE +/- 13.54, N = 3)
    CEPH Jewel 3 OSDs replica 3                1818.64   (SE +/- 24.90, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3   1690.54   (SE +/- 25.18, N = 6)
    CEPH luminous bluestore 3 OSDs replica 1   1754.61   (SE +/- 96.24, N = 6)
    CEPH luminous bluestore 1 OSD              1773.67   (SE +/- 68.62, N = 6)
    1. (CC) gcc options: -pthread -laio
Apache Benchmark 2.4.29 - Static Web Page Serving (Requests Per Second, more is better)

    local filesystem                           7307.72   (SE +/- 37.64, N = 3)
    CEPH Jewel 3 OSDs replica 1                7336.67   (SE +/- 99.52, N = 6)
    Direct SSD io=native cache=none            7272.34   (SE +/- 125.92, N = 4)
    CEPH Jewel 1 OSD w/ external Journal       7162.87   (SE +/- 197.05, N = 6)
    CEPH Jewel 1 OSD                           8550.11   (SE +/- 49.14, N = 3)
    CEPH Jewel 3 OSDs replica 3                7961.19   (SE +/- 128.93, N = 6)
    CEPH luminous bluestore 3 OSDs replica 3   6755.53   (SE +/- 80.76, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1   7729.99   (SE +/- 88.00, N = 3)
    1. (CC) gcc options: -shared -fPIC -O2 -pthread
Compile Bench 0.6 - Test: Compile (MB/s, more is better)

    CEPH Jewel 1 OSD w/ external Journal       1028.88   (SE +/- 22.78, N = 6)
    CEPH Jewel 1 OSD                           1148.88   (SE +/- 15.77, N = 3)
    CEPH Jewel 3 OSDs replica 3                1112.43   (SE +/- 4.80, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3   1025.83   (SE +/- 19.45, N = 6)
    CEPH luminous bluestore 3 OSDs replica 1    916.83   (SE +/- 28.08, N = 6)
    CEPH luminous bluestore 1 OSD              1168.81   (SE +/- 14.27, N = 3)
Compile Bench 0.6 - Test: Initial Create (MB/s, more is better)

    CEPH Jewel 1 OSD w/ external Journal       135.49   (SE +/- 4.01, N = 3)
    CEPH Jewel 1 OSD                           144.53   (SE +/- 2.37, N = 3)
    CEPH Jewel 3 OSDs replica 3                136.01   (SE +/- 1.49, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3   134.45   (SE +/- 1.69, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1   139.52   (SE +/- 1.96, N = 3)
    CEPH luminous bluestore 1 OSD              145.29   (SE +/- 2.37, N = 3)
Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, more is better)

    CEPH Jewel 1 OSD w/ external Journal       260.32   (SE +/- 2.94, N = 3)
    CEPH Jewel 1 OSD                           260.08   (SE +/- 0.73, N = 3)
    CEPH Jewel 3 OSDs replica 3                250.96   (SE +/- 5.76, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3   236.52   (SE +/- 5.63, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1   239.00   (SE +/- 1.50, N = 3)
    CEPH luminous bluestore 1 OSD              259.65   (SE +/- 7.99, N = 3)
Dbench 4.0 - 12 Clients (MB/s, more is better)

    local filesystem                           1285.75   (SE +/- 4.49, N = 3)
    CEPH Jewel 3 OSDs replica 1                 683.49   (SE +/- 1.93, N = 3)
    Direct SSD io=native cache=none             800.77   (SE +/- 7.08, N = 3)
    CEPH Jewel 1 OSD w/ external Journal        773.20   (SE +/- 2.72, N = 3)
    CEPH Jewel 1 OSD                            691.87   (SE +/- 2.75, N = 3)
    CEPH Jewel 3 OSDs replica 3                 417.51   (SE +/- 0.66, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3    344.71   (SE +/- 3.55, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1    480.21   (SE +/- 1.35, N = 3)
    1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 48 Clients (MB/s, more is better)

    local filesystem                            812.43   (SE +/- 98.18, N = 6)
    CEPH Jewel 3 OSDs replica 1                 968.60   (SE +/- 7.36, N = 3)
    Direct SSD io=native cache=none            1220.01   (SE +/- 2.82, N = 3)
    CEPH Jewel 1 OSD w/ external Journal       1055.65   (SE +/- 2.21, N = 3)
    CEPH Jewel 1 OSD                            938.32   (SE +/- 8.61, N = 3)
    CEPH Jewel 3 OSDs replica 3                 712.22   (SE +/- 1.19, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3    679.26   (SE +/- 2.02, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1    768.75   (SE +/- 3.97, N = 3)
    CEPH luminous bluestore 1 OSD               842.77   (SE +/- 12.96, N = 3)
    1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 128 Clients (MB/s, more is better)

    local filesystem                            959.00   (SE +/- 10.43, N = 3)
    CEPH Jewel 3 OSDs replica 1                 965.17   (SE +/- 11.71, N = 3)
    Direct SSD io=native cache=none            1336.95   (SE +/- 6.02, N = 3)
    CEPH Jewel 1 OSD w/ external Journal       1055.64   (SE +/- 11.99, N = 3)
    CEPH Jewel 1 OSD                            970.31   (SE +/- 2.85, N = 3)
    CEPH Jewel 3 OSDs replica 3                 779.86   (SE +/- 5.18, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3    771.70   (SE +/- 3.58, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1    754.90   (SE +/- 6.81, N = 3)
    1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 1 Client (MB/s, more is better)

    local filesystem                            179.02   (SE +/- 2.91, N = 3)
    CEPH Jewel 3 OSDs replica 1                  82.85   (SE +/- 0.76, N = 3)
    Direct SSD io=native cache=none             197.93   (SE +/- 1.04, N = 3)
    CEPH Jewel 1 OSD w/ external Journal        101.27   (SE +/- 1.69, N = 3)
    CEPH Jewel 1 OSD                             98.67   (SE +/- 1.69, N = 4)
    CEPH Jewel 3 OSDs replica 3                  56.05   (SE +/- 2.01, N = 6)
    CEPH luminous bluestore 3 OSDs replica 3     54.67   (SE +/- 0.39, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1     67.09   (SE +/- 0.30, N = 3)
    CEPH luminous bluestore 1 OSD                73.94   (SE +/- 0.56, N = 3)
    1. (CC) gcc options: -lpopt -O2
FS-Mark 3.3 - 1000 Files, 1MB Size (Files/s, more is better)

    local filesystem                            152.13   (SE +/- 4.84, N = 6)
    CEPH Jewel 3 OSDs replica 1                  87.98   (SE +/- 1.36, N = 5)
    Direct SSD io=native cache=none             159.03   (SE +/- 1.29, N = 3)
    CEPH Jewel 1 OSD w/ external Journal         95.50   (SE +/- 1.35, N = 6)
    CEPH Jewel 1 OSD                             83.53   (SE +/- 0.80, N = 3)
    CEPH Jewel 3 OSDs replica 3                  61.93   (SE +/- 0.29, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3     66.07   (SE +/- 0.64, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1     82.60   (SE +/- 1.25, N = 4)
    CEPH luminous bluestore 1 OSD                83.60   (SE +/- 0.46, N = 3)
    1. (CC) gcc options: -static
Gzip Compression - Linux Source Tree Archiving To .tar.gz (Seconds, fewer is better)

    local filesystem                             71.33   (SE +/- 2.64, N = 6)
    CEPH Jewel 3 OSDs replica 1                  73.37   (SE +/- 1.77, N = 6)
    Direct SSD io=native cache=none              69.39   (SE +/- 2.46, N = 6)
    CEPH Jewel 1 OSD w/ external Journal         67.57   (SE +/- 1.35, N = 3)
    CEPH Jewel 1 OSD                             71.74   (SE +/- 2.16, N = 6)
    CEPH Jewel 3 OSDs replica 3                  70.37   (SE +/- 2.62, N = 6)
    CEPH luminous bluestore 3 OSDs replica 3     74.62   (SE +/- 1.47, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1     66.58   (SE +/- 0.70, N = 3)
PostgreSQL pgbench 10.3 - Scaling: On-Disk - Test: Normal Load - Mode: Read Write (TPS, more is better)

    Direct SSD io=native cache=none            3642.91   (SE +/- 14.08, N = 3)
    CEPH Jewel 1 OSD w/ external Journal       1824.70   (SE +/- 63.93, N = 3)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
PostMark 1.51 - Disk Transaction Performance (TPS, more is better)

    local filesystem                           2409
    CEPH Jewel 3 OSDs replica 1                2149
    Direct SSD io=native cache=none            2299
    CEPH Jewel 1 OSD w/ external Journal       2206
    CEPH Jewel 1 OSD                           2443
    CEPH Jewel 3 OSDs replica 3                2273
    CEPH luminous bluestore 3 OSDs replica 3   2066
    CEPH luminous bluestore 3 OSDs replica 1   2434
    Standard errors as exported (seven SE entries for eight results, so the per-configuration pairing is ambiguous): SE +/- 53.62, N = 6; SE +/- 16.19, N = 3; SE +/- 35.31, N = 5; SE +/- 34.53, N = 3; SE +/- 21.11, N = 3; SE +/- 31.10, N = 3; SE +/- 15.67, N = 3
    1. (CC) gcc options: -O3
SQLite 3.22 - Timed SQLite Insertions (Seconds, fewer is better)

    local filesystem                             20.61   (SE +/- 0.06, N = 3)
    CEPH Jewel 3 OSDs replica 1                  52.75   (SE +/- 0.77, N = 4)
    Direct SSD io=native cache=none              17.29   (SE +/- 0.28, N = 6)
    CEPH Jewel 1 OSD w/ external Journal         45.10   (SE +/- 0.34, N = 3)
    CEPH Jewel 1 OSD                             46.21   (SE +/- 0.10, N = 3)
    CEPH Jewel 3 OSDs replica 3                  98.30   (SE +/- 0.38, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3    109.48   (SE +/- 0.93, N = 3)
    CEPH luminous bluestore 3 OSDs replica 1     69.95   (SE +/- 1.07, N = 3)
    CEPH luminous bluestore 1 OSD                65.14   (SE +/- 0.78, N = 3)
    1. (CC) gcc options: -O2 -ldl -lpthread
Threaded I/O Tester 20170503 - 64MB Random Read - 32 Threads (MB/s, more is better)

    local filesystem                           60691.53   (SE +/- 3323.01, N = 6)
    CEPH Jewel 3 OSDs replica 1               107041.83   (SE +/- 1303.43, N = 3)
    Direct SSD io=native cache=none           115753.71   (SE +/- 1990.78, N = 6)
    CEPH Jewel 1 OSD w/ external Journal      100449.37   (SE +/- 2822.07, N = 6)
    CEPH Jewel 1 OSD                          102558.87   (SE +/- 2213.07, N = 6)
    CEPH Jewel 3 OSDs replica 3                84936.34   (SE +/- 9550.32, N = 6)
    CEPH luminous bluestore 3 OSDs replica 3  100973.58   (SE +/- 2403.25, N = 6)
    CEPH luminous bluestore 3 OSDs replica 1  108942.81   (SE +/- 7596.75, N = 6)
    CEPH luminous bluestore 1 OSD             105283.54   (SE +/- 885.93, N = 3)
    1. (CC) gcc options: -O2
Threaded I/O Tester 20170503 - 64MB Random Write - 32 Threads (MB/s, more is better)

    local filesystem                            958.96   (SE +/- 27.51, N = 6)
    CEPH Jewel 3 OSDs replica 1                 337.00   (SE +/- 5.79, N = 3)
    Direct SSD io=native cache=none             555.54   (SE +/- 10.53, N = 3)
    CEPH Jewel 1 OSD w/ external Journal        300.23   (SE +/- 1.00, N = 3)
    CEPH Jewel 1 OSD                            299.60   (SE +/- 5.80, N = 6)
    CEPH Jewel 3 OSDs replica 3                 214.01   (SE +/- 2.37, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3    151.00   (SE +/- 9.35, N = 6)
    CEPH luminous bluestore 3 OSDs replica 1    255.32   (SE +/- 3.91, N = 3)
    CEPH luminous bluestore 1 OSD               229.61   (SE +/- 3.19, N = 3)
    1. (CC) gcc options: -O2
Unpacking The Linux Kernel - linux-4.15.tar.xz (Seconds, fewer is better)

    local filesystem                             14.45   (SE +/- 0.07, N = 4)
    CEPH Jewel 3 OSDs replica 1                  15.41   (SE +/- 0.19, N = 8)
    Direct SSD io=native cache=none              14.71   (SE +/- 0.19, N = 7)
    CEPH Jewel 1 OSD w/ external Journal         14.77   (SE +/- 0.42, N = 8)
    CEPH Jewel 1 OSD                             14.53   (SE +/- 0.14, N = 4)
    CEPH Jewel 3 OSDs replica 3                  15.68   (SE +/- 0.24, N = 5)
    CEPH luminous bluestore 3 OSDs replica 3     16.30   (SE +/- 0.27, N = 4)
    CEPH luminous bluestore 3 OSDs replica 1     15.33   (SE +/- 0.33, N = 8)
Phoronix Test Suite v10.8.4