ocdcephbenchmarks
KVM QEMU testing on CentOS Linux 7 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:

    phoronix-test-suite benchmark 1805300-FO-OCDCEPHBE02

HTML result view exported from: https://openbenchmarking.org/result/1805300-FO-OCDCEPHBE02&grs&rdt&rro
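The comparison run above can also be driven from a script, for example when re-running it on several guests. The following is only a minimal sketch, assuming phoronix-test-suite is installed and on PATH; it simply wraps the command shown above with Python's standard subprocess module.

import subprocess

# Minimal sketch: invoke the comparison run shown above from a script.
# Assumes phoronix-test-suite is installed and on PATH; the run itself
# remains interactive unless the suite is configured for batch operation.
result = subprocess.run(
    ["phoronix-test-suite", "benchmark", "1805300-FO-OCDCEPHBE02"],
    check=False,
)
print(f"phoronix-test-suite exited with code {result.returncode}")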
ocdcephbenchmarks - System Details

Configurations compared:
    local filesystem
    CEPH Jewel 3 OSDs
    Direct SSD io=native cache=none
    CEPH Jewel 1 OSD w/ external Journal
    CEPH Jewel 1 OSD
    CEPH jewel 3 OSDs replica 3
    CEPH luminous bluestore 3 OSDs replica 3
    CEPH luminous bluestore 3 OSDs replica 3 csum_type=none
    CEPH luminous bluestore 3 OSDs replica 1

Common hardware and software:
    Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
    Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS)
    Memory: 2 x 16384 MB RAM
    Disk: 28GB / 1024GB / 1788GB (varies by configuration)
    Graphics: cirrusdrmfb
    OS: CentOS Linux 7
    Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64)
    Compiler: GCC 4.8.5 20150623
    File-System: xfs
    Screen Resolution: 1024x768
    System Layer: KVM QEMU

Compiler Details: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic
Disk Mount Options Details: attr2,inode64,noquota,relatime,rw,seclabel
Python Details: Python 2.7.5
Security Details: SELinux + KPTI + Load fences + Retpoline without IBPB Protection
Tests included in this comparison (detailed per-configuration results follow):
    Threaded I/O Tester: 64MB Random Write - 32 Threads
    SQLite: Timed SQLite Insertions
    Dbench: 12 Clients
    Dbench: 1 Clients
    FS-Mark: 1000 Files, 1MB Size
    Dbench: 48 Clients
    Dbench: 128 Clients
    Apache Benchmark: Static Web Page Serving
    PostMark: Disk Transaction Performance
    Unpacking The Linux Kernel: linux-4.15.tar.xz
    Compile Bench: Read Compiled Tree
    Compile Bench: Initial Create
    Compile Bench: Compile
    PostgreSQL pgbench: On-Disk - Normal Load - Read Write
    Gzip Compression: Linux Source Tree Archiving To .tar.gz
    Threaded I/O Tester: 64MB Random Read - 32 Threads
    AIO-Stress: Random Write
Threaded I/O Tester 20170503: 64MB Random Write - 32 Threads (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 255.32 (SE +/- 3.91, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 151.00 (SE +/- 9.35, N = 6)
    CEPH jewel 3 OSDs replica 3: 214.01 (SE +/- 2.37, N = 3)
    CEPH Jewel 1 OSD: 299.60 (SE +/- 5.80, N = 6)
    CEPH Jewel 1 OSD w/ external Journal: 300.23 (SE +/- 1.00, N = 3)
    Direct SSD io=native cache=none: 555.54 (SE +/- 10.53, N = 3)
    CEPH Jewel 3 OSDs: 337.00 (SE +/- 5.79, N = 3)
    local filesystem: 958.96 (SE +/- 27.51, N = 6)
    1. (CC) gcc options: -O2
SQLite 3.22: Timed SQLite Insertions (Seconds, Fewer Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 69.95 (SE +/- 1.07, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3 csum_type=none: 107.78 (SE +/- 1.84, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 109.48 (SE +/- 0.93, N = 3)
    CEPH jewel 3 OSDs replica 3: 98.30 (SE +/- 0.38, N = 3)
    CEPH Jewel 1 OSD: 46.21 (SE +/- 0.10, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 45.10 (SE +/- 0.34, N = 3)
    Direct SSD io=native cache=none: 17.29 (SE +/- 0.28, N = 6)
    CEPH Jewel 3 OSDs: 52.75 (SE +/- 0.77, N = 4)
    local filesystem: 20.61 (SE +/- 0.06, N = 3)
    1. (CC) gcc options: -O2 -ldl -lpthread
Dbench 4.0: 12 Clients (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 480.21 (SE +/- 1.35, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3 csum_type=none: 351.46 (SE +/- 1.81, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 344.71 (SE +/- 3.55, N = 3)
    CEPH jewel 3 OSDs replica 3: 417.51 (SE +/- 0.66, N = 3)
    CEPH Jewel 1 OSD: 691.87 (SE +/- 2.75, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 773.20 (SE +/- 2.72, N = 3)
    Direct SSD io=native cache=none: 800.77 (SE +/- 7.08, N = 3)
    CEPH Jewel 3 OSDs: 683.49 (SE +/- 1.93, N = 3)
    local filesystem: 1285.75 (SE +/- 4.49, N = 3)
    1. (CC) gcc options: -lpopt -O2
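Since Dbench reports throughput where more is better, the overhead of each storage layer can be read as a ratio against the local filesystem baseline. The short Python sketch below recomputes those ratios from the 12 Clients figures listed above; all values are copied from this result file, nothing else is assumed.

# Dbench 4.0, 12 Clients throughput (MB/s) copied from the results above.
dbench_12_clients = {
    "local filesystem": 1285.75,
    "Direct SSD io=native cache=none": 800.77,
    "CEPH Jewel 1 OSD w/ external Journal": 773.20,
    "CEPH Jewel 1 OSD": 691.87,
    "CEPH Jewel 3 OSDs": 683.49,
    "CEPH luminous bluestore 3 OSDs replica 1": 480.21,
    "CEPH jewel 3 OSDs replica 3": 417.51,
    "CEPH luminous bluestore 3 OSDs replica 3 csum_type=none": 351.46,
    "CEPH luminous bluestore 3 OSDs replica 3": 344.71,
}

baseline = dbench_12_clients["local filesystem"]
for name, mbps in dbench_12_clients.items():
    # More is better, so the ratio to the baseline is the fraction of
    # local-filesystem throughput retained by that configuration.
    print(f"{name}: {mbps:.2f} MB/s ({mbps / baseline:.1%} of local filesystem)")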
Dbench 4.0: 1 Clients (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 67.09 (SE +/- 0.30, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 54.67 (SE +/- 0.39, N = 3)
    CEPH jewel 3 OSDs replica 3: 56.05 (SE +/- 2.01, N = 6)
    CEPH Jewel 1 OSD: 98.67 (SE +/- 1.69, N = 4)
    CEPH Jewel 1 OSD w/ external Journal: 101.27 (SE +/- 1.69, N = 3)
    Direct SSD io=native cache=none: 197.93 (SE +/- 1.04, N = 3)
    CEPH Jewel 3 OSDs: 82.85 (SE +/- 0.76, N = 3)
    local filesystem: 179.02 (SE +/- 2.91, N = 3)
    1. (CC) gcc options: -lpopt -O2
FS-Mark 3.3: 1000 Files, 1MB Size (Files/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 82.60 (SE +/- 1.25, N = 4)
    CEPH luminous bluestore 3 OSDs replica 3 csum_type=none: 65.27 (SE +/- 0.83, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 66.07 (SE +/- 0.64, N = 3)
    CEPH jewel 3 OSDs replica 3: 61.93 (SE +/- 0.29, N = 3)
    CEPH Jewel 1 OSD: 83.53 (SE +/- 0.80, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 95.50 (SE +/- 1.35, N = 6)
    Direct SSD io=native cache=none: 159.03 (SE +/- 1.29, N = 3)
    CEPH Jewel 3 OSDs: 87.98 (SE +/- 1.36, N = 5)
    local filesystem: 152.13 (SE +/- 4.84, N = 6)
    1. (CC) gcc options: -static
Dbench 4.0: 48 Clients (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 768.75 (SE +/- 3.97, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 679.26 (SE +/- 2.02, N = 3)
    CEPH jewel 3 OSDs replica 3: 712.22 (SE +/- 1.19, N = 3)
    CEPH Jewel 1 OSD: 938.32 (SE +/- 8.61, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 1055.65 (SE +/- 2.21, N = 3)
    Direct SSD io=native cache=none: 1220.01 (SE +/- 2.82, N = 3)
    CEPH Jewel 3 OSDs: 968.60 (SE +/- 7.36, N = 3)
    local filesystem: 812.43 (SE +/- 98.18, N = 6)
    1. (CC) gcc options: -lpopt -O2
Dbench 4.0: 128 Clients (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 754.90 (SE +/- 6.81, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 771.70 (SE +/- 3.58, N = 3)
    CEPH jewel 3 OSDs replica 3: 779.86 (SE +/- 5.18, N = 3)
    CEPH Jewel 1 OSD: 970.31 (SE +/- 2.85, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 1055.64 (SE +/- 11.99, N = 3)
    Direct SSD io=native cache=none: 1336.95 (SE +/- 6.02, N = 3)
    CEPH Jewel 3 OSDs: 965.17 (SE +/- 11.71, N = 3)
    local filesystem: 959.00 (SE +/- 10.43, N = 3)
    1. (CC) gcc options: -lpopt -O2
Apache Benchmark 2.4.29: Static Web Page Serving (Requests Per Second, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 7729.99 (SE +/- 88.00, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 6755.53 (SE +/- 80.76, N = 3)
    CEPH jewel 3 OSDs replica 3: 7961.19 (SE +/- 128.93, N = 6)
    CEPH Jewel 1 OSD: 8550.11 (SE +/- 49.14, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 7162.87 (SE +/- 197.05, N = 6)
    Direct SSD io=native cache=none: 7272.34 (SE +/- 125.92, N = 4)
    CEPH Jewel 3 OSDs: 7336.67 (SE +/- 99.52, N = 6)
    local filesystem: 7307.72 (SE +/- 37.64, N = 3)
    1. (CC) gcc options: -shared -fPIC -O2 -pthread
PostMark 1.51: Disk Transaction Performance (TPS, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 2434
    CEPH luminous bluestore 3 OSDs replica 3: 2066
    CEPH jewel 3 OSDs replica 3: 2273
    CEPH Jewel 1 OSD: 2443
    CEPH Jewel 1 OSD w/ external Journal: 2206
    Direct SSD io=native cache=none: 2299
    CEPH Jewel 3 OSDs: 2149
    local filesystem: 2409
    Standard errors as exported (seven reported for the eight results): SE +/- 15.67, N = 3; SE +/- 31.10, N = 3; SE +/- 21.11, N = 3; SE +/- 34.53, N = 3; SE +/- 35.31, N = 5; SE +/- 16.19, N = 3; SE +/- 53.62, N = 6
    1. (CC) gcc options: -O3
Unpacking The Linux Kernel: linux-4.15.tar.xz (Seconds, Fewer Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 15.33 (SE +/- 0.33, N = 8)
    CEPH luminous bluestore 3 OSDs replica 3: 16.30 (SE +/- 0.27, N = 4)
    CEPH jewel 3 OSDs replica 3: 15.68 (SE +/- 0.24, N = 5)
    CEPH Jewel 1 OSD: 14.53 (SE +/- 0.14, N = 4)
    CEPH Jewel 1 OSD w/ external Journal: 14.77 (SE +/- 0.42, N = 8)
    Direct SSD io=native cache=none: 14.71 (SE +/- 0.19, N = 7)
    CEPH Jewel 3 OSDs: 15.41 (SE +/- 0.19, N = 8)
    local filesystem: 14.45 (SE +/- 0.07, N = 4)
Compile Bench 0.6: Read Compiled Tree (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 239.00 (SE +/- 1.50, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 236.52 (SE +/- 5.63, N = 3)
    CEPH jewel 3 OSDs replica 3: 250.96 (SE +/- 5.76, N = 3)
    CEPH Jewel 1 OSD: 260.08 (SE +/- 0.73, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 260.32 (SE +/- 2.94, N = 3)
Compile Bench 0.6: Initial Create (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 139.52 (SE +/- 1.96, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 134.45 (SE +/- 1.69, N = 3)
    CEPH jewel 3 OSDs replica 3: 136.01 (SE +/- 1.49, N = 3)
    CEPH Jewel 1 OSD: 144.53 (SE +/- 2.37, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 135.49 (SE +/- 4.01, N = 3)
Compile Bench 0.6: Compile (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 916.83 (SE +/- 28.08, N = 6)
    CEPH luminous bluestore 3 OSDs replica 3: 1025.83 (SE +/- 19.45, N = 6)
    CEPH jewel 3 OSDs replica 3: 1112.43 (SE +/- 4.80, N = 3)
    CEPH Jewel 1 OSD: 1148.88 (SE +/- 15.77, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 1028.88 (SE +/- 22.78, N = 6)
PostgreSQL pgbench 10.3: Scaling: On-Disk - Test: Normal Load - Mode: Read Write (TPS, More Is Better)
    CEPH Jewel 1 OSD w/ external Journal: 1824.70 (SE +/- 63.93, N = 3)
    Direct SSD io=native cache=none: 3642.91 (SE +/- 14.08, N = 3)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Gzip Compression: Linux Source Tree Archiving To .tar.gz (Seconds, Fewer Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 66.58 (SE +/- 0.70, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 74.62 (SE +/- 1.47, N = 3)
    CEPH jewel 3 OSDs replica 3: 70.37 (SE +/- 2.62, N = 6)
    CEPH Jewel 1 OSD: 71.74 (SE +/- 2.16, N = 6)
    CEPH Jewel 1 OSD w/ external Journal: 67.57 (SE +/- 1.35, N = 3)
    Direct SSD io=native cache=none: 69.39 (SE +/- 2.46, N = 6)
    CEPH Jewel 3 OSDs: 73.37 (SE +/- 1.77, N = 6)
    local filesystem: 71.33 (SE +/- 2.64, N = 6)
Threaded I/O Tester 20170503: 64MB Random Read - 32 Threads (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 108942.81 (SE +/- 7596.75, N = 6)
    CEPH luminous bluestore 3 OSDs replica 3: 100973.58 (SE +/- 2403.25, N = 6)
    CEPH jewel 3 OSDs replica 3: 84936.34 (SE +/- 9550.32, N = 6)
    CEPH Jewel 1 OSD: 102558.87 (SE +/- 2213.07, N = 6)
    CEPH Jewel 1 OSD w/ external Journal: 100449.37 (SE +/- 2822.07, N = 6)
    Direct SSD io=native cache=none: 115753.71 (SE +/- 1990.78, N = 6)
    CEPH Jewel 3 OSDs: 107041.83 (SE +/- 1303.43, N = 3)
    local filesystem: 60691.53 (SE +/- 3323.01, N = 6)
    1. (CC) gcc options: -O2
AIO-Stress 0.21: Random Write (MB/s, More Is Better)
    CEPH luminous bluestore 3 OSDs replica 1: 1754.61 (SE +/- 96.24, N = 6)
    CEPH luminous bluestore 3 OSDs replica 3 csum_type=none: 1599.59 (SE +/- 22.97, N = 3)
    CEPH luminous bluestore 3 OSDs replica 3: 1690.54 (SE +/- 25.18, N = 6)
    CEPH jewel 3 OSDs replica 3: 1818.64 (SE +/- 24.90, N = 3)
    CEPH Jewel 1 OSD: 1340.55 (SE +/- 13.54, N = 3)
    CEPH Jewel 1 OSD w/ external Journal: 1822.87 (SE +/- 109.84, N = 6)
    Direct SSD io=native cache=none: 1802.66 (SE +/- 55.02, N = 6)
    CEPH Jewel 3 OSDs: 1721.82 (SE +/- 25.85, N = 3)
    local filesystem: 1478.38 (SE +/- 73.02, N = 6)
    1. (CC) gcc options: -pthread -laio
Phoronix Test Suite v10.8.4