KVM QEMU testing on CentOS Linux 7 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
    phoronix-test-suite benchmark 1805300-FO-OCDCEPHBE02
HTML result view exported from: https://openbenchmarking.org/result/1805300-FO-OCDCEPHBE02&grw&sor&rro
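The comparison command above can be wrapped in a small script; the following is a minimal sketch, assuming `phoronix-test-suite` is installed and on `$PATH` (the fallback message is illustrative, not part of the original report):

```shell
#!/bin/sh
# Result ID taken from this report's header on OpenBenchmarking.org.
RESULT_ID="1805300-FO-OCDCEPHBE02"

if command -v phoronix-test-suite >/dev/null 2>&1; then
    # Runs the same test selection and merges your numbers into a
    # local copy of this result file for side-by-side comparison.
    phoronix-test-suite benchmark "$RESULT_ID"
else
    # Hypothetical fallback for systems without the Phoronix Test Suite.
    echo "phoronix-test-suite not found in PATH" >&2
fi
```
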
ocdcephbenchmarks - System Details

Tested configurations: local filesystem; CEPH Jewel 3 OSDs; Direct SSD io=native cache=none; CEPH Jewel 1 OSD w/ external Journal; CEPH Jewel 1 OSD; CEPH jewel 3 OSDs replica 3; CEPH luminous bluestore 3 OSDs replica 3; CEPH luminous bluestore 3 OSDs replica 3 csum_type=none; CEPH luminous bluestore 3 OSDs replica 1

Processor:          8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
Motherboard:        Red Hat KVM (1.11.0-2.el7 BIOS)
Memory:             2 x 16384 MB RAM
Disk:               28GB / 1024GB / 1788GB (capacity varies by configuration)
Graphics:           cirrusdrmfb
OS:                 CentOS Linux 7
Kernel:             3.10.0-862.3.2.el7.x86_64 (x86_64)
Compiler:           GCC 4.8.5 20150623
File-System:        xfs
Screen Resolution:  1024x768
System Layer:       KVM QEMU

Compiler Details: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic
Disk Mount Options Details: attr2,inode64,noquota,relatime,rw,seclabel
Python Details: Python 2.7.5
Security Details: SELinux + KPTI + Load fences + Retpoline without IBPB Protection
Benchmarks included in this result file (per-configuration values appear in the per-test sections below):

- Compile Bench: Compile
- Compile Bench: Initial Create
- Threaded I/O Tester: 64MB Rand Write - 32 Threads
- Compile Bench: Read Compiled Tree
- Unpacking The Linux Kernel: linux-4.15.tar.xz
- AIO-Stress: Rand Write
- Dbench: 12 Clients
- Threaded I/O Tester: 64MB Rand Read - 32 Threads
- Dbench: 48 Clients
- Dbench: 128 Clients
- Dbench: 1 Clients
- FS-Mark: 1000 Files, 1MB Size
- PostMark: Disk Transaction Performance
- Apache Benchmark: Static Web Page Serving
- SQLite: Timed SQLite Insertions
- PostgreSQL pgbench: On-Disk - Normal Load - Read Write
- Gzip Compression: Linux Source Tree Archiving To .tar.gz
Compile Bench 0.6 - Test: Compile
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 1:    916.83  (SE +/- 28.08, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3:   1025.83  (SE +/- 19.45, N = 6)
  CEPH Jewel 1 OSD w/ external Journal:       1028.88  (SE +/- 22.78, N = 6)
  CEPH jewel 3 OSDs replica 3:                1112.43  (SE +/- 4.80, N = 3)
  CEPH Jewel 1 OSD:                           1148.88  (SE +/- 15.77, N = 3)
Compile Bench 0.6 - Test: Initial Create
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 3:   134.45  (SE +/- 1.69, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:       135.49  (SE +/- 4.01, N = 3)
  CEPH jewel 3 OSDs replica 3:                136.01  (SE +/- 1.49, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:   139.52  (SE +/- 1.96, N = 3)
  CEPH Jewel 1 OSD:                           144.53  (SE +/- 2.37, N = 3)
Threaded I/O Tester 20170503 - 64MB Random Write - 32 Threads
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 3:   151.00  (SE +/- 9.35, N = 6)
  CEPH jewel 3 OSDs replica 3:                214.01  (SE +/- 2.37, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:   255.32  (SE +/- 3.91, N = 3)
  CEPH Jewel 1 OSD:                           299.60  (SE +/- 5.80, N = 6)
  CEPH Jewel 1 OSD w/ external Journal:       300.23  (SE +/- 1.00, N = 3)
  CEPH Jewel 3 OSDs:                          337.00  (SE +/- 5.79, N = 3)
  Direct SSD io=native cache=none:            555.54  (SE +/- 10.53, N = 3)
  local filesystem:                           958.96  (SE +/- 27.51, N = 6)
  (CC) gcc options: -O2
Compile Bench 0.6 - Test: Read Compiled Tree
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 3:   236.52  (SE +/- 5.63, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:   239.00  (SE +/- 1.50, N = 3)
  CEPH jewel 3 OSDs replica 3:                250.96  (SE +/- 5.76, N = 3)
  CEPH Jewel 1 OSD:                           260.08  (SE +/- 0.73, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:       260.32  (SE +/- 2.94, N = 3)
Unpacking The Linux Kernel - linux-4.15.tar.xz
Seconds, fewer is better
  CEPH luminous bluestore 3 OSDs replica 3:   16.30  (SE +/- 0.27, N = 4)
  CEPH jewel 3 OSDs replica 3:                15.68  (SE +/- 0.24, N = 5)
  CEPH Jewel 3 OSDs:                          15.41  (SE +/- 0.19, N = 8)
  CEPH luminous bluestore 3 OSDs replica 1:   15.33  (SE +/- 0.33, N = 8)
  CEPH Jewel 1 OSD w/ external Journal:       14.77  (SE +/- 0.42, N = 8)
  Direct SSD io=native cache=none:            14.71  (SE +/- 0.19, N = 7)
  CEPH Jewel 1 OSD:                           14.53  (SE +/- 0.14, N = 4)
  local filesystem:                           14.45  (SE +/- 0.07, N = 4)
AIO-Stress 0.21 - Random Write
MB/s, more is better
  CEPH Jewel 1 OSD:                                         1340.55  (SE +/- 13.54, N = 3)
  local filesystem:                                         1478.38  (SE +/- 73.02, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3 csum_type=none:  1599.59  (SE +/- 22.97, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:                 1690.54  (SE +/- 25.18, N = 6)
  CEPH Jewel 3 OSDs:                                        1721.82  (SE +/- 25.85, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:                 1754.61  (SE +/- 96.24, N = 6)
  Direct SSD io=native cache=none:                          1802.66  (SE +/- 55.02, N = 6)
  CEPH jewel 3 OSDs replica 3:                              1818.64  (SE +/- 24.90, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:                     1822.87  (SE +/- 109.84, N = 6)
  (CC) gcc options: -pthread -laio
Dbench 4.0 - 12 Clients
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 3:                  344.71  (SE +/- 3.55, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3 csum_type=none:   351.46  (SE +/- 1.81, N = 3)
  CEPH jewel 3 OSDs replica 3:                               417.51  (SE +/- 0.66, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:                  480.21  (SE +/- 1.35, N = 3)
  CEPH Jewel 3 OSDs:                                         683.49  (SE +/- 1.93, N = 3)
  CEPH Jewel 1 OSD:                                          691.87  (SE +/- 2.75, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:                      773.20  (SE +/- 2.72, N = 3)
  Direct SSD io=native cache=none:                           800.77  (SE +/- 7.08, N = 3)
  local filesystem:                                         1285.75  (SE +/- 4.49, N = 3)
  (CC) gcc options: -lpopt -O2
Threaded I/O Tester 20170503 - 64MB Random Read - 32 Threads
MB/s, more is better
  local filesystem:                            60691.53  (SE +/- 3323.01, N = 6)
  CEPH jewel 3 OSDs replica 3:                 84936.34  (SE +/- 9550.32, N = 6)
  CEPH Jewel 1 OSD w/ external Journal:       100449.37  (SE +/- 2822.07, N = 6)
  CEPH luminous bluestore 3 OSDs replica 3:   100973.58  (SE +/- 2403.25, N = 6)
  CEPH Jewel 1 OSD:                           102558.87  (SE +/- 2213.07, N = 6)
  CEPH Jewel 3 OSDs:                          107041.83  (SE +/- 1303.43, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:   108942.81  (SE +/- 7596.75, N = 6)
  Direct SSD io=native cache=none:            115753.71  (SE +/- 1990.78, N = 6)
  (CC) gcc options: -O2
Dbench 4.0 - 48 Clients
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 3:    679.26  (SE +/- 2.02, N = 3)
  CEPH jewel 3 OSDs replica 3:                 712.22  (SE +/- 1.19, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:    768.75  (SE +/- 3.97, N = 3)
  local filesystem:                            812.43  (SE +/- 98.18, N = 6)
  CEPH Jewel 1 OSD:                            938.32  (SE +/- 8.61, N = 3)
  CEPH Jewel 3 OSDs:                           968.60  (SE +/- 7.36, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:       1055.65  (SE +/- 2.21, N = 3)
  Direct SSD io=native cache=none:            1220.01  (SE +/- 2.82, N = 3)
  (CC) gcc options: -lpopt -O2
Dbench 4.0 - 128 Clients
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 1:    754.90  (SE +/- 6.81, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:    771.70  (SE +/- 3.58, N = 3)
  CEPH jewel 3 OSDs replica 3:                 779.86  (SE +/- 5.18, N = 3)
  local filesystem:                            959.00  (SE +/- 10.43, N = 3)
  CEPH Jewel 3 OSDs:                           965.17  (SE +/- 11.71, N = 3)
  CEPH Jewel 1 OSD:                            970.31  (SE +/- 2.85, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:       1055.64  (SE +/- 11.99, N = 3)
  Direct SSD io=native cache=none:            1336.95  (SE +/- 6.02, N = 3)
  (CC) gcc options: -lpopt -O2
Dbench 4.0 - 1 Clients
MB/s, more is better
  CEPH luminous bluestore 3 OSDs replica 3:    54.67  (SE +/- 0.39, N = 3)
  CEPH jewel 3 OSDs replica 3:                 56.05  (SE +/- 2.01, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1:    67.09  (SE +/- 0.30, N = 3)
  CEPH Jewel 3 OSDs:                           82.85  (SE +/- 0.76, N = 3)
  CEPH Jewel 1 OSD:                            98.67  (SE +/- 1.69, N = 4)
  CEPH Jewel 1 OSD w/ external Journal:       101.27  (SE +/- 1.69, N = 3)
  local filesystem:                           179.02  (SE +/- 2.91, N = 3)
  Direct SSD io=native cache=none:            197.93  (SE +/- 1.04, N = 3)
  (CC) gcc options: -lpopt -O2
FS-Mark 3.3 - 1000 Files, 1MB Size
Files/s, more is better
  CEPH jewel 3 OSDs replica 3:                               61.93  (SE +/- 0.29, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3 csum_type=none:   65.27  (SE +/- 0.83, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3:                  66.07  (SE +/- 0.64, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:                  82.60  (SE +/- 1.25, N = 4)
  CEPH Jewel 1 OSD:                                          83.53  (SE +/- 0.80, N = 3)
  CEPH Jewel 3 OSDs:                                         87.98  (SE +/- 1.36, N = 5)
  CEPH Jewel 1 OSD w/ external Journal:                      95.50  (SE +/- 1.35, N = 6)
  local filesystem:                                         152.13  (SE +/- 4.84, N = 6)
  Direct SSD io=native cache=none:                          159.03  (SE +/- 1.29, N = 3)
  (CC) gcc options: -static
PostMark 1.51 - Disk Transaction Performance
TPS, more is better
  CEPH luminous bluestore 3 OSDs replica 3:   2066
  CEPH Jewel 3 OSDs:                          2149
  CEPH Jewel 1 OSD w/ external Journal:       2206
  CEPH jewel 3 OSDs replica 3:                2273
  Direct SSD io=native cache=none:            2299
  local filesystem:                           2409
  CEPH luminous bluestore 3 OSDs replica 1:   2434
  CEPH Jewel 1 OSD:                           2443
  Note: the export lists only seven standard errors for these eight results, so they are not paired with configurations here: SE +/- 16.19 (N = 3); 34.53 (N = 3); 31.10 (N = 3); 35.31 (N = 5); 53.62 (N = 6); 15.67 (N = 3); 21.11 (N = 3).
  (CC) gcc options: -O3
Apache Benchmark 2.4.29 - Static Web Page Serving
Requests Per Second, more is better
  CEPH luminous bluestore 3 OSDs replica 3:   6755.53  (SE +/- 80.76, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:       7162.87  (SE +/- 197.05, N = 6)
  Direct SSD io=native cache=none:            7272.34  (SE +/- 125.92, N = 4)
  local filesystem:                           7307.72  (SE +/- 37.64, N = 3)
  CEPH Jewel 3 OSDs:                          7336.67  (SE +/- 99.52, N = 6)
  CEPH luminous bluestore 3 OSDs replica 1:   7729.99  (SE +/- 88.00, N = 3)
  CEPH jewel 3 OSDs replica 3:                7961.19  (SE +/- 128.93, N = 6)
  CEPH Jewel 1 OSD:                           8550.11  (SE +/- 49.14, N = 3)
  (CC) gcc options: -shared -fPIC -O2 -pthread
SQLite 3.22 - Timed SQLite Insertions
Seconds, fewer is better
  CEPH luminous bluestore 3 OSDs replica 3:                  109.48  (SE +/- 0.93, N = 3)
  CEPH luminous bluestore 3 OSDs replica 3 csum_type=none:   107.78  (SE +/- 1.84, N = 3)
  CEPH jewel 3 OSDs replica 3:                                98.30  (SE +/- 0.38, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:                   69.95  (SE +/- 1.07, N = 3)
  CEPH Jewel 3 OSDs:                                          52.75  (SE +/- 0.77, N = 4)
  CEPH Jewel 1 OSD:                                           46.21  (SE +/- 0.10, N = 3)
  CEPH Jewel 1 OSD w/ external Journal:                       45.10  (SE +/- 0.34, N = 3)
  local filesystem:                                           20.61  (SE +/- 0.06, N = 3)
  Direct SSD io=native cache=none:                            17.29  (SE +/- 0.28, N = 6)
  (CC) gcc options: -O2 -ldl -lpthread
PostgreSQL pgbench 10.3 - Scaling: On-Disk - Test: Normal Load - Mode: Read Write
TPS, more is better
  CEPH Jewel 1 OSD w/ external Journal:   1824.70  (SE +/- 63.93, N = 3)
  Direct SSD io=native cache=none:        3642.91  (SE +/- 14.08, N = 3)
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Gzip Compression - Linux Source Tree Archiving To .tar.gz
Seconds, fewer is better
  CEPH luminous bluestore 3 OSDs replica 3:   74.62  (SE +/- 1.47, N = 3)
  CEPH Jewel 3 OSDs:                          73.37  (SE +/- 1.77, N = 6)
  CEPH Jewel 1 OSD:                           71.74  (SE +/- 2.16, N = 6)
  local filesystem:                           71.33  (SE +/- 2.64, N = 6)
  CEPH jewel 3 OSDs replica 3:                70.37  (SE +/- 2.62, N = 6)
  Direct SSD io=native cache=none:            69.39  (SE +/- 2.46, N = 6)
  CEPH Jewel 1 OSD w/ external Journal:       67.57  (SE +/- 1.35, N = 3)
  CEPH luminous bluestore 3 OSDs replica 1:   66.58  (SE +/- 0.70, N = 3)
Phoronix Test Suite v10.8.4