SSD directly attached to the system (type='raw' cache='none' io='native'), Micron 5100 MAX 1.92TB
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1805287-FO-OCDCEPHBE12
HTML result view exported from: https://openbenchmarking.org/result/1805287-FO-OCDCEPHBE12&export=txt&gru&sor&rro
ocdcephbenchmarks - System Details

Test configurations: local filesystem, CEPH Jewel 3 OSDs, Direct SSD io=native cache=none

  Processor:         8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
  Motherboard:       Red Hat KVM (1.11.0-2.el7 BIOS)
  Memory:            2 x 16384 MB RAM
  Disk:              28GB (local filesystem), 1024GB (CEPH Jewel 3 OSDs), 1788GB (Direct SSD)
  Graphics:          cirrusdrmfb
  OS:                CentOS Linux 7
  Kernel:            3.10.0-862.3.2.el7.x86_64 (x86_64)
  Compiler:          GCC 4.8.5 20150623
  File-System:       xfs
  Screen Resolution: 1024x768
  System Layer:      KVM QEMU

Compiler Details: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic
Disk Mount Options Details: attr2,inode64,noquota,relatime,rw,seclabel
Python Details: Python 2.7.5
Security Details: SELinux + KPTI + Load fences + Retpoline without IBPB Protection
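The Direct SSD configuration's disk attributes (type='raw', cache='none', io='native') correspond to a libvirt domain disk definition along these lines. This is a sketch only: the source device path and guest target name below are illustrative placeholders, not taken from this result file.

```xml
<!-- Sketch of a libvirt <disk> element matching type='raw', cache='none',
     io='native'. The /dev/sdb path and vdb target are placeholders. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

With cache='none' the host page cache is bypassed, and io='native' selects Linux native AIO rather than the thread-pool backend.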
ocdcephbenchmarks - Result Overview

Test                                                local filesystem  CEPH Jewel 3 OSDs  Direct SSD io=native cache=none
fs-mark: 1000 Files, 1MB Size (Files/s)                       152.13              87.98     159.03
aio-stress: Rand Write (MB/s)                                1478.38            1721.82    1802.66
dbench: 12 Clients (MB/s)                                    1285.75             683.49     800.77
dbench: 48 Clients (MB/s)                                     812.43             968.60    1220.01
dbench: 128 Clients (MB/s)                                    959.00             965.17    1336.95
dbench: 1 Clients (MB/s)                                      179.02              82.85     197.93
tiobench: 64MB Rand Read - 32 Threads (MB/s)                60691.53          107041.83  115753.71
tiobench: 64MB Rand Write - 32 Threads (MB/s)                 958.96             337.00     555.54
apache: Static Web Page Serving (Requests/s)                 7307.72            7336.67    7272.34
postmark: Disk Transaction Performance (TPS)                    2409               2149       2299
pgbench: On-Disk - Normal Load - Read Write (TPS)                  -                  -    3642.91
sqlite: Timed SQLite Insertions (sec, fewer is better)         20.61              52.75      17.29
unpack-linux: linux-4.15.tar.xz (sec, fewer is better)         14.45              15.41      14.71
compress-gzip: Linux Source Tree To .tar.gz (sec, fewer is better)  71.33         73.37      69.39
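One way to read the overview is to normalize each configuration against the direct-attached SSD. The snippet below does this for a few results taken from this file; the short labels local/ceph/ssd are just names introduced here for the three configurations.

```python
# Relative throughput of each configuration vs. the direct-attached SSD,
# using selected results from this file (higher is better for all three tests).
results = {
    "fs-mark (Files/s)":          {"local": 152.13, "ceph": 87.98,  "ssd": 159.03},
    "dbench 1 client (MB/s)":     {"local": 179.02, "ceph": 82.85,  "ssd": 197.93},
    "tiobench rand write (MB/s)": {"local": 958.96, "ceph": 337.00, "ssd": 555.54},
}

for test, r in results.items():
    for cfg in ("local", "ceph"):
        pct = 100.0 * r[cfg] / r["ssd"]
        print(f"{test}: {cfg} runs at {pct:.0f}% of direct SSD throughput")
```

On these numbers, for example, fs-mark over CEPH reaches only about 55% of the direct SSD result, while the local filesystem is within a few percent of it.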
FS-Mark 3.3 - 1000 Files, 1MB Size (Files/s, more is better)
  CEPH Jewel 3 OSDs:                 87.98  (SE +/- 1.36, N = 5)
  local filesystem:                 152.13  (SE +/- 4.84, N = 6)
  Direct SSD io=native cache=none:  159.03  (SE +/- 1.29, N = 3)
  1. (CC) gcc options: -static
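Each result above and below reports a mean together with its standard error over N runs (e.g. "SE +/- 1.29, N = 3"). The standard error of the mean is the sample standard deviation divided by the square root of N; a minimal sketch, using made-up per-run values rather than data from this file:

```python
import statistics
from math import sqrt

# Hypothetical per-run Files/s results for one configuration (illustrative only).
runs = [157.8, 159.1, 160.2]

mean = statistics.mean(runs)
se = statistics.stdev(runs) / sqrt(len(runs))  # standard error of the mean
print(f"{mean:.2f}  (SE +/- {se:.2f}, N = {len(runs)})")
```

A smaller SE relative to the mean indicates more consistent runs; the Phoronix Test Suite uses this to decide when additional runs are needed.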
AIO-Stress 0.21 - Random Write (MB/s, more is better)
  local filesystem:                1478.38  (SE +/- 73.02, N = 6)
  CEPH Jewel 3 OSDs:               1721.82  (SE +/- 25.85, N = 3)
  Direct SSD io=native cache=none: 1802.66  (SE +/- 55.02, N = 6)
  1. (CC) gcc options: -pthread -laio
Dbench 4.0 - 12 Clients (MB/s, more is better)
  CEPH Jewel 3 OSDs:                683.49  (SE +/- 1.93, N = 3)
  Direct SSD io=native cache=none:  800.77  (SE +/- 7.08, N = 3)
  local filesystem:                1285.75  (SE +/- 4.49, N = 3)
  1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 48 Clients (MB/s, more is better)
  local filesystem:                 812.43  (SE +/- 98.18, N = 6)
  CEPH Jewel 3 OSDs:                968.60  (SE +/- 7.36, N = 3)
  Direct SSD io=native cache=none: 1220.01  (SE +/- 2.82, N = 3)
  1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 128 Clients (MB/s, more is better)
  local filesystem:                 959.00  (SE +/- 10.43, N = 3)
  CEPH Jewel 3 OSDs:                965.17  (SE +/- 11.71, N = 3)
  Direct SSD io=native cache=none: 1336.95  (SE +/- 6.02, N = 3)
  1. (CC) gcc options: -lpopt -O2
Dbench 4.0 - 1 Clients (MB/s, more is better)
  CEPH Jewel 3 OSDs:                 82.85  (SE +/- 0.76, N = 3)
  local filesystem:                 179.02  (SE +/- 2.91, N = 3)
  Direct SSD io=native cache=none:  197.93  (SE +/- 1.04, N = 3)
  1. (CC) gcc options: -lpopt -O2
Threaded I/O Tester 20170503 - 64MB Random Read - 32 Threads (MB/s, more is better)
  local filesystem:                 60691.53  (SE +/- 3323.01, N = 6)
  CEPH Jewel 3 OSDs:               107041.83  (SE +/- 1303.43, N = 3)
  Direct SSD io=native cache=none: 115753.71  (SE +/- 1990.78, N = 6)
  1. (CC) gcc options: -O2
Threaded I/O Tester 20170503 - 64MB Random Write - 32 Threads (MB/s, more is better)
  CEPH Jewel 3 OSDs:                337.00  (SE +/- 5.79, N = 3)
  Direct SSD io=native cache=none:  555.54  (SE +/- 10.53, N = 3)
  local filesystem:                 958.96  (SE +/- 27.51, N = 6)
  1. (CC) gcc options: -O2
Apache Benchmark 2.4.29 - Static Web Page Serving (Requests Per Second, more is better)
  Direct SSD io=native cache=none: 7272.34  (SE +/- 125.92, N = 4)
  local filesystem:                7307.72  (SE +/- 37.64, N = 3)
  CEPH Jewel 3 OSDs:               7336.67  (SE +/- 99.52, N = 6)
  1. (CC) gcc options: -shared -fPIC -O2 -pthread
PostMark 1.51 - Disk Transaction Performance (TPS, more is better)
  CEPH Jewel 3 OSDs:               2149  (SE +/- 16.19, N = 3)
  Direct SSD io=native cache=none: 2299  (SE +/- 35.31, N = 5)
  local filesystem:                2409  (SE +/- 53.62, N = 6)
  1. (CC) gcc options: -O3
PostgreSQL pgbench 10.3 - Scaling: On-Disk - Test: Normal Load - Mode: Read Write (TPS, more is better)
  Direct SSD io=native cache=none: 3642.91  (SE +/- 14.08, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
SQLite 3.22 - Timed SQLite Insertions (Seconds, fewer is better)
  CEPH Jewel 3 OSDs:               52.75  (SE +/- 0.77, N = 4)
  local filesystem:                20.61  (SE +/- 0.06, N = 3)
  Direct SSD io=native cache=none: 17.29  (SE +/- 0.28, N = 6)
  1. (CC) gcc options: -O2 -ldl -lpthread
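The SQLite test times a fixed batch of insertions. The sketch below mimics the shape of that kind of measurement; the schema, row count, and in-memory database are arbitrary choices for illustration, not the benchmark's actual workload.

```python
import sqlite3
import time

# Time a batch of INSERTs inside one transaction (illustrative workload only;
# the real benchmark writes to an on-disk database file).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
with conn:  # one transaction: a single commit instead of one per row
    conn.executemany(
        "INSERT INTO t (payload) VALUES (?)",
        (("x" * 64,) for _ in range(100_000)),
    )
elapsed = time.perf_counter() - start

(count,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(f"inserted {count} rows in {elapsed:.2f} s")
```

Wrapping the inserts in a single transaction matters for a disk-backed database: committing row by row forces a sync per row, which is exactly the kind of cost that separates the CEPH result (52.75 s) from the direct SSD (17.29 s) above.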
Unpacking The Linux Kernel - linux-4.15.tar.xz (Seconds, fewer is better)
  CEPH Jewel 3 OSDs:               15.41  (SE +/- 0.19, N = 8)
  Direct SSD io=native cache=none: 14.71  (SE +/- 0.19, N = 7)
  local filesystem:                14.45  (SE +/- 0.07, N = 4)
Gzip Compression - Linux Source Tree Archiving To .tar.gz (Seconds, fewer is better)
  CEPH Jewel 3 OSDs:               73.37  (SE +/- 1.77, N = 6)
  local filesystem:                71.33  (SE +/- 2.64, N = 6)
  Direct SSD io=native cache=none: 69.39  (SE +/- 2.46, N = 6)
Phoronix Test Suite v10.8.4