Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1805287-FO-OCDCEPHBE12
ocdcephbenchmarks
SSD directly attached to the system (type='raw' cache='none' io='native'), Micron 5100 MAX 1.92TB
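The disk attributes quoted above (type='raw', cache='none', io='native') correspond to a libvirt/QEMU driver configuration. As a hedged illustration only, a direct-attached SSD passed to the guest with those settings might look like the following domain XML fragment; the source device path and target name here are hypothetical, not taken from this result file:

```xml
<!-- Sketch of a libvirt <disk> element matching the quoted attributes.
     /dev/sdb and vdb are placeholder names for illustration. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

With cache='none' the host page cache is bypassed and io='native' selects Linux AIO, which is the usual pairing for benchmarking a raw block device from a guest.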
local filesystem:
Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU
CEPH Jewel 3 OSDs:
Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1024GB, Graphics: cirrusdrmfb
OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU
Direct SSD io=native cache=none:
Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1788GB, Graphics: cirrusdrmfb
OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU
Unpacking The Linux Kernel
linux-4.15.tar.xz
Seconds < Lower Is Better
local filesystem ................ 14.45 |====================================
CEPH Jewel 3 OSDs ............... 15.41 |======================================
Direct SSD io=native cache=none . 14.71 |====================================
Threaded I/O Tester 20170503
64MB Random Read - 32 Threads
MB/s > Higher Is Better
local filesystem ................ 60691.53 |==================
CEPH Jewel 3 OSDs ............... 107041.83 |===============================
Direct SSD io=native cache=none . 115753.71 |==================================
Threaded I/O Tester 20170503
64MB Random Write - 32 Threads
MB/s > Higher Is Better
local filesystem ................ 958.96 |=====================================
CEPH Jewel 3 OSDs ............... 337.00 |=============
Direct SSD io=native cache=none . 555.54 |=====================
Dbench 4.0
12 Clients
MB/s > Higher Is Better
local filesystem ................ 1285.75 |====================================
CEPH Jewel 3 OSDs ............... 683.49 |===================
Direct SSD io=native cache=none . 800.77 |======================
Dbench 4.0
48 Clients
MB/s > Higher Is Better
local filesystem ................ 812.43 |========================
CEPH Jewel 3 OSDs ............... 968.60 |=============================
Direct SSD io=native cache=none . 1220.01 |====================================
Dbench 4.0
128 Clients
MB/s > Higher Is Better
local filesystem ................ 959.00 |==========================
CEPH Jewel 3 OSDs ............... 965.17 |==========================
Direct SSD io=native cache=none . 1336.95 |====================================
Dbench 4.0
1 Client
MB/s > Higher Is Better
local filesystem ................ 179.02 |=================================
CEPH Jewel 3 OSDs ............... 82.85 |===============
Direct SSD io=native cache=none . 197.93 |=====================================
FS-Mark 3.3
1000 Files, 1MB Size
Files/s > Higher Is Better
local filesystem ................ 152.13 |===================================
CEPH Jewel 3 OSDs ............... 87.98 |====================
Direct SSD io=native cache=none . 159.03 |=====================================
PostMark 1.51
Disk Transaction Performance
TPS > Higher Is Better
local filesystem ................ 2409 |=======================================
CEPH Jewel 3 OSDs ............... 2149 |===================================
Direct SSD io=native cache=none . 2299 |=====================================
Apache Benchmark 2.4.29
Static Web Page Serving
Requests Per Second > Higher Is Better
local filesystem ................ 7307.72 |====================================
CEPH Jewel 3 OSDs ............... 7336.67 |====================================
Direct SSD io=native cache=none . 7272.34 |====================================
SQLite 3.22
Timed SQLite Insertions
Seconds < Lower Is Better
local filesystem ................ 20.61 |===============
CEPH Jewel 3 OSDs ............... 52.75 |======================================
Direct SSD io=native cache=none . 17.29 |============
PostgreSQL pgbench 10.3
Scaling: On-Disk - Test: Normal Load - Mode: Read Write
TPS > Higher Is Better
Direct SSD io=native cache=none . 3642.91 |====================================
Gzip Compression
Linux Source Tree Archiving To .tar.gz
Seconds < Lower Is Better
local filesystem ................ 71.33 |=====================================
CEPH Jewel 3 OSDs ............... 73.37 |======================================
Direct SSD io=native cache=none . 69.39 |====================================
AIO-Stress 0.21
Random Write
MB/s > Higher Is Better
local filesystem ................ 1478.38 |==============================
CEPH Jewel 3 OSDs ............... 1721.82 |==================================
Direct SSD io=native cache=none . 1802.66 |====================================
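To put the tables above side by side, the per-test numbers can be reduced to a single relative figure. The sketch below (a minimal example, not part of the result file) takes two representative results from the tables above and reports Ceph's throughput as a fraction of the direct-SSD configuration, inverting the ratio for lower-is-better tests:

```python
# Hedged sketch: summarize Ceph vs. direct SSD using numbers from the
# result tables above. Each entry: (higher_is_better, ceph, direct_ssd).
results = {
    "Dbench 48 Clients (MB/s)":     (True, 968.60, 1220.01),
    "SQLite Timed Insertions (s)":  (False, 52.75, 17.29),
}

for name, (higher_is_better, ceph, ssd) in results.items():
    # For lower-is-better tests, invert so the ratio still reads
    # "Ceph achieves X% of direct-SSD performance".
    ratio = ceph / ssd if higher_is_better else ssd / ceph
    print(f"{name}: Ceph at {ratio:.0%} of direct SSD")
```

Extending the dictionary with the remaining tests gives a quick overall picture; in these results Ceph trails the direct SSD on small-scale and synchronous workloads but closes the gap (or wins, as in the random-read test) under higher parallelism.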