ocdcephbenchmarks

SSD directly attached to the system (type='raw' cache='none' io='native'), Micron 5100 MAX 1.92TB
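The "Direct SSD" attachment quoted above maps onto a libvirt disk definition roughly like the following sketch. Only the driver attributes type='raw', cache='none', and io='native' come from this result file; the source device, target name, and bus are hypothetical placeholders:

  <disk type='block' device='disk'>
    <!-- raw format, host page cache bypassed, Linux native AIO -->
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <!-- /dev/sdb is a placeholder for the Micron 5100 MAX block device -->
    <source dev='/dev/sdb'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

cache='none' is the usual companion to io='native', since Linux native AIO expects O_DIRECT access to the backing device.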

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1805287-FO-OCDCEPHBE12
Test runs compared (all performed May 27 2018):

  local filesystem
  CEPH Jewel 3 OSDs
  Direct SSD io=native cache=none
System Details

local filesystem:
  Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 28GB, Graphics: cirrusdrmfb
  OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

CEPH Jewel 3 OSDs:
  Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1024GB, Graphics: cirrusdrmfb
  OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

Direct SSD io=native cache=none:
  Processor: 8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores), Motherboard: Red Hat KVM (1.11.0-2.el7 BIOS), Memory: 2 x 16384 MB RAM, Disk: 1788GB, Graphics: cirrusdrmfb
  OS: CentOS Linux 7, Kernel: 3.10.0-862.3.2.el7.x86_64 (x86_64), Compiler: GCC 4.8.5 20150623, File-System: xfs, Screen Resolution: 1024x768, System Layer: KVM QEMU

Results

PostgreSQL pgbench 10.3
Scaling: On-Disk - Test: Normal Load - Mode: Read Write
TPS > Higher Is Better
Direct SSD io=native cache=none . 3642.91 |====================================

Apache Benchmark 2.4.29
Static Web Page Serving
Requests Per Second > Higher Is Better
local filesystem ................ 7307.72 |====================================
CEPH Jewel 3 OSDs ............... 7336.67 |====================================
Direct SSD io=native cache=none . 7272.34 |====================================

Gzip Compression
Linux Source Tree Archiving To .tar.gz
Seconds < Lower Is Better
local filesystem ................ 71.33 |=====================================
CEPH Jewel 3 OSDs ............... 73.37 |======================================
Direct SSD io=native cache=none . 69.39 |====================================

PostMark 1.51
Disk Transaction Performance
TPS > Higher Is Better
local filesystem ................ 2409 |=======================================
CEPH Jewel 3 OSDs ............... 2149 |===================================
Direct SSD io=native cache=none . 2299 |=====================================

Unpacking The Linux Kernel
linux-4.15.tar.xz
Seconds < Lower Is Better
local filesystem ................ 14.45 |====================================
CEPH Jewel 3 OSDs ............... 15.41 |======================================
Direct SSD io=native cache=none . 14.71 |====================================

Threaded I/O Tester 20170503
64MB Random Write - 32 Threads
MB/s > Higher Is Better
local filesystem ................ 958.96 |=====================================
CEPH Jewel 3 OSDs ............... 337.00 |=============
Direct SSD io=native cache=none . 555.54 |=====================

Threaded I/O Tester 20170503
64MB Random Read - 32 Threads
MB/s > Higher Is Better
local filesystem ................ 60691.53 |==================
CEPH Jewel 3 OSDs ............... 107041.83 |===============================
Direct SSD io=native cache=none . 115753.71 |==================================

Dbench 4.0
1 Clients
MB/s > Higher Is Better
local filesystem ................ 179.02 |=================================
CEPH Jewel 3 OSDs ............... 82.85 |===============
Direct SSD io=native cache=none . 197.93 |=====================================

Dbench 4.0
128 Clients
MB/s > Higher Is Better
local filesystem ................ 959.00 |==========================
CEPH Jewel 3 OSDs ............... 965.17 |==========================
Direct SSD io=native cache=none . 1336.95 |====================================

Dbench 4.0
48 Clients
MB/s > Higher Is Better
local filesystem ................ 812.43 |========================
CEPH Jewel 3 OSDs ............... 968.60 |=============================
Direct SSD io=native cache=none . 1220.01 |====================================

Dbench 4.0
12 Clients
MB/s > Higher Is Better
local filesystem ................ 1285.75 |====================================
CEPH Jewel 3 OSDs ............... 683.49 |===================
Direct SSD io=native cache=none . 800.77 |======================

FS-Mark 3.3
1000 Files, 1MB Size
Files/s > Higher Is Better
local filesystem ................ 152.13 |===================================
CEPH Jewel 3 OSDs ............... 87.98 |====================
Direct SSD io=native cache=none . 159.03 |=====================================

SQLite 3.22
Timed SQLite Insertions
Seconds < Lower Is Better
local filesystem ................ 20.61 |===============
CEPH Jewel 3 OSDs ............... 52.75 |======================================
Direct SSD io=native cache=none . 17.29 |============

AIO-Stress 0.21
Random Write
MB/s > Higher Is Better
local filesystem ................ 1478.38 |==============================
CEPH Jewel 3 OSDs ............... 1721.82 |==================================
Direct SSD io=native cache=none . 1802.66 |====================================