ocdcephbenchmarks

Disk benchmarks run against various Ceph versions and configurations

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1805308-FO-OCDCEPHBE08

Run Management

Result Identifier                           Date Run
local filesystem                            May 27 2018
CEPH Jewel 3 OSDs replica 1                 May 27 2018
Direct SSD io=native cache=none             May 27 2018
CEPH Jewel 1 OSD w/ external Journal        May 28 2018
CEPH Jewel 1 OSD                            May 29 2018
CEPH Jewel 3 OSDs replica 3                 May 29 2018
CEPH luminous bluestore 3 OSDs replica 3    May 30 2018
CEPH luminous bluestore 3 OSDs replica 1    May 30 2018
CEPH luminous bluestore 1 OSD               May 30 2018

System Information

Processor:          8 x QEMU Virtual 2.5+ @ 2.19GHz (8 Cores)
Motherboard:        Red Hat KVM (1.11.0-2.el7 BIOS)
Memory:             2 x 16384 MB RAM
Disks:              28GB + 1024GB + 1788GB
Graphics:           cirrusdrmfb
OS:                 CentOS Linux 7
Kernel:             3.10.0-862.3.2.el7.x86_64 (x86_64)
Compiler:           GCC 4.8.5 20150623
File-System:        xfs
Screen Resolution:  1024x768
System Layer:       KVM QEMU

System Notes

Compiler configuration: --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-linker-hash-style=gnu --with-tune=generic
Disk mount options: attr2,inode64,noquota,relatime,rw,seclabel
Python: 2.7.5
Security: SELinux + KPTI + Load fences + Retpoline without IBPB Protection

Test Configurations

local filesystem:                          The root filesystem of the VM. QCOW on XFS on LVM on MD-RAID RAID 1 over two SSDs Micron 5100 MAX 240GB
CEPH Jewel 3 OSDs replica 1:               CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
Direct SSD io=native cache=none:           Direct SSD, Micron 5100 MAX 1.9 TB
CEPH Jewel 1 OSD w/ external Journal:      CEPH, Jewel, 1 OSDs, Filestore, journal on separate SSD, replica 1, Micron 5100 MAX 1.9 TB
CEPH Jewel 1 OSD:                          CEPH, Jewel, 1 OSDs, Filestore, on-disk journal, replica 1, Micron 5100 MAX 1.9 TB
CEPH Jewel 3 OSDs replica 3:               CEPH, Jewel, 3 OSDs, Filestore, on-disk journal, replica 3, Micron 5100 MAX 1.9 TB
CEPH luminous bluestore 3 OSDs replica 3:  CEPH, Luminous, 3 OSDs, Bluestore, replica 3, Micron 5100 MAX 1.9 TB
CEPH luminous bluestore 3 OSDs replica 1:  CEPH, Luminous, 3 OSDs, Bluestore, replica 1, Micron 5100 MAX 1.9 TB

Logarithmic Result Overview (graph): the nine configurations compared across the SQLite, FS-Mark, Dbench, Threaded I/O Tester, and AIO-Stress results.

Results Summary (average of all runs per configuration)

Legend: A = local filesystem; B = CEPH Jewel 3 OSDs replica 1; C = Direct SSD io=native cache=none;
        D = CEPH Jewel 1 OSD w/ external Journal; E = CEPH Jewel 1 OSD; F = CEPH Jewel 3 OSDs replica 3;
        G = CEPH luminous bluestore 3 OSDs replica 3; H = CEPH luminous bluestore 3 OSDs replica 1;
        I = CEPH luminous bluestore 1 OSD. A dash means the test was not run on that configuration.

Test                                                      A         B         C         D         E         F         G         H         I
aio-stress: Rand Write (MB/s, more is better)             1478.38   1721.82   1802.66   1822.87   1340.55   1818.64   1690.54   1754.61   1773.67
sqlite: Timed SQLite Insertions (sec, fewer is better)    20.61     52.75     17.29     45.10     46.21     98.30     109.48    69.95     65.14
fs-mark: 1000 Files, 1MB Size (Files/s, more is better)   152.13    87.98     159.03    95.50     83.53     61.93     66.07     82.60     83.60
dbench: 12 Clients (MB/s, more is better)                 1285.75   683.49    800.77    773.20    691.87    417.51    344.71    480.21    -
dbench: 48 Clients (MB/s, more is better)                 812.43    968.60    1220.01   1055.65   938.32    712.22    679.26    768.75    842.77
dbench: 128 Clients (MB/s, more is better)                959.00    965.17    1336.95   1055.64   970.31    779.86    771.70    754.90    -
dbench: 1 Clients (MB/s, more is better)                  179.02    82.85     197.93    101.27    98.67     56.05     54.67     67.09     73.94
tiobench: 64MB Rand Read - 32 Threads (MB/s)              60691.53  107041.83 115753.71 100449.37 102558.87 84936.34  100973.58 108942.81 105283.54
tiobench: 64MB Rand Write - 32 Threads (MB/s)             958.96    337.00    555.54    300.23    299.60    214.01    151.00    255.32    229.61
unpack-linux: linux-4.15.tar.xz (sec, fewer is better)    14.45     15.41     14.71     14.77     14.53     15.68     16.30     15.33     -
postmark: Disk Transaction Performance (TPS)              2409      2149      2299      2206      2443      2273      2066      2434      -
compress-gzip: Linux Source Tree To .tar.gz (sec)         71.33     73.37     69.39     67.57     71.74     70.37     74.62     66.58     -
apache: Static Web Page Serving (Requests/s)              7307.72   7336.67   7272.34   7162.87   8550.11   7961.19   6755.53   7729.99   -
pgbench: On-Disk - Normal Load - Read Write (TPS)         -         -         3642.91   1824.70   -         -         -         -         -
compilebench: Compile (MB/s)                              -         -         -         1028.88   1148.88   1112.43   1025.83   916.83    1168.81
compilebench: Initial Create (MB/s)                       -         -         -         135.49    144.53    136.01    134.45    139.52    145.29
compilebench: Read Compiled Tree (MB/s)                   -         -         -         260.32    260.08    250.96    236.52    239.00    259.65

AIO-Stress

AIO-Stress is an asynchronous I/O benchmark created by SuSE. Currently this profile uses a 2048MB test file and a 64KB record size. Learn more via the OpenBenchmarking.org test page.
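
For illustration, a minimal Python sketch of the random-write pattern this test exercises is shown below: 64 KB records written at random offsets within a preallocated file. It is synchronous (the Python standard library has no libaio binding) and the file size is scaled down from the 2048 MB the real benchmark uses; the file name and sizes are placeholders.

```python
import os
import random
import time

# Rough, synchronous approximation of the AIO-Stress random-write pattern:
# 64 KB records written at random offsets into a preallocated file.
# The real benchmark drives Linux native AIO (libaio) against a 2048 MB file;
# FILE_SIZE here is scaled down so the sketch finishes quickly.
RECORD_SIZE = 64 * 1024
FILE_SIZE = 256 * 1024 * 1024
PATH = "aio_stress_sketch.bin"        # placeholder path

record = os.urandom(RECORD_SIZE)
offsets = list(range(0, FILE_SIZE, RECORD_SIZE))
random.shuffle(offsets)

fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644)
os.ftruncate(fd, FILE_SIZE)
start = time.time()
for off in offsets:
    os.pwrite(fd, record, off)        # one 64 KB record per call
os.fsync(fd)
elapsed = time.time() - start
os.close(fd)
os.unlink(PATH)
print("random write: %.1f MB/s" % (FILE_SIZE / elapsed / 1e6))
```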

AIO-Stress 0.21 - Random Write (MB/s, more is better)
  CEPH Jewel 1 OSD                            1340.55  (SE +/- 13.54, N = 3, min 1323.05, max 1367.19)
  CEPH Jewel 1 OSD w/ external Journal        1822.87  (SE +/- 109.84, N = 6, min 1607.74, max 2359.8)
  CEPH Jewel 3 OSDs replica 1                 1721.82  (SE +/- 25.85, N = 3, min 1670.31, max 1751.47)
  CEPH Jewel 3 OSDs replica 3                 1818.64  (SE +/- 24.90, N = 3, min 1791.93, max 1868.39)
  CEPH luminous bluestore 1 OSD               1773.67  (SE +/- 68.62, N = 6, min 1431.99, max 1855.94)
  CEPH luminous bluestore 3 OSDs replica 1    1754.61  (SE +/- 96.24, N = 6, min 1510.09, max 2069.11)
  CEPH luminous bluestore 3 OSDs replica 3    1690.54  (SE +/- 25.18, N = 6, min 1613.54, max 1784.09)
  Direct SSD io=native cache=none             1802.66  (SE +/- 55.02, N = 6, min 1612.88, max 2005.14)
  local filesystem                            1478.38  (SE +/- 73.02, N = 6, min 1165.71, max 1665.95)
1. (CC) gcc options: -pthread -laio

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
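
As a rough illustration of what is being timed, the sketch below inserts rows one transaction at a time into an indexed SQLite table using Python's built-in sqlite3 module; the row count and database file name are arbitrary placeholders, not the values the test profile actually uses.

```python
import sqlite3
import time

ROWS = 1000                            # placeholder; not the profile's real insert count

conn = sqlite3.connect("insert_sketch.db")
conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_payload ON t (payload)")

start = time.time()
for i in range(ROWS):
    conn.execute("INSERT INTO t (payload) VALUES (?)", ("row-%d" % i,))
    conn.commit()                      # commit per insert so each row hits the disk
print("elapsed: %.2f s" % (time.time() - start))
conn.close()
```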

SQLite 3.22 - Timed SQLite Insertions (Seconds, fewer is better)
  CEPH Jewel 1 OSD                            46.21   (SE +/- 0.10, N = 3, min 46.05, max 46.39)
  CEPH Jewel 1 OSD w/ external Journal        45.10   (SE +/- 0.34, N = 3, min 44.6, max 45.75)
  CEPH Jewel 3 OSDs replica 1                 52.75   (SE +/- 0.77, N = 4, min 50.68, max 54.42)
  CEPH Jewel 3 OSDs replica 3                 98.30   (SE +/- 0.38, N = 3, min 97.54, max 98.8)
  CEPH luminous bluestore 1 OSD               65.14   (SE +/- 0.78, N = 3, min 63.72, max 66.43)
  CEPH luminous bluestore 3 OSDs replica 1    69.95   (SE +/- 1.07, N = 3, min 68.18, max 71.87)
  CEPH luminous bluestore 3 OSDs replica 3    109.48  (SE +/- 0.93, N = 3, min 108.18, max 111.29)
  Direct SSD io=native cache=none             17.29   (SE +/- 0.28, N = 6, min 16.43, max 18.39)
  local filesystem                            20.61   (SE +/- 0.06, N = 3, min 20.52, max 20.71)
1. (CC) gcc options: -O2 -ldl -lpthread

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.
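
The option used here is "1000 Files, 1MB Size"; a minimal sketch of that workload is below, assuming FS-Mark's usual write-then-fsync behaviour per file (the output directory name is a placeholder).

```python
import os
import time

NUM_FILES = 1000
FILE_SIZE = 1024 * 1024                # 1 MB per file, as in the test option
OUTDIR = "fsmark_sketch"               # placeholder directory

os.makedirs(OUTDIR, exist_ok=True)
payload = os.urandom(FILE_SIZE)

start = time.time()
for i in range(NUM_FILES):
    with open(os.path.join(OUTDIR, "file-%04d" % i), "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())           # assumed: each file is synced to disk
elapsed = time.time() - start
print("%.2f files/s" % (NUM_FILES / elapsed))
```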

FS-Mark 3.3 - 1000 Files, 1MB Size (Files/s, more is better)
  CEPH Jewel 1 OSD                            83.53   (SE +/- 0.80, N = 3, min 82, max 84.7)
  CEPH Jewel 1 OSD w/ external Journal        95.50   (SE +/- 1.35, N = 6, min 89.6, max 98.9)
  CEPH Jewel 3 OSDs replica 1                 87.98   (SE +/- 1.36, N = 5, min 83, max 91.1)
  CEPH Jewel 3 OSDs replica 3                 61.93   (SE +/- 0.29, N = 3, min 61.4, max 62.4)
  CEPH luminous bluestore 1 OSD               83.60   (SE +/- 0.46, N = 3, min 83, max 84.5)
  CEPH luminous bluestore 3 OSDs replica 1    82.60   (SE +/- 1.25, N = 4, min 79.7, max 85.7)
  CEPH luminous bluestore 3 OSDs replica 3    66.07   (SE +/- 0.64, N = 3, min 64.8, max 66.9)
  Direct SSD io=native cache=none             159.03  (SE +/- 1.29, N = 3, min 157.5, max 161.6)
  local filesystem                            152.13  (SE +/- 4.84, N = 6, min 128.2, max 158.9)
1. (CC) gcc options: -static

Dbench

Dbench is a benchmark designed by the Samba project as a free alternative to netbench, but it issues only the file-system calls of that workload, making it a test of disk and file-system performance. Learn more via the OpenBenchmarking.org test page.
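
The sketch below is only a loose stand-in for the idea of N clients issuing file-system calls in parallel and reporting an aggregate MB/s; the real Dbench replays a recorded load file, which this does not attempt. The client count, duration, block size, and directory names are invented for illustration.

```python
import os
import threading
import time

CLIENTS = 12                 # e.g. the "12 Clients" option below
SECONDS = 10
BLOCK = 64 * 1024
totals = [0] * CLIENTS

def client(idx):
    # Each "client" loops over create/write/read/unlink in its own directory.
    workdir = "dbench_sketch_%d" % idx
    os.makedirs(workdir, exist_ok=True)
    data = os.urandom(BLOCK)
    path = os.path.join(workdir, "scratch")
    deadline = time.time() + SECONDS
    moved = 0
    while time.time() < deadline:
        with open(path, "wb") as f:
            f.write(data)
        with open(path, "rb") as f:
            moved += len(f.read())
        os.unlink(path)
        moved += BLOCK                 # count the written bytes as well
    totals[idx] = moved

threads = [threading.Thread(target=client, args=(i,)) for i in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("aggregate throughput: %.1f MB/s" % (sum(totals) / SECONDS / 1e6))
```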

Dbench 4.0 - 12 Clients (MB/s, more is better)
  CEPH Jewel 1 OSD                            691.87   (SE +/- 2.75, N = 3, min 689, max 697.37)
  CEPH Jewel 1 OSD w/ external Journal        773.20   (SE +/- 2.72, N = 3, min 768.03, max 777.25)
  CEPH Jewel 3 OSDs replica 1                 683.49   (SE +/- 1.93, N = 3, min 679.64, max 685.62)
  CEPH Jewel 3 OSDs replica 3                 417.51   (SE +/- 0.66, N = 3, min 416.29, max 418.58)
  CEPH luminous bluestore 3 OSDs replica 1    480.21   (SE +/- 1.35, N = 3, min 477.53, max 481.87)
  CEPH luminous bluestore 3 OSDs replica 3    344.71   (SE +/- 3.55, N = 3, min 337.63, max 348.56)
  Direct SSD io=native cache=none             800.77   (SE +/- 7.08, N = 3, min 786.7, max 809.12)
  local filesystem                            1285.75  (SE +/- 4.49, N = 3, min 1279.14, max 1294.31)
1. (CC) gcc options: -lpopt -O2

Dbench 4.0 - 48 Clients (MB/s, more is better)
  CEPH Jewel 1 OSD                            938.32   (SE +/- 8.61, N = 3, min 929.17, max 955.52)
  CEPH Jewel 1 OSD w/ external Journal        1055.65  (SE +/- 2.21, N = 3, min 1051.83, max 1059.5)
  CEPH Jewel 3 OSDs replica 1                 968.60   (SE +/- 7.36, N = 3, min 955.35, max 980.76)
  CEPH Jewel 3 OSDs replica 3                 712.22   (SE +/- 1.19, N = 3, min 710, max 714.09)
  CEPH luminous bluestore 1 OSD               842.77   (SE +/- 12.96, N = 3, min 829.23, max 868.68)
  CEPH luminous bluestore 3 OSDs replica 1    768.75   (SE +/- 3.97, N = 3, min 763.58, max 776.56)
  CEPH luminous bluestore 3 OSDs replica 3    679.26   (SE +/- 2.02, N = 3, min 675.24, max 681.64)
  Direct SSD io=native cache=none             1220.01  (SE +/- 2.82, N = 3, min 1214.44, max 1223.55)
  local filesystem                            812.43   (SE +/- 98.18, N = 6, min 323.51, max 942.18)
1. (CC) gcc options: -lpopt -O2

Dbench 4.0 - 128 Clients (MB/s, more is better)
  CEPH Jewel 1 OSD                            970.31   (SE +/- 2.85, N = 3, min 967.22, max 976)
  CEPH Jewel 1 OSD w/ external Journal        1055.64  (SE +/- 11.99, N = 3, min 1033.51, max 1074.69)
  CEPH Jewel 3 OSDs replica 1                 965.17   (SE +/- 11.71, N = 3, min 947.47, max 987.29)
  CEPH Jewel 3 OSDs replica 3                 779.86   (SE +/- 5.18, N = 3, min 772.41, max 789.81)
  CEPH luminous bluestore 3 OSDs replica 1    754.90   (SE +/- 6.81, N = 3, min 742.56, max 766.07)
  CEPH luminous bluestore 3 OSDs replica 3    771.70   (SE +/- 3.58, N = 3, min 767.42, max 778.81)
  Direct SSD io=native cache=none             1336.95  (SE +/- 6.02, N = 3, min 1327.9, max 1348.35)
  local filesystem                            959.00   (SE +/- 10.43, N = 3, min 942.87, max 978.52)
1. (CC) gcc options: -lpopt -O2

Dbench 4.0 - 1 Clients (MB/s, more is better)
  CEPH Jewel 1 OSD                            98.67   (SE +/- 1.69, N = 4, min 96.03, max 103.56)
  CEPH Jewel 1 OSD w/ external Journal        101.27  (SE +/- 1.69, N = 3, min 98.92, max 104.56)
  CEPH Jewel 3 OSDs replica 1                 82.85   (SE +/- 0.76, N = 3, min 81.87, max 84.35)
  CEPH Jewel 3 OSDs replica 3                 56.05   (SE +/- 2.01, N = 6, min 46.01, max 58.78)
  CEPH luminous bluestore 1 OSD               73.94   (SE +/- 0.56, N = 3, min 72.99, max 74.93)
  CEPH luminous bluestore 3 OSDs replica 1    67.09   (SE +/- 0.30, N = 3, min 66.51, max 67.53)
  CEPH luminous bluestore 3 OSDs replica 3    54.67   (SE +/- 0.39, N = 3, min 54.19, max 55.45)
  Direct SSD io=native cache=none             197.93  (SE +/- 1.04, N = 3, min 196.06, max 199.63)
  local filesystem                            179.02  (SE +/- 2.91, N = 3, min 174.78, max 184.6)
1. (CC) gcc options: -lpopt -O2

Threaded I/O Tester

Tiotester (Threaded I/O Tester) benchmarks hard disk and file-system performance using multiple worker threads. Learn more via the OpenBenchmarking.org test page.
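
The results below use the "64MB, 32 Threads" random read and random write options. A simplified Python sketch of the random-read phase is shown here, assuming one 64 MB file per thread and 64 KB blocks; the block size, read count, and file names are assumptions for illustration only.

```python
import os
import random
import threading
import time

THREADS = 32
FILE_SIZE = 64 * 1024 * 1024           # 64 MB per thread, as in the test option
BLOCK = 64 * 1024                      # assumed block size
READS_PER_THREAD = 256                 # assumed; the real test runs far longer

files = []
for i in range(THREADS):
    path = "tio_sketch_%02d" % i       # placeholder file names
    with open(path, "wb") as f:
        f.write(os.urandom(FILE_SIZE))
    files.append(path)

done = [0] * THREADS

def reader(idx):
    with open(files[idx], "rb") as f:
        for _ in range(READS_PER_THREAD):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            done[idx] += len(f.read(BLOCK))

start = time.time()
threads = [threading.Thread(target=reader, args=(i,)) for i in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("random read: %.1f MB/s" % (sum(done) / (time.time() - start) / 1e6))
```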

Threaded I/O Tester 20170503 - 64MB Random Read - 32 Threads (MB/s, more is better)
  CEPH Jewel 1 OSD                            102558.87  (SE +/- 2213.07, N = 6, min 95451.16, max 111195.57)
  CEPH Jewel 1 OSD w/ external Journal        100449.37  (SE +/- 2822.07, N = 6, min 92165.07, max 112391.62)
  CEPH Jewel 3 OSDs replica 1                 107041.83  (SE +/- 1303.43, N = 3, min 105214.49, max 109565.59)
  CEPH Jewel 3 OSDs replica 3                 84936.34   (SE +/- 9550.32, N = 6, min 38765.85, max 102589.79)
  CEPH luminous bluestore 1 OSD               105283.54  (SE +/- 885.93, N = 3, min 103617.51, max 106638.9)
  CEPH luminous bluestore 3 OSDs replica 1    108942.81  (SE +/- 7596.75, N = 6, min 85611.57, max 136406.02)
  CEPH luminous bluestore 3 OSDs replica 3    100973.58  (SE +/- 2403.25, N = 6, min 94718.34, max 109900.72)
  Direct SSD io=native cache=none             115753.71  (SE +/- 1990.78, N = 6, min 109460.18, max 120811.7)
  local filesystem                            60691.53   (SE +/- 3323.01, N = 6, min 48647.24, max 70718.23)
1. (CC) gcc options: -O2

Threaded I/O Tester 20170503 - 64MB Random Write - 32 Threads (MB/s, more is better)
  CEPH Jewel 1 OSD                            299.60  (SE +/- 5.80, N = 6, min 281.43, max 325.01)
  CEPH Jewel 1 OSD w/ external Journal        300.23  (SE +/- 1.00, N = 3, min 298.51, max 301.97)
  CEPH Jewel 3 OSDs replica 1                 337.00  (SE +/- 5.79, N = 3, min 328.62, max 348.1)
  CEPH Jewel 3 OSDs replica 3                 214.01  (SE +/- 2.37, N = 3, min 210.62, max 218.58)
  CEPH luminous bluestore 1 OSD               229.61  (SE +/- 3.19, N = 3, min 223.27, max 233.39)
  CEPH luminous bluestore 3 OSDs replica 1    255.32  (SE +/- 3.91, N = 3, min 247.79, max 260.94)
  CEPH luminous bluestore 3 OSDs replica 3    151.00  (SE +/- 9.35, N = 6, min 119.93, max 168.02)
  Direct SSD io=native cache=none             555.54  (SE +/- 10.53, N = 3, min 542.66, max 576.42)
  local filesystem                            958.96  (SE +/- 27.51, N = 6, min 885.98, max 1045.53)
1. (CC) gcc options: -O2

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.
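
Timing the same operation yourself is straightforward with Python's tarfile module, as sketched below; the archive must already be present locally (the archive path and target directory are placeholders).

```python
import tarfile
import time

ARCHIVE = "linux-4.15.tar.xz"          # placeholder; download from kernel.org first
TARGET = "unpack_sketch"

start = time.time()
with tarfile.open(ARCHIVE, "r:xz") as tar:
    tar.extractall(TARGET)             # extraction time is dominated by xz decode + file creation
print("extracted in %.2f s" % (time.time() - start))
```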

Unpacking The Linux Kernel - linux-4.15.tar.xz (Seconds, fewer is better)
  CEPH Jewel 1 OSD                            14.53  (SE +/- 0.14, N = 4, min 14.21, max 14.86)
  CEPH Jewel 1 OSD w/ external Journal        14.77  (SE +/- 0.42, N = 8, min 13.51, max 16.55)
  CEPH Jewel 3 OSDs replica 1                 15.41  (SE +/- 0.19, N = 8, min 14.83, max 16.26)
  CEPH Jewel 3 OSDs replica 3                 15.68  (SE +/- 0.24, N = 5, min 15.29, max 16.63)
  CEPH luminous bluestore 3 OSDs replica 1    15.33  (SE +/- 0.33, N = 8, min 13.95, max 16.5)
  CEPH luminous bluestore 3 OSDs replica 3    16.30  (SE +/- 0.27, N = 4, min 15.79, max 16.98)
  Direct SSD io=native cache=none             14.71  (SE +/- 0.19, N = 7, min 13.87, max 15.5)
  local filesystem                            14.45  (SE +/- 0.07, N = 4, min 14.35, max 14.67)

PostMark

This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
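
A minimal sketch of that transaction mix is below: a pool of 500 files sized between 5 KB and 512 KB, then 25,000 transactions, each of which reads or appends one file and creates or deletes another. The read/append and create/delete split, the append size, and the directory name are assumptions for illustration.

```python
import os
import random
import time

FILES, TRANSACTIONS = 500, 25000
MIN_SIZE, MAX_SIZE = 5 * 1024, 512 * 1024
WORKDIR = "postmark_sketch"            # placeholder directory
os.makedirs(WORKDIR, exist_ok=True)

def new_file():
    path = os.path.join(WORKDIR, "f-%08x" % random.getrandbits(32))
    with open(path, "wb") as f:
        f.write(os.urandom(random.randint(MIN_SIZE, MAX_SIZE)))
    return path

pool = [new_file() for _ in range(FILES)]
start = time.time()
for _ in range(TRANSACTIONS):
    target = random.choice(pool)
    if random.random() < 0.5:          # read the file ...
        with open(target, "rb") as f:
            f.read()
    else:                              # ... or append to it
        with open(target, "ab") as f:
            f.write(os.urandom(4096))
    if random.random() < 0.5 or len(pool) <= 1:
        pool.append(new_file())        # create a new file ...
    else:
        os.unlink(pool.pop(random.randrange(len(pool))))  # ... or delete one
print("%.0f transactions/s" % (TRANSACTIONS / (time.time() - start)))
```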

PostMark 1.51 - Disk Transaction Performance (TPS, more is better)
  CEPH Jewel 1 OSD                            2443  (SE +/- 21.11, N = 3, min 2403, max 2475)
  CEPH Jewel 1 OSD w/ external Journal        2206  (SE +/- 34.53, N = 3, min 2155, max 2272)
  CEPH Jewel 3 OSDs replica 1                 2149  (SE +/- 16.19, N = 3, min 2118, max 2173)
  CEPH Jewel 3 OSDs replica 3                 2273  (SE +/- 31.10, N = 3, min 2212, max 2314)
  CEPH luminous bluestore 3 OSDs replica 1    2434  (SE +/- 15.67, N = 3, min 2403, max 2450)
  CEPH luminous bluestore 3 OSDs replica 3    2066  (no error data reported)
  Direct SSD io=native cache=none             2299  (SE +/- 35.31, N = 5, min 2173, max 2358)
  local filesystem                            2409  (SE +/- 53.62, N = 6, min 2272, max 2577)
1. (CC) gcc options: -O3

Gzip Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using Gzip compression. Learn more via the OpenBenchmarking.org test page.
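
The equivalent operation can be reproduced with Python's tarfile module as sketched below; SOURCE_DIR is a placeholder for the kernel source tree (the real profile archives two copies of it).

```python
import tarfile
import time

SOURCE_DIR = "linux-4.13"              # placeholder path to the source tree

start = time.time()
with tarfile.open("archive_sketch.tar.gz", "w:gz") as tar:
    tar.add(SOURCE_DIR)                # gzip compression happens while archiving
print("archived in %.2f s" % (time.time() - start))
```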

Gzip Compression - Linux Source Tree Archiving To .tar.gz (Seconds, fewer is better)
  CEPH Jewel 1 OSD                            71.74  (SE +/- 2.16, N = 6, min 64.13, max 75.62)
  CEPH Jewel 1 OSD w/ external Journal        67.57  (SE +/- 1.35, N = 3, min 65.8, max 70.21)
  CEPH Jewel 3 OSDs replica 1                 73.37  (SE +/- 1.77, N = 6, min 68.44, max 80.72)
  CEPH Jewel 3 OSDs replica 3                 70.37  (SE +/- 2.62, N = 6, min 63.07, max 78.9)
  CEPH luminous bluestore 3 OSDs replica 1    66.58  (SE +/- 0.70, N = 3, min 65.22, max 67.56)
  CEPH luminous bluestore 3 OSDs replica 3    74.62  (SE +/- 1.47, N = 3, min 72.81, max 77.53)
  Direct SSD io=native cache=none             69.39  (SE +/- 2.46, N = 6, min 63.18, max 78.01)
  local filesystem                            71.33  (SE +/- 2.64, N = 6, min 64.62, max 79.05)

Apache Benchmark

This is a test of ab, which is the Apache benchmark program. This test profile measures how many requests per second a given system can sustain when carrying out 1,000,000 requests with 100 requests being carried out concurrently. Learn more via the OpenBenchmarking.org test page.
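
The sketch below shows the same idea (fixed concurrency, many requests, report requests per second) using only the Python standard library; the URL, request count, and concurrency are placeholders and far smaller than the real test's 1,000,000 requests at a concurrency of 100.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost/index.html"    # placeholder target page
REQUESTS, CONCURRENCY = 10000, 100     # scaled down from the real test

def fetch(_):
    with urllib.request.urlopen(URL) as resp:
        resp.read()

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    list(pool.map(fetch, range(REQUESTS)))
print("%.1f requests/s" % (REQUESTS / (time.time() - start)))
```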

Apache Benchmark 2.4.29 - Static Web Page Serving (Requests Per Second, more is better)
  CEPH Jewel 1 OSD                            8550.11  (SE +/- 49.14, N = 3, min 8471.78, max 8640.68)
  CEPH Jewel 1 OSD w/ external Journal        7162.87  (SE +/- 197.05, N = 6, min 6794.3, max 8056.21)
  CEPH Jewel 3 OSDs replica 1                 7336.67  (SE +/- 99.52, N = 6, min 7029.38, max 7735.56)
  CEPH Jewel 3 OSDs replica 3                 7961.19  (SE +/- 128.93, N = 6, min 7446.95, max 8361.04)
  CEPH luminous bluestore 3 OSDs replica 1    7729.99  (SE +/- 88.00, N = 3, min 7587.43, max 7890.66)
  CEPH luminous bluestore 3 OSDs replica 3    6755.53  (SE +/- 80.76, N = 3, min 6670.55, max 6916.97)
  Direct SSD io=native cache=none             7272.34  (SE +/- 125.92, N = 4, min 7031.72, max 7503.78)
  local filesystem                            7307.72  (SE +/- 37.64, N = 3, min 7248.53, max 7377.6)
1. (CC) gcc options: -shared -fPIC -O2 -pthread

PostgreSQL pgbench

This is a simple benchmark of PostgreSQL using pgbench. Learn more via the OpenBenchmarking.org test page.
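
pgbench's default read-write run issues a TPC-B-like transaction against tables created by `pgbench -i`. The sketch below reproduces that transaction shape with the psycopg2 driver, assuming an already-initialized pgbench database; the DSN, transaction count, and ID ranges (which depend on the scale factor) are placeholders.

```python
import random
import time
import psycopg2  # third-party PostgreSQL driver; any DB-API driver would do

conn = psycopg2.connect("dbname=pgbench_test")   # placeholder DSN
TRANSACTIONS = 1000                              # placeholder count

start = time.time()
with conn.cursor() as cur:
    for _ in range(TRANSACTIONS):
        aid = random.randint(1, 100000)          # ranges assume scale factor 1
        tid, bid = random.randint(1, 10), 1
        delta = random.randint(-5000, 5000)
        cur.execute("UPDATE pgbench_accounts SET abalance = abalance + %s WHERE aid = %s", (delta, aid))
        cur.execute("SELECT abalance FROM pgbench_accounts WHERE aid = %s", (aid,))
        cur.execute("UPDATE pgbench_tellers SET tbalance = tbalance + %s WHERE tid = %s", (delta, tid))
        cur.execute("UPDATE pgbench_branches SET bbalance = bbalance + %s WHERE bid = %s", (delta, bid))
        cur.execute("INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (%s, %s, %s, %s, now())",
                    (tid, bid, aid, delta))
        conn.commit()                            # one commit per transaction, as pgbench does
print("%.1f TPS" % (TRANSACTIONS / (time.time() - start)))
conn.close()
```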

PostgreSQL pgbench 10.3 - Scaling: On-Disk - Test: Normal Load - Mode: Read Write (TPS, more is better)
  CEPH Jewel 1 OSD w/ external Journal        1824.70  (SE +/- 63.93, N = 3, min 1696.93, max 1892.89)
  Direct SSD io=native cache=none             3642.91  (SE +/- 14.08, N = 3, min 3619.53, max 3668.19)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Compile Bench

Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating, and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. This test is set up to use the makej mode with 10 initial directories. Learn more via the OpenBenchmarking.org test page.
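
A very loose sketch of the "initial create" phase is shown below: it lays down several kernel-tree-like directories of small files and reports MB/s written. The directory and file counts and file sizes are invented for illustration; the real compilebench replays recorded kernel-tree metadata rather than random data.

```python
import os
import random
import time

TREES = 10                              # mirrors the "10 initial directories" setting
FILES_PER_TREE = 500                    # invented for illustration
written = 0

start = time.time()
for t in range(TREES):
    root = os.path.join("compilebench_sketch", "tree-%02d" % t)
    os.makedirs(root, exist_ok=True)
    for i in range(FILES_PER_TREE):
        size = random.randint(1024, 32 * 1024)   # small source-file-like sizes
        with open(os.path.join(root, "src-%04d.c" % i), "wb") as f:
            f.write(os.urandom(size))
        written += size
print("initial create: %.1f MB/s" % (written / (time.time() - start) / 1e6))
```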

Compile Bench 0.6 - Test: Compile (MB/s, more is better)
  CEPH Jewel 1 OSD                            1148.88  (SE +/- 15.77, N = 3, min 1127.52, max 1179.65)
  CEPH Jewel 1 OSD w/ external Journal        1028.88  (SE +/- 22.78, N = 6, min 986.25, max 1121.91)
  CEPH Jewel 3 OSDs replica 3                 1112.43  (SE +/- 4.80, N = 3, min 1105.74, max 1121.73)
  CEPH luminous bluestore 1 OSD               1168.81  (SE +/- 14.27, N = 3, min 1152.31, max 1197.23)
  CEPH luminous bluestore 3 OSDs replica 1    916.83   (SE +/- 28.08, N = 6, min 835.03, max 998.87)
  CEPH luminous bluestore 3 OSDs replica 3    1025.83  (SE +/- 19.45, N = 6, min 955.81, max 1089.1)

Compile Bench 0.6 - Test: Initial Create (MB/s, more is better)
  CEPH Jewel 1 OSD                            144.53  (SE +/- 2.37, N = 3, min 139.8, max 146.95)
  CEPH Jewel 1 OSD w/ external Journal        135.49  (SE +/- 4.01, N = 3, min 128.74, max 142.62)
  CEPH Jewel 3 OSDs replica 3                 136.01  (SE +/- 1.49, N = 3, min 134.26, max 138.97)
  CEPH luminous bluestore 1 OSD               145.29  (SE +/- 2.37, N = 3, min 140.96, max 149.14)
  CEPH luminous bluestore 3 OSDs replica 1    139.52  (SE +/- 1.96, N = 3, min 136.52, max 143.21)
  CEPH luminous bluestore 3 OSDs replica 3    134.45  (SE +/- 1.69, N = 3, min 132.3, max 137.79)

Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, more is better)
  CEPH Jewel 1 OSD                            260.08  (SE +/- 0.73, N = 3, min 258.62, max 260.88)
  CEPH Jewel 1 OSD w/ external Journal        260.32  (SE +/- 2.94, N = 3, min 254.5, max 263.98)
  CEPH Jewel 3 OSDs replica 3                 250.96  (SE +/- 5.76, N = 3, min 239.46, max 257.37)
  CEPH luminous bluestore 1 OSD               259.65  (SE +/- 7.99, N = 3, min 243.76, max 269.07)
  CEPH luminous bluestore 3 OSDs replica 1    239.00  (SE +/- 1.50, N = 3, min 236.08, max 241.01)
  CEPH luminous bluestore 3 OSDs replica 3    236.52  (SE +/- 5.63, N = 3, min 225.52, max 244.07)