db-fs-results

Docker testing on Ubuntu 20.04.3 LTS via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2112219-TJ-DBFSRESUL67

Run Management

Result    Date Run           Test Duration
EXT4      December 18 2021   1 Day, 15 Hours, 24 Minutes
XFS       December 19 2021   2 Days, 10 Hours, 38 Minutes
Average Test Duration: 2 Days, 1 Hour, 1 Minute


db-fs-results - OpenBenchmarking.org - Phoronix Test Suite

Processor: 2 x Intel Xeon E5-2630 v3 @ 3.20GHz (16 Cores / 32 Threads)
Motherboard: Dell 0CNCJW (2.2.5 BIOS)
Memory: 64GB
Disk: 731GB PERC H730P Mini
Graphics: mgadrmfb
OS: Ubuntu 20.04.3 LTS
Kernel: 3.10.0-1160.36.2.el7.x86_64 (x86_64)
Compiler: GCC 9.3.0
File-Systems: ext4, xfs
Screen Resolution: 1024x768
System Layer: Docker

System Logs:
- Transparent Huge Pages: never
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate performance
- CPU Microcode: 0x44
- Disk Scheduler: DEADLINE
- OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.20.04)
- Security: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of Load fences usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full retpoline IBPB + srbds: Not affected + tsx_async_abort: Not affected

[EXT4 vs. XFS comparison chart (baseline to +145.5%): per-test percentage differences across Apache HBase, PostgreSQL pgbench, Facebook RocksDB, Apache Cassandra, Memtier_benchmark, Redis, and SQLite workloads. The largest deltas favor EXT4 on HBase Random Read - Clients: 128 (~194% latency, ~191.5% throughput) and Clients: 64 (~113.6% / ~112.5%), while HBase Random Write results favor XFS. The chart itself is not reproducible in text; individual results follow below.]
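The percentages in the comparison above appear to be simple ratios of the better result to the worse one, expressed as a percentage over baseline. A minimal sketch of that arithmetic, checked against the pgbench 1000 - 500 - Read Write throughput reported below (4223 TPS on EXT4 vs. 5014 TPS on XFS; the assumption that the chart is computed exactly this way is mine):

```python
def percent_delta(a: float, b: float) -> float:
    """Percentage by which the larger of two results exceeds the smaller."""
    lo, hi = sorted((a, b))
    return (hi / lo - 1.0) * 100.0

# pgbench: 1000 - 500 - Read Write throughput (TPS) from this result file
ext4_tps, xfs_tps = 4223, 5014
print(f"{percent_delta(ext4_tps, xfs_tps):.1f}%")  # 18.7%, matching the chart
```

The same formula reproduces the HBase Random Read - 128 latency delta (3484 / 1185 gives ~194%).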

[Detailed result table omitted: the source page's side-by-side table of every EXT4 and XFS value (Apache HBase, PostgreSQL pgbench, Apache Cassandra, Facebook RocksDB, Memtier_benchmark, Redis, KeyDB, InfluxDB, LevelDB, CouchDB, SQLite, MySQL mysqlslap) did not survive text extraction. The individual results are presented below. OpenBenchmarking.org]

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system, inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Random Write - Clients: 128 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 1316 (SE +/- 185.46, N = 9)
XFS: 855 (SE +/- 67.50, N = 2)
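Each result in this file is the mean over N runs together with the standard error of that mean, SE = s/√N, where s is the sample standard deviation. A quick sketch of how such a line is produced (the three sample latencies are made up for illustration):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stddev divided by sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [1310.0, 1290.0, 1350.0]  # hypothetical latencies from three runs
mean = statistics.mean(runs)
se = standard_error(runs)
print(f"{mean:.1f} (SE +/- {se:.2f}, N = {len(runs)})")  # 1316.7 (SE +/- 17.64, N = 3)
```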

Apache HBase 2.2.3 - Test: Random Write - Clients: 128 (Rows Per Second, More Is Better)
EXT4: 110151 (SE +/- 12285.62, N = 9)
XFS: 149520 (SE +/- 12570.00, N = 2)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 128 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 3720 (SE +/- 118.38, N = 9)
XFS: 3778 (SE +/- 147.69, N = 9)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 128 (Rows Per Second, More Is Better)
EXT4: 34559 (SE +/- 903.39, N = 9)
XFS: 34113 (SE +/- 1068.83, N = 9)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 64 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 246 (SE +/- 8.21, N = 15)
XFS: 243 (SE +/- 9.01, N = 5)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 64 (Rows Per Second, More Is Better)
EXT4: 282855 (SE +/- 4775.23, N = 15)
XFS: 274321 (SE +/- 4870.75, N = 5)

Apache HBase 2.2.3 - Test: Random Read - Clients: 128 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 1185 (SE +/- 3.48, N = 3)
XFS: 3484 (SE +/- 50.52, N = 9)

Apache HBase 2.2.3 - Test: Random Read - Clients: 128 (Rows Per Second, More Is Better)
EXT4: 107030 (SE +/- 288.51, N = 3)
XFS: 36719 (SE +/- 500.56, N = 9)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 128 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 4254 (SE +/- 15.77, N = 3)
XFS: 4273 (SE +/- 26.27, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 128 (Rows Per Second, More Is Better)
EXT4: 30000 (SE +/- 110.41, N = 3)
XFS: 29866 (SE +/- 178.78, N = 3)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 128 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 499 (SE +/- 12.36, N = 9)
XFS: 449 (SE +/- 7.63, N = 9)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 128 (Rows Per Second, More Is Better)
EXT4: 259064 (SE +/- 6427.65, N = 9)
XFS: 284045 (SE +/- 4813.28, N = 9)

Apache HBase 2.2.3 - Test: Increment - Clients: 128 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 3038 (SE +/- 36.84, N = 3)
XFS: 3058 (SE +/- 39.56, N = 4)

Apache HBase 2.2.3 - Test: Increment - Clients: 128 (Rows Per Second, More Is Better)
EXT4: 41978 (SE +/- 480.65, N = 3)
XFS: 41725 (SE +/- 520.80, N = 4)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Reads (Op/s, More Is Better)
EXT4: 69078 (SE +/- 983.14, N = 11)
XFS: 70798 (SE +/- 1471.42, N = 12)

Apache HBase


Apache HBase 2.2.3 - Test: Async Random Read - Clients: 64 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 1381 (SE +/- 11.61, N = 3)
XFS: 1387 (SE +/- 25.60, N = 9)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 64 (Rows Per Second, More Is Better)
EXT4: 46162 (SE +/- 399.45, N = 3)
XFS: 46059 (SE +/- 776.29, N = 9)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.
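The test profile drives pgbench automatically, but a single configuration like those below can be reproduced roughly by hand. This is a hedged sketch, not the profile's exact invocation; the database name and run duration are assumptions:

```shell
# Initialize the benchmark tables at a given scaling factor (e.g. 1000)
pgbench -i -s 1000 pgbench_db

# Read Write: the default TPC-B-like script, 250 clients, 16 worker threads
pgbench -c 250 -j 16 -T 60 pgbench_db

# Read Only: the same run using the built-in select-only script (-S)
pgbench -c 250 -j 16 -T 60 -S pgbench_db
```

pgbench's summary output reports both transactions per second and average latency, which is where the paired TPS / Average Latency results below come from.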

Scaling Factor: 10000 - Clients: 250 - Mode: Read Write

EXT4: pgbench: error: client 55 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 15 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 100 - Mode: Read Write

EXT4: pgbench: error: client 15 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 11 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 500 - Mode: Read Write

EXT4: pgbench: error: client 159 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 63 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 500 - Mode: Read Only

EXT4: pgbench: error: client 207 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 143 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 50 - Mode: Read Only

EXT4: pgbench: error: client 33 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 37 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 1 - Mode: Read Write

EXT4: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 1 - Mode: Read Only

EXT4: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 50 - Mode: Read Write

EXT4: pgbench: error: client 23 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 3 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 100 - Mode: Read Only

EXT4: pgbench: error: client 11 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 93 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Scaling Factor: 10000 - Clients: 250 - Mode: Read Only

EXT4: pgbench: error: client 15 aborted in command 0 (set) of script 0; evaluation of meta-command failed

XFS: pgbench: error: client 191 aborted in command 0 (set) of script 0; evaluation of meta-command failed

Apache HBase


Apache HBase 2.2.3 - Test: Async Random Read - Clients: 32 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 473 (SE +/- 4.40, N = 15)
XFS: 569 (SE +/- 6.53, N = 15)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 32 (Rows Per Second, More Is Better)
EXT4: 67261 (SE +/- 603.25, N = 15)
XFS: 55966 (SE +/- 591.92, N = 15)

PostgreSQL pgbench


PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 177.01 (SE +/- 7.19, N = 12)
XFS: 169.34 (SE +/- 2.01, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
EXT4: 1433 (SE +/- 47.54, N = 12)
XFS: 1479 (SE +/- 17.14, N = 12)

Apache Cassandra


Apache Cassandra 4.0 - Test: Mixed 1:3 (Op/s, More Is Better)
EXT4: 66998 (SE +/- 601.71, N = 7)
XFS: 65175 (SE +/- 668.69, N = 12)

Apache HBase


Apache HBase 2.2.3 - Test: Random Write - Clients: 64 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 667 (SE +/- 78.20, N = 9)
XFS: 449 (SE +/- 33.25, N = 15)

Apache HBase 2.2.3 - Test: Random Write - Clients: 64 (Rows Per Second, More Is Better)
EXT4: 110431 (SE +/- 16815.61, N = 9)
XFS: 152697 (SE +/- 12366.87, N = 15)

PostgreSQL pgbench


PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 668.76 (SE +/- 37.22, N = 12)
XFS: 671.08 (SE +/- 49.27, N = 9)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 500 - Mode: Read Write (TPS, More Is Better)
EXT4: 771 (SE +/- 38.50, N = 12)
XFS: 772 (SE +/- 47.92, N = 9)

Apache HBase


Apache HBase 2.2.3 - Test: Async Random Read - Clients: 16 (Rows Per Second, More Is Better)
EXT4: 54206 (SE +/- 924.97, N = 15)

Test: Async Random Read - Clients: 16

XFS: Test failed to run.

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 64 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 2115 (SE +/- 18.21, N = 3)
XFS: 2141 (SE +/- 18.77, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 64 (Rows Per Second, More Is Better)
EXT4: 30182 (SE +/- 250.74, N = 3)
XFS: 29806 (SE +/- 254.18, N = 3)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 1 (Rows Per Second, More Is Better)
EXT4: 6681 (SE +/- 78.59, N = 15)

Test: Sequential Read - Clients: 1

XFS: Test failed to run.

Apache HBase 2.2.3 - Test: Random Read - Clients: 64 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 633 (SE +/- 5.70, N = 3)
XFS: 1352 (SE +/- 13.39, N = 6)

Apache HBase 2.2.3 - Test: Random Read - Clients: 64 (Rows Per Second, More Is Better)
EXT4: 100261 (SE +/- 931.32, N = 3)
XFS: 47175 (SE +/- 460.72, N = 6)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 64 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 778 (SE +/- 5.24, N = 3)
XFS: 819 (SE +/- 14.49, N = 9)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 64 (Rows Per Second, More Is Better)
EXT4: 81765 (SE +/- 527.81, N = 3)
XFS: 77940 (SE +/- 1325.65, N = 9)

Test: Random Write - Clients: 256

EXT4: Test failed to run.

XFS: Test failed to run.

Memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool. This test profile currently stresses just the Redis protocol with basic options: a 1:1 Set/Get ratio, a pipeline of 30, 100 clients per thread, and a thread count equal to the number of CPU cores/threads present. Patches to extend the test are welcome as always. Learn more via the OpenBenchmarking.org test page.
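The parameters listed above map onto memtier_benchmark's command-line flags roughly as follows. This is a hedged sketch against a local Redis server; the server address, port, and thread count are assumptions, not the profile's exact invocation:

```shell
# 1:1 Set/Get ratio, pipeline of 30, 100 clients per thread,
# threads matching the 32 logical CPUs of the test system
memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
    --ratio=1:1 --pipeline=30 --clients=100 --threads=32
```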

Memtier_benchmark 1.2.17 - Protocol: Redis (Ops/sec, More Is Better)
EXT4: 1549937.94 (SE +/- 49791.68, N = 12)
XFS: 1490596.16 (SE +/- 20674.68, N = 12)
1. (CXX) g++ options: -O2 -levent -lpthread -lz -lpcre

Apache HBase


Apache HBase 2.2.3 - Test: Async Random Read - Clients: 16 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 294 (SE +/- 5.80, N = 15)

Apache HBase 2.2.3 - Test: Increment - Clients: 64 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 1493 (SE +/- 7.57, N = 3)
XFS: 1482 (SE +/- 13.00, N = 3)

Apache HBase 2.2.3 - Test: Increment - Clients: 64 (Rows Per Second, More Is Better)
EXT4: 42684 (SE +/- 167.74, N = 3)
XFS: 42998 (SE +/- 329.46, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.091 (SE +/- 0.001, N = 3)
XFS: 0.092 (SE +/- 0.001, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 1 - Mode: Read Only (TPS, More Is Better)
EXT4: 10983 (SE +/- 79.57, N = 3)
XFS: 10911 (SE +/- 121.75, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.521 (SE +/- 0.002, N = 3)
XFS: 0.515 (SE +/- 0.001, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
EXT4: 191884 (SE +/- 701.91, N = 3)
XFS: 194230 (SE +/- 354.76, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 118.40 (SE +/- 0.62, N = 3)
XFS: 99.73 (SE +/- 0.60, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Write (TPS, More Is Better)
EXT4: 4223 (SE +/- 22.33, N = 3)
XFS: 5014 (SE +/- 29.81, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 2.942 (SE +/- 0.016, N = 3)
XFS: 2.836 (SE +/- 0.008, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Only (TPS, More Is Better)
EXT4: 169958 (SE +/- 917.32, N = 3)
XFS: 176325 (SE +/- 464.17, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.255 (SE +/- 0.000, N = 3)
XFS: 0.254 (SE +/- 0.000, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
EXT4: 196267 (SE +/- 172.61, N = 3)
XFS: 196550 (SE +/- 352.21, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 1.264 (SE +/- 0.004, N = 3)
XFS: 1.286 (SE +/- 0.002, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 250 - Mode: Read Only (TPS, More Is Better)
EXT4: 197740 (SE +/- 670.87, N = 3)
XFS: 194410 (SE +/- 269.64, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 35.07 (SE +/- 0.22, N = 3)
XFS: 34.63 (SE +/- 0.23, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
EXT4: 2852 (SE +/- 18.10, N = 3)
XFS: 2888 (SE +/- 19.33, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 77.11 (SE +/- 0.24, N = 3)
XFS: 67.61 (SE +/- 0.55, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
EXT4: 3242 (SE +/- 9.96, N = 3)
XFS: 3698 (SE +/- 29.98, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 17.51 (SE +/- 0.03, N = 3)
XFS: 17.56 (SE +/- 0.07, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
EXT4: 2855 (SE +/- 4.82, N = 3)
XFS: 2847 (SE +/- 11.14, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 0.631 (SE +/- 0.001, N = 3)
XFS: 0.626 (SE +/- 0.002, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1000 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
EXT4: 1584 (SE +/- 1.22, N = 3)
XFS: 1598 (SE +/- 3.75, N = 3)

Apache HBase


OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.2.3Test: Sequential Read - Clients: 128EXT4XFS30060090012001500SE +/- 4.67, N = 3SE +/- 4.81, N = 313321300

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.2.3Test: Sequential Read - Clients: 128EXT4XFS20K40K60K80K100KSE +/- 278.67, N = 3SE +/- 313.48, N = 39553997724

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Mixed 1:1 (Op/s, More Is Better)
EXT4: 67250 (SE +/- 805.19, N = 3)
XFS: 67488 (SE +/- 670.05, N = 6)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 4.148 (SE +/- 0.037, N = 8)
XFS: 4.059 (SE +/- 0.040, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
EXT4: 12061 (SE +/- 106.55, N = 8)
XFS: 12322 (SE +/- 121.71, N = 3)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
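A quick way to compare the two file-systems on any of these results is the relative difference between the two reported means; a sketch using the scaling-factor-100, 50-client read-write TPS figures from this result file:

```python
def percent_diff(baseline, other):
    """Relative difference of `other` versus `baseline`, in percent."""
    return (other - baseline) / baseline * 100.0

# pgbench 14.0, scaling factor 100, 50 clients, read-write (values from this file)
ext4_tps, xfs_tps = 12061, 12322
print(f"XFS vs EXT4: {percent_diff(ext4_tps, xfs_tps):+.1f}%")
```

Differences of this size (a couple of percent) should be read against the standard errors quoted alongside each mean before drawing conclusions.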

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Random Write - Clients: 32 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 318 (SE +/- 37.02, N = 12)
XFS: 175 (SE +/- 23.38, N = 15)

Apache HBase 2.2.3 - Test: Random Write - Clients: 32 (Rows Per Second, More Is Better)
EXT4: 124449 (SE +/- 21213.62, N = 12)
XFS: 219181 (SE +/- 21678.90, N = 15)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Writes (Op/s, More Is Better)
EXT4: 78980 (SE +/- 670.12, N = 8)
XFS: 78269 (SE +/- 353.38, N = 3)

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 32 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 1146 (SE +/- 4.04, N = 3)
XFS: 1153 (SE +/- 4.18, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 32 (Rows Per Second, More Is Better)
EXT4: 27839 (SE +/- 93.71, N = 3)
XFS: 27685 (SE +/- 98.53, N = 3)

Test: Sequential Write - Clients: 256

EXT4: Test failed to run.

XFS: Test failed to run.

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 32 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 133 (SE +/- 10.62, N = 15)
XFS: 98 (SE +/- 2.61, N = 15)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 32 (Rows Per Second, More Is Better)
EXT4: 317972 (SE +/- 9145.09, N = 15)
XFS: 348681 (SE +/- 5477.25, N = 15)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 182 (SE +/- 1.31, N = 15)
XFS: 192 (SE +/- 8.92, N = 15)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 4 (Rows Per Second, More Is Better)
EXT4: 21846 (SE +/- 149.90, N = 15)
XFS: 21176 (SE +/- 662.38, N = 15)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Sequential Fill (Op/s, More Is Better)
EXT4: 148183 (SE +/- 1750.15, N = 3)
XFS: 149787 (SE +/- 641.02, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 185 (SE +/- 2.06, N = 15)
XFS: 184 (SE +/- 1.86, N = 15)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 4 (Rows Per Second, More Is Better)
EXT4: 21404 (SE +/- 217.88, N = 15)
XFS: 21565 (SE +/- 201.09, N = 15)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.2.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, Fewer Is Better)
EXT4: 112.38 (SE +/- 1.03, N = 7)
XFS: 110.57 (SE +/- 0.47, N = 3)
1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lei -fPIC -MMD

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Read Random Write Random (Op/s, More Is Better)
EXT4: 571707 (SE +/- 9698.77, N = 15)
XFS: 645769 (SE +/- 4151.31, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 16 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 262 (SE +/- 2.30, N = 15)
XFS: 325 (SE +/- 4.16, N = 3)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 16 (Rows Per Second, More Is Better)
EXT4: 60689 (SE +/- 505.75, N = 15)
XFS: 49038 (SE +/- 649.38, N = 3)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 155 (SE +/- 1.51, N = 15)
XFS: 160 (SE +/- 2.11, N = 15)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 1 (Rows Per Second, More Is Better)
EXT4: 6392 (SE +/- 57.80, N = 15)
XFS: 6214 (SE +/- 72.29, N = 15)

Apache HBase 2.2.3 - Test: Increment - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 223 (SE +/- 2.33, N = 3)
XFS: 263 (SE +/- 3.68, N = 15)

Apache HBase 2.2.3 - Test: Increment - Clients: 1 (Rows Per Second, More Is Better)
EXT4: 4452 (SE +/- 46.72, N = 3)
XFS: 3782 (SE +/- 47.01, N = 15)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 148 (SE +/- 1.88, N = 15)

Apache HBase 2.2.3 - Test: Increment - Clients: 32 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 746 (SE +/- 1.00, N = 3)
XFS: 775 (SE +/- 7.33, N = 3)

Apache HBase 2.2.3 - Test: Increment - Clients: 32 (Rows Per Second, More Is Better)
EXT4: 42608 (SE +/- 40.16, N = 3)
XFS: 41038 (SE +/- 450.19, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 16 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 734 (SE +/- 10.14, N = 3)
XFS: 734 (SE +/- 3.06, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 16 (Rows Per Second, More Is Better)
EXT4: 21724 (SE +/- 298.85, N = 3)
XFS: 21717 (SE +/- 89.80, N = 3)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
EXT4: 149.39 (SE +/- 0.88, N = 3)
XFS: 149.72 (SE +/- 0.40, N = 3)
1. (CC) gcc options: -O2 -ldl -lz -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 2.554 (SE +/- 0.012, N = 3)
XFS: 2.533 (SE +/- 0.007, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only (TPS, More Is Better)
EXT4: 195781 (SE +/- 906.00, N = 3)
XFS: 197404 (SE +/- 548.17, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 45.28 (SE +/- 0.63, N = 3)
XFS: 43.92 (SE +/- 0.52, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write (TPS, More Is Better)
EXT4: 11047 (SE +/- 153.46, N = 3)
XFS: 11387 (SE +/- 134.98, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 1.116 (SE +/- 0.002, N = 3)
XFS: 1.103 (SE +/- 0.003, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, More Is Better)
EXT4: 224047 (SE +/- 466.97, N = 3)
XFS: 226582 (SE +/- 688.93, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 0.571 (SE +/- 0.002, N = 3)
XFS: 0.574 (SE +/- 0.003, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
EXT4: 1751 (SE +/- 6.13, N = 3)
XFS: 1743 (SE +/- 8.81, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 8.300 (SE +/- 0.059, N = 3)
XFS: 8.088 (SE +/- 0.051, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
EXT4: 12050 (SE +/- 85.22, N = 3)
XFS: 12364 (SE +/- 78.18, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 21.53 (SE +/- 0.15, N = 3)
XFS: 20.85 (SE +/- 0.15, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
EXT4: 11614 (SE +/- 80.58, N = 3)
XFS: 11991 (SE +/- 88.54, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.078 (SE +/- 0.000, N = 3)
XFS: 0.078 (SE +/- 0.000, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, More Is Better)
EXT4: 12877 (SE +/- 46.38, N = 3)
XFS: 12840 (SE +/- 43.12, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.474 (SE +/- 0.004, N = 3)
XFS: 0.464 (SE +/- 0.001, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
EXT4: 211082 (SE +/- 1749.85, N = 3)
XFS: 215740 (SE +/- 517.47, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.232 (SE +/- 0.001, N = 3)
XFS: 0.230 (SE +/- 0.000, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
EXT4: 215937 (SE +/- 968.71, N = 3)
XFS: 217222 (SE +/- 353.34, N = 3)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
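pgbench's average latency and TPS are two views of the same measurement: with C concurrent clients kept busy, average latency is roughly C / TPS. A sketch checking that against the scaling-factor-100, 250-client read-write figures on EXT4 (11614 TPS, reported as 21.53 ms):

```python
def avg_latency_ms(clients, tps):
    """Approximate pgbench average latency (ms) implied by client count and TPS."""
    return clients / tps * 1000.0

# EXT4, scaling factor 100, 250 clients, read-write (values from this file)
print(round(avg_latency_ms(250, 11614), 2))  # → 21.53, matching the reported latency
```

The same identity holds across the other client counts in this file, which is a useful sanity check that the latency and TPS graphs describe the same runs.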

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, Fewer Is Better)
EXT4: 288.12 (SE +/- 0.20, N = 3)
XFS: 289.40 (SE +/- 0.24, N = 3)

LevelDB 1.22 - Benchmark: Sequential Fill (Microseconds Per Op, Fewer Is Better)
EXT4: 288.30 (SE +/- 0.97, N = 3)
XFS: 288.98 (SE +/- 0.70, N = 3)

LevelDB 1.22 - Benchmark: Sequential Fill (MB/s, More Is Better)
EXT4: 12.2 (SE +/- 0.03, N = 3)
XFS: 12.2 (SE +/- 0.03, N = 3)

1. (CXX) g++ options: -O3 -lsnappy -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 13.78 (SE +/- 0.01, N = 3)
XFS: 13.94 (SE +/- 0.04, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
EXT4: 3627 (SE +/- 3.19, N = 3)
XFS: 3587 (SE +/- 9.88, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 36.60 (SE +/- 0.18, N = 3)
XFS: 36.53 (SE +/- 0.33, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
EXT4: 2733 (SE +/- 13.77, N = 3)
XFS: 2738 (SE +/- 25.01, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 2.341 (SE +/- 0.010, N = 3)
XFS: 2.292 (SE +/- 0.012, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 500 - Mode: Read Only (TPS, More Is Better)
EXT4: 213610 (SE +/- 936.66, N = 3)
XFS: 218102 (SE +/- 1176.20, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 1.045 (SE +/- 0.002, N = 3)
XFS: 1.033 (SE +/- 0.004, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only (TPS, More Is Better)
EXT4: 239272 (SE +/- 435.69, N = 3)
XFS: 242059 (SE +/- 1017.69, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.445 (SE +/- 0.001, N = 3)
XFS: 0.443 (SE +/- 0.001, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
EXT4: 224756 (SE +/- 402.72, N = 3)
XFS: 225630 (SE +/- 560.78, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.223 (SE +/- 0.001, N = 3)
XFS: 0.222 (SE +/- 0.001, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
EXT4: 224084 (SE +/- 885.40, N = 3)
XFS: 225698 (SE +/- 673.80, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EXT4: 0.532 (SE +/- 0.004, N = 3)
XFS: 0.530 (SE +/- 0.001, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
EXT4: 1880 (SE +/- 14.82, N = 3)
XFS: 1888 (SE +/- 2.19, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
EXT4: 0.076 (SE +/- 0.001, N = 3)
XFS: 0.075 (SE +/- 0.000, N = 3)

PostgreSQL pgbench 14.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, More Is Better)
EXT4: 13247 (SE +/- 93.30, N = 3)
XFS: 13398 (SE +/- 32.01, N = 3)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Random Write - Clients: 16 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 165 (SE +/- 29.81, N = 15)
XFS: 73 (SE +/- 20.22, N = 12)

Apache HBase 2.2.3 - Test: Random Write - Clients: 16 (Rows Per Second, More Is Better)
EXT4: 147758 (SE +/- 22515.90, N = 15)
XFS: 289850 (SE +/- 26659.17, N = 12)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
EXT4: 859395.6 (SE +/- 1636.42, N = 3)
XFS: 848341.6 (SE +/- 4939.53, N = 3)

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
EXT4: 867557.9 (SE +/- 1298.48, N = 3)
XFS: 860201.8 (SE +/- 2255.62, N = 3)

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
EXT4: 886748.9 (SE +/- 354.79, N = 3)
XFS: 877126.2 (SE +/- 2719.95, N = 3)

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 166 (SE +/- 1.89, N = 5)
XFS: 149 (SE +/- 1.44, N = 15)

Apache HBase 2.2.3 - Test: Random Read - Clients: 1 (Rows Per Second, More Is Better)
EXT4: 5980 (SE +/- 65.40, N = 5)
XFS: 6648 (SE +/- 59.65, N = 15)

Apache HBase 2.2.3 - Test: Random Read - Clients: 16 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 223 (SE +/- 2.33, N = 3)
XFS: 263 (SE +/- 2.02, N = 9)

Apache HBase 2.2.3 - Test: Random Read - Clients: 16 (Rows Per Second, More Is Better)
EXT4: 70930 (SE +/- 706.49, N = 3)
XFS: 60216 (SE +/- 473.67, N = 9)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Hot Read (Microseconds Per Op, Fewer Is Better)
EXT4: 35.28 (SE +/- 0.33, N = 3)
XFS: 35.25 (SE +/- 0.25, N = 15)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Random Read - Clients: 32 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 352 (SE +/- 1.86, N = 3)
XFS: 468 (SE +/- 5.12, N = 4)

Apache HBase 2.2.3 - Test: Random Read - Clients: 32 (Rows Per Second, More Is Better)
EXT4: 90063 (SE +/- 460.85, N = 3)
XFS: 67957 (SE +/- 741.01, N = 4)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 16 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 60 (SE +/- 8.10, N = 15)
XFS: 51 (SE +/- 6.89, N = 15)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 16 (Rows Per Second, More Is Better)
EXT4: 347807 (SE +/- 9367.43, N = 15)
XFS: 398393 (SE +/- 10607.49, N = 15)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 32 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 432 (SE +/- 4.81, N = 3)
XFS: 502 (SE +/- 6.03, N = 3)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 32 (Rows Per Second, More Is Better)
EXT4: 73558 (SE +/- 849.17, N = 3)
XFS: 63518 (SE +/- 782.94, N = 3)

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.2.0 (Ops/sec, More Is Better)
EXT4: 321216.81 (SE +/- 1831.29, N = 3)
XFS: 319106.41 (SE +/- 3240.95, N = 3)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Increment - Clients: 16 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 468 (SE +/- 3.48, N = 3)
XFS: 491 (SE +/- 3.51, N = 3)

Apache HBase 2.2.3 - Test: Increment - Clients: 16 (Rows Per Second, More Is Better)
EXT4: 33959 (SE +/- 262.12, N = 3)
XFS: 32349 (SE +/- 243.82, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 474 (SE +/- 2.60, N = 3)
XFS: 466 (SE +/- 4.36, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 4 (Rows Per Second, More Is Better)
EXT4: 8403 (SE +/- 47.05, N = 3)
XFS: 8556 (SE +/- 77.44, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 383 (SE +/- 3.33, N = 3)
XFS: 387 (SE +/- 5.00, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 1 (Rows Per Second, More Is Better)
EXT4: 2597 (SE +/- 22.01, N = 3)
XFS: 2572 (SE +/- 32.67, N = 3)

Apache HBase 2.2.3 - Test: Increment - Clients: 4 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 308 (SE +/- 2.89, N = 3)
XFS: 325 (SE +/- 3.59, N = 4)

Apache HBase 2.2.3 - Test: Increment - Clients: 4 (Rows Per Second, More Is Better)
EXT4: 12905 (SE +/- 138.30, N = 3)
XFS: 12244 (SE +/- 140.72, N = 4)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Random Fill Sync (Op/s, More Is Better)
EXT4: 65263 (SE +/- 81.04, N = 3)
XFS: 65252 (SE +/- 71.15, N = 3)

Facebook RocksDB 6.22.1 - Test: Random Fill (Op/s, More Is Better)
EXT4: 146594 (SE +/- 1137.72, N = 3)
XFS: 149915 (SE +/- 1431.41, N = 3)

Facebook RocksDB 6.22.1 - Test: Update Random (Op/s, More Is Better)
EXT4: 133336 (SE +/- 966.89, N = 3)
XFS: 138575 (SE +/- 1044.42, N = 3)

Facebook RocksDB 6.22.1 - Test: Read While Writing (Op/s, More Is Better)
EXT4: 2410987 (SE +/- 31860.75, N = 3)
XFS: 2395428 (SE +/- 30012.35, N = 3)

Facebook RocksDB 6.22.1 - Test: Random Read (Op/s, More Is Better)
EXT4: 48097925 (SE +/- 427414.69, N = 3)
XFS: 48239829 (SE +/- 442272.59, N = 3)

1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Test: Async Random Read - Clients: 256

EXT4: Test failed to run.

XFS: Test failed to run.

Test: Async Random Write - Clients: 256

EXT4: Test failed to run.

XFS: Test failed to run.

Test: Random Read - Clients: 256

EXT4: Test failed to run.

XFS: Test failed to run.

Test: Increment - Clients: 256

EXT4: Test failed to run.

XFS: Test failed to run.

Test: Sequential Read - Clients: 256

EXT4: Test failed to run.

XFS: Test failed to run.

Apache HBase 2.2.3 - Test: Random Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 166 (SE +/- 0.88, N = 3)
XFS: 173 (SE +/- 1.89, N = 5)

Apache HBase 2.2.3 - Test: Random Read - Clients: 4 (Rows Per Second, More Is Better)
EXT4: 23872 (SE +/- 119.26, N = 3)
XFS: 22927 (SE +/- 250.88, N = 5)

Apache HBase 2.2.3 - Test: Random Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 45 (SE +/- 16.39, N = 12)
XFS: 50 (SE +/- 19.39, N = 12)

Apache HBase 2.2.3 - Test: Random Write - Clients: 4 (Rows Per Second, More Is Better)
EXT4: 162144 (SE +/- 18825.90, N = 12)
XFS: 156727 (SE +/- 18474.95, N = 12)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, Fewer Is Better)
EXT4: 44.14 (SE +/- 0.17, N = 3)
XFS: 44.14 (SE +/- 0.31, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 30 (SE +/- 12.02, N = 12)
XFS: 33 (SE +/- 15.17, N = 12)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 4 (Rows Per Second, More Is Better)
EXT4: 197735 (SE +/- 17028.67, N = 12)
XFS: 201341 (SE +/- 12069.51, N = 12)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Read (Microseconds Per Op, Fewer Is Better)
EXT4: 35.39 (SE +/- 0.06, N = 3)
XFS: 35.53 (SE +/- 0.18, N = 3)

LevelDB 1.22 - Benchmark: Overwrite (Microseconds Per Op, Fewer Is Better)
EXT4: 288.50 (SE +/- 0.36, N = 3)
XFS: 292.07 (SE +/- 0.32, N = 3)

LevelDB 1.22 - Benchmark: Overwrite (MB/s, More Is Better)
EXT4: 12.2 (SE +/- 0.00, N = 3)
XFS: 12.1 (SE +/- 0.00, N = 3)

LevelDB 1.22 - Benchmark: Random Fill (Microseconds Per Op, Fewer Is Better)
EXT4: 287.33 (SE +/- 0.66, N = 3)
XFS: 288.49 (SE +/- 0.38, N = 3)

LevelDB 1.22 - Benchmark: Random Fill (MB/s, More Is Better)
EXT4: 12.3 (SE +/- 0.03, N = 3)
XFS: 12.2 (SE +/- 0.00, N = 3)

1. (CXX) g++ options: -O3 -lsnappy -lpthread
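LevelDB reports the same run both as microseconds per operation and as MB/s, and converting between the two is simple arithmetic. A sketch using the EXT4 random-fill figures above; note the implied bytes-per-op is just what follows from the two reported numbers, not a documented LevelDB setting:

```python
def ops_per_sec(us_per_op):
    """Convert microseconds-per-operation into operations per second."""
    return 1e6 / us_per_op

def implied_bytes_per_op(mb_per_sec, us_per_op):
    """Bytes moved per operation implied by a MB/s figure at a given us/op."""
    # (MB/s * 1e6 bytes/MB) * (seconds per op)
    return mb_per_sec * 1e6 * (us_per_op / 1e6)

# EXT4 random fill: 287.33 us/op at 12.3 MB/s (values from this file)
print(round(ops_per_sec(287.33)))               # → 3480 ops/s
print(round(implied_bytes_per_op(12.3, 287.33)))  # → 3534 bytes per op
```

This makes the near-identical µs/op and MB/s pairs for EXT4 and XFS easy to reconcile at a glance.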

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 18 (SE +/- 0.23, N = 13)
XFS: 17 (SE +/- 0.24, N = 15)

Apache HBase 2.2.3 - Test: Random Write - Clients: 1 (Rows Per Second, More Is Better)
EXT4: 53879 (SE +/- 611.63, N = 13)
XFS: 54359 (SE +/- 624.11, N = 15)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
EXT4: 14 (SE +/- 0.25, N = 15)
XFS: 14 (SE +/- 0.21, N = 15)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 1 (Rows Per Second, More Is Better)
EXT4: 65319 (SE +/- 946.43, N = 15)
XFS: 65732 (SE +/- 803.17, N = 15)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, More Is Better)
EXT4: 878810.46 (SE +/- 10546.64, N = 3)
XFS: 857147.73 (SE +/- 3946.38, N = 3)

Redis 6.0.9 - Test: GET (Requests Per Second, More Is Better)
EXT4: 873340.33 (SE +/- 4177.59, N = 3)
XFS: 882156.81 (SE +/- 3146.20, N = 3)

Redis 6.0.9 - Test: LPUSH (Requests Per Second, More Is Better)
EXT4: 877540.98 (SE +/- 3122.84, N = 3)
XFS: 876195.46 (SE +/- 3139.63, N = 3)

Redis 6.0.9 - Test: LPOP (Requests Per Second, More Is Better)
EXT4: 880911.58 (SE +/- 3329.62, N = 3)
XFS: 873824.56 (SE +/- 4956.07, N = 3)

Redis 6.0.9 - Test: SADD (Requests Per Second, More Is Better)
EXT4: 899392.79 (SE +/- 12939.69, N = 3)
XFS: 883258.92 (SE +/- 9058.69, N = 3)

1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
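As described, the profile times a fixed number of insertions into an indexed database. A self-contained sketch of that measurement idea, using Python's stdlib sqlite3 module (the schema, row count, and in-memory database here are placeholders, not the actual test profile's):

```python
import sqlite3
import time

def timed_insertions(n: int = 10_000) -> float:
    """Return the wall time to insert n rows into an indexed table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    # Every insertion must also maintain this index, which is what
    # distinguishes the workload from a plain append.
    con.execute("CREATE INDEX idx_t_id ON t (id)")
    start = time.perf_counter()
    with con:  # one transaction; committing per row would dominate the timing
        con.executemany(
            "INSERT INTO t VALUES (?, ?)",
            ((i, f"row-{i}") for i in range(n)),
        )
    return time.perf_counter() - start

print(f"{timed_insertions():.3f} seconds")
```

On-disk databases pay an additional fsync cost at commit time, which is where the EXT4/XFS difference in the result below comes from.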

SQLite 3.30.1, Threads / Copies: 1 (Seconds, Fewer Is Better):
  EXT4: 3.159 (SE +/- 0.005, N = 3)
  XFS: 3.242 (SE +/- 0.019, N = 3)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.
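The Fill Sync results below use synchronous writes, analogous to LevelDB's WriteOptions sync flag: each record must reach stable storage before the next one is issued, instead of being buffered by the OS. A minimal sketch of that difference, with an illustrative 100-byte record size (this is a plain-file analogy, not LevelDB itself):

```python
import os
import tempfile
import time

def fill(path: str, records: int, sync: bool) -> float:
    """Append fixed-size records to path; fsync after each one when sync=True."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.perf_counter()
    try:
        for _ in range(records):
            os.write(fd, b"x" * 100)
            if sync:
                os.fsync(fd)  # block until the data reaches stable storage
    finally:
        os.close(fd)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    buffered = fill(os.path.join(d, "buffered"), 200, sync=False)
    synced = fill(os.path.join(d, "synced"), 200, sync=True)
    print(f"buffered: {buffered:.4f}s  synced: {synced:.4f}s")
```

The per-record fsync is exactly the path where the file-system's journaling behavior matters, so Fill Sync is the LevelDB sub-test most likely to separate EXT4 from XFS.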

LevelDB 1.22, Benchmark: Fill Sync (Microseconds Per Op, Fewer Is Better):
  EXT4: 543.77 (SE +/- 4.28, N = 3)
  XFS: 539.80 (SE +/- 4.82, N = 3)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22, Benchmark: Fill Sync (MB/s, More Is Better):
  EXT4: 6.4 (SE +/- 0.03, N = 3)
  XFS: 6.5 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

Clients: 1, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096

The mysqlslap client binary could not be found on either file-system, so no MariaDB results were recorded at any client count. Every run failed with the same error:

EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found

XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found

180 Results Shown

Apache HBase:
  Rand Write - 128:
    Microseconds - Average Latency
    Rows Per Second
  Async Rand Read - 128:
    Microseconds - Average Latency
    Rows Per Second
  Seq Write - 64:
    Microseconds - Average Latency
    Rows Per Second
  Rand Read - 128:
    Microseconds - Average Latency
    Rows Per Second
  Async Rand Write - 128:
    Microseconds - Average Latency
    Rows Per Second
  Seq Write - 128:
    Microseconds - Average Latency
    Rows Per Second
  Increment - 128:
    Microseconds - Average Latency
    Rows Per Second
Apache Cassandra
Apache HBase:
  Async Rand Read - 64:
    Microseconds - Average Latency
    Rows Per Second
  Async Rand Read - 32:
    Microseconds - Average Latency
    Rows Per Second
PostgreSQL pgbench:
  1 - 250 - Read Write - Average Latency
  1 - 250 - Read Write
Apache Cassandra
Apache HBase:
  Rand Write - 64:
    Microseconds - Average Latency
    Rows Per Second
PostgreSQL pgbench:
  1 - 500 - Read Write - Average Latency
  1 - 500 - Read Write
Apache HBase:
  Async Rand Read - 16
  Async Rand Write - 64:
    Microseconds - Average Latency
    Rows Per Second
  Seq Read - 1
  Rand Read - 64:
    Microseconds - Average Latency
    Rows Per Second
  Seq Read - 64:
    Microseconds - Average Latency
    Rows Per Second
Memtier_benchmark
Apache HBase:
  Async Rand Read - 16
  Increment - 64:
    Microseconds - Average Latency
    Rows Per Second
PostgreSQL pgbench:
  1000 - 1 - Read Only - Average Latency
  1000 - 1 - Read Only
  1000 - 100 - Read Only - Average Latency
  1000 - 100 - Read Only
  1000 - 500 - Read Write - Average Latency
  1000 - 500 - Read Write
  1000 - 500 - Read Only - Average Latency
  1000 - 500 - Read Only
  1000 - 50 - Read Only - Average Latency
  1000 - 50 - Read Only
  1000 - 250 - Read Only - Average Latency
  1000 - 250 - Read Only
  1000 - 100 - Read Write - Average Latency
  1000 - 100 - Read Write
  1000 - 250 - Read Write - Average Latency
  1000 - 250 - Read Write
  1000 - 50 - Read Write - Average Latency
  1000 - 50 - Read Write
  1000 - 1 - Read Write - Average Latency
  1000 - 1 - Read Write
Apache HBase:
  Seq Read - 128:
    Microseconds - Average Latency
    Rows Per Second
Apache Cassandra
PostgreSQL pgbench:
  100 - 50 - Read Write - Average Latency
  100 - 50 - Read Write
Apache HBase:
  Rand Write - 32:
    Microseconds - Average Latency
    Rows Per Second
Apache Cassandra
Apache HBase:
  Async Rand Write - 32:
    Microseconds - Average Latency
    Rows Per Second
  Seq Write - 32:
    Microseconds - Average Latency
    Rows Per Second
  Seq Read - 4:
    Microseconds - Average Latency
    Rows Per Second
Facebook RocksDB
Apache HBase:
  Async Rand Read - 4:
    Microseconds - Average Latency
    Rows Per Second
Apache CouchDB
Facebook RocksDB
Apache HBase:
  Seq Read - 16:
    Microseconds - Average Latency
    Rows Per Second
  Async Rand Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  Increment - 1:
    Microseconds - Average Latency
    Rows Per Second
  Seq Read - 1:
    Microseconds - Average Latency
  Increment - 32:
    Microseconds - Average Latency
    Rows Per Second
  Async Rand Write - 16:
    Microseconds - Average Latency
    Rows Per Second
SQLite Speedtest
PostgreSQL pgbench:
  100 - 500 - Read Only - Average Latency
  100 - 500 - Read Only
  100 - 500 - Read Write - Average Latency
  100 - 500 - Read Write
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
  100 - 1 - Read Write - Average Latency
  100 - 1 - Read Write
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
  100 - 1 - Read Only - Average Latency
  100 - 1 - Read Only
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
  100 - 50 - Read Only - Average Latency
  100 - 50 - Read Only
LevelDB:
  Rand Delete
  Seq Fill:
    Microseconds Per Op
    MB/s
PostgreSQL pgbench:
  1 - 50 - Read Write - Average Latency
  1 - 50 - Read Write
  1 - 100 - Read Write - Average Latency
  1 - 100 - Read Write
  1 - 500 - Read Only - Average Latency
  1 - 500 - Read Only
  1 - 250 - Read Only - Average Latency
  1 - 250 - Read Only
  1 - 100 - Read Only - Average Latency
  1 - 100 - Read Only
  1 - 50 - Read Only - Average Latency
  1 - 50 - Read Only
  1 - 1 - Read Write - Average Latency
  1 - 1 - Read Write
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Only
Apache HBase:
  Rand Write - 16:
    Microseconds - Average Latency
    Rows Per Second
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  1024 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
Apache HBase:
  Rand Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  Rand Read - 16:
    Microseconds - Average Latency
    Rows Per Second
LevelDB
Apache HBase:
  Rand Read - 32:
    Microseconds - Average Latency
    Rows Per Second
  Seq Write - 16:
    Microseconds - Average Latency
    Rows Per Second
  Seq Read - 32:
    Microseconds - Average Latency
    Rows Per Second
KeyDB
Apache HBase:
  Increment - 16:
    Microseconds - Average Latency
    Rows Per Second
  Async Rand Write - 4:
    Microseconds - Average Latency
    Rows Per Second
  Async Rand Write - 1:
    Microseconds - Average Latency
    Rows Per Second
  Increment - 4:
    Microseconds - Average Latency
    Rows Per Second
Facebook RocksDB:
  Rand Fill Sync
  Rand Fill
  Update Rand
  Read While Writing
  Rand Read
Apache HBase:
  Rand Read - 4:
    Microseconds - Average Latency
    Rows Per Second
  Rand Write - 4:
    Microseconds - Average Latency
    Rows Per Second
LevelDB
Apache HBase:
  Seq Write - 4:
    Microseconds - Average Latency
    Rows Per Second
LevelDB:
  Rand Read
  Overwrite:
    Microseconds Per Op
    MB/s
  Rand Fill:
    Microseconds Per Op
    MB/s
Apache HBase:
  Rand Write - 1:
    Microseconds - Average Latency
    Rows Per Second
  Seq Write - 1:
    Microseconds - Average Latency
    Rows Per Second
Redis:
  SET
  GET
  LPUSH
  LPOP
  SADD
SQLite
LevelDB:
  Fill Sync:
    Microseconds Per Op
    MB/s