Server

A collection of common server tests.

See how your system performs with this suite using the Phoronix Test Suite. It's as easy as running the phoronix-test-suite benchmark server command.
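As a minimal sketch of typical Phoronix Test Suite usage (exact prompts and output depend on the installed version), the suite can be inspected, installed, and run with the standard subcommands; pts/server is the suite identifier shown in the revision history below:

    phoronix-test-suite info pts/server        # list the tests and options contained in this suite
    phoronix-test-suite install pts/server     # download and build all of the suite's test profiles
    phoronix-test-suite benchmark pts/server   # install (if needed) and run the suite, prompting for options

Individual tests from the suite can also be run on their own, e.g. phoronix-test-suite benchmark pts/nginx.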

Tests In This Suite

  • Apache Cassandra

  •         Test: Writes
  • Apache CouchDB

  •         Bulk Size: 100 - Inserts: 1000 - Rounds: 30
  •         Bulk Size: 100 - Inserts: 3000 - Rounds: 30
  •         Bulk Size: 300 - Inserts: 1000 - Rounds: 30
  •         Bulk Size: 300 - Inserts: 3000 - Rounds: 30
  •         Bulk Size: 500 - Inserts: 1000 - Rounds: 30
  •         Bulk Size: 500 - Inserts: 3000 - Rounds: 30
  • Apache Hadoop

  •         Operation: Create - Threads: 20 - Files: 100000
  •         Operation: Create - Threads: 20 - Files: 1000000
  •         Operation: Create - Threads: 20 - Files: 10000000
  •         Operation: Create - Threads: 50 - Files: 100000
  •         Operation: Create - Threads: 50 - Files: 1000000
  •         Operation: Create - Threads: 50 - Files: 10000000
  •         Operation: Create - Threads: 100 - Files: 100000
  •         Operation: Create - Threads: 100 - Files: 1000000
  •         Operation: Create - Threads: 100 - Files: 10000000
  •         Operation: Create - Threads: 500 - Files: 100000
  •         Operation: Create - Threads: 500 - Files: 1000000
  •         Operation: Create - Threads: 500 - Files: 10000000
  •         Operation: Create - Threads: 1000 - Files: 100000
  •         Operation: Create - Threads: 1000 - Files: 1000000
  •         Operation: Create - Threads: 1000 - Files: 10000000
  •         Operation: Open - Threads: 20 - Files: 100000
  •         Operation: Open - Threads: 20 - Files: 1000000
  •         Operation: Open - Threads: 20 - Files: 10000000
  •         Operation: Open - Threads: 50 - Files: 100000
  •         Operation: Open - Threads: 50 - Files: 1000000
  •         Operation: Open - Threads: 50 - Files: 10000000
  •         Operation: Open - Threads: 100 - Files: 100000
  •         Operation: Open - Threads: 100 - Files: 1000000
  •         Operation: Open - Threads: 100 - Files: 10000000
  •         Operation: Open - Threads: 500 - Files: 100000
  •         Operation: Open - Threads: 500 - Files: 1000000
  •         Operation: Open - Threads: 500 - Files: 10000000
  •         Operation: Open - Threads: 1000 - Files: 100000
  •         Operation: Open - Threads: 1000 - Files: 1000000
  •         Operation: Open - Threads: 1000 - Files: 10000000
  •         Operation: Delete - Threads: 20 - Files: 100000
  •         Operation: Delete - Threads: 20 - Files: 1000000
  •         Operation: Delete - Threads: 20 - Files: 10000000
  •         Operation: Delete - Threads: 50 - Files: 100000
  •         Operation: Delete - Threads: 50 - Files: 1000000
  •         Operation: Delete - Threads: 50 - Files: 10000000
  •         Operation: Delete - Threads: 100 - Files: 100000
  •         Operation: Delete - Threads: 100 - Files: 1000000
  •         Operation: Delete - Threads: 100 - Files: 10000000
  •         Operation: Delete - Threads: 500 - Files: 100000
  •         Operation: Delete - Threads: 500 - Files: 1000000
  •         Operation: Delete - Threads: 500 - Files: 10000000
  •         Operation: Delete - Threads: 1000 - Files: 100000
  •         Operation: Delete - Threads: 1000 - Files: 1000000
  •         Operation: Delete - Threads: 1000 - Files: 10000000
  •         Operation: File Status - Threads: 20 - Files: 100000
  •         Operation: File Status - Threads: 20 - Files: 1000000
  •         Operation: File Status - Threads: 20 - Files: 10000000
  •         Operation: File Status - Threads: 50 - Files: 100000
  •         Operation: File Status - Threads: 50 - Files: 1000000
  •         Operation: File Status - Threads: 50 - Files: 10000000
  •         Operation: File Status - Threads: 100 - Files: 100000
  •         Operation: File Status - Threads: 100 - Files: 1000000
  •         Operation: File Status - Threads: 100 - Files: 10000000
  •         Operation: File Status - Threads: 500 - Files: 100000
  •         Operation: File Status - Threads: 500 - Files: 1000000
  •         Operation: File Status - Threads: 500 - Files: 10000000
  •         Operation: File Status - Threads: 1000 - Files: 100000
  •         Operation: File Status - Threads: 1000 - Files: 1000000
  •         Operation: File Status - Threads: 1000 - Files: 10000000
  •         Operation: Rename - Threads: 20 - Files: 100000
  •         Operation: Rename - Threads: 20 - Files: 1000000
  •         Operation: Rename - Threads: 20 - Files: 10000000
  •         Operation: Rename - Threads: 50 - Files: 100000
  •         Operation: Rename - Threads: 50 - Files: 1000000
  •         Operation: Rename - Threads: 50 - Files: 10000000
  •         Operation: Rename - Threads: 100 - Files: 100000
  •         Operation: Rename - Threads: 100 - Files: 1000000
  •         Operation: Rename - Threads: 100 - Files: 10000000
  •         Operation: Rename - Threads: 500 - Files: 100000
  •         Operation: Rename - Threads: 500 - Files: 1000000
  •         Operation: Rename - Threads: 500 - Files: 10000000
  •         Operation: Rename - Threads: 1000 - Files: 100000
  •         Operation: Rename - Threads: 1000 - Files: 1000000
  •         Operation: Rename - Threads: 1000 - Files: 10000000
  • Apache HBase

  •         Rows: 10000 - Test: Random Write - Clients: 1
  •         Rows: 10000 - Test: Random Write - Clients: 4
  •         Rows: 10000 - Test: Random Write - Clients: 16
  •         Rows: 10000 - Test: Random Write - Clients: 32
  •         Rows: 10000 - Test: Random Write - Clients: 64
  •         Rows: 10000 - Test: Random Write - Clients: 128
  •         Rows: 10000 - Test: Random Write - Clients: 256
  •         Rows: 10000 - Test: Random Write - Clients: 500
  •         Rows: 10000 - Test: Async Random Write - Clients: 1
  •         Rows: 10000 - Test: Async Random Write - Clients: 4
  •         Rows: 10000 - Test: Async Random Write - Clients: 16
  •         Rows: 10000 - Test: Async Random Write - Clients: 32
  •         Rows: 10000 - Test: Async Random Write - Clients: 64
  •         Rows: 10000 - Test: Async Random Write - Clients: 128
  •         Rows: 10000 - Test: Async Random Write - Clients: 256
  •         Rows: 10000 - Test: Async Random Write - Clients: 500
  •         Rows: 10000 - Test: Random Read - Clients: 1
  •         Rows: 10000 - Test: Random Read - Clients: 4
  •         Rows: 10000 - Test: Random Read - Clients: 16
  •         Rows: 10000 - Test: Random Read - Clients: 32
  •         Rows: 10000 - Test: Random Read - Clients: 64
  •         Rows: 10000 - Test: Random Read - Clients: 128
  •         Rows: 10000 - Test: Random Read - Clients: 256
  •         Rows: 10000 - Test: Random Read - Clients: 500
  •         Rows: 10000 - Test: Async Random Read - Clients: 1
  •         Rows: 10000 - Test: Async Random Read - Clients: 4
  •         Rows: 10000 - Test: Async Random Read - Clients: 16
  •         Rows: 10000 - Test: Async Random Read - Clients: 32
  •         Rows: 10000 - Test: Async Random Read - Clients: 64
  •         Rows: 10000 - Test: Async Random Read - Clients: 128
  •         Rows: 10000 - Test: Async Random Read - Clients: 256
  •         Rows: 10000 - Test: Async Random Read - Clients: 500
  •         Rows: 10000 - Test: Sequential Write - Clients: 1
  •         Rows: 10000 - Test: Sequential Write - Clients: 4
  •         Rows: 10000 - Test: Sequential Write - Clients: 16
  •         Rows: 10000 - Test: Sequential Write - Clients: 32
  •         Rows: 10000 - Test: Sequential Write - Clients: 64
  •         Rows: 10000 - Test: Sequential Write - Clients: 128
  •         Rows: 10000 - Test: Sequential Write - Clients: 256
  •         Rows: 10000 - Test: Sequential Write - Clients: 500
  •         Rows: 10000 - Test: Sequential Read - Clients: 1
  •         Rows: 10000 - Test: Sequential Read - Clients: 4
  •         Rows: 10000 - Test: Sequential Read - Clients: 16
  •         Rows: 10000 - Test: Sequential Read - Clients: 32
  •         Rows: 10000 - Test: Sequential Read - Clients: 64
  •         Rows: 10000 - Test: Sequential Read - Clients: 128
  •         Rows: 10000 - Test: Sequential Read - Clients: 256
  •         Rows: 10000 - Test: Sequential Read - Clients: 500
  •         Rows: 10000 - Test: Scan - Clients: 1
  •         Rows: 10000 - Test: Scan - Clients: 4
  •         Rows: 10000 - Test: Scan - Clients: 16
  •         Rows: 10000 - Test: Scan - Clients: 32
  •         Rows: 10000 - Test: Scan - Clients: 64
  •         Rows: 10000 - Test: Scan - Clients: 128
  •         Rows: 10000 - Test: Scan - Clients: 256
  •         Rows: 10000 - Test: Scan - Clients: 500
  •         Rows: 10000 - Test: Increment - Clients: 1
  •         Rows: 10000 - Test: Increment - Clients: 4
  •         Rows: 10000 - Test: Increment - Clients: 16
  •         Rows: 10000 - Test: Increment - Clients: 32
  •         Rows: 10000 - Test: Increment - Clients: 64
  •         Rows: 10000 - Test: Increment - Clients: 128
  •         Rows: 10000 - Test: Increment - Clients: 256
  •         Rows: 10000 - Test: Increment - Clients: 500
  •         Rows: 1000000 - Test: Random Write - Clients: 1
  •         Rows: 1000000 - Test: Random Write - Clients: 4
  •         Rows: 1000000 - Test: Random Write - Clients: 16
  •         Rows: 1000000 - Test: Random Write - Clients: 32
  •         Rows: 1000000 - Test: Random Write - Clients: 64
  •         Rows: 1000000 - Test: Random Write - Clients: 128
  •         Rows: 1000000 - Test: Random Write - Clients: 256
  •         Rows: 1000000 - Test: Random Write - Clients: 500
  •         Rows: 1000000 - Test: Async Random Write - Clients: 1
  •         Rows: 1000000 - Test: Async Random Write - Clients: 4
  •         Rows: 1000000 - Test: Async Random Write - Clients: 16
  •         Rows: 1000000 - Test: Async Random Write - Clients: 32
  •         Rows: 1000000 - Test: Async Random Write - Clients: 64
  •         Rows: 1000000 - Test: Async Random Write - Clients: 128
  •         Rows: 1000000 - Test: Async Random Write - Clients: 256
  •         Rows: 1000000 - Test: Async Random Write - Clients: 500
  •         Rows: 1000000 - Test: Random Read - Clients: 1
  •         Rows: 1000000 - Test: Random Read - Clients: 4
  •         Rows: 1000000 - Test: Random Read - Clients: 16
  •         Rows: 1000000 - Test: Random Read - Clients: 32
  •         Rows: 1000000 - Test: Random Read - Clients: 64
  •         Rows: 1000000 - Test: Random Read - Clients: 128
  •         Rows: 1000000 - Test: Random Read - Clients: 256
  •         Rows: 1000000 - Test: Random Read - Clients: 500
  •         Rows: 1000000 - Test: Async Random Read - Clients: 1
  •         Rows: 1000000 - Test: Async Random Read - Clients: 4
  •         Rows: 1000000 - Test: Async Random Read - Clients: 16
  •         Rows: 1000000 - Test: Async Random Read - Clients: 32
  •         Rows: 1000000 - Test: Async Random Read - Clients: 64
  •         Rows: 1000000 - Test: Async Random Read - Clients: 128
  •         Rows: 1000000 - Test: Async Random Read - Clients: 256
  •         Rows: 1000000 - Test: Async Random Read - Clients: 500
  •         Rows: 1000000 - Test: Sequential Write - Clients: 1
  •         Rows: 1000000 - Test: Sequential Write - Clients: 4
  •         Rows: 1000000 - Test: Sequential Write - Clients: 16
  •         Rows: 1000000 - Test: Sequential Write - Clients: 32
  •         Rows: 1000000 - Test: Sequential Write - Clients: 64
  •         Rows: 1000000 - Test: Sequential Write - Clients: 128
  •         Rows: 1000000 - Test: Sequential Write - Clients: 256
  •         Rows: 1000000 - Test: Sequential Write - Clients: 500
  •         Rows: 1000000 - Test: Sequential Read - Clients: 1
  •         Rows: 1000000 - Test: Sequential Read - Clients: 4
  •         Rows: 1000000 - Test: Sequential Read - Clients: 16
  •         Rows: 1000000 - Test: Sequential Read - Clients: 32
  •         Rows: 1000000 - Test: Sequential Read - Clients: 64
  •         Rows: 1000000 - Test: Sequential Read - Clients: 128
  •         Rows: 1000000 - Test: Sequential Read - Clients: 256
  •         Rows: 1000000 - Test: Sequential Read - Clients: 500
  •         Rows: 1000000 - Test: Scan - Clients: 1
  •         Rows: 1000000 - Test: Scan - Clients: 4
  •         Rows: 1000000 - Test: Scan - Clients: 16
  •         Rows: 1000000 - Test: Scan - Clients: 32
  •         Rows: 1000000 - Test: Scan - Clients: 64
  •         Rows: 1000000 - Test: Scan - Clients: 128
  •         Rows: 1000000 - Test: Scan - Clients: 256
  •         Rows: 1000000 - Test: Scan - Clients: 500
  •         Rows: 1000000 - Test: Increment - Clients: 1
  •         Rows: 1000000 - Test: Increment - Clients: 4
  •         Rows: 1000000 - Test: Increment - Clients: 16
  •         Rows: 1000000 - Test: Increment - Clients: 32
  •         Rows: 1000000 - Test: Increment - Clients: 64
  •         Rows: 1000000 - Test: Increment - Clients: 128
  •         Rows: 1000000 - Test: Increment - Clients: 256
  •         Rows: 1000000 - Test: Increment - Clients: 500
  •         Rows: 2000000 - Test: Random Write - Clients: 1
  •         Rows: 2000000 - Test: Random Write - Clients: 4
  •         Rows: 2000000 - Test: Random Write - Clients: 16
  •         Rows: 2000000 - Test: Random Write - Clients: 32
  •         Rows: 2000000 - Test: Random Write - Clients: 64
  •         Rows: 2000000 - Test: Random Write - Clients: 128
  •         Rows: 2000000 - Test: Random Write - Clients: 256
  •         Rows: 2000000 - Test: Random Write - Clients: 500
  •         Rows: 2000000 - Test: Async Random Write - Clients: 1
  •         Rows: 2000000 - Test: Async Random Write - Clients: 4
  •         Rows: 2000000 - Test: Async Random Write - Clients: 16
  •         Rows: 2000000 - Test: Async Random Write - Clients: 32
  •         Rows: 2000000 - Test: Async Random Write - Clients: 64
  •         Rows: 2000000 - Test: Async Random Write - Clients: 128
  •         Rows: 2000000 - Test: Async Random Write - Clients: 256
  •         Rows: 2000000 - Test: Async Random Write - Clients: 500
  •         Rows: 2000000 - Test: Random Read - Clients: 1
  •         Rows: 2000000 - Test: Random Read - Clients: 4
  •         Rows: 2000000 - Test: Random Read - Clients: 16
  •         Rows: 2000000 - Test: Random Read - Clients: 32
  •         Rows: 2000000 - Test: Random Read - Clients: 64
  •         Rows: 2000000 - Test: Random Read - Clients: 128
  •         Rows: 2000000 - Test: Random Read - Clients: 256
  •         Rows: 2000000 - Test: Random Read - Clients: 500
  •         Rows: 2000000 - Test: Async Random Read - Clients: 1
  •         Rows: 2000000 - Test: Async Random Read - Clients: 4
  •         Rows: 2000000 - Test: Async Random Read - Clients: 16
  •         Rows: 2000000 - Test: Async Random Read - Clients: 32
  •         Rows: 2000000 - Test: Async Random Read - Clients: 64
  •         Rows: 2000000 - Test: Async Random Read - Clients: 128
  •         Rows: 2000000 - Test: Async Random Read - Clients: 256
  •         Rows: 2000000 - Test: Async Random Read - Clients: 500
  •         Rows: 2000000 - Test: Sequential Write - Clients: 1
  •         Rows: 2000000 - Test: Sequential Write - Clients: 4
  •         Rows: 2000000 - Test: Sequential Write - Clients: 16
  •         Rows: 2000000 - Test: Sequential Write - Clients: 32
  •         Rows: 2000000 - Test: Sequential Write - Clients: 64
  •         Rows: 2000000 - Test: Sequential Write - Clients: 128
  •         Rows: 2000000 - Test: Sequential Write - Clients: 256
  •         Rows: 2000000 - Test: Sequential Write - Clients: 500
  •         Rows: 2000000 - Test: Sequential Read - Clients: 1
  •         Rows: 2000000 - Test: Sequential Read - Clients: 4
  •         Rows: 2000000 - Test: Sequential Read - Clients: 16
  •         Rows: 2000000 - Test: Sequential Read - Clients: 32
  •         Rows: 2000000 - Test: Sequential Read - Clients: 64
  •         Rows: 2000000 - Test: Sequential Read - Clients: 128
  •         Rows: 2000000 - Test: Sequential Read - Clients: 256
  •         Rows: 2000000 - Test: Sequential Read - Clients: 500
  •         Rows: 2000000 - Test: Scan - Clients: 1
  •         Rows: 2000000 - Test: Scan - Clients: 4
  •         Rows: 2000000 - Test: Scan - Clients: 16
  •         Rows: 2000000 - Test: Scan - Clients: 32
  •         Rows: 2000000 - Test: Scan - Clients: 64
  •         Rows: 2000000 - Test: Scan - Clients: 128
  •         Rows: 2000000 - Test: Scan - Clients: 256
  •         Rows: 2000000 - Test: Scan - Clients: 500
  •         Rows: 2000000 - Test: Increment - Clients: 1
  •         Rows: 2000000 - Test: Increment - Clients: 4
  •         Rows: 2000000 - Test: Increment - Clients: 16
  •         Rows: 2000000 - Test: Increment - Clients: 32
  •         Rows: 2000000 - Test: Increment - Clients: 64
  •         Rows: 2000000 - Test: Increment - Clients: 128
  •         Rows: 2000000 - Test: Increment - Clients: 256
  •         Rows: 2000000 - Test: Increment - Clients: 500
  •         Rows: 10000000 - Test: Random Write - Clients: 1
  •         Rows: 10000000 - Test: Random Write - Clients: 4
  •         Rows: 10000000 - Test: Random Write - Clients: 16
  •         Rows: 10000000 - Test: Random Write - Clients: 32
  •         Rows: 10000000 - Test: Random Write - Clients: 64
  •         Rows: 10000000 - Test: Random Write - Clients: 128
  •         Rows: 10000000 - Test: Random Write - Clients: 256
  •         Rows: 10000000 - Test: Random Write - Clients: 500
  •         Rows: 10000000 - Test: Async Random Write - Clients: 1
  •         Rows: 10000000 - Test: Async Random Write - Clients: 4
  •         Rows: 10000000 - Test: Async Random Write - Clients: 16
  •         Rows: 10000000 - Test: Async Random Write - Clients: 32
  •         Rows: 10000000 - Test: Async Random Write - Clients: 64
  •         Rows: 10000000 - Test: Async Random Write - Clients: 128
  •         Rows: 10000000 - Test: Async Random Write - Clients: 256
  •         Rows: 10000000 - Test: Async Random Write - Clients: 500
  •         Rows: 10000000 - Test: Random Read - Clients: 1
  •         Rows: 10000000 - Test: Random Read - Clients: 4
  •         Rows: 10000000 - Test: Random Read - Clients: 16
  •         Rows: 10000000 - Test: Random Read - Clients: 32
  •         Rows: 10000000 - Test: Random Read - Clients: 64
  •         Rows: 10000000 - Test: Random Read - Clients: 128
  •         Rows: 10000000 - Test: Random Read - Clients: 256
  •         Rows: 10000000 - Test: Random Read - Clients: 500
  •         Rows: 10000000 - Test: Async Random Read - Clients: 1
  •         Rows: 10000000 - Test: Async Random Read - Clients: 4
  •         Rows: 10000000 - Test: Async Random Read - Clients: 16
  •         Rows: 10000000 - Test: Async Random Read - Clients: 32
  •         Rows: 10000000 - Test: Async Random Read - Clients: 64
  •         Rows: 10000000 - Test: Async Random Read - Clients: 128
  •         Rows: 10000000 - Test: Async Random Read - Clients: 256
  •         Rows: 10000000 - Test: Async Random Read - Clients: 500
  •         Rows: 10000000 - Test: Sequential Write - Clients: 1
  •         Rows: 10000000 - Test: Sequential Write - Clients: 4
  •         Rows: 10000000 - Test: Sequential Write - Clients: 16
  •         Rows: 10000000 - Test: Sequential Write - Clients: 32
  •         Rows: 10000000 - Test: Sequential Write - Clients: 64
  •         Rows: 10000000 - Test: Sequential Write - Clients: 128
  •         Rows: 10000000 - Test: Sequential Write - Clients: 256
  •         Rows: 10000000 - Test: Sequential Write - Clients: 500
  •         Rows: 10000000 - Test: Sequential Read - Clients: 1
  •         Rows: 10000000 - Test: Sequential Read - Clients: 4
  •         Rows: 10000000 - Test: Sequential Read - Clients: 16
  •         Rows: 10000000 - Test: Sequential Read - Clients: 32
  •         Rows: 10000000 - Test: Sequential Read - Clients: 64
  •         Rows: 10000000 - Test: Sequential Read - Clients: 128
  •         Rows: 10000000 - Test: Sequential Read - Clients: 256
  •         Rows: 10000000 - Test: Sequential Read - Clients: 500
  •         Rows: 10000000 - Test: Scan - Clients: 1
  •         Rows: 10000000 - Test: Scan - Clients: 4
  •         Rows: 10000000 - Test: Scan - Clients: 16
  •         Rows: 10000000 - Test: Scan - Clients: 32
  •         Rows: 10000000 - Test: Scan - Clients: 64
  •         Rows: 10000000 - Test: Scan - Clients: 128
  •         Rows: 10000000 - Test: Scan - Clients: 256
  •         Rows: 10000000 - Test: Scan - Clients: 500
  •         Rows: 10000000 - Test: Increment - Clients: 1
  •         Rows: 10000000 - Test: Increment - Clients: 4
  •         Rows: 10000000 - Test: Increment - Clients: 16
  •         Rows: 10000000 - Test: Increment - Clients: 32
  •         Rows: 10000000 - Test: Increment - Clients: 64
  •         Rows: 10000000 - Test: Increment - Clients: 128
  •         Rows: 10000000 - Test: Increment - Clients: 256
  •         Rows: 10000000 - Test: Increment - Clients: 500
  • Apache HTTP Server

  •         Concurrent Requests: 4
  •         Concurrent Requests: 20
  •         Concurrent Requests: 100
  •         Concurrent Requests: 200
  •         Concurrent Requests: 500
  •         Concurrent Requests: 1000
  • Apache IoTDB

  •         Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400
  •         Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400
  •         Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400
  •         Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400
  •         Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400
  •         Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400
  •         Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400
  •         Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100
  •         Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400
  •         Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100
  •         Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400
  •         Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100
  •         Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400
  • Apache Siege

  •         Concurrent Users: 200
  •         Concurrent Users: 250
  • Apache Spark

  •         Row Count: 1000000 - Partitions: 100
  •         Row Count: 1000000 - Partitions: 500
  •         Row Count: 1000000 - Partitions: 1000
  •         Row Count: 1000000 - Partitions: 2000
  •         Row Count: 10000000 - Partitions: 100
  •         Row Count: 10000000 - Partitions: 500
  •         Row Count: 10000000 - Partitions: 1000
  •         Row Count: 10000000 - Partitions: 2000
  •         Row Count: 20000000 - Partitions: 100
  •         Row Count: 20000000 - Partitions: 500
  •         Row Count: 20000000 - Partitions: 1000
  •         Row Count: 20000000 - Partitions: 2000
  •         Row Count: 40000000 - Partitions: 100
  •         Row Count: 40000000 - Partitions: 500
  •         Row Count: 40000000 - Partitions: 1000
  •         Row Count: 40000000 - Partitions: 2000
  • Apache Spark TPC-DS

  •         Scale Factor: 1
  •         Scale Factor: 10
  •         Scale Factor: 50
  •         Scale Factor: 100
  •         Scale Factor: 500
  •         Scale Factor: 3000
  •         Scale Factor: 10000
  • Apache Spark TPC-H

  •         Scale Factor: 1
  •         Scale Factor: 10
  •         Scale Factor: 50
  •         Scale Factor: 100
  •         Scale Factor: 3000
  •         Scale Factor: 10000
  • BlogBench

  •         Test: Read
  •         Test: Write
  • ClickHouse

  • CockroachDB

  •         Workload: KV, 95% Reads - Concurrency: 128
  •         Workload: KV, 95% Reads - Concurrency: 256
  •         Workload: KV, 95% Reads - Concurrency: 512
  •         Workload: KV, 95% Reads - Concurrency: 1024
  •         Workload: KV, 50% Reads - Concurrency: 128
  •         Workload: KV, 50% Reads - Concurrency: 256
  •         Workload: KV, 50% Reads - Concurrency: 512
  •         Workload: KV, 50% Reads - Concurrency: 1024
  •         Workload: KV, 60% Reads - Concurrency: 128
  •         Workload: KV, 60% Reads - Concurrency: 256
  •         Workload: KV, 60% Reads - Concurrency: 512
  •         Workload: KV, 60% Reads - Concurrency: 1024
  •         Workload: KV, 10% Reads - Concurrency: 128
  •         Workload: KV, 10% Reads - Concurrency: 256
  •         Workload: KV, 10% Reads - Concurrency: 512
  •         Workload: KV, 10% Reads - Concurrency: 1024
  •         Workload: MoVR - Concurrency: 128
  •         Workload: MoVR - Concurrency: 256
  •         Workload: MoVR - Concurrency: 512
  •         Workload: MoVR - Concurrency: 1024
  • Dragonflydb

  •         Clients Per Thread: 10 - Set To Get Ratio: 1:100
  •         Clients Per Thread: 10 - Set To Get Ratio: 1:10
  •         Clients Per Thread: 10 - Set To Get Ratio: 1:5
  •         Clients Per Thread: 10 - Set To Get Ratio: 1:1
  •         Clients Per Thread: 10 - Set To Get Ratio: 5:1
  •         Clients Per Thread: 20 - Set To Get Ratio: 1:100
  •         Clients Per Thread: 20 - Set To Get Ratio: 1:10
  •         Clients Per Thread: 20 - Set To Get Ratio: 1:5
  •         Clients Per Thread: 20 - Set To Get Ratio: 1:1
  •         Clients Per Thread: 20 - Set To Get Ratio: 5:1
  •         Clients Per Thread: 50 - Set To Get Ratio: 1:100
  •         Clients Per Thread: 50 - Set To Get Ratio: 1:10
  •         Clients Per Thread: 50 - Set To Get Ratio: 1:5
  •         Clients Per Thread: 50 - Set To Get Ratio: 1:1
  •         Clients Per Thread: 50 - Set To Get Ratio: 5:1
  •         Clients Per Thread: 60 - Set To Get Ratio: 1:100
  •         Clients Per Thread: 60 - Set To Get Ratio: 1:10
  •         Clients Per Thread: 60 - Set To Get Ratio: 1:5
  •         Clients Per Thread: 60 - Set To Get Ratio: 1:1
  •         Clients Per Thread: 60 - Set To Get Ratio: 5:1
  •         Clients Per Thread: 100 - Set To Get Ratio: 1:100
  •         Clients Per Thread: 100 - Set To Get Ratio: 1:10
  •         Clients Per Thread: 100 - Set To Get Ratio: 1:5
  •         Clients Per Thread: 100 - Set To Get Ratio: 1:1
  •         Clients Per Thread: 100 - Set To Get Ratio: 5:1
  • DuckDB

  •         Benchmark: IMDB
  •         Benchmark: TPC-H Parquet
  •         Benchmark: Clickbench
  • ebizzy

  • etcd

  •         Test: PUT - Connections: 50 - Clients: 100
  •         Test: PUT - Connections: 50 - Clients: 1000
  •         Test: PUT - Connections: 100 - Clients: 100
  •         Test: PUT - Connections: 100 - Clients: 1000
  •         Test: PUT - Connections: 500 - Clients: 100
  •         Test: PUT - Connections: 500 - Clients: 1000
  •         Test: RANGE - Connections: 50 - Clients: 100
  •         Test: RANGE - Connections: 50 - Clients: 1000
  •         Test: RANGE - Connections: 100 - Clients: 100
  •         Test: RANGE - Connections: 100 - Clients: 1000
  •         Test: RANGE - Connections: 500 - Clients: 100
  •         Test: RANGE - Connections: 500 - Clients: 1000
  • InfluxDB

  •         Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
  •         Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
  •         Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
  • KeyDB

  •         Test: SET - Parallel Connections: 50
  •         Test: SET - Parallel Connections: 100
  •         Test: SET - Parallel Connections: 500
  •         Test: SET - Parallel Connections: 900
  •         Test: HMSET - Parallel Connections: 50
  •         Test: HMSET - Parallel Connections: 100
  •         Test: HMSET - Parallel Connections: 500
  •         Test: HMSET - Parallel Connections: 900
  •         Test: GET - Parallel Connections: 50
  •         Test: GET - Parallel Connections: 100
  •         Test: GET - Parallel Connections: 500
  •         Test: GET - Parallel Connections: 900
  •         Test: LPUSH - Parallel Connections: 50
  •         Test: LPUSH - Parallel Connections: 100
  •         Test: LPUSH - Parallel Connections: 500
  •         Test: LPUSH - Parallel Connections: 900
  •         Test: LPOP - Parallel Connections: 50
  •         Test: LPOP - Parallel Connections: 100
  •         Test: LPOP - Parallel Connections: 500
  •         Test: LPOP - Parallel Connections: 900
  •         Test: SADD - Parallel Connections: 50
  •         Test: SADD - Parallel Connections: 100
  •         Test: SADD - Parallel Connections: 500
  •         Test: SADD - Parallel Connections: 900
  • LevelDB

  •         Benchmark: Sequential Fill
  •         Benchmark: Random Fill
  •         Benchmark: Overwrite
  •         Benchmark: Fill Sync
  •         Benchmark: Random Read
  •         Benchmark: Random Delete
  •         Benchmark: Hot Read
  •         Benchmark: Seek Random
  • MariaDB

  •         Test: oltp_read_write - Threads: 1
  •         Test: oltp_read_write - Threads: 16
  •         Test: oltp_read_write - Threads: 32
  •         Test: oltp_read_write - Threads: 64
  •         Test: oltp_read_write - Threads: 128
  •         Test: oltp_read_write - Threads: 256
  •         Test: oltp_read_write - Threads: 512
  •         Test: oltp_read_write - Threads: 768
  •         Test: oltp_read_only - Threads: 1
  •         Test: oltp_read_only - Threads: 16
  •         Test: oltp_read_only - Threads: 32
  •         Test: oltp_read_only - Threads: 64
  •         Test: oltp_read_only - Threads: 128
  •         Test: oltp_read_only - Threads: 256
  •         Test: oltp_read_only - Threads: 512
  •         Test: oltp_read_only - Threads: 768
  •         Test: oltp_write_only - Threads: 1
  •         Test: oltp_write_only - Threads: 16
  •         Test: oltp_write_only - Threads: 32
  •         Test: oltp_write_only - Threads: 64
  •         Test: oltp_write_only - Threads: 128
  •         Test: oltp_write_only - Threads: 256
  •         Test: oltp_write_only - Threads: 512
  •         Test: oltp_write_only - Threads: 768
  •         Test: oltp_point_select - Threads: 1
  •         Test: oltp_point_select - Threads: 16
  •         Test: oltp_point_select - Threads: 32
  •         Test: oltp_point_select - Threads: 64
  •         Test: oltp_point_select - Threads: 128
  •         Test: oltp_point_select - Threads: 256
  •         Test: oltp_point_select - Threads: 512
  •         Test: oltp_point_select - Threads: 768
  •         Test: oltp_update_non_index - Threads: 1
  •         Test: oltp_update_non_index - Threads: 16
  •         Test: oltp_update_non_index - Threads: 32
  •         Test: oltp_update_non_index - Threads: 64
  •         Test: oltp_update_non_index - Threads: 128
  •         Test: oltp_update_non_index - Threads: 256
  •         Test: oltp_update_non_index - Threads: 512
  •         Test: oltp_update_non_index - Threads: 768
  •         Test: oltp_update_index - Threads: 1
  •         Test: oltp_update_index - Threads: 16
  •         Test: oltp_update_index - Threads: 32
  •         Test: oltp_update_index - Threads: 64
  •         Test: oltp_update_index - Threads: 128
  •         Test: oltp_update_index - Threads: 256
  •         Test: oltp_update_index - Threads: 512
  •         Test: oltp_update_index - Threads: 768
  • MariaDB mariadb-slap

  •         Clients: 64
  •         Clients: 256
  •         Clients: 1
  •         Clients: 32
  •         Clients: 128
  •         Clients: 512
  •         Clients: 1024
  •         Clients: 2048
  •         Clients: 4096
  •         Clients: 8192
  • Memcached

  •         Set To Get Ratio: 1:100
  •         Set To Get Ratio: 1:10
  •         Set To Get Ratio: 1:5
  •         Set To Get Ratio: 1:1
  •         Set To Get Ratio: 5:1
  • Memcached mcperf

  •         Method: Get - Connections: 1
  •         Method: Get - Connections: 4
  •         Method: Get - Connections: 16
  •         Method: Get - Connections: 32
  •         Method: Get - Connections: 64
  •         Method: Get - Connections: 128
  •         Method: Get - Connections: 256
  •         Method: Set - Connections: 1
  •         Method: Set - Connections: 4
  •         Method: Set - Connections: 16
  •         Method: Set - Connections: 32
  •         Method: Set - Connections: 64
  •         Method: Set - Connections: 128
  •         Method: Set - Connections: 256
  •         Method: Delete - Connections: 1
  •         Method: Delete - Connections: 4
  •         Method: Delete - Connections: 16
  •         Method: Delete - Connections: 32
  •         Method: Delete - Connections: 64
  •         Method: Delete - Connections: 128
  •         Method: Delete - Connections: 256
  •         Method: Add - Connections: 1
  •         Method: Add - Connections: 4
  •         Method: Add - Connections: 16
  •         Method: Add - Connections: 32
  •         Method: Add - Connections: 64
  •         Method: Add - Connections: 128
  •         Method: Add - Connections: 256
  •         Method: Replace - Connections: 1
  •         Method: Replace - Connections: 4
  •         Method: Replace - Connections: 16
  •         Method: Replace - Connections: 32
  •         Method: Replace - Connections: 64
  •         Method: Replace - Connections: 128
  •         Method: Replace - Connections: 256
  •         Method: Append - Connections: 1
  •         Method: Append - Connections: 4
  •         Method: Append - Connections: 16
  •         Method: Append - Connections: 32
  •         Method: Append - Connections: 64
  •         Method: Append - Connections: 128
  •         Method: Append - Connections: 256
  •         Method: Prepend - Connections: 1
  •         Method: Prepend - Connections: 4
  •         Method: Prepend - Connections: 16
  •         Method: Prepend - Connections: 32
  •         Method: Prepend - Connections: 64
  •         Method: Prepend - Connections: 128
  •         Method: Prepend - Connections: 256
  • nginx

  •         Connections: 1
  •         Connections: 20
  •         Connections: 100
  •         Connections: 200
  •         Connections: 500
  •         Connections: 1000
  •         Connections: 4000
  • Node.js Express HTTP Load Test

  • Node.js V8 Web Tooling Benchmark

  • OpenSSL

  •         Algorithm: RSA4096
  •         Algorithm: SHA256
  •         Algorithm: SHA512
  •         Algorithm: AES-128-GCM
  •         Algorithm: AES-256-GCM
  •         Algorithm: ChaCha20
  •         Algorithm: ChaCha20-Poly1305
  • Perl Benchmarks

  •         Test: Pod2html
  •         Test: Interpreter
  • PHP Micro Benchmarks

  •         Test: Zend bench
  •         Test: Zend micro_bench
  • PHPBench

  • PostgreSQL

  •         Scaling: Buffer Test - Test: Normal Load - Mode: Read Write
  •         Scaling: Buffer Test - Test: Normal Load - Mode: Read Only
  •         Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Write
  •         Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 1 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 1 - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 50 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 50 - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 100 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 100 - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 250 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 250 - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 500 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 500 - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 800 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 800 - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 1000 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 1000 - Mode: Read Only
  •         Scaling Factor: 1 - Clients: 5000 - Mode: Read Write
  •         Scaling Factor: 1 - Clients: 5000 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 1 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 1 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 50 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 50 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 100 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 100 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 250 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 250 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 500 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 500 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 800 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 800 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 1000 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 1000 - Mode: Read Only
  •         Scaling Factor: 100 - Clients: 5000 - Mode: Read Write
  •         Scaling Factor: 100 - Clients: 5000 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 1 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 1 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 50 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 50 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 100 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 100 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 250 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 250 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 500 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 500 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 800 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 800 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only
  •         Scaling Factor: 1000 - Clients: 5000 - Mode: Read Write
  •         Scaling Factor: 1000 - Clients: 5000 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 1 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 1 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 50 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 50 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 100 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 100 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 250 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 250 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 500 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 500 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 800 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 800 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 1000 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 1000 - Mode: Read Only
  •         Scaling Factor: 10000 - Clients: 5000 - Mode: Read Write
  •         Scaling Factor: 10000 - Clients: 5000 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 1 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 1 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 50 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 50 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 100 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 100 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 250 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 250 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 500 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 500 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 800 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 800 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 1000 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 1000 - Mode: Read Only
  •         Scaling Factor: 25000 - Clients: 5000 - Mode: Read Write
  •         Scaling Factor: 25000 - Clients: 5000 - Mode: Read Only
  • Redis

  •         Test: SET - Parallel Connections: 50
  •         Test: SET - Parallel Connections: 500
  •         Test: SET - Parallel Connections: 1000
  •         Test: GET - Parallel Connections: 50
  •         Test: GET - Parallel Connections: 500
  •         Test: GET - Parallel Connections: 1000
  •         Test: LPUSH - Parallel Connections: 50
  •         Test: LPUSH - Parallel Connections: 500
  •         Test: LPUSH - Parallel Connections: 1000
  •         Test: LPOP - Parallel Connections: 50
  •         Test: LPOP - Parallel Connections: 500
  •         Test: LPOP - Parallel Connections: 1000
  •         Test: SADD - Parallel Connections: 50
  •         Test: SADD - Parallel Connections: 500
  •         Test: SADD - Parallel Connections: 1000
  • Redis 7.0.12 + memtier_benchmark

  •         Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10
  •         Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5
  •         Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1
  •         Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1
  •         Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1
  •         Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10
  •         Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5
  •         Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1
  •         Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1
  •         Protocol: Redis - Clients: 100 - Set To Get Ratio: 10:1
  •         Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10
  •         Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5
  •         Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1
  •         Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1
  •         Protocol: Redis - Clients: 500 - Set To Get Ratio: 10:1
  • RocksDB

  •         Test: Sequential Fill
  •         Test: Random Fill
  •         Test: Random Fill Sync
  •         Test: Random Read
  •         Test: Read While Writing
  •         Test: Read Random Write Random
  •         Test: Update Random
  •         Test: Overwrite
  • Rustls

  •         Benchmark: handshake - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  •         Benchmark: handshake - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  •         Benchmark: handshake - Suite: TLS13_CHACHA20_POLY1305_SHA256
  •         Benchmark: handshake - Suite: TLS13_AES_256_GCM_SHA384
  •         Benchmark: handshake-ticket - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  •         Benchmark: handshake-ticket - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  •         Benchmark: handshake-ticket - Suite: TLS13_CHACHA20_POLY1305_SHA256
  •         Benchmark: handshake-ticket - Suite: TLS13_AES_256_GCM_SHA384
  •         Benchmark: handshake-resume - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  •         Benchmark: handshake-resume - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  •         Benchmark: handshake-resume - Suite: TLS13_CHACHA20_POLY1305_SHA256
  •         Benchmark: handshake-resume - Suite: TLS13_AES_256_GCM_SHA384
  • ScyllaDB

  •         Test: Writes
  •         Test: Mixed 1:1
  •         Test: Mixed 1:3
  • simdjson

  •         Throughput Test: PartialTweets
  •         Throughput Test: LargeRandom
  •         Throughput Test: Kostya
  •         Throughput Test: DistinctUserID
  •         Throughput Test: TopTweet
  • Speedb

  •         Test: Sequential Fill
  •         Test: Random Fill
  •         Test: Random Fill Sync
  •         Test: Random Read
  •         Test: Read While Writing
  •         Test: Read Random Write Random
  •         Test: Update Random
  • SQLite

  •         Threads / Copies: 1
  • SQLite Speedtest

  • Valkey

  •         Test: SET - Parallel Connections: 50
  •         Test: SET - Parallel Connections: 500
  •         Test: SET - Parallel Connections: 800
  •         Test: SET - Parallel Connections: 1000
  •         Test: GET - Parallel Connections: 50
  •         Test: GET - Parallel Connections: 500
  •         Test: GET - Parallel Connections: 800
  •         Test: GET - Parallel Connections: 1000
  •         Test: LPOP - Parallel Connections: 50
  •         Test: LPOP - Parallel Connections: 500
  •         Test: LPOP - Parallel Connections: 800
  •         Test: LPOP - Parallel Connections: 1000
  •         Test: SADD - Parallel Connections: 50
  •         Test: SADD - Parallel Connections: 500
  •         Test: SADD - Parallel Connections: 800
  •         Test: SADD - Parallel Connections: 1000
  •         Test: SPOP - Parallel Connections: 50
  •         Test: SPOP - Parallel Connections: 500
  •         Test: SPOP - Parallel Connections: 800
  •         Test: SPOP - Parallel Connections: 1000
  •         Test: HSET - Parallel Connections: 50
  •         Test: HSET - Parallel Connections: 500
  •         Test: HSET - Parallel Connections: 800
  •         Test: HSET - Parallel Connections: 1000
  •         Test: INCR - Parallel Connections: 50
  •         Test: INCR - Parallel Connections: 500
  •         Test: INCR - Parallel Connections: 800
  •         Test: INCR - Parallel Connections: 1000
  • YugabyteDB

  •         Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 0
  •         Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 1
  •         Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 16
  •         Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 32
  •         Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 128
  •         Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 256
  •         Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 0
  •         Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 1
  •         Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 16
  •         Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 32
  •         Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 128
  •         Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 256
  •         Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 0
  •         Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 1
  •         Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 16
  •         Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 32
  •         Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 128
  •         Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 256
  •         Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 0
  •         Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 1
  •         Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 16
  •         Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 32
  •         Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 128
  •         Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 256
  •         Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 0
  •         Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 1
  •         Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 16
  •         Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 32
  •         Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 128
  •         Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 256
  •         Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 0
  •         Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 1
  •         Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 16
  •         Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 32
  •         Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 128
  •         Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 256
  •         Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 0
  •         Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 1
  •         Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 16
  •         Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 32
  •         Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 128
  •         Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 256
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 0
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 1
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 16
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 32
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 128
  •         Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 256

Revision History

pts/server-1.4.2     Sun, 24 Nov 2024 10:49:39 GMT
Add Rustls to server test suite.

pts/server-1.4.1     Thu, 12 Aug 2021 19:12:15 GMT
Set BATCH for apache and nginx test profiles given they now expose options.

pts/server-1.4.0     Thu, 14 Jan 2021 13:54:34 GMT
Add new tests.

pts/server-1.3.3     Thu, 28 May 2020 15:52:05 GMT
Add additional tests.

pts/server-1.3.2     Sat, 23 May 2020 16:18:04 GMT
Fix blogbench arg handling.

pts/server-1.3.1     Wed, 08 Apr 2020 16:20:53 GMT
Add Apache Hbase.

pts/server-1.3.0     Wed, 08 Apr 2020 14:04:12 GMT
Add latest server tests.

pts/server-1.2.1     Fri, 10 May 2019 15:58:31 GMT
Update tests...

pts/server-1.2.0     Mon, 06 Dec 2010 23:34:25 GMT
Initial import into OpenBenchmarking.org