s5epyc40c256gbram2468gbvram_

AMD EPYC 7302 16-Core testing with a TYAN S8030GM4NE-2T (V1.00 BIOS) and ASPEED 24GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running:

    phoronix-test-suite benchmark 2312164-NE-S5EPYC40C72
Test categories represented in this result file:

- C/C++ Compiler Tests: 10 tests
- CPU Massive: 19 tests
- Creator Workloads: 3 tests
- Database Test Suite: 19 tests
- Go Language Tests: 2 tests
- HPC - High Performance Computing: 3 tests
- Java Tests: 6 tests
- Common Kernel Benchmarks: 6 tests
- Multi-Core: 10 tests
- Node.js + NPM Tests: 2 tests
- NVIDIA GPU Compute: 2 tests
- OpenCL: 2 tests
- OpenMPI Tests: 2 tests
- Programmer / Developer System Benchmarks: 4 tests
- Python Tests: 3 tests
- Server: 32 tests
- Server CPU Tests: 9 tests
- Single-Threaded: 7 tests
- Common Workstation Benchmarks: 9 tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Disable Color Branding
Prefer Vertical Bar Graphs

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
Result runs:

- S5.EPYC40c+256GBRAM+24+6+8GBVRAM (run December 16 2023; test duration: 1 Minute)
- S5.EPYC40c+256GBRAM+24+6+8GBVRAM _ (run December 16 2023; test duration: 1 Minute)


s5epyc40c256gbram2468gbvram_ Suite 1.0.0 System Test suite extracted from s5epyc40c256gbram2468gbvram_. pts/influxdb-1.0.2 -c 1024 -b 10000 -t 2,5000,1 -p 10000 Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 pts/influxdb-1.0.2 -c 64 -b 10000 -t 2,5000,1 -p 10000 Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 pts/influxdb-1.0.2 -c 4 -b 10000 -t 2,5000,1 -p 10000 Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 pts/php-1.0.0 php_/Zend/micro_bench.php Test: Zend micro_bench pts/php-1.0.0 php_/Zend/bench.php Test: Zend bench pts/phpbench-1.1.6 pts/apache-3.0.0 -c 1000 Concurrent Requests: 1000 pts/apache-3.0.0 -c 500 Concurrent Requests: 500 pts/apache-3.0.0 -c 200 Concurrent Requests: 200 pts/apache-3.0.0 -c 100 Concurrent Requests: 100 pts/apache-3.0.0 -c 20 Concurrent Requests: 20 pts/apache-3.0.0 -c 4 Concurrent Requests: 4 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 500 Rows: 10000000 - Test: Async Random Write - Clients: 500 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 256 Rows: 10000000 - Test: Async Random Write - Clients: 256 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 128 Rows: 10000000 - Test: Async Random Write - Clients: 128 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 500 Rows: 2000000 - Test: Async Random Write - Clients: 500 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 256 Rows: 2000000 - Test: Async Random Write - Clients: 256 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 128 Rows: 2000000 - Test: Async Random Write - Clients: 128 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 64 Rows: 10000000 - Test: Async Random Write - Clients: 64 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 32 Rows: 10000000 - Test: Async Random Write - Clients: 32 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 16 Rows: 10000000 - Test: Async Random Write - Clients: 16 pts/hbase-1.1.0 --rows=10000000 asyncRandomRead 500 Rows: 10000000 
- Test: Async Random Read - Clients: 500 pts/hbase-1.1.0 --rows=10000000 asyncRandomRead 256 Rows: 10000000 - Test: Async Random Read - Clients: 256 pts/hbase-1.1.0 --rows=10000000 asyncRandomRead 128 Rows: 10000000 - Test: Async Random Read - Clients: 128 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 500 Rows: 1000000 - Test: Async Random Write - Clients: 500 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 256 Rows: 1000000 - Test: Async Random Write - Clients: 256 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 128 Rows: 1000000 - Test: Async Random Write - Clients: 128 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 64 Rows: 2000000 - Test: Async Random Write - Clients: 64 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 32 Rows: 2000000 - Test: Async Random Write - Clients: 32 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 16 Rows: 2000000 - Test: Async Random Write - Clients: 16 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 500 Rows: 2000000 - Test: Async Random Read - Clients: 500 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 256 Rows: 2000000 - Test: Async Random Read - Clients: 256 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 128 Rows: 2000000 - Test: Async Random Read - Clients: 128 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 500 Rows: 10000000 - Test: Sequential Write - Clients: 500 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 256 Rows: 10000000 - Test: Sequential Write - Clients: 256 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 128 Rows: 10000000 - Test: Sequential Write - Clients: 128 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 4 Rows: 10000000 - Test: Async Random Write - Clients: 4 pts/hbase-1.1.0 --rows=10000000 asyncRandomWrite 1 Rows: 10000000 - Test: Async Random Write - Clients: 1 pts/hbase-1.1.0 --rows=10000000 asyncRandomRead 64 Rows: 10000000 - Test: Async Random Read - Clients: 64 pts/hbase-1.1.0 --rows=10000000 asyncRandomRead 32 Rows: 10000000 - Test: Async Random Read - Clients: 32 pts/hbase-1.1.0 --rows=10000000 
asyncRandomRead 16 Rows: 10000000 - Test: Async Random Read - Clients: 16 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 64 Rows: 1000000 - Test: Async Random Write - Clients: 64 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 32 Rows: 1000000 - Test: Async Random Write - Clients: 32 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 16 Rows: 1000000 - Test: Async Random Write - Clients: 16 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 500 Rows: 1000000 - Test: Async Random Read - Clients: 500 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 256 Rows: 1000000 - Test: Async Random Read - Clients: 256 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 128 Rows: 1000000 - Test: Async Random Read - Clients: 128 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 500 Rows: 2000000 - Test: Sequential Write - Clients: 500 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 256 Rows: 2000000 - Test: Sequential Write - Clients: 256 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 128 Rows: 2000000 - Test: Sequential Write - Clients: 128 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 4 Rows: 2000000 - Test: Async Random Write - Clients: 4 pts/hbase-1.1.0 --rows=2000000 asyncRandomWrite 1 Rows: 2000000 - Test: Async Random Write - Clients: 1 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 64 Rows: 2000000 - Test: Async Random Read - Clients: 64 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 32 Rows: 2000000 - Test: Async Random Read - Clients: 32 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 16 Rows: 2000000 - Test: Async Random Read - Clients: 16 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 64 Rows: 10000000 - Test: Sequential Write - Clients: 64 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 32 Rows: 10000000 - Test: Sequential Write - Clients: 32 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 16 Rows: 10000000 - Test: Sequential Write - Clients: 16 pts/hbase-1.1.0 --rows=10000000 sequentialRead 500 Rows: 10000000 - Test: Sequential Read - Clients: 500 pts/hbase-1.1.0 --rows=10000000 
sequentialRead 256 Rows: 10000000 - Test: Sequential Read - Clients: 256 pts/hbase-1.1.0 --rows=10000000 sequentialRead 128 Rows: 10000000 - Test: Sequential Read - Clients: 128 pts/hbase-1.1.0 --rows=10000000 asyncRandomRead 4 Rows: 10000000 - Test: Async Random Read - Clients: 4 pts/hbase-1.1.0 --rows=10000000 asyncRandomRead 1 Rows: 10000000 - Test: Async Random Read - Clients: 1 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 500 Rows: 1000000 - Test: Sequential Write - Clients: 500 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 256 Rows: 1000000 - Test: Sequential Write - Clients: 256 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 128 Rows: 1000000 - Test: Sequential Write - Clients: 128 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 4 Rows: 1000000 - Test: Async Random Write - Clients: 4 pts/hbase-1.1.0 --rows=1000000 asyncRandomWrite 1 Rows: 1000000 - Test: Async Random Write - Clients: 1 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 64 Rows: 1000000 - Test: Async Random Read - Clients: 64 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 32 Rows: 1000000 - Test: Async Random Read - Clients: 32 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 16 Rows: 1000000 - Test: Async Random Read - Clients: 16 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 500 Rows: 10000 - Test: Async Random Write - Clients: 500 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 256 Rows: 10000 - Test: Async Random Write - Clients: 256 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 128 Rows: 10000 - Test: Async Random Write - Clients: 128 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 64 Rows: 2000000 - Test: Sequential Write - Clients: 64 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 32 Rows: 2000000 - Test: Sequential Write - Clients: 32 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 16 Rows: 2000000 - Test: Sequential Write - Clients: 16 pts/hbase-1.1.0 --rows=2000000 sequentialRead 500 Rows: 2000000 - Test: Sequential Read - Clients: 500 pts/hbase-1.1.0 --rows=2000000 sequentialRead 256 
Rows: 2000000 - Test: Sequential Read - Clients: 256 pts/hbase-1.1.0 --rows=2000000 sequentialRead 128 Rows: 2000000 - Test: Sequential Read - Clients: 128 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 4 Rows: 2000000 - Test: Async Random Read - Clients: 4 pts/hbase-1.1.0 --rows=2000000 asyncRandomRead 1 Rows: 2000000 - Test: Async Random Read - Clients: 1 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 4 Rows: 10000000 - Test: Sequential Write - Clients: 4 pts/hbase-1.1.0 --rows=10000000 sequentialWrite 1 Rows: 10000000 - Test: Sequential Write - Clients: 1 pts/hbase-1.1.0 --rows=10000000 sequentialRead 64 Rows: 10000000 - Test: Sequential Read - Clients: 64 pts/hbase-1.1.0 --rows=10000000 sequentialRead 32 Rows: 10000000 - Test: Sequential Read - Clients: 32 pts/hbase-1.1.0 --rows=10000000 sequentialRead 16 Rows: 10000000 - Test: Sequential Read - Clients: 16 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 64 Rows: 1000000 - Test: Sequential Write - Clients: 64 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 32 Rows: 1000000 - Test: Sequential Write - Clients: 32 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 16 Rows: 1000000 - Test: Sequential Write - Clients: 16 pts/hbase-1.1.0 --rows=1000000 sequentialRead 500 Rows: 1000000 - Test: Sequential Read - Clients: 500 pts/hbase-1.1.0 --rows=1000000 sequentialRead 256 Rows: 1000000 - Test: Sequential Read - Clients: 256 pts/hbase-1.1.0 --rows=1000000 sequentialRead 128 Rows: 1000000 - Test: Sequential Read - Clients: 128 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 4 Rows: 1000000 - Test: Async Random Read - Clients: 4 pts/hbase-1.1.0 --rows=1000000 asyncRandomRead 1 Rows: 1000000 - Test: Async Random Read - Clients: 1 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 64 Rows: 10000 - Test: Async Random Write - Clients: 64 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 32 Rows: 10000 - Test: Async Random Write - Clients: 32 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 16 Rows: 10000 - Test: Async Random Write - Clients: 
16 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 500 Rows: 10000 - Test: Async Random Read - Clients: 500 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 256 Rows: 10000 - Test: Async Random Read - Clients: 256 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 128 Rows: 10000 - Test: Async Random Read - Clients: 128 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 4 Rows: 2000000 - Test: Sequential Write - Clients: 4 pts/hbase-1.1.0 --rows=2000000 sequentialWrite 1 Rows: 2000000 - Test: Sequential Write - Clients: 1 pts/hbase-1.1.0 --rows=2000000 sequentialRead 64 Rows: 2000000 - Test: Sequential Read - Clients: 64 pts/hbase-1.1.0 --rows=2000000 sequentialRead 32 Rows: 2000000 - Test: Sequential Read - Clients: 32 pts/hbase-1.1.0 --rows=2000000 sequentialRead 16 Rows: 2000000 - Test: Sequential Read - Clients: 16 pts/hbase-1.1.0 --rows=10000000 sequentialRead 4 Rows: 10000000 - Test: Sequential Read - Clients: 4 pts/hbase-1.1.0 --rows=10000000 sequentialRead 1 Rows: 10000000 - Test: Sequential Read - Clients: 1 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 4 Rows: 1000000 - Test: Sequential Write - Clients: 4 pts/hbase-1.1.0 --rows=1000000 sequentialWrite 1 Rows: 1000000 - Test: Sequential Write - Clients: 1 pts/hbase-1.1.0 --rows=1000000 sequentialRead 64 Rows: 1000000 - Test: Sequential Read - Clients: 64 pts/hbase-1.1.0 --rows=1000000 sequentialRead 32 Rows: 1000000 - Test: Sequential Read - Clients: 32 pts/hbase-1.1.0 --rows=1000000 sequentialRead 16 Rows: 1000000 - Test: Sequential Read - Clients: 16 pts/hbase-1.1.0 --rows=10000 sequentialWrite 500 Rows: 10000 - Test: Sequential Write - Clients: 500 pts/hbase-1.1.0 --rows=10000 sequentialWrite 256 Rows: 10000 - Test: Sequential Write - Clients: 256 pts/hbase-1.1.0 --rows=10000 sequentialWrite 128 Rows: 10000 - Test: Sequential Write - Clients: 128 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 4 Rows: 10000 - Test: Async Random Write - Clients: 4 pts/hbase-1.1.0 --rows=10000 asyncRandomWrite 1 Rows: 10000 - Test: Async 
Random Write - Clients: 1 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 64 Rows: 10000 - Test: Async Random Read - Clients: 64 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 32 Rows: 10000 - Test: Async Random Read - Clients: 32 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 16 Rows: 10000 - Test: Async Random Read - Clients: 16 pts/hbase-1.1.0 --rows=2000000 sequentialRead 4 Rows: 2000000 - Test: Sequential Read - Clients: 4 pts/hbase-1.1.0 --rows=2000000 sequentialRead 1 Rows: 2000000 - Test: Sequential Read - Clients: 1 pts/hbase-1.1.0 --rows=10000000 randomWrite 500 Rows: 10000000 - Test: Random Write - Clients: 500 pts/hbase-1.1.0 --rows=10000000 randomWrite 256 Rows: 10000000 - Test: Random Write - Clients: 256 pts/hbase-1.1.0 --rows=10000000 randomWrite 128 Rows: 10000000 - Test: Random Write - Clients: 128 pts/hbase-1.1.0 --rows=1000000 sequentialRead 4 Rows: 1000000 - Test: Sequential Read - Clients: 4 pts/hbase-1.1.0 --rows=1000000 sequentialRead 1 Rows: 1000000 - Test: Sequential Read - Clients: 1 pts/hbase-1.1.0 --rows=10000 sequentialWrite 64 Rows: 10000 - Test: Sequential Write - Clients: 64 pts/hbase-1.1.0 --rows=10000 sequentialWrite 32 Rows: 10000 - Test: Sequential Write - Clients: 32 pts/hbase-1.1.0 --rows=10000 sequentialWrite 16 Rows: 10000 - Test: Sequential Write - Clients: 16 pts/hbase-1.1.0 --rows=10000 sequentialRead 500 Rows: 10000 - Test: Sequential Read - Clients: 500 pts/hbase-1.1.0 --rows=10000 sequentialRead 256 Rows: 10000 - Test: Sequential Read - Clients: 256 pts/hbase-1.1.0 --rows=10000 sequentialRead 128 Rows: 10000 - Test: Sequential Read - Clients: 128 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 4 Rows: 10000 - Test: Async Random Read - Clients: 4 pts/hbase-1.1.0 --rows=10000 asyncRandomRead 1 Rows: 10000 - Test: Async Random Read - Clients: 1 pts/hbase-1.1.0 --rows=2000000 randomWrite 500 Rows: 2000000 - Test: Random Write - Clients: 500 pts/hbase-1.1.0 --rows=2000000 randomWrite 256 Rows: 2000000 - Test: Random Write - Clients: 
256 pts/hbase-1.1.0 --rows=2000000 randomWrite 128 Rows: 2000000 - Test: Random Write - Clients: 128 pts/hbase-1.1.0 --rows=10000000 randomWrite 64 Rows: 10000000 - Test: Random Write - Clients: 64 pts/hbase-1.1.0 --rows=10000000 randomWrite 32 Rows: 10000000 - Test: Random Write - Clients: 32 pts/hbase-1.1.0 --rows=10000000 randomWrite 16 Rows: 10000000 - Test: Random Write - Clients: 16 pts/hbase-1.1.0 --rows=10000000 randomRead 500 Rows: 10000000 - Test: Random Read - Clients: 500 pts/hbase-1.1.0 --rows=10000000 randomRead 256 Rows: 10000000 - Test: Random Read - Clients: 256 pts/hbase-1.1.0 --rows=10000000 randomRead 128 Rows: 10000000 - Test: Random Read - Clients: 128 pts/hbase-1.1.0 --rows=1000000 randomWrite 500 Rows: 1000000 - Test: Random Write - Clients: 500 pts/hbase-1.1.0 --rows=1000000 randomWrite 256 Rows: 1000000 - Test: Random Write - Clients: 256 pts/hbase-1.1.0 --rows=1000000 randomWrite 128 Rows: 1000000 - Test: Random Write - Clients: 128 pts/hbase-1.1.0 --rows=10000 sequentialWrite 4 Rows: 10000 - Test: Sequential Write - Clients: 4 pts/hbase-1.1.0 --rows=10000 sequentialWrite 1 Rows: 10000 - Test: Sequential Write - Clients: 1 pts/hbase-1.1.0 --rows=10000 sequentialRead 64 Rows: 10000 - Test: Sequential Read - Clients: 64 pts/hbase-1.1.0 --rows=10000 sequentialRead 32 Rows: 10000 - Test: Sequential Read - Clients: 32 pts/hbase-1.1.0 --rows=10000 sequentialRead 16 Rows: 10000 - Test: Sequential Read - Clients: 16 pts/hbase-1.1.0 --rows=2000000 randomWrite 64 Rows: 2000000 - Test: Random Write - Clients: 64 pts/hbase-1.1.0 --rows=2000000 randomWrite 32 Rows: 2000000 - Test: Random Write - Clients: 32 pts/hbase-1.1.0 --rows=2000000 randomWrite 16 Rows: 2000000 - Test: Random Write - Clients: 16 pts/hbase-1.1.0 --rows=2000000 randomRead 500 Rows: 2000000 - Test: Random Read - Clients: 500 pts/hbase-1.1.0 --rows=2000000 randomRead 256 Rows: 2000000 - Test: Random Read - Clients: 256 pts/hbase-1.1.0 --rows=2000000 randomRead 128 Rows: 2000000 - 
Test: Random Read - Clients: 128 pts/hbase-1.1.0 --rows=10000000 randomWrite 4 Rows: 10000000 - Test: Random Write - Clients: 4 pts/hbase-1.1.0 --rows=10000000 randomWrite 1 Rows: 10000000 - Test: Random Write - Clients: 1 pts/hbase-1.1.0 --rows=10000000 randomRead 64 Rows: 10000000 - Test: Random Read - Clients: 64 pts/hbase-1.1.0 --rows=10000000 randomRead 32 Rows: 10000000 - Test: Random Read - Clients: 32 pts/hbase-1.1.0 --rows=10000000 randomRead 16 Rows: 10000000 - Test: Random Read - Clients: 16 pts/hbase-1.1.0 --rows=1000000 randomWrite 64 Rows: 1000000 - Test: Random Write - Clients: 64 pts/hbase-1.1.0 --rows=1000000 randomWrite 32 Rows: 1000000 - Test: Random Write - Clients: 32 pts/hbase-1.1.0 --rows=1000000 randomWrite 16 Rows: 1000000 - Test: Random Write - Clients: 16 pts/hbase-1.1.0 --rows=1000000 randomRead 500 Rows: 1000000 - Test: Random Read - Clients: 500 pts/hbase-1.1.0 --rows=1000000 randomRead 256 Rows: 1000000 - Test: Random Read - Clients: 256 pts/hbase-1.1.0 --rows=1000000 randomRead 128 Rows: 1000000 - Test: Random Read - Clients: 128 pts/hbase-1.1.0 --rows=10000 sequentialRead 4 Rows: 10000 - Test: Sequential Read - Clients: 4 pts/hbase-1.1.0 --rows=10000 sequentialRead 1 Rows: 10000 - Test: Sequential Read - Clients: 1 pts/hbase-1.1.0 --rows=2000000 randomWrite 4 Rows: 2000000 - Test: Random Write - Clients: 4 pts/hbase-1.1.0 --rows=2000000 randomWrite 1 Rows: 2000000 - Test: Random Write - Clients: 1 pts/hbase-1.1.0 --rows=2000000 randomRead 64 Rows: 2000000 - Test: Random Read - Clients: 64 pts/hbase-1.1.0 --rows=2000000 randomRead 32 Rows: 2000000 - Test: Random Read - Clients: 32 pts/hbase-1.1.0 --rows=2000000 randomRead 16 Rows: 2000000 - Test: Random Read - Clients: 16 pts/hbase-1.1.0 --rows=10000000 randomRead 4 Rows: 10000000 - Test: Random Read - Clients: 4 pts/hbase-1.1.0 --rows=10000000 randomRead 1 Rows: 10000000 - Test: Random Read - Clients: 1 pts/hbase-1.1.0 --rows=10000000 increment 500 Rows: 10000000 - Test: Increment - 
Clients: 500 pts/hbase-1.1.0 --rows=10000000 increment 256 Rows: 10000000 - Test: Increment - Clients: 256 pts/hbase-1.1.0 --rows=10000000 increment 128 Rows: 10000000 - Test: Increment - Clients: 128 pts/hbase-1.1.0 --rows=1000000 randomWrite 4 Rows: 1000000 - Test: Random Write - Clients: 4 pts/hbase-1.1.0 --rows=1000000 randomWrite 1 Rows: 1000000 - Test: Random Write - Clients: 1 pts/hbase-1.1.0 --rows=1000000 randomRead 64 Rows: 1000000 - Test: Random Read - Clients: 64 pts/hbase-1.1.0 --rows=1000000 randomRead 32 Rows: 1000000 - Test: Random Read - Clients: 32 pts/hbase-1.1.0 --rows=1000000 randomRead 16 Rows: 1000000 - Test: Random Read - Clients: 16 pts/hbase-1.1.0 --rows=10000 randomWrite 500 Rows: 10000 - Test: Random Write - Clients: 500 pts/hbase-1.1.0 --rows=10000 randomWrite 256 Rows: 10000 - Test: Random Write - Clients: 256 pts/hbase-1.1.0 --rows=10000 randomWrite 128 Rows: 10000 - Test: Random Write - Clients: 128 pts/hbase-1.1.0 --rows=2000000 randomRead 4 Rows: 2000000 - Test: Random Read - Clients: 4 pts/hbase-1.1.0 --rows=2000000 randomRead 1 Rows: 2000000 - Test: Random Read - Clients: 1 pts/hbase-1.1.0 --rows=2000000 increment 500 Rows: 2000000 - Test: Increment - Clients: 500 pts/hbase-1.1.0 --rows=2000000 increment 256 Rows: 2000000 - Test: Increment - Clients: 256 pts/hbase-1.1.0 --rows=2000000 increment 128 Rows: 2000000 - Test: Increment - Clients: 128 pts/hbase-1.1.0 --rows=10000000 increment 64 Rows: 10000000 - Test: Increment - Clients: 64 pts/hbase-1.1.0 --rows=10000000 increment 32 Rows: 10000000 - Test: Increment - Clients: 32 pts/hbase-1.1.0 --rows=10000000 increment 16 Rows: 10000000 - Test: Increment - Clients: 16 pts/hbase-1.1.0 --rows=1000000 randomRead 4 Rows: 1000000 - Test: Random Read - Clients: 4 pts/hbase-1.1.0 --rows=1000000 randomRead 1 Rows: 1000000 - Test: Random Read - Clients: 1 pts/hbase-1.1.0 --rows=1000000 increment 500 Rows: 1000000 - Test: Increment - Clients: 500 pts/hbase-1.1.0 --rows=1000000 increment 256 
Rows: 1000000 - Test: Increment - Clients: 256 pts/hbase-1.1.0 --rows=1000000 increment 128 Rows: 1000000 - Test: Increment - Clients: 128 pts/hbase-1.1.0 --rows=10000 randomWrite 64 Rows: 10000 - Test: Random Write - Clients: 64 pts/hbase-1.1.0 --rows=10000 randomWrite 32 Rows: 10000 - Test: Random Write - Clients: 32 pts/hbase-1.1.0 --rows=10000 randomWrite 16 Rows: 10000 - Test: Random Write - Clients: 16 pts/hbase-1.1.0 --rows=10000 randomRead 500 Rows: 10000 - Test: Random Read - Clients: 500 pts/hbase-1.1.0 --rows=10000 randomRead 256 Rows: 10000 - Test: Random Read - Clients: 256 pts/hbase-1.1.0 --rows=10000 randomRead 128 Rows: 10000 - Test: Random Read - Clients: 128 pts/hbase-1.1.0 --rows=2000000 increment 64 Rows: 2000000 - Test: Increment - Clients: 64 pts/hbase-1.1.0 --rows=2000000 increment 32 Rows: 2000000 - Test: Increment - Clients: 32 pts/hbase-1.1.0 --rows=2000000 increment 16 Rows: 2000000 - Test: Increment - Clients: 16 pts/hbase-1.1.0 --rows=10000000 increment 4 Rows: 10000000 - Test: Increment - Clients: 4 pts/hbase-1.1.0 --rows=10000000 increment 1 Rows: 10000000 - Test: Increment - Clients: 1 pts/hbase-1.1.0 --rows=1000000 increment 64 Rows: 1000000 - Test: Increment - Clients: 64 pts/hbase-1.1.0 --rows=1000000 increment 32 Rows: 1000000 - Test: Increment - Clients: 32 pts/hbase-1.1.0 --rows=1000000 increment 16 Rows: 1000000 - Test: Increment - Clients: 16 pts/hbase-1.1.0 --rows=10000 randomWrite 4 Rows: 10000 - Test: Random Write - Clients: 4 pts/hbase-1.1.0 --rows=10000 randomWrite 1 Rows: 10000 - Test: Random Write - Clients: 1 pts/hbase-1.1.0 --rows=10000 randomRead 64 Rows: 10000 - Test: Random Read - Clients: 64 pts/hbase-1.1.0 --rows=10000 randomRead 32 Rows: 10000 - Test: Random Read - Clients: 32 pts/hbase-1.1.0 --rows=10000 randomRead 16 Rows: 10000 - Test: Random Read - Clients: 16 pts/hbase-1.1.0 --rows=2000000 increment 4 Rows: 2000000 - Test: Increment - Clients: 4 pts/hbase-1.1.0 --rows=2000000 increment 1 Rows: 2000000 - 
Test: Increment - Clients: 1 pts/hbase-1.1.0 --rows=1000000 increment 4 Rows: 1000000 - Test: Increment - Clients: 4 pts/hbase-1.1.0 --rows=1000000 increment 1 Rows: 1000000 - Test: Increment - Clients: 1 pts/hbase-1.1.0 --rows=10000 randomRead 4 Rows: 10000 - Test: Random Read - Clients: 4 pts/hbase-1.1.0 --rows=10000 randomRead 1 Rows: 10000 - Test: Random Read - Clients: 1 pts/hbase-1.1.0 --rows=10000 increment 500 Rows: 10000 - Test: Increment - Clients: 500 pts/hbase-1.1.0 --rows=10000 increment 256 Rows: 10000 - Test: Increment - Clients: 256 pts/hbase-1.1.0 --rows=10000 increment 128 Rows: 10000 - Test: Increment - Clients: 128 pts/hbase-1.1.0 --rows=10000 increment 64 Rows: 10000 - Test: Increment - Clients: 64 pts/hbase-1.1.0 --rows=10000 increment 32 Rows: 10000 - Test: Increment - Clients: 32 pts/hbase-1.1.0 --rows=10000 increment 16 Rows: 10000 - Test: Increment - Clients: 16 pts/hbase-1.1.0 --rows=10000000 scan 500 Rows: 10000000 - Test: Scan - Clients: 500 pts/hbase-1.1.0 --rows=10000000 scan 256 Rows: 10000000 - Test: Scan - Clients: 256 pts/hbase-1.1.0 --rows=10000000 scan 128 Rows: 10000000 - Test: Scan - Clients: 128 pts/hbase-1.1.0 --rows=10000 increment 4 Rows: 10000 - Test: Increment - Clients: 4 pts/hbase-1.1.0 --rows=10000 increment 1 Rows: 10000 - Test: Increment - Clients: 1 pts/hbase-1.1.0 --rows=2000000 scan 500 Rows: 2000000 - Test: Scan - Clients: 500 pts/hbase-1.1.0 --rows=2000000 scan 256 Rows: 2000000 - Test: Scan - Clients: 256 pts/hbase-1.1.0 --rows=2000000 scan 128 Rows: 2000000 - Test: Scan - Clients: 128 pts/hbase-1.1.0 --rows=10000000 scan 64 Rows: 10000000 - Test: Scan - Clients: 64 pts/hbase-1.1.0 --rows=10000000 scan 32 Rows: 10000000 - Test: Scan - Clients: 32 pts/hbase-1.1.0 --rows=10000000 scan 16 Rows: 10000000 - Test: Scan - Clients: 16 pts/hbase-1.1.0 --rows=1000000 scan 500 Rows: 1000000 - Test: Scan - Clients: 500 pts/hbase-1.1.0 --rows=1000000 scan 256 Rows: 1000000 - Test: Scan - Clients: 256 pts/hbase-1.1.0 
--rows=1000000 scan 128 Rows: 1000000 - Test: Scan - Clients: 128 pts/hbase-1.1.0 --rows=2000000 scan 64 Rows: 2000000 - Test: Scan - Clients: 64 pts/hbase-1.1.0 --rows=2000000 scan 32 Rows: 2000000 - Test: Scan - Clients: 32 pts/hbase-1.1.0 --rows=2000000 scan 16 Rows: 2000000 - Test: Scan - Clients: 16 pts/hbase-1.1.0 --rows=10000000 scan 4 Rows: 10000000 - Test: Scan - Clients: 4 pts/hbase-1.1.0 --rows=10000000 scan 1 Rows: 10000000 - Test: Scan - Clients: 1 pts/hbase-1.1.0 --rows=1000000 scan 64 Rows: 1000000 - Test: Scan - Clients: 64 pts/hbase-1.1.0 --rows=1000000 scan 32 Rows: 1000000 - Test: Scan - Clients: 32 pts/hbase-1.1.0 --rows=1000000 scan 16 Rows: 1000000 - Test: Scan - Clients: 16 pts/hbase-1.1.0 --rows=2000000 scan 4 Rows: 2000000 - Test: Scan - Clients: 4 pts/hbase-1.1.0 --rows=2000000 scan 1 Rows: 2000000 - Test: Scan - Clients: 1 pts/hbase-1.1.0 --rows=1000000 scan 4 Rows: 1000000 - Test: Scan - Clients: 4 pts/hbase-1.1.0 --rows=1000000 scan 1 Rows: 1000000 - Test: Scan - Clients: 1 pts/hbase-1.1.0 --rows=10000 scan 500 Rows: 10000 - Test: Scan - Clients: 500 pts/hbase-1.1.0 --rows=10000 scan 256 Rows: 10000 - Test: Scan - Clients: 256 pts/hbase-1.1.0 --rows=10000 scan 128 Rows: 10000 - Test: Scan - Clients: 128 pts/hbase-1.1.0 --rows=10000 scan 64 Rows: 10000 - Test: Scan - Clients: 64 pts/hbase-1.1.0 --rows=10000 scan 32 Rows: 10000 - Test: Scan - Clients: 32 pts/hbase-1.1.0 --rows=10000 scan 16 Rows: 10000 - Test: Scan - Clients: 16 pts/hbase-1.1.0 --rows=10000 scan 4 Rows: 10000 - Test: Scan - Clients: 4 pts/hbase-1.1.0 --rows=10000 scan 1 Rows: 10000 - Test: Scan - Clients: 1 pts/nginx-3.0.1 -c 4000 Connections: 4000 pts/nginx-3.0.1 -c 1000 Connections: 1000 pts/nginx-3.0.1 -c 500 Connections: 500 pts/nginx-3.0.1 -c 200 Connections: 200 pts/nginx-3.0.1 -c 100 Connections: 100 pts/nginx-3.0.1 -c 20 Connections: 20 pts/nginx-3.0.1 -c 1 Connections: 1 pts/hadoop-1.0.0 -op fileStatus -threads 1000 -files 10000000 Operation: File Status - 
Threads: 1000 - Files: 10000000 pts/hadoop-1.0.0 -op fileStatus -threads 500 -files 10000000 Operation: File Status - Threads: 500 - Files: 10000000 pts/hadoop-1.0.0 -op fileStatus -threads 1000 -files 1000000 Operation: File Status - Threads: 1000 - Files: 1000000 pts/hadoop-1.0.0 -op fileStatus -threads 100 -files 10000000 Operation: File Status - Threads: 100 - Files: 10000000 pts/hadoop-1.0.0 -op fileStatus -threads 500 -files 1000000 Operation: File Status - Threads: 500 - Files: 1000000 pts/hadoop-1.0.0 -op fileStatus -threads 50 -files 10000000 Operation: File Status - Threads: 50 - Files: 10000000 pts/hadoop-1.0.0 -op fileStatus -threads 20 -files 10000000 Operation: File Status - Threads: 20 - Files: 10000000 pts/hadoop-1.0.0 -op fileStatus -threads 1000 -files 100000 Operation: File Status - Threads: 1000 - Files: 100000 pts/hadoop-1.0.0 -op fileStatus -threads 100 -files 1000000 Operation: File Status - Threads: 100 - Files: 1000000 pts/hadoop-1.0.0 -op fileStatus -threads 500 -files 100000 Operation: File Status - Threads: 500 - Files: 100000 pts/hadoop-1.0.0 -op fileStatus -threads 50 -files 1000000 Operation: File Status - Threads: 50 - Files: 1000000 pts/hadoop-1.0.0 -op fileStatus -threads 20 -files 1000000 Operation: File Status - Threads: 20 - Files: 1000000 pts/hadoop-1.0.0 -op fileStatus -threads 100 -files 100000 Operation: File Status - Threads: 100 - Files: 100000 pts/hadoop-1.0.0 -op fileStatus -threads 50 -files 100000 Operation: File Status - Threads: 50 - Files: 100000 pts/hadoop-1.0.0 -op fileStatus -threads 20 -files 100000 Operation: File Status - Threads: 20 - Files: 100000 pts/hadoop-1.0.0 -op rename -threads 1000 -files 10000000 Operation: Rename - Threads: 1000 - Files: 10000000 pts/hadoop-1.0.0 -op delete -threads 1000 -files 10000000 Operation: Delete - Threads: 1000 - Files: 10000000 pts/hadoop-1.0.0 -op create -threads 1000 -files 10000000 Operation: Create - Threads: 1000 - Files: 10000000 pts/hadoop-1.0.0 -op rename -threads 
500 -files 10000000 Operation: Rename - Threads: 500 - Files: 10000000 pts/hadoop-1.0.0 -op rename -threads 1000 -files 1000000 Operation: Rename - Threads: 1000 - Files: 1000000 pts/hadoop-1.0.0 -op rename -threads 100 -files 10000000 Operation: Rename - Threads: 100 - Files: 10000000 pts/hadoop-1.0.0 -op delete -threads 500 -files 10000000 Operation: Delete - Threads: 500 - Files: 10000000 pts/hadoop-1.0.0 -op delete -threads 1000 -files 1000000 Operation: Delete - Threads: 1000 - Files: 1000000 pts/hadoop-1.0.0 -op delete -threads 100 -files 10000000 Operation: Delete - Threads: 100 - Files: 10000000 pts/hadoop-1.0.0 -op create -threads 500 -files 10000000 Operation: Create - Threads: 500 - Files: 10000000 pts/hadoop-1.0.0 -op create -threads 1000 -files 1000000 Operation: Create - Threads: 1000 - Files: 1000000 pts/hadoop-1.0.0 -op create -threads 100 -files 10000000 Operation: Create - Threads: 100 - Files: 10000000 pts/hadoop-1.0.0 -op rename -threads 500 -files 1000000 Operation: Rename - Threads: 500 - Files: 1000000 pts/hadoop-1.0.0 -op rename -threads 50 -files 10000000 Operation: Rename - Threads: 50 - Files: 10000000 pts/hadoop-1.0.0 -op rename -threads 20 -files 10000000 Operation: Rename - Threads: 20 - Files: 10000000 pts/hadoop-1.0.0 -op rename -threads 1000 -files 100000 Operation: Rename - Threads: 1000 - Files: 100000 pts/hadoop-1.0.0 -op rename -threads 100 -files 1000000 Operation: Rename - Threads: 100 - Files: 1000000 pts/hadoop-1.0.0 -op open -threads 1000 -files 10000000 Operation: Open - Threads: 1000 - Files: 10000000 pts/hadoop-1.0.0 -op delete -threads 500 -files 1000000 Operation: Delete - Threads: 500 - Files: 1000000 pts/hadoop-1.0.0 -op delete -threads 50 -files 10000000 Operation: Delete - Threads: 50 - Files: 10000000 pts/hadoop-1.0.0 -op delete -threads 20 -files 10000000 Operation: Delete - Threads: 20 - Files: 10000000 pts/hadoop-1.0.0 -op delete -threads 1000 -files 100000 Operation: Delete - Threads: 1000 - Files: 100000 
pts/hadoop-1.0.0 -op delete -threads 100 -files 1000000 Operation: Delete - Threads: 100 - Files: 1000000 pts/hadoop-1.0.0 -op create -threads 500 -files 1000000 Operation: Create - Threads: 500 - Files: 1000000 pts/hadoop-1.0.0 -op create -threads 50 -files 10000000 Operation: Create - Threads: 50 - Files: 10000000 pts/hadoop-1.0.0 -op create -threads 20 -files 10000000 Operation: Create - Threads: 20 - Files: 10000000 pts/hadoop-1.0.0 -op create -threads 1000 -files 100000 Operation: Create - Threads: 1000 - Files: 100000 pts/hadoop-1.0.0 -op create -threads 100 -files 1000000 Operation: Create - Threads: 100 - Files: 1000000 pts/hadoop-1.0.0 -op rename -threads 500 -files 100000 Operation: Rename - Threads: 500 - Files: 100000 pts/hadoop-1.0.0 -op rename -threads 50 -files 1000000 Operation: Rename - Threads: 50 - Files: 1000000 pts/hadoop-1.0.0 -op rename -threads 20 -files 1000000 Operation: Rename - Threads: 20 - Files: 1000000 pts/hadoop-1.0.0 -op rename -threads 100 -files 100000 Operation: Rename - Threads: 100 - Files: 100000 pts/hadoop-1.0.0 -op open -threads 500 -files 10000000 Operation: Open - Threads: 500 - Files: 10000000 pts/hadoop-1.0.0 -op open -threads 1000 -files 1000000 Operation: Open - Threads: 1000 - Files: 1000000 pts/hadoop-1.0.0 -op open -threads 100 -files 10000000 Operation: Open - Threads: 100 - Files: 10000000 pts/hadoop-1.0.0 -op delete -threads 500 -files 100000 Operation: Delete - Threads: 500 - Files: 100000 pts/hadoop-1.0.0 -op delete -threads 50 -files 1000000 Operation: Delete - Threads: 50 - Files: 1000000 pts/hadoop-1.0.0 -op delete -threads 20 -files 1000000 Operation: Delete - Threads: 20 - Files: 1000000 pts/hadoop-1.0.0 -op delete -threads 100 -files 100000 Operation: Delete - Threads: 100 - Files: 100000 pts/hadoop-1.0.0 -op create -threads 500 -files 100000 Operation: Create - Threads: 500 - Files: 100000 pts/hadoop-1.0.0 -op create -threads 50 -files 1000000 Operation: Create - Threads: 50 - Files: 1000000 
pts/hadoop-1.0.0 -op create -threads 20 -files 1000000 Operation: Create - Threads: 20 - Files: 1000000
pts/hadoop-1.0.0 -op create -threads 100 -files 100000 Operation: Create - Threads: 100 - Files: 100000
pts/hadoop-1.0.0 -op rename -threads 50 -files 100000 Operation: Rename - Threads: 50 - Files: 100000
pts/hadoop-1.0.0 -op rename -threads 20 -files 100000 Operation: Rename - Threads: 20 - Files: 100000
pts/hadoop-1.0.0 -op open -threads 500 -files 1000000 Operation: Open - Threads: 500 - Files: 1000000
pts/hadoop-1.0.0 -op open -threads 50 -files 10000000 Operation: Open - Threads: 50 - Files: 10000000
pts/hadoop-1.0.0 -op open -threads 20 -files 10000000 Operation: Open - Threads: 20 - Files: 10000000
pts/hadoop-1.0.0 -op open -threads 1000 -files 100000 Operation: Open - Threads: 1000 - Files: 100000
pts/hadoop-1.0.0 -op open -threads 100 -files 1000000 Operation: Open - Threads: 100 - Files: 1000000
pts/hadoop-1.0.0 -op delete -threads 50 -files 100000 Operation: Delete - Threads: 50 - Files: 100000
pts/hadoop-1.0.0 -op delete -threads 20 -files 100000 Operation: Delete - Threads: 20 - Files: 100000
pts/hadoop-1.0.0 -op create -threads 50 -files 100000 Operation: Create - Threads: 50 - Files: 100000
pts/hadoop-1.0.0 -op create -threads 20 -files 100000 Operation: Create - Threads: 20 - Files: 100000
pts/hadoop-1.0.0 -op open -threads 500 -files 100000 Operation: Open - Threads: 500 - Files: 100000
pts/hadoop-1.0.0 -op open -threads 50 -files 1000000 Operation: Open - Threads: 50 - Files: 1000000
pts/hadoop-1.0.0 -op open -threads 20 -files 1000000 Operation: Open - Threads: 20 - Files: 1000000
pts/hadoop-1.0.0 -op open -threads 100 -files 100000 Operation: Open - Threads: 100 - Files: 100000
pts/hadoop-1.0.0 -op open -threads 50 -files 100000 Operation: Open - Threads: 50 - Files: 100000
pts/hadoop-1.0.0 -op open -threads 20 -files 100000 Operation: Open - Threads: 20 - Files: 100000
pts/mcperf-1.4.1 --method=replace --num-conns=256 Method: Replace - Connections: 256
pts/mcperf-1.4.1 --method=replace --num-conns=128 Method: Replace - Connections: 128
pts/mcperf-1.4.1 --method=prepend --num-conns=256 Method: Prepend - Connections: 256
pts/mcperf-1.4.1 --method=prepend --num-conns=128 Method: Prepend - Connections: 128
pts/mcperf-1.4.1 --method=replace --num-conns=64 Method: Replace - Connections: 64
pts/mcperf-1.4.1 --method=replace --num-conns=32 Method: Replace - Connections: 32
pts/mcperf-1.4.1 --method=replace --num-conns=16 Method: Replace - Connections: 16
pts/mcperf-1.4.1 --method=prepend --num-conns=64 Method: Prepend - Connections: 64
pts/mcperf-1.4.1 --method=prepend --num-conns=32 Method: Prepend - Connections: 32
pts/mcperf-1.4.1 --method=prepend --num-conns=16 Method: Prepend - Connections: 16
pts/mcperf-1.4.1 --method=delete --num-conns=256 Method: Delete - Connections: 256
pts/mcperf-1.4.1 --method=delete --num-conns=128 Method: Delete - Connections: 128
pts/mcperf-1.4.1 --method=append --num-conns=256 Method: Append - Connections: 256
pts/mcperf-1.4.1 --method=append --num-conns=128 Method: Append - Connections: 128
pts/mcperf-1.4.1 --method=replace --num-conns=4 Method: Replace - Connections: 4
pts/mcperf-1.4.1 --method=replace --num-conns=1 Method: Replace - Connections: 1
pts/mcperf-1.4.1 --method=prepend --num-conns=4 Method: Prepend - Connections: 4
pts/mcperf-1.4.1 --method=prepend --num-conns=1 Method: Prepend - Connections: 1
pts/mcperf-1.4.1 --method=delete --num-conns=64 Method: Delete - Connections: 64
pts/mcperf-1.4.1 --method=delete --num-conns=32 Method: Delete - Connections: 32
pts/mcperf-1.4.1 --method=delete --num-conns=16 Method: Delete - Connections: 16
pts/mcperf-1.4.1 --method=append --num-conns=64 Method: Append - Connections: 64
pts/mcperf-1.4.1 --method=append --num-conns=32 Method: Append - Connections: 32
pts/mcperf-1.4.1 --method=append --num-conns=16 Method: Append - Connections: 16
pts/mcperf-1.4.1 --method=delete --num-conns=4 Method: Delete - Connections: 4
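The mcperf entries in this suite form a complete cross product of seven methods and seven connection counts. A minimal sketch, not part of the result file, that regenerates the pts/mcperf-1.4.1 argument grid:

```python
# Sketch only: the mcperf test matrix in this suite is the full cross product
# of methods and connection counts shown in the listing.
from itertools import product

METHODS = ["replace", "prepend", "delete", "append", "set", "get", "add"]
CONNECTIONS = [1, 4, 16, 32, 64, 128, 256]

def mcperf_entries():
    """Yield the argument string for every method/connection combination."""
    for method, conns in product(METHODS, CONNECTIONS):
        yield f"--method={method} --num-conns={conns}"

entries = list(mcperf_entries())
assert "--method=get --num-conns=64" in entries
assert len(entries) == 49  # 7 methods x 7 connection counts
```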
pts/mcperf-1.4.1 --method=delete --num-conns=1 Method: Delete - Connections: 1
pts/mcperf-1.4.1 --method=append --num-conns=4 Method: Append - Connections: 4
pts/mcperf-1.4.1 --method=append --num-conns=1 Method: Append - Connections: 1
pts/mcperf-1.4.1 --method=set --num-conns=256 Method: Set - Connections: 256
pts/mcperf-1.4.1 --method=set --num-conns=128 Method: Set - Connections: 128
pts/mcperf-1.4.1 --method=get --num-conns=256 Method: Get - Connections: 256
pts/mcperf-1.4.1 --method=get --num-conns=128 Method: Get - Connections: 128
pts/mcperf-1.4.1 --method=add --num-conns=256 Method: Add - Connections: 256
pts/mcperf-1.4.1 --method=add --num-conns=128 Method: Add - Connections: 128
pts/mcperf-1.4.1 --method=set --num-conns=64 Method: Set - Connections: 64
pts/mcperf-1.4.1 --method=set --num-conns=32 Method: Set - Connections: 32
pts/mcperf-1.4.1 --method=set --num-conns=16 Method: Set - Connections: 16
pts/mcperf-1.4.1 --method=get --num-conns=64 Method: Get - Connections: 64
pts/mcperf-1.4.1 --method=get --num-conns=32 Method: Get - Connections: 32
pts/mcperf-1.4.1 --method=get --num-conns=16 Method: Get - Connections: 16
pts/mcperf-1.4.1 --method=add --num-conns=64 Method: Add - Connections: 64
pts/mcperf-1.4.1 --method=add --num-conns=32 Method: Add - Connections: 32
pts/mcperf-1.4.1 --method=add --num-conns=16 Method: Add - Connections: 16
pts/mcperf-1.4.1 --method=set --num-conns=4 Method: Set - Connections: 4
pts/mcperf-1.4.1 --method=set --num-conns=1 Method: Set - Connections: 1
pts/mcperf-1.4.1 --method=get --num-conns=4 Method: Get - Connections: 4
pts/mcperf-1.4.1 --method=get --num-conns=1 Method: Get - Connections: 1
pts/mcperf-1.4.1 --method=add --num-conns=4 Method: Add - Connections: 4
pts/mcperf-1.4.1 --method=add --num-conns=1 Method: Add - Connections: 1
pts/rocksdb-1.5.0 --benchmarks="readrandomwriterandom" Test: Read Random Write Random
pts/rocksdb-1.5.0 --benchmarks="readwhilewriting" Test: Read While Writing
pts/rocksdb-1.5.0 --benchmarks="fillsync" Test: Random Fill Sync
pts/rocksdb-1.5.0 --benchmarks="fillseq" Test: Sequential Fill
pts/rocksdb-1.5.0 --benchmarks="updaterandom" Test: Update Random
pts/rocksdb-1.5.0 --benchmarks="readrandom" Test: Random Read
pts/rocksdb-1.5.0 --benchmarks="fillrandom" Test: Random Fill
pts/cassandra-1.2.0 MIXED_1_3 Test: Mixed 1:3
pts/cassandra-1.2.0 MIXED_1_1 Test: Mixed 1:1
pts/cassandra-1.2.0 WRITE Test: Writes
pts/sqlite-speedtest-1.0.1
pts/pgbench-1.14.0 BUFFER_TEST HEAVY_CONTENTION READ_WRITE Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Write
pts/pgbench-1.14.0 BUFFER_TEST HEAVY_CONTENTION READ_ONLY Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Only
pts/pgbench-1.14.0 BUFFER_TEST NORMAL_LOAD READ_WRITE Scaling: Buffer Test - Test: Normal Load - Mode: Read Write
pts/pgbench-1.14.0 BUFFER_TEST NORMAL_LOAD READ_ONLY Scaling: Buffer Test - Test: Normal Load - Mode: Read Only
pts/pgbench-1.14.0 -s 25000 -c 5000 Scaling Factor: 25000 - Clients: 5000 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 1000 Scaling Factor: 25000 - Clients: 1000 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 5000 Scaling Factor: 10000 - Clients: 5000 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 1000 Scaling Factor: 10000 - Clients: 1000 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 800 Scaling Factor: 25000 - Clients: 800 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 5000 -S Scaling Factor: 25000 - Clients: 5000 - Mode: Read Only
pts/pgbench-1.14.0 -s 25000 -c 500 Scaling Factor: 25000 - Clients: 500 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 250 Scaling Factor: 25000 - Clients: 250 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 1000 -S Scaling Factor: 25000 - Clients: 1000 - Mode: Read Only
pts/pgbench-1.14.0 -s 25000 -c 100 Scaling Factor: 25000 - Clients: 100 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 800 Scaling Factor: 10000 - Clients: 800 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 5000 -S Scaling Factor: 10000 - Clients: 5000 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 500 Scaling Factor: 10000 - Clients: 500 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 250 Scaling Factor: 10000 - Clients: 250 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 1000 -S Scaling Factor: 10000 - Clients: 1000 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 100 Scaling Factor: 10000 - Clients: 100 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 5000 Scaling Factor: 1000 - Clients: 5000 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 1000 Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 800 -S Scaling Factor: 25000 - Clients: 800 - Mode: Read Only
pts/pgbench-1.14.0 -s 25000 -c 500 -S Scaling Factor: 25000 - Clients: 500 - Mode: Read Only
pts/pgbench-1.14.0 -s 25000 -c 50 Scaling Factor: 25000 - Clients: 50 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 250 -S Scaling Factor: 25000 - Clients: 250 - Mode: Read Only
pts/pgbench-1.14.0 -s 25000 -c 100 -S Scaling Factor: 25000 - Clients: 100 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 800 -S Scaling Factor: 10000 - Clients: 800 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 500 -S Scaling Factor: 10000 - Clients: 500 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 50 Scaling Factor: 10000 - Clients: 50 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 250 -S Scaling Factor: 10000 - Clients: 250 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 100 -S Scaling Factor: 10000 - Clients: 100 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 800 Scaling Factor: 1000 - Clients: 800 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 5000 -S Scaling Factor: 1000 - Clients: 5000 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 500 Scaling Factor: 1000 - Clients: 500 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 250 Scaling Factor: 1000 - Clients: 250 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 1000 -S Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 100 Scaling Factor: 1000 - Clients: 100 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 5000 Scaling Factor: 100 - Clients: 5000 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 1000 Scaling Factor: 100 - Clients: 1000 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 50 -S Scaling Factor: 25000 - Clients: 50 - Mode: Read Only
pts/pgbench-1.14.0 -s 25000 -c 1 Scaling Factor: 25000 - Clients: 1 - Mode: Read Write
pts/pgbench-1.14.0 -s 10000 -c 50 -S Scaling Factor: 10000 - Clients: 50 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 1 Scaling Factor: 10000 - Clients: 1 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 800 -S Scaling Factor: 1000 - Clients: 800 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 500 -S Scaling Factor: 1000 - Clients: 500 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 50 Scaling Factor: 1000 - Clients: 50 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 250 -S Scaling Factor: 1000 - Clients: 250 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 100 -S Scaling Factor: 1000 - Clients: 100 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 800 Scaling Factor: 100 - Clients: 800 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 5000 -S Scaling Factor: 100 - Clients: 5000 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 500 Scaling Factor: 100 - Clients: 500 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 250 Scaling Factor: 100 - Clients: 250 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 1000 -S Scaling Factor: 100 - Clients: 1000 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 100 Scaling Factor: 100 - Clients: 100 - Mode: Read Write
pts/pgbench-1.14.0 -s 25000 -c 1 -S Scaling Factor: 25000 - Clients: 1 - Mode: Read Only
pts/pgbench-1.14.0 -s 10000 -c 1 -S Scaling Factor: 10000 - Clients: 1 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 50 -S Scaling Factor: 1000 - Clients: 50 - Mode: Read Only
pts/pgbench-1.14.0 -s 1000 -c 1 Scaling Factor: 1000 - Clients: 1 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 800 -S Scaling Factor: 100 - Clients: 800 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 500 -S Scaling Factor: 100 - Clients: 500 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 50 Scaling Factor: 100 - Clients: 50 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 250 -S Scaling Factor: 100 - Clients: 250 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 100 -S Scaling Factor: 100 - Clients: 100 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 5000 Scaling Factor: 1 - Clients: 5000 - Mode: Read Write
pts/pgbench-1.14.0 -s 1 -c 1000 Scaling Factor: 1 - Clients: 1000 - Mode: Read Write
pts/pgbench-1.14.0 -s 1000 -c 1 -S Scaling Factor: 1000 - Clients: 1 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 50 -S Scaling Factor: 100 - Clients: 50 - Mode: Read Only
pts/pgbench-1.14.0 -s 100 -c 1 Scaling Factor: 100 - Clients: 1 - Mode: Read Write
pts/pgbench-1.14.0 -s 1 -c 800 Scaling Factor: 1 - Clients: 800 - Mode: Read Write
pts/pgbench-1.14.0 -s 1 -c 5000 -S Scaling Factor: 1 - Clients: 5000 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 500 Scaling Factor: 1 - Clients: 500 - Mode: Read Write
pts/pgbench-1.14.0 -s 1 -c 250 Scaling Factor: 1 - Clients: 250 - Mode: Read Write
pts/pgbench-1.14.0 -s 1 -c 1000 -S Scaling Factor: 1 - Clients: 1000 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 100 Scaling Factor: 1 - Clients: 100 - Mode: Read Write
pts/pgbench-1.14.0 -s 100 -c 1 -S Scaling Factor: 100 - Clients: 1 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 800 -S Scaling Factor: 1 - Clients: 800 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 500 -S Scaling Factor: 1 - Clients: 500 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 50 Scaling Factor: 1 - Clients: 50 - Mode: Read Write
pts/pgbench-1.14.0 -s 1 -c 250 -S Scaling Factor: 1 - Clients: 250 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 100 -S Scaling Factor: 1 - Clients: 100 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 50 -S Scaling Factor: 1 - Clients: 50 - Mode: Read Only
pts/pgbench-1.14.0 -s 1 -c 1 Scaling Factor: 1 - Clients: 1 - Mode: Read Write
pts/pgbench-1.14.0 -s 1 -c 1 -S Scaling Factor: 1 - Clients: 1 - Mode: Read Only
pts/duckdb-1.0.0 benchmark/tpch/parquet Benchmark: TPC-H Parquet
pts/duckdb-1.0.0 benchmark/imdb Benchmark: IMDB
pts/mysqlslap-1.4.0 --concurrency=8192 Clients: 8192
pts/mysqlslap-1.4.0 --concurrency=4096 Clients: 4096
pts/mysqlslap-1.4.0 --concurrency=2048 Clients: 2048
pts/mysqlslap-1.4.0 --concurrency=1024 Clients: 1024
pts/mysqlslap-1.4.0 --concurrency=512 Clients: 512
pts/mysqlslap-1.4.0 --concurrency=256 Clients: 256
pts/mysqlslap-1.4.0 --concurrency=128 Clients: 128
pts/mysqlslap-1.4.0 --concurrency=64 Clients: 64
pts/mysqlslap-1.4.0 --concurrency=1 Clients: 1
pts/keydb-1.4.0 -t lpush -c 900 Test: LPUSH - Parallel Connections: 900
pts/keydb-1.4.0 -t lpush -c 500 Test: LPUSH - Parallel Connections: 500
pts/keydb-1.4.0 -t lpush -c 100 Test: LPUSH - Parallel Connections: 100
pts/keydb-1.4.0 -t hmset -c 900 Test: HMSET - Parallel Connections: 900
pts/keydb-1.4.0 -t hmset -c 500 Test: HMSET - Parallel Connections: 500
pts/keydb-1.4.0 -t hmset -c 100 Test: HMSET - Parallel Connections: 100
pts/keydb-1.4.0 -t sadd -c 900 Test: SADD - Parallel Connections: 900
pts/keydb-1.4.0 -t sadd -c 500 Test: SADD - Parallel Connections: 500
pts/keydb-1.4.0 -t sadd -c 100 Test: SADD - Parallel Connections: 100
pts/keydb-1.4.0 -t lpush -c 50 Test: LPUSH - Parallel Connections: 50
pts/keydb-1.4.0 -t lpop -c 900 Test: LPOP - Parallel Connections: 900
pts/keydb-1.4.0 -t lpop -c 500 Test: LPOP - Parallel Connections: 500
pts/keydb-1.4.0 -t lpop -c 100 Test: LPOP - Parallel Connections: 100
pts/keydb-1.4.0 -t hmset -c 50 Test: HMSET - Parallel Connections: 50
pts/keydb-1.4.0 -t set -c 900 Test: SET - Parallel Connections: 900
pts/keydb-1.4.0 -t set -c 500 Test: SET - Parallel Connections: 500
pts/keydb-1.4.0 -t set -c 100 Test: SET - Parallel Connections: 100
pts/keydb-1.4.0 -t sadd -c 50 Test: SADD - Parallel Connections: 50
pts/keydb-1.4.0 -t lpop -c 50 Test: LPOP - Parallel Connections: 50
pts/keydb-1.4.0 -t get -c 900 Test: GET - Parallel Connections: 900
pts/keydb-1.4.0 -t get -c 500 Test: GET - Parallel Connections: 500
pts/keydb-1.4.0 -t get -c 100 Test: GET - Parallel Connections: 100
pts/keydb-1.4.0 -t set -c 50 Test: SET - Parallel Connections: 50
pts/keydb-1.4.0 -t get -c 50 Test: GET - Parallel Connections: 50
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 256 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 256 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 128 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 128 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 256 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 256 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 128 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 128 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 64 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 64 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 32 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 32 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 256 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 256 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 16 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 16 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 128 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 128 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 64 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 64 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 32 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 32 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 256 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 256 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 16 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 16 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 128 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 128 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 64 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 64 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 32 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 32 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 256 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 256 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 256 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 16 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 16 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 128 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 128 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 128 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 1 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 1 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 0 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 0 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 64 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 64 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 32 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 32 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 256 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 256 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 256 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 16 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 16 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 128 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 128 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 128 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 1 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 1 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 0 --num_threads_read 256 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 0 --num_threads_read 128 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 64 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 64 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 64 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 32 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 32 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 16 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 16 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 1 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 1 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 0 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 0 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 64 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 64 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 64 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 32 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 32 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 16 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 16 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 1 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 1 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 0 --num_threads_read 32 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 0 --num_threads_read 16 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 1 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 1 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 0 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 25 --num_threads_write 0 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 0 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 1 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 1 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 0 --num_threads_read 1 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 1
pts/yugabytedb-1.0.0 --workload CassandraBatchKeyValue --nouuid --value_size 256 --batch_size 10 --num_threads_write 0 --num_threads_read 0 Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 0 - Num Threads Read: 0
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 256 --num_threads_read 256 Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 256 --num_threads_read 128 Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 128 --num_threads_read 256 Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 128 --num_threads_read 128 Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 64 --num_threads_read 256 Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 64 --num_threads_read 128 Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 32 --num_threads_read 256 Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 256
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 32 --num_threads_read 128 Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 128
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 256 --num_threads_read 32 Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 32
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 256 --num_threads_read 16 Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 16
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 16 --num_threads_read 256 Workload: 
CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 256 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 16 --num_threads_read 128 Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 128 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 128 --num_threads_read 32 Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 32 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 128 --num_threads_read 16 Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 16 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 64 --num_threads_read 32 Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 32 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 64 --num_threads_read 16 Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 16 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 32 --num_threads_read 32 Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 32 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 32 --num_threads_read 16 Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 16 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 256 --num_threads_read 1 Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 1 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 256 --num_threads_read 0 Workload: CassandraKeyValue - Num Threads Write: 256 - Num Threads Read: 0 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 16 --num_threads_read 32 Workload: CassandraKeyValue - Num Threads 
Write: 16 - Num Threads Read: 32 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 16 --num_threads_read 16 Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 16 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 128 --num_threads_read 1 Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 1 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 128 --num_threads_read 0 Workload: CassandraKeyValue - Num Threads Write: 128 - Num Threads Read: 0 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 1 --num_threads_read 256 Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 256 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 1 --num_threads_read 128 Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 128 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 0 --num_threads_read 256 Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 256 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 0 --num_threads_read 128 Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 128 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 64 --num_threads_read 1 Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 1 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 64 --num_threads_read 0 Workload: CassandraKeyValue - Num Threads Write: 64 - Num Threads Read: 0 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 32 --num_threads_read 1 Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 1 
pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 32 --num_threads_read 0 Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 0 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 16 --num_threads_read 1 Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 1 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 16 --num_threads_read 0 Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 0 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 1 --num_threads_read 32 Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 32 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 1 --num_threads_read 16 Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 16 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 0 --num_threads_read 32 Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 32 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 0 --num_threads_read 16 Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 16 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 1 --num_threads_read 1 Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 1 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 1 --num_threads_read 0 Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 0 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid --value_size 256 --num_threads_write 0 --num_threads_read 1 Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 1 pts/yugabytedb-1.0.0 --workload CassandraKeyValue --nouuid 
--value_size 256 --num_threads_write 0 --num_threads_read 0 Workload: CassandraKeyValue - Num Threads Write: 0 - Num Threads Read: 0 pts/redis-1.4.0 -t lpush -c 1000 Test: LPUSH - Parallel Connections: 1000 pts/redis-1.4.0 -t sadd -c 1000 Test: SADD - Parallel Connections: 1000 pts/redis-1.4.0 -t lpush -c 500 Test: LPUSH - Parallel Connections: 500 pts/redis-1.4.0 -t lpop -c 1000 Test: LPOP - Parallel Connections: 1000 pts/redis-1.4.0 -t set -c 1000 Test: SET - Parallel Connections: 1000 pts/redis-1.4.0 -t sadd -c 500 Test: SADD - Parallel Connections: 500 pts/redis-1.4.0 -t lpush -c 50 Test: LPUSH - Parallel Connections: 50 pts/redis-1.4.0 -t lpop -c 500 Test: LPOP - Parallel Connections: 500 pts/redis-1.4.0 -t get -c 1000 Test: GET - Parallel Connections: 1000 pts/redis-1.4.0 -t set -c 500 Test: SET - Parallel Connections: 500 pts/redis-1.4.0 -t sadd -c 50 Test: SADD - Parallel Connections: 50 pts/redis-1.4.0 -t lpop -c 50 Test: LPOP - Parallel Connections: 50 pts/redis-1.4.0 -t get -c 500 Test: GET - Parallel Connections: 500 pts/redis-1.4.0 -t set -c 50 Test: SET - Parallel Connections: 50 pts/redis-1.4.0 -t get -c 50 Test: GET - Parallel Connections: 50 pts/dragonflydb-1.1.0 -c 100 --ratio=1:100 Clients Per Thread: 100 - Set To Get Ratio: 1:100 pts/dragonflydb-1.1.0 -c 60 --ratio=1:100 Clients Per Thread: 60 - Set To Get Ratio: 1:100 pts/dragonflydb-1.1.0 -c 50 --ratio=1:100 Clients Per Thread: 50 - Set To Get Ratio: 1:100 pts/dragonflydb-1.1.0 -c 20 --ratio=1:100 Clients Per Thread: 20 - Set To Get Ratio: 1:100 pts/dragonflydb-1.1.0 -c 100 --ratio=1:10 Clients Per Thread: 100 - Set To Get Ratio: 1:10 pts/dragonflydb-1.1.0 -c 10 --ratio=1:100 Clients Per Thread: 10 - Set To Get Ratio: 1:100 pts/dragonflydb-1.1.0 -c 60 --ratio=1:10 Clients Per Thread: 60 - Set To Get Ratio: 1:10 pts/dragonflydb-1.1.0 -c 50 --ratio=1:10 Clients Per Thread: 50 - Set To Get Ratio: 1:10 pts/dragonflydb-1.1.0 -c 20 --ratio=1:10 Clients Per Thread: 20 - Set To Get Ratio: 1:10 
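The Redis entries above form a complete sweep of five commands across three connection counts. As an illustrative sketch (not part of the result file or of the Phoronix Test Suite itself), the option strings can be enumerated programmatically:

```python
from itertools import product

# Enumerate the pts/redis-1.4.0 option combinations listed above:
# five Redis commands, each at 1000, 500, and 50 parallel connections.
tests = ["lpush", "sadd", "lpop", "set", "get"]
connections = [1000, 500, 50]
commands = [f"pts/redis-1.4.0 -t {t} -c {c}" for t, c in product(tests, connections)]
print(len(commands))  # 15 entries, one per Redis test above
```

The variable names are arbitrary; only the `-t`/`-c` option strings come from the listing above.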
pts/dragonflydb-1.1.0 -c CLIENTS --ratio=RATIO
  Clients Per Thread: CLIENTS - Set To Get Ratio: RATIO
  (CLIENTS, RATIO) in: 100/5:1, 100/1:5, 100/1:1, 10/1:10, 60/5:1, 60/1:5, 60/1:1, 50/5:1, 50/1:5, 50/1:1, 20/5:1, 20/1:5, 20/1:1, 10/5:1, 10/1:5, 10/1:1

pts/apache-iotdb-1.2.0 DEVICES BATCH SENSORS CLIENTS
  Device Count: DEVICES - Batch Size Per Write: BATCH - Sensor Count: SENSORS - Client Number: CLIENTS
  All combinations of device count {800, 500, 200, 100}, batch size per write {100, 1}, sensor count {800, 500, 200}, and client number {400, 100} (48 tests)

pts/cockroach-1.0.2 kv --ramp 10s --read-percent READS --concurrency CONCURRENCY
  Workload: KV, READS% Reads - Concurrency: CONCURRENCY
  All combinations of read percent {95, 60, 50, 10} and concurrency {1024, 512, 256, 128} (16 tests)

pts/cockroach-1.0.2 movr --concurrency CONCURRENCY
  Workload: MoVR - Concurrency: CONCURRENCY, for CONCURRENCY in {1024, 512, 256, 128}

pts/spark-1.0.1 -r ROWS -p PARTITIONS
  Row Count: ROWS - Partitions: PARTITIONS
  All combinations of row count {40000000, 20000000, 10000000, 1000000} and partitions {2000, 1000, 500, 100} (16 tests)

pts/etcd-1.0.0 range KEY --total=4000000 --conns CONNS --clients CLIENTS
  Test: RANGE - Connections: CONNS - Clients: CLIENTS
  All combinations of connections {500, 100, 50} and clients {1000, 100} (6 tests)

pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns CONNS --clients CLIENTS
  Test: PUT - Connections: CONNS - Clients: CLIENTS
  All combinations of connections {500, 100, 50} and clients {1000, 100} (6 tests)

pts/clickhouse-1.2.0
pts/node-web-tooling-1.0.1
pts/openssl-3.1.0 -evp chacha20-poly1305  Algorithm: ChaCha20-Poly1305
pts/openssl-3.1.0 -evp aes-256-gcm  Algorithm: AES-256-GCM
pts/openssl-3.1.0 -evp aes-128-gcm  Algorithm: AES-128-GCM
pts/openssl-3.1.0 -evp chacha20  Algorithm: ChaCha20
pts/openssl-3.1.0 rsa4096  Algorithm: RSA4096
pts/openssl-3.1.0 sha512  Algorithm: SHA512
pts/openssl-3.1.0 sha256  Algorithm: SHA256
pts/perl-benchmark-1.0.1 benchmarks/startup/noprog.b  Test: Interpreter
pts/perl-benchmark-1.0.1 benchmarks/app/podhtml.b  Test: Pod2html
pts/ebizzy-1.0.4
pts/node-express-loadtest-1.0.1
pts/simdjson-2.0.1 distinct_user_id  Throughput Test: DistinctUserID
pts/simdjson-2.0.1 partial_tweets  Throughput Test: PartialTweets
pts/simdjson-2.0.1 large_random  Throughput Test: LargeRandom
pts/simdjson-2.0.1 top_tweet  Throughput Test: TopTweet
pts/simdjson-2.0.1 kostya  Throughput Test: Kostya
pts/blogbench-1.1.0 WRITE  Test: Write
pts/blogbench-1.1.0 READ  Test: Read
pts/sqlite-2.1.0 1  Threads / Copies: 1
pts/leveldb-1.1.0 --benchmarks=fillseq --num=500000  Benchmark: Sequential Fill
pts/leveldb-1.1.0 --benchmarks=deleterandom --num=500000  Benchmark: Random Delete
pts/leveldb-1.1.0 --benchmarks=seekrandom --num=1000000  Benchmark: Seek Random
pts/leveldb-1.1.0 --benchmarks=readrandom --num=1000000  Benchmark: Random Read
pts/leveldb-1.1.0 --benchmarks=fillrandom --num=100000  Benchmark: Random Fill
pts/leveldb-1.1.0 --benchmarks=overwrite --num=100000  Benchmark: Overwrite
pts/leveldb-1.1.0 --benchmarks=fillsync --num=1000000  Benchmark: Fill Sync
pts/leveldb-1.1.0 --benchmarks=readhot --num=1000000  Benchmark: Hot Read
pts/brl-cad-1.5.0
pts/git-1.1.0
pts/sysbench-1.1.0 cpu run  Test: CPU
pts/sysbench-1.1.0 memory run  Test: Memory
pts/blender-4.0.0 -b ../barbershop_interior_gpu.blend -o output.test -x 1 -F JPEG -f 1 NONE  Blend File: Barbershop - Compute: CPU-Only
pts/blender-4.0.0 -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 NONE  Blend File: Classroom - Compute: CPU-Only
pts/blender-4.0.0 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 NONE  Blend File: BMW27 - Compute: CPU-Only
pts/swet-1.0.0
pts/himeno-1.3.0
pts/x265-1.3.0 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  Video Input: Bosphorus 1080p
pts/x265-1.3.0 Bosphorus_3840x2160.y4m  Video Input: Bosphorus 4K
pts/rodinia-1.3.2 OMP_CFD  Test: OpenMP CFD Solver
pts/rodinia-1.3.2 OMP_LAVAMD  Test: OpenMP LavaMD
pts/parboil-1.2.1 cutcp omp_base large  Test: OpenMP CUTCP
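The Apache IoTDB entries above are a full factorial sweep of four argument positions. A brief sketch (illustrative only; the variable names are not part of the test profile) reproduces the argument combinations and their count:

```python
from itertools import product

# The pts/apache-iotdb-1.2.0 arguments above sweep every combination of
# device count, batch size per write, sensor count, and client number.
device_counts = [800, 500, 200, 100]
batch_sizes = [100, 1]
sensor_counts = [800, 500, 200]
client_numbers = [400, 100]
runs = [f"pts/apache-iotdb-1.2.0 {d} {b} {s} {c}"
        for d, b, s, c in product(device_counts, batch_sizes, sensor_counts, client_numbers)]
print(len(runs))  # 4 * 2 * 3 * 2 = 48 test runs
```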