ovh nvme kevin kvm writethrough discard Ubuntu 18.04

ovh nvme KVM writethrough BTRFS/ZFS/EXT4/LVM-THIN versus LXC on BTRFS Ubuntu 18.04

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007127-EURO-200712128
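
For anyone reproducing this locally, a minimal sketch of the full workflow on Ubuntu, assuming the phoronix-test-suite package from the standard repositories (installing the upstream .deb or a Git checkout works the same way):

    # Install the Phoronix Test Suite and its PHP CLI dependencies.
    sudo apt-get update
    sudo apt-get install -y phoronix-test-suite

    # Fetch this public result file, install the same test profiles,
    # and run them locally for a side-by-side comparison.
    phoronix-test-suite benchmark 2007127-EURO-200712128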

Tests in this comparison by category:

C/C++ Compiler Tests: 2
CPU Massive: 3
Database Test Suite: 7
Disk Test Suite: 6
Java Tests: 2
Common Kernel Benchmarks: 4
Multi-Core: 2
Server: 7

Run Management

Result Identifier                                                    Date Run       Test Duration
ovh nvme kevin kvm on BTRFS writethrough discard Ubuntu 18.04        May 27 2020    18 Hours, 29 Minutes
ovh nvme LXC on btrfs Ubuntu 18.04                                   May 28 2020    18 Hours, 46 Minutes
ovh nvme KVM on ZFS writethrough discard Ubuntu 18.04                May 29 2020    1 Day, 10 Hours, 48 Minutes
ovh nvme KVM on EXT4 writethrough discard Ubuntu 18.04               May 31 2020    12 Hours, 54 Minutes
ovh nvme KVM Proxmox on LVM-thin writethrough discard Ubuntu 18.04   July 12 2020   14 Hours, 37 Minutes
Average                                                                             19 Hours, 55 Minutes
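
The exact QEMU/Proxmox invocations behind these runs are not recorded in this file. As an illustrative sketch only, the "writethrough discard" settings named in the run identifiers map onto QEMU/KVM drive options along these lines (the image path, bus, memory, and CPU counts below are placeholder assumptions):

    # cache=writethrough: reads use the host page cache, but writes are
    # reported complete only once they reach the storage device.
    # discard=unmap: guest TRIM/discard requests are passed down to the
    # backing image or thin volume so freed space can be reclaimed.
    qemu-system-x86_64 \
      -enable-kvm -m 4096 -smp 4 \
      -drive file=/var/lib/images/ubuntu1804.qcow2,if=virtio,format=qcow2,cache=writethrough,discard=unmap

    # The Proxmox/LVM-thin run would express the same pair of options in
    # the VM config, e.g.: scsi0: local-lvm:vm-100-disk-0,cache=writethrough,discard=on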

ovh nvme kevin kvm writethrough discard Ubuntu 18.04 Suite 1.0.0
System test suite extracted from ovh nvme kevin kvm writethrough discard Ubuntu 18.04:

pts/hbase-1.0.1 increment 16 :: Test: Increment - Clients: 16
pts/pgbench-1.9.1 ON_DISK SINGLE_THREAD READ_ONLY :: Scaling: On-Disk - Test: Single Thread - Mode: Read Only
pts/pgbench-1.9.1 ON_DISK HEAVY_CONTENTION READ_ONLY :: Scaling: On-Disk - Test: Heavy Contention - Mode: Read Only
pts/pgbench-1.9.1 ON_DISK NORMAL_LOAD READ_WRITE :: Scaling: On-Disk - Test: Normal Load - Mode: Read Write
pts/pgbench-1.9.1 ON_DISK SINGLE_THREAD READ_WRITE :: Scaling: On-Disk - Test: Single Thread - Mode: Read Write
pts/pgbench-1.9.1 ON_DISK NORMAL_LOAD READ_ONLY :: Scaling: On-Disk - Test: Normal Load - Mode: Read Only
pts/pgbench-1.9.1 ON_DISK HEAVY_CONTENTION READ_WRITE :: Scaling: On-Disk - Test: Heavy Contention - Mode: Read Write
pts/dbench-1.0.0 12 :: 12 Clients
pts/pgbench-1.9.1 MOSTLY_CACHE HEAVY_CONTENTION READ_WRITE :: Scaling: Mostly RAM - Test: Heavy Contention - Mode: Read Write
pts/dbench-1.0.0 1 :: 1 Clients
pts/pgbench-1.9.1 MOSTLY_CACHE SINGLE_THREAD READ_WRITE :: Scaling: Mostly RAM - Test: Single Thread - Mode: Read Write
pts/pgbench-1.9.1 MOSTLY_CACHE NORMAL_LOAD READ_WRITE :: Scaling: Mostly RAM - Test: Normal Load - Mode: Read Write
pts/cassandra-1.0.3 WRITE :: Test: Writes
pts/pgbench-1.9.1 MOSTLY_CACHE HEAVY_CONTENTION READ_ONLY :: Scaling: Mostly RAM - Test: Heavy Contention - Mode: Read Only
pts/pgbench-1.9.1 MOSTLY_CACHE NORMAL_LOAD READ_ONLY :: Scaling: Mostly RAM - Test: Normal Load - Mode: Read Only
pts/pgbench-1.9.1 MOSTLY_CACHE SINGLE_THREAD READ_ONLY :: Scaling: Mostly RAM - Test: Single Thread - Mode: Read Only
pts/hbase-1.0.1 increment 4 :: Test: Increment - Clients: 4
pts/compilebench-1.0.2 COMPILE :: Test: Compile
pts/hbase-1.0.1 randomRead 4 :: Test: Random Read - Clients: 4
pts/fs-mark-1.0.2 -s 1048576 -n 5000 -t 4 :: Test: 5000 Files, 1MB Size, 4 Threads
pts/hbase-1.0.1 randomRead 16 :: Test: Random Read - Clients: 16
pts/pgbench-1.9.1 BUFFER_TEST NORMAL_LOAD READ_WRITE :: Scaling: Buffer Test - Test: Normal Load - Mode: Read Write
pts/sqlite-speedtest-1.0.0 :: Timed Time - Size 1,000
pts/pgbench-1.9.1 BUFFER_TEST SINGLE_THREAD READ_WRITE :: Scaling: Buffer Test - Test: Single Thread - Mode: Read Write
pts/pgbench-1.9.1 BUFFER_TEST HEAVY_CONTENTION READ_WRITE :: Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Write
pts/fio-1.13.2 randread libaio 0 1 4k :: Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/fio-1.13.2 write libaio 0 1 2m :: Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/rocksdb-1.0.2 --benchmarks="fillsync" :: Test: Random Fill Sync
pts/rocksdb-1.0.2 --benchmarks="readwhilewriting" :: Test: Read While Writing
pts/fio-1.13.2 randwrite libaio 0 1 2m :: Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/fio-1.13.2 read libaio 0 1 4k :: Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/fio-1.13.2 write libaio 0 1 4k :: Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/hbase-1.0.1 randomRead 1 :: Test: Random Read - Clients: 1
pts/sqlite-2.1.0 1 :: Threads / Copies: 1
pts/sqlite-2.1.0 :: Timed SQLite Insertions
pts/hbase-1.0.1 randomWrite 4 :: Test: Random Write - Clients: 4
pts/pgbench-1.9.1 BUFFER_TEST HEAVY_CONTENTION READ_ONLY :: Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Only
pts/pgbench-1.9.1 BUFFER_TEST NORMAL_LOAD READ_ONLY :: Scaling: Buffer Test - Test: Normal Load - Mode: Read Only
pts/pgbench-1.9.1 BUFFER_TEST SINGLE_THREAD READ_ONLY :: Scaling: Buffer Test - Test: Single Thread - Mode: Read Only
pts/rocksdb-1.0.2 --benchmarks="fillrandom" :: Test: Random Fill
pts/rocksdb-1.0.2 --benchmarks="readrandom" :: Test: Random Read
pts/hbase-1.0.1 increment 1 :: Test: Increment - Clients: 1
pts/postmark-1.1.2 :: Disk Transaction Performance
pts/fio-1.13.2 randwrite libaio 0 1 4k :: Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/fio-1.13.2 randread libaio 0 1 2m :: Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/fio-1.13.2 read libaio 0 1 2m :: Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/hbase-1.0.1 randomWrite 1 :: Test: Random Write - Clients: 1
pts/redis-1.2.0 -t sadd :: Test: SADD
pts/fs-mark-1.0.2 -s 1048576 -n 4000 -D 32 :: Test: 4000 Files, 32 Sub Dirs, 1MB Size
pts/redis-1.2.0 -t set :: Test: SET
pts/redis-1.2.0 -t get :: Test: GET
pts/redis-1.2.0 -t lpop :: Test: LPOP
pts/redis-1.2.0 -t lpush :: Test: LPUSH
pts/fs-mark-1.0.2 -s 1048576 -n 1000 :: Test: 1000 Files, 1MB Size
pts/fs-mark-1.0.2 -s 1048576 -n 1000 -S 0 :: Test: 1000 Files, 1MB Size, No Sync/FSync
pts/rocksdb-1.0.2 --benchmarks="fillseq" :: Test: Sequential Fill
pts/compilebench-1.0.2 READ_COMPILED_TREE :: Test: Read Compiled Tree
pts/compilebench-1.0.2 INITIAL_CREATE :: Test: Initial Create
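
To give a sense of what the pts/fio-1.13.2 entries above actually exercise, here is a roughly equivalent standalone fio job for the 4KB random-read case (unbuffered, direct I/O, Linux AIO). The queue depth, file size, and runtime are assumptions, since the PTS profile's exact values are not shown in this file:

    # Approximates "pts/fio-1.13.2 randread libaio 0 1 4k" against a
    # chosen directory; point --directory at the filesystem under test
    # (BTRFS, ZFS, EXT4, or a mounted LVM-thin volume).
    fio --name=randread-4k \
        --rw=randread --ioengine=libaio \
        --buffered=0 --direct=1 --bs=4k \
        --size=1g --runtime=60 --time_based \
        --iodepth=32 --directory=/mnt/test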