pve.virt.nvm1.fs.h3.base-2.run-1

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009054-DROP-PVEVIRT49
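A minimal shell sketch of how that comparison is typically started, assuming the phoronix-test-suite client is already installed on the system you want to compare; the result ID is the one quoted above, and everything else is left at PTS defaults:

    # Optionally confirm what hardware/software PTS detects locally.
    phoronix-test-suite system-info

    # Download this public result file, run the same test profiles locally,
    # and merge the local numbers into the comparison.
    phoronix-test-suite benchmark 2009054-DROP-PVEVIRT49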
Run Management

Result Identifier: pve.virt.nvm1.fs.h3.base-2.run-1
Test Date: September 04 2020
Test Run Duration: 7 Hours, 24 Minutes


pve.virt.nvm1.fs.h3.base-2.run-1 - System Details (Phoronix Test Suite / OpenBenchmarking.org)

Processor: Common KVM (4 Cores)
Motherboard: QEMU Standard PC (i440FX + PIIX 1996) (rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org BIOS)
Memory: 1 x 12288 MB RAM QEMU
Disk: 107GB QEMU HDD
OS: Debian 10
Kernel: 4.19.0-10-cloud-amd64 (x86_64)
Compiler: GCC 8.3.0
File-System: ext4
System Layer: KVM

Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw
Processor Notes: CPU Microcode: 0x1
Security Notes: itlb_multihit: KVM: Vulnerable + l1tf: Mitigation of PTE Inversion + mds: Vulnerable; SMT Host state unknown + meltdown: Vulnerable + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected

[Overview graph omitted: composite chart of all 74 pve.virt.nvm1.fs.h3.base-2.run-1 results across Flexible IO Tester, FS-Mark, BlogBench, Threaded I/O Tester, PostMark, and OpenSSL; the individual results are listed below.]

Flexible IO Tester

Fio is an advanced disk benchmark that depends upon the kernel's AIO access library. Learn more via the OpenBenchmarking.org test page.
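As an illustration, a roughly equivalent stand-alone fio job for one of the configurations reported here (random read, Linux AIO engine, unbuffered, direct I/O, 4KB blocks) could look like the sketch below. These are standard fio command-line options, not the exact Phoronix test-profile arguments; the size, runtime, iodepth, and target directory are illustrative assumptions:

    # Approximate stand-alone equivalent of:
    # Type: Random Read - Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB
    fio --name=randread-4k \
        --ioengine=libaio \
        --direct=1 --buffered=0 \
        --rw=randread --bs=4k \
        --size=1g --runtime=30 --time_based \
        --iodepth=32 \
        --directory=/path/to/test/dir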

Flexible IO Tester 3.18 results for pve.virt.nvm1.fs.h3.base-2.run-1 (higher is better). All runs used Engine: Linux AIO, Buffered: No, Direct: Yes, Disk Target: Default Test Directory. Fio was built with: (CC) gcc options: -rdynamic -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native -lrt -laio -lz -lpthread -lm -ldl

Random Read - Block Size: 1MB: 13667 IOPS (SE +/- 33.33, N = 3)
Random Read - Block Size: 4KB: 830 MB/s (SE +/- 3.53, N = 3)
Random Read - Block Size: 4KB: 212667 IOPS (SE +/- 881.92, N = 3)
Random Read - Block Size: 64KB: 180000 IOPS (SE +/- 2000.00, N = 3)
Random Write - Block Size: 1MB: 4591 MB/s (SE +/- 6.06, N = 3)
Random Write - Block Size: 1MB: 4588 IOPS (SE +/- 6.06, N = 3)
Random Write - Block Size: 4KB: 774 MB/s (SE +/- 2.00, N = 3)
Random Write - Block Size: 4KB: 198333 IOPS (SE +/- 333.33, N = 3)
Random Write - Block Size: 64KB: 5123 MB/s (SE +/- 70.51, N = 3)
Random Write - Block Size: 64KB: 81967 IOPS (SE +/- 1109.55, N = 3)
Sequential Read - Block Size: 1MB: 13600 IOPS
Sequential Read - Block Size: 4KB: 818 MB/s
Sequential Read - Block Size: 4KB: 209333 IOPS (SE +/- 333.33, N = 3)
Sequential Read - Block Size: 64KB: 182333 IOPS (SE +/- 2323.11, N = 15)
Sequential Write - Block Size: 1MB: 4514 MB/s (SE +/- 40.51, N = 3)
Sequential Write - Block Size: 1MB: 4510 IOPS (SE +/- 40.51, N = 3)
Sequential Write - Block Size: 4KB: 795 MB/s (SE +/- 1.00, N = 3)
Sequential Write - Block Size: 4KB: 203667 IOPS (SE +/- 333.33, N = 3)
Sequential Write - Block Size: 64KB: 5050 MB/s (SE +/- 67.62, N = 3)
Sequential Write - Block Size: 64KB: 80800 IOPS (SE +/- 1078.58, N = 3)

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.
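For reference, roughly equivalent stand-alone fs_mark invocations for the two configurations below might look like the following. The flag spellings (-d target directory, -n file count, -s file size in bytes, -D sub-directory count, -S 0 for no sync/fsync) follow common fs_mark usage and are assumptions, not the exact Phoronix test-profile arguments:

    # 4000 files, 32 sub-directories, 1MB file size
    fs_mark -d /path/to/test/dir -n 4000 -D 32 -s 1048576
    # 1000 files, 1MB file size, sync/fsync disabled
    fs_mark -d /path/to/test/dir -n 1000 -s 1048576 -S 0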

FS-Mark 3.3 results (Files/s, higher is better). Built with: (CC) gcc options: -static

Test: 4000 Files, 32 Sub Dirs, 1MB Size: 181.0 Files/s (SE +/- 2.31, N = 3)
Test: 1000 Files, 1MB Size, No Sync/FSync: 2324.5 Files/s (SE +/- 16.77, N = 14)

BlogBench

BlogBench is designed to replicate the load of a real-world busy file server by stressing the file-system with multiple threads of random reads, writes, and rewrites. It mimics the behavior of a blog site by creating blogs with content and pictures, modifying blog posts, adding comments to these blogs, and then reading the content of the blogs. All of the generated blogs are created locally with fake content and pictures. Learn more via the OpenBenchmarking.org test page.
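A hedged example of running BlogBench directly against a scratch directory; -d (target directory) is the commonly documented option, any other tuning is left at the tool's defaults, and the path is a placeholder:

    # Run BlogBench's mixed reader/writer workload against a test directory
    # and report the final read and write scores.
    blogbench -d /path/to/test/dir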

BlogBench 1.1 results (Final Score, higher is better). Built with: (CC) gcc options: -O2 -pthread

Test: Read: 950836 (SE +/- 3413.50, N = 3)
Test: Write: 9469 (SE +/- 92.42, N = 3)

Threaded I/O Tester

Tiotester (Threaded I/O Tester) benchmarks hard disk drive and file-system performance. Learn more via the OpenBenchmarking.org test page.
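A sketch of a comparable stand-alone tiotest run for one of the configurations below (64MB per thread, 8 threads). The flags shown (-t thread count, -f file size in MB per thread, -d target directory) are assumptions based on commonly documented tiotest usage and may differ from the exact Phoronix test-profile arguments; check tiotest's own help output before relying on them:

    # 8 threads, 64MB written/read per thread, against a scratch directory
    tiotest -t 8 -f 64 -d /path/to/test/dir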

Threaded I/O Tester 20170503 results (MB/s, higher is better). Built with: (CC) gcc options: -O2

Read, 32MB per thread, 4 threads: 16778.62 (SE +/- 67.30, N = 3)
Read, 32MB per thread, 8 threads: 12776.50 (SE +/- 327.14, N = 12)
Read, 64MB per thread, 4 threads: 16741.75 (SE +/- 780.65, N = 14)
Read, 64MB per thread, 8 threads: 15204.29 (SE +/- 323.33, N = 15)
Read, 128MB per thread, 4 threads: 18961.74 (SE +/- 41.92, N = 3)
Read, 128MB per thread, 8 threads: 15206.83 (SE +/- 195.29, N = 3)
Read, 256MB per thread, 4 threads: 16405.50 (SE +/- 745.63, N = 15)
Read, 256MB per thread, 8 threads: 17779.49 (SE +/- 373.37, N = 15)
Read, 32MB per thread, 16 threads: 15323.66 (SE +/- 405.90, N = 12)
Read, 64MB per thread, 16 threads: 17009.15 (SE +/- 289.52, N = 12)
Write, 32MB per thread, 4 threads: 9.644 (SE +/- 0.079, N = 3)
Write, 32MB per thread, 8 threads: 19.18 (SE +/- 0.02, N = 3)
Write, 64MB per thread, 4 threads: 9.582 (SE +/- 0.032, N = 3)
Write, 64MB per thread, 8 threads: 18.62 (SE +/- 0.04, N = 3)
Read, 128MB per thread, 16 threads: 18158.24 (SE +/- 173.65, N = 15)
Read, 256MB per thread, 16 threads: 19233.19 (SE +/- 282.30, N = 3)
Write, 128MB per thread, 4 threads: 9.381 (SE +/- 0.057, N = 3)
Write, 128MB per thread, 8 threads: 17.85 (SE +/- 0.05, N = 3)
Write, 256MB per thread, 4 threads: 9.239 (SE +/- 0.042, N = 3)
Write, 256MB per thread, 8 threads: 17.90 (SE +/- 0.17, N = 3)
Write, 32MB per thread, 16 threads: 36.52 (SE +/- 0.18, N = 3)
Write, 64MB per thread, 16 threads: 33.37 (SE +/- 0.10, N = 3)
Write, 128MB per thread, 16 threads: 32.56 (SE +/- 0.23, N = 3)
Write, 256MB per thread, 16 threads: 31.44 (SE +/- 0.06, N = 3)
Random Read, 32MB per thread, 4 threads: 59271.62 (SE +/- 691.87, N = 6)
Random Read, 32MB per thread, 8 threads: 71781.34 (SE +/- 3080.84, N = 15)
Random Read, 64MB per thread, 4 threads: 112357.48 (SE +/- 3266.39, N = 15)
Random Read, 64MB per thread, 8 threads: 129475.74 (SE +/- 8843.74, N = 13)
Random Read, 128MB per thread, 4 threads: 190613.58 (SE +/- 18003.39, N = 12)
Random Read, 128MB per thread, 8 threads: 222996.41 (SE +/- 19845.82, N = 12)
Random Read, 256MB per thread, 4 threads: 396046.27 (SE +/- 29318.65, N = 12)
Random Read, 256MB per thread, 8 threads: 502967.78 (SE +/- 23990.32, N = 12)
Random Read, 32MB per thread, 16 threads: 71555.00 (SE +/- 5025.49, N = 12)
Random Read, 64MB per thread, 16 threads: 146310.91 (SE +/- 6444.05, N = 12)
Random Write, 32MB per thread, 4 threads: 474.98 (SE +/- 78.34, N = 15)
Random Write, 32MB per thread, 8 threads: 574.81 (SE +/- 83.24, N = 12)
Random Write, 64MB per thread, 4 threads: 978.13 (SE +/- 148.12, N = 15)
Random Write, 64MB per thread, 8 threads: 917.74 (SE +/- 261.62, N = 15)
Random Read, 128MB per thread, 16 threads: 236542.48 (SE +/- 2283.79, N = 3)
Random Read, 256MB per thread, 16 threads: 455219.76 (SE +/- 37099.84, N = 9)
Random Write, 128MB per thread, 4 threads: 1875.51 (SE +/- 298.30, N = 15)
Random Write, 128MB per thread, 8 threads: 1857.59 (SE +/- 224.47, N = 15)
Random Write, 256MB per thread, 4 threads: 5966.03 (SE +/- 2130.23, N = 12)
Random Write, 256MB per thread, 8 threads: 16053.16 (SE +/- 159.31, N = 3)
Random Write, 32MB per thread, 16 threads: 637.76 (SE +/- 136.26, N = 15)
Random Write, 64MB per thread, 16 threads: 1012.10 (SE +/- 20.12, N = 15)
Random Write, 128MB per thread, 16 threads: 2159.95 (SE +/- 32.64, N = 3)
Random Write, 256MB per thread, 16 threads: 4720.16 (SE +/- 308.12, N = 10)

PostMark

This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
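PostMark is driven by simple "set ..." and "run" commands, so the configuration described above can be sketched roughly as below. The byte values and the location path are assumptions that mirror the stated parameters (500 files, 25,000 transactions, file sizes between 5KB and 512KB); the literal Phoronix test profile may pass them differently:

    # Feed a PostMark configuration on stdin and run the transaction phase.
    postmark <<'EOF'
    set number 500
    set transactions 25000
    set size 5120 524288
    set location /path/to/test/dir
    run
    quit
    EOF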

PostMark 1.51 result (TPS, higher is better). Built with: (CC) gcc options: -O3

Disk Transaction Performance: 6000 TPS (SE +/- 48.33, N = 3)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.
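The measurement corresponds to OpenSSL's built-in speed benchmark. A minimal sketch of reproducing the RSA 4096-bit signing figure on a comparable system is shown below; openssl speed is a standard subcommand, though the Phoronix wrapper may pass additional options, and the -multi value is an assumption matching the four vCPUs of this VM:

    # Measure RSA 4096-bit sign/verify throughput with the installed OpenSSL
    openssl speed rsa4096
    # Run the same measurement across 4 parallel processes
    openssl speed -multi 4 rsa4096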

OpenSSL 1.1.1 result (Signs Per Second, higher is better). Built with: (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

RSA 4096-bit Performance: 664.8 Signs Per Second (SE +/- 0.51, N = 3)

74 Results Shown

Flexible IO Tester:
  Rand Read - Linux AIO - No - Yes - 1MB - Default Test Directory
  Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory
  Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory
  Rand Read - Linux AIO - No - Yes - 64KB - Default Test Directory
  Rand Write - Linux AIO - No - Yes - 1MB - Default Test Directory
  Rand Write - Linux AIO - No - Yes - 1MB - Default Test Directory
  Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory
  Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory
  Rand Write - Linux AIO - No - Yes - 64KB - Default Test Directory
  Rand Write - Linux AIO - No - Yes - 64KB - Default Test Directory
  Seq Read - Linux AIO - No - Yes - 1MB - Default Test Directory
  Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory
  Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory
  Seq Read - Linux AIO - No - Yes - 64KB - Default Test Directory
  Seq Write - Linux AIO - No - Yes - 1MB - Default Test Directory
  Seq Write - Linux AIO - No - Yes - 1MB - Default Test Directory
  Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory
  Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory
  Seq Write - Linux AIO - No - Yes - 64KB - Default Test Directory
  Seq Write - Linux AIO - No - Yes - 64KB - Default Test Directory
FS-Mark:
  4000 Files, 32 Sub Dirs, 1MB Size
  1000 Files, 1MB Size, No Sync/FSync
BlogBench:
  Read
  Write
Threaded I/O Tester:
  Read - 32MB - 4
  Read - 32MB - 8
  Read - 64MB - 4
  Read - 64MB - 8
  Read - 128MB - 4
  Read - 128MB - 8
  Read - 256MB - 4
  Read - 256MB - 8
  Read - 32MB - 16
  Read - 64MB - 16
  Write - 32MB - 4
  Write - 32MB - 8
  Write - 64MB - 4
  Write - 64MB - 8
  Read - 128MB - 16
  Read - 256MB - 16
  Write - 128MB - 4
  Write - 128MB - 8
  Write - 256MB - 4
  Write - 256MB - 8
  Write - 32MB - 16
  Write - 64MB - 16
  Write - 128MB - 16
  Write - 256MB - 16
  Rand Read - 32MB - 4
  Rand Read - 32MB - 8
  Rand Read - 64MB - 4
  Rand Read - 64MB - 8
  Rand Read - 128MB - 4
  Rand Read - 128MB - 8
  Rand Read - 256MB - 4
  Rand Read - 256MB - 8
  Rand Read - 32MB - 16
  Rand Read - 64MB - 16
  Rand Write - 32MB - 4
  Rand Write - 32MB - 8
  Rand Write - 64MB - 4
  Rand Write - 64MB - 8
  Rand Read - 128MB - 16
  Rand Read - 256MB - 16
  Rand Write - 128MB - 4
  Rand Write - 128MB - 8
  Rand Write - 256MB - 4
  Rand Write - 256MB - 8
  Rand Write - 32MB - 16
  Rand Write - 64MB - 16
  Rand Write - 128MB - 16
  Rand Write - 256MB - 16
PostMark
OpenSSL