bfgtest3

2 x Intel Xeon E5620 testing with a Intel S5520HC and Matrox MGA G200e [Pilot] (SEP1) on Debian 6.0.3 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1201173-ASVA-120101892&grr&sor.

bfgtest3ProcessorMotherboardChipsetMemoryDiskGraphicsNetworkOSKernelDesktopDisplay ServerDisplay DriverCompilerFile-Systemcfq default #1cfq MD0 read_ahead 0 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 768 #1cfq MD0 rq_affinity 1 #1cfq MD0 rq_affinity 2 #1cfq quantum 16 #1cfq slice_idle 0 #1cfq slice_idle 64 #1cfq low_latency 50 #1as default #1as antic_expire 0 #1as write_batch_expire 1000 #1as antic_expire 50 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default #1noop rq_affinity 2 #1noop rq_affinity 0 #1noop default read_ahead 0 #1noop default read_ahead 16 #1noop default read_ahead 32 #1noop default read_ahead 64 #1noop default nr_requests 32 #1noop default nr_requests 512 #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm0 ws2 fb16 re500 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm1 ws2 fb16 re150 we5000 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline fm0 ws8 fb48 re150 we15000 #1deadline fm1 ws8 fb48 re150 we15000 #1as write_batch_expire 1000 #2as antic_expire 50 #2as ae 50 re 200 we 400 rbe 600 wbe 1000 #2as ae 1 re 20 we 40 rbe 60 wbe 100 #2noop default #2noop rq_affinity 2 #2noop rq_affinity 0 #2noop default read_ahead 0 #2noop default read_ahead 16 #2noop default read_ahead 32 #2noop default read_ahead 64 #2noop default nr_requests 32 #2noop default nr_requests 512 #2deadline default fm1 ws2 fb16 re500 we5000 #2deadline fm0 ws2 fb16 re500 we5000 #2deadline fm1 ws8 fb16 re500 we5000 #2deadline fm1 ws2 fb48 re500 we5000 #2deadline fm1 ws2 fb16 re150 we5000 #2deadline fm1 ws2 fb16 re500 we15000 #2deadline fm0 ws8 fb48 re150 we15000 #2deadline fm1 ws8 fb48 re150 we15000 #2cfq default #2cfq MD0 read_ahead 0 #2cfq MD0 read_ahead 128 #2cfq MD0 read_ahead 768 #2cfq MD0 rq_affinity 1 #2cfq MD0 rq_affinity 2 #2cfq quantum 16 #2cfq slice_idle 0 #2cfq slice_idle 64 #2cfq low_latency 50 #2as default #2as antic_expire 0 #22012-01-01 19:02cfq default #3cfq MD0 read_ahead 0 #3cfq MD0 read_ahead 128 #3cfq MD0 read_ahead 768 #3cfq MD0 rq_affinity 1 #3cfq MD0 rq_affinity 2 #3cfq quantum 16 #3cfq slice_idle 0 #3cfq slice_idle 64 #3cfq low_latency 50 #3as default #3as antic_expire 0 #3as write_batch_expire 1000 #3as antic_expire 50 #3as ae 50 re 200 we 400 rbe 600 wbe 1000 #3as ae 1 re 20 we 40 rbe 60 wbe 100 #3noop default #3noop rq_affinity 2 #3noop rq_affinity 0 #3noop default read_ahead 0 #3noop default read_ahead 16 #3noop default read_ahead 32 #3noop default read_ahead 64 #3noop default nr_requests 32 #3noop default nr_requests 512 #3deadline default fm1 ws2 fb16 re500 we5000 #3deadline fm0 ws2 fb16 re500 we5000 #3deadline fm1 ws8 fb16 re500 we5000 #3deadline fm1 ws2 fb48 re500 we5000 #3deadline fm1 ws2 fb16 re150 we5000 #3deadline fm1 ws2 fb16 re500 we15000 #3deadline fm0 ws8 fb48 re150 we15000 #3deadline fm1 ws8 fb48 re150 we15000 #3SSD_e4_swraid5122 x Intel Xeon E5620 @ 2.40GHz (16 Cores)Intel S5520HCIntel 5520 I/O + ICH10R24576MB320GB Western Digital WDC WD3200AAKX-0 + 5 x 1500GB Western Digital WDC WD15EARS-00Z + 1500GB Western Digital WDC WD15EARS-00M + 1500GB Western Digital WDC WD15EARS-22ZMatrox MGA G200e [Pilot] (SEP1)Intel 82575EB Gigabit ConnectionDebian 6.0.32.6.32-5-amd64 (x86_64)GNOME 2.30.2X Server 1.7.7matroxGCC 4.4.5ext42 x 60GB OCZ REVODRIVE + 320GB Western Digital WD3200AAKX-0 + 5 x 1500GB Western Digital WD15EARS-00Z + 1500GB Western Digital WD15EARS-00M + 1500GB Western Digital WD15EARS-22ZOpenBenchmarking.orgSystem Details- cfq default #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 read_ahead 0 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 read_ahead 128 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 read_ahead 768 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 rq_affinity 1 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 rq_affinity 2 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq quantum 16 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq slice_idle 0 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq slice_idle 64 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq low_latency 50 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as default #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as antic_expire 0 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as write_batch_expire 1000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as antic_expire 50 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as ae 50 re 200 we 400 rbe 600 wbe 1000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as ae 1 re 20 we 40 rbe 60 wbe 100 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop rq_affinity 2 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop rq_affinity 0 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 0 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 16 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 32 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 64 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default nr_requests 32 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default nr_requests 512 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline default fm1 ws2 fb16 re500 we5000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm0 ws2 fb16 re500 we5000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws8 fb16 re500 we5000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws2 fb48 re500 we5000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws2 fb16 re150 we5000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws2 fb16 re500 we15000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm0 ws8 fb48 re150 we15000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws8 fb48 re150 we15000 #1: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as write_batch_expire 1000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as antic_expire 50 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as ae 50 re 200 we 400 rbe 600 wbe 1000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as ae 1 re 20 we 40 rbe 60 wbe 100 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop rq_affinity 2 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop rq_affinity 0 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 0 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 16 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 32 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default read_ahead 64 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default nr_requests 32 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- noop default nr_requests 512 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline default fm1 ws2 fb16 re500 we5000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm0 ws2 fb16 re500 we5000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws8 fb16 re500 we5000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws2 fb48 re500 we5000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws2 fb16 re150 we5000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws2 fb16 re500 we15000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm0 ws8 fb48 re150 we15000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- deadline fm1 ws8 fb48 re150 we15000 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq default #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 read_ahead 0 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 read_ahead 128 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 read_ahead 768 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 rq_affinity 1 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq MD0 rq_affinity 2 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq quantum 16 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq slice_idle 0 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq slice_idle 64 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq low_latency 50 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as default #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- as antic_expire 0 #2: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- 2012-01-01 19:02: Disk Scheduler: CFQ. Intel SpeedStep was enabled.- cfq default #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq MD0 read_ahead 0 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq MD0 read_ahead 128 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq MD0 read_ahead 768 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq MD0 rq_affinity 1 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq MD0 rq_affinity 2 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq quantum 16 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq slice_idle 0 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq slice_idle 64 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- cfq low_latency 50 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- as default #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- as antic_expire 0 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- as write_batch_expire 1000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- as antic_expire 50 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- as ae 50 re 200 we 400 rbe 600 wbe 1000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- as ae 1 re 20 we 40 rbe 60 wbe 100 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop default #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop rq_affinity 2 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop rq_affinity 0 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop default read_ahead 0 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop default read_ahead 16 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop default read_ahead 32 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop default read_ahead 64 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop default nr_requests 32 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- noop default nr_requests 512 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline default fm1 ws2 fb16 re500 we5000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline fm0 ws2 fb16 re500 we5000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline fm1 ws8 fb16 re500 we5000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline fm1 ws2 fb48 re500 we5000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline fm1 ws2 fb16 re150 we5000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline fm1 ws2 fb16 re500 we15000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline fm0 ws8 fb48 re150 we15000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- deadline fm1 ws8 fb48 re150 we15000 #3: Disk Scheduler: CFQ. Python 2.6.6. Intel SpeedStep was enabled.- SSD_e4_swraid512: Disk Scheduler: ANTICIPATORY. Python 2.6.6.

bfgtest3compilebench: Read Compiled Treecompilebench: Compilecompilebench: Initial Createunpack-linux: linux-2.6.32.tar.bz2postmark: Disk Transaction Performancefio: Example Network Jobfio: Intel IOMeter File Server Access Patternfs-mark: 4000 Files, 32 Sub Dirs, 1MB Sizefs-mark: 5000 Files, 1MB Size, 4 Threadsfs-mark: 1000 Files, 1MB Size, No Sync/FSyncfs-mark: 1000 Files, 1MB Sizetiobench: Rand Read - 256MB - 8tiobench: Read - 256MB - 8tiobench: Rand Write - 256MB - 8tiobench: Write - 256MB - 8aio-stress: Rand Writetiobench: Rand Read - 4096MB - 8tiobench: Read - 4096MB - 8tiobench: Rand Write - 4096MB - 8tiobench: Write - 4096MB - 8cfq default #1cfq MD0 read_ahead 0 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 768 #1cfq MD0 rq_affinity 1 #1cfq MD0 rq_affinity 2 #1cfq quantum 16 #1cfq slice_idle 0 #1cfq slice_idle 64 #1cfq low_latency 50 #1as default #1as antic_expire 0 #1as write_batch_expire 1000 #1as antic_expire 50 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default #1noop rq_affinity 2 #1noop rq_affinity 0 #1noop default read_ahead 0 #1noop default read_ahead 16 #1noop default read_ahead 32 #1noop default read_ahead 64 #1noop default nr_requests 32 #1noop default nr_requests 512 #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm0 ws2 fb16 re500 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm1 ws2 fb16 re150 we5000 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline fm0 ws8 fb48 re150 we15000 #1deadline fm1 ws8 fb48 re150 we15000 #1as write_batch_expire 1000 #2as antic_expire 50 #2as ae 50 re 200 we 400 rbe 600 wbe 1000 #2as ae 1 re 20 we 40 rbe 60 wbe 100 #2noop default #2noop rq_affinity 2 #2noop rq_affinity 0 #2noop default read_ahead 0 #2noop default read_ahead 16 #2noop default read_ahead 32 #2noop default read_ahead 64 #2noop default nr_requests 32 #2noop default nr_requests 512 #2deadline default fm1 ws2 fb16 re500 we5000 #2deadline fm0 ws2 fb16 re500 we5000 #2deadline fm1 ws8 fb16 re500 we5000 #2deadline fm1 ws2 fb48 re500 we5000 #2deadline fm1 ws2 fb16 re150 we5000 #2deadline fm1 ws2 fb16 re500 we15000 #2deadline fm0 ws8 fb48 re150 we15000 #2deadline fm1 ws8 fb48 re150 we15000 #2cfq default #2cfq MD0 read_ahead 0 #2cfq MD0 read_ahead 128 #2cfq MD0 read_ahead 768 #2cfq MD0 rq_affinity 1 #2cfq MD0 rq_affinity 2 #2cfq quantum 16 #2cfq slice_idle 0 #2cfq slice_idle 64 #2cfq low_latency 50 #2as default #2as antic_expire 0 #22012-01-01 19:02cfq default #3cfq MD0 read_ahead 0 #3cfq MD0 read_ahead 128 #3cfq MD0 read_ahead 768 #3cfq MD0 rq_affinity 1 #3cfq MD0 rq_affinity 2 #3cfq quantum 16 #3cfq slice_idle 0 #3cfq slice_idle 64 #3cfq low_latency 50 #3as default #3as antic_expire 0 #3as write_batch_expire 1000 #3as antic_expire 50 #3as ae 50 re 200 we 400 rbe 600 wbe 1000 #3as ae 1 re 20 we 40 rbe 60 wbe 100 #3noop default #3noop rq_affinity 2 #3noop rq_affinity 0 #3noop default read_ahead 0 #3noop default read_ahead 16 #3noop default read_ahead 32 #3noop default read_ahead 64 #3noop default nr_requests 32 #3noop default nr_requests 512 #3deadline default fm1 ws2 fb16 re500 we5000 #3deadline fm0 ws2 fb16 re500 we5000 #3deadline fm1 ws8 fb16 re500 we5000 #3deadline fm1 ws2 fb48 re500 we5000 #3deadline fm1 ws2 fb16 re150 we5000 #3deadline fm1 ws2 fb16 re500 we15000 #3deadline fm0 ws8 fb48 re150 we15000 #3deadline fm1 ws8 fb48 re150 we15000 #3SSD_e4_swraid5121143.66727.99156.9114.3835719.613623.5422.2085.751525.6320.4716368.7921322.781.90173.512170.893.69235.840.94203.591003.62725.17157.0214.2135719.273350.7121.0785.921525.0721.9316493.7519748.671.91166.822158.214.2845.680.95202.921157.19724.16157.9414.4335718.913589.3921.0386.471525.3022.8016027.9221023.521.71168.162159.964.1862.210.91202.70982.91726.06157.9414.2235718.833569.0521.0786.371526.6719.4314970.6018760.391.50170.772147.184.31212.030.84200.751101.63725.12156.1214.5335719.853380.5020.9085.401534.4320.2015049.1622362.621.84171.962168.744.02256.790.85203.801127.84730.21157.2314.3840479.523378.7420.7785.301535.8319.6314942.3018385.301.52170.132155.254.22244.530.90203.311159.15729.12160.2514.5839288.813425.6321.2086.831537.3320.6717154.2420579.891.71170.482174.144.15238.931.00201.331180.04729.11156.8614.3941669.013374.5120.9085.701534.2021.8014725.8021477.451.63169.382170.024.10294.760.93204.381100.52729.78155.7614.2436319.193385.0220.8383.801536.3322.5015982.1422647.451.50172.042171.593.76229.640.93203.011057.54722.63158.0114.3740479.103364.0820.5084.681529.7022.1215112.2122641.841.61169.262165.914.08239.560.91203.231139.31747.63156.6214.3535719.222895.6025.4093.031503.5527.7515808.0720488.481.83184.622151.413.74289.760.89213.281139.09743.30157.1114.2039889.482860.4024.7089.951533.3727.9014945.8221378.911.84185.802166.014.01302.290.90211.271106.02758.06155.9114.6936119.992866.6227.2098.471525.4029.7416309.3220623.052.12187.132166.203.77285.980.91212.501135.17748.15154.7814.5136909.482884.4126.6095.771530.9725.2714390.7721918.522.11184.592175.562.40241.890.86214.621169.05753.20156.6714.3635718.532883.4126.6790.251530.4730.4316776.6419933.902.11182.642152.402.49260.450.92215.431210.16728.33157.2214.5236319.282864.7427.1389.121534.9033.5014128.9917933.812.00179.812174.453.33272.620.83206.411179.74765.01156.5414.2635718.803035.4926.0793.381530.9027.2515799.4718772.741.75162.062123.864.09305.940.82205.841130.76756.13159.9814.5136909.472917.6624.7395.551515.1028.4714253.6120801.191.78159.122157.134.06301.720.84208.971038.07760.18155.2514.1937509.762877.7025.7092.931527.5029.6015437.5422656.771.76157.502165.154.27355.290.84206.291115.09758.59157.7014.5735719.673024.8325.4389.931529.6326.5215642.9017597.011.49163.772167.454.14306.980.86207.951158.23754.81156.0714.5735719.672936.6324.9388.131533.9727.4315169.7021783.611.52161.612163.934.30304.490.86205.031138.32760.62155.0814.53357110.083046.2424.2389.031541.6730.2715395.7121788.291.37163.532169.274.29303.280.81204.151046.45757.14156.8214.5638099.683055.4224.2086.271525.8726.7314259.8722601.421.41160.692162.994.24306.780.84203.311203.13746.52155.3814.6235719.453120.5024.1785.131531.1028.3316505.3822373.741.35154.282173.054.25294.630.82201.311108.93758.16154.8214.2336319.752956.3724.7885.251528.3731.3317438.0122252.631.49162.452151.304.22298.120.84205.65964.24747.01156.3914.1035719.943028.6624.5386.301529.4725.8014295.5320093.451.42165.852167.653.94303.630.84206.53993.55760.29159.3114.1335719.712923.9724.6086.671534.2329.0015587.4121951.911.44163.502163.844.16294.510.83206.511176.66766.55156.2314.07357110.302909.1124.8787.581523.0324.8017150.4820246.851.43163.822171.083.99303.710.83206.731167.23739.62157.3914.3538699.403057.3924.1085.621533.7723.6815117.4320467.671.71162.922167.554.21299.150.84206.031156.22767.79156.6614.5135719.473069.7424.5785.471531.4026.6317152.5121085.151.50160.952170.943.96305.640.83204.461068.44756.71156.6914.47357110.652944.0524.2387.701536.0724.5713252.4220142.291.43163.562152.134.24311.110.83203.791159.40738.73155.2514.41380910.193054.3524.0386.831525.6025.9815307.9122732.551.35161.492166.414.02296.200.84205.741163.21747.36157.1314.3735719.852908.0724.8085.181534.9728.0016725.6320548.821.40162.812163.964.31297.520.85206.0614.31364514.273571823.03799.96154.6714.53357110.0244.69101.80167.371456.04104.8311200.5714109.8981.31293.051776.7880.94511.6780.10322.50OpenBenchmarking.org

Compile Bench

Test: Read Compiled Tree

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: Read Compiled Treeas ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default nr_requests 32 #1cfq slice_idle 0 #1noop default #1deadline fm1 ws8 fb16 re500 we5000 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm1 ws8 fb48 re150 we15000 #1deadline fm0 ws8 fb48 re150 we15000 #1cfq quantum 16 #1noop default read_ahead 16 #1cfq MD0 read_ahead 128 #1deadline fm1 ws2 fb16 re150 we5000 #1cfq default #1as default #1as antic_expire 0 #1noop default read_ahead 32 #1as antic_expire 50 #1noop rq_affinity 2 #1cfq MD0 rq_affinity 2 #1noop default read_ahead 0 #1noop default nr_requests 512 #1as write_batch_expire 1000 #1cfq MD0 rq_affinity 1 #1cfq slice_idle 64 #1deadline fm1 ws2 fb16 re500 we15000 #1cfq low_latency 50 #1noop default read_ahead 64 #1noop rq_affinity 0 #1cfq MD0 read_ahead 0 #1deadline fm0 ws2 fb16 re500 we5000 #1cfq MD0 read_ahead 768 #1deadline default fm1 ws2 fb16 re500 we5000 #1SSD_e4_swraid51230060090012001500SE +/- 12.37, N = 3SE +/- 6.60, N = 3SE +/- 33.48, N = 3SE +/- 27.72, N = 3SE +/- 33.07, N = 3SE +/- 22.23, N = 3SE +/- 12.33, N = 3SE +/- 31.09, N = 3SE +/- 18.90, N = 6SE +/- 29.68, N = 3SE +/- 22.24, N = 3SE +/- 32.13, N = 3SE +/- 15.99, N = 3SE +/- 16.74, N = 6SE +/- 36.38, N = 3SE +/- 35.53, N = 3SE +/- 21.04, N = 6SE +/- 29.25, N = 3SE +/- 26.97, N = 3SE +/- 30.44, N = 3SE +/- 2.68, N = 3SE +/- 4.01, N = 3SE +/- 6.91, N = 3SE +/- 6.92, N = 3SE +/- 12.32, N = 3SE +/- 65.21, N = 6SE +/- 94.70, N = 4SE +/- 147.28, N = 3SE +/- 139.11, N = 3SE +/- 152.67, N = 3SE +/- 151.65, N = 3SE +/- 110.20, N = 3SE +/- 88.50, N = 6SE +/- 164.81, N = 31210.161203.131180.041179.741176.661169.051167.231163.211159.401159.151158.231157.191156.221143.661139.311139.091138.321135.171130.761127.841115.091108.931106.021101.631100.521068.441057.541046.451038.071003.62993.55982.91964.24823.03

Compile Bench

Test: Compile

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: CompileSSD_e4_swraid512deadline fm1 ws2 fb16 re150 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1noop default #1noop default read_ahead 32 #1deadline fm0 ws2 fb16 re500 we5000 #1noop rq_affinity 0 #1noop default read_ahead 0 #1noop default nr_requests 512 #1as write_batch_expire 1000 #1noop default read_ahead 64 #1deadline fm1 ws2 fb16 re500 we15000 #1noop rq_affinity 2 #1noop default read_ahead 16 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as antic_expire 50 #1as default #1deadline fm1 ws8 fb48 re150 we15000 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default nr_requests 32 #1as antic_expire 0 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm0 ws8 fb48 re150 we15000 #1cfq MD0 rq_affinity 2 #1cfq slice_idle 64 #1cfq quantum 16 #1cfq slice_idle 0 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1cfq default #1cfq MD0 read_ahead 768 #1cfq MD0 read_ahead 0 #1cfq MD0 rq_affinity 1 #1cfq MD0 read_ahead 128 #1cfq low_latency 50 #12004006008001000SE +/- 15.00, N = 3SE +/- 9.01, N = 3SE +/- 5.59, N = 3SE +/- 3.28, N = 3SE +/- 4.72, N = 3SE +/- 3.14, N = 3SE +/- 3.22, N = 3SE +/- 7.02, N = 3SE +/- 1.23, N = 3SE +/- 5.74, N = 3SE +/- 4.18, N = 3SE +/- 4.73, N = 3SE +/- 5.93, N = 3SE +/- 2.50, N = 3SE +/- 11.67, N = 3SE +/- 8.58, N = 3SE +/- 7.17, N = 3SE +/- 9.10, N = 3SE +/- 9.66, N = 3SE +/- 9.67, N = 3SE +/- 7.61, N = 3SE +/- 6.64, N = 3SE +/- 10.39, N = 3SE +/- 1.58, N = 3SE +/- 1.09, N = 3SE +/- 2.46, N = 3SE +/- 1.18, N = 3SE +/- 1.49, N = 3SE +/- 0.83, N = 3SE +/- 2.07, N = 3SE +/- 3.52, N = 3SE +/- 4.68, N = 3SE +/- 2.34, N = 3SE +/- 3.02, N = 3799.96767.79766.55765.01760.62760.29760.18758.59758.16758.06757.14756.71756.13754.81753.20748.15747.63747.36747.01746.52743.30739.62738.73730.21729.78729.12729.11728.33727.99726.06725.17725.12724.16722.63

Compile Bench

Test: Initial Create

OpenBenchmarking.orgMB/s, More Is BetterCompile Bench 0.6Test: Initial Createcfq quantum 16 #1noop rq_affinity 2 #1deadline fm0 ws2 fb16 re500 we5000 #1cfq low_latency 50 #1cfq MD0 read_ahead 768 #1cfq MD0 read_ahead 128 #1noop default read_ahead 0 #1deadline fm1 ws2 fb48 re500 we5000 #1cfq MD0 rq_affinity 2 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1deadline fm1 ws8 fb48 re150 we15000 #1as antic_expire 0 #1cfq MD0 read_ahead 0 #1cfq default #1cfq slice_idle 0 #1noop default read_ahead 64 #1deadline fm1 ws2 fb16 re500 we15000 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1deadline fm1 ws2 fb16 re150 we5000 #1as default #1noop default #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1cfq MD0 rq_affinity 1 #1noop default read_ahead 16 #1as write_batch_expire 1000 #1cfq slice_idle 64 #1noop default nr_requests 32 #1deadline fm0 ws8 fb48 re150 we15000 #1noop rq_affinity 0 #1noop default read_ahead 32 #1noop default nr_requests 512 #1as antic_expire 50 #1SSD_e4_swraid5124080120160200SE +/- 2.59, N = 3SE +/- 3.22, N = 3SE +/- 0.12, N = 3SE +/- 2.55, N = 4SE +/- 2.70, N = 3SE +/- 1.86, N = 3SE +/- 2.24, N = 3SE +/- 2.36, N = 3SE +/- 2.31, N = 3SE +/- 2.10, N = 3SE +/- 2.42, N = 3SE +/- 1.64, N = 3SE +/- 1.99, N = 3SE +/- 2.95, N = 6SE +/- 0.31, N = 3SE +/- 2.15, N = 3SE +/- 2.57, N = 6SE +/- 2.21, N = 5SE +/- 2.41, N = 6SE +/- 2.78, N = 3SE +/- 1.55, N = 3SE +/- 2.80, N = 6SE +/- 1.95, N = 3SE +/- 2.33, N = 3SE +/- 2.23, N = 3SE +/- 0.22, N = 3SE +/- 2.12, N = 3SE +/- 2.85, N = 6SE +/- 3.35, N = 6SE +/- 1.97, N = 3SE +/- 3.06, N = 6SE +/- 1.87, N = 3SE +/- 1.94, N = 3SE +/- 1.07, N = 3160.25159.98159.31158.01157.94157.94157.70157.39157.23157.22157.13157.11157.02156.91156.86156.82156.69156.67156.66156.62156.54156.39156.23156.12156.07155.91155.76155.38155.25155.25155.08154.82154.78154.67

Unpacking The Linux Kernel

linux-2.6.32.tar.bz2

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernellinux-2.6.32.tar.bz2deadline fm1 ws8 fb16 re500 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm0 ws2 fb16 re500 we5000 #1noop rq_affinity 0 #1as antic_expire 0 #1cfq MD0 read_ahead 0 #1cfq MD0 read_ahead 768 #1noop default nr_requests 512 #1cfq slice_idle 64 #1noop default #12012-01-01 19:02as write_batch_expire 1000 #2deadline fm1 ws2 fb48 re500 we5000 #1as default #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1deadline fm1 ws8 fb48 re150 we15000 #1cfq low_latency 50 #1cfq default #1cfq MD0 rq_affinity 2 #1cfq slice_idle 0 #1deadline fm0 ws8 fb48 re150 we15000 #1cfq MD0 read_ahead 128 #1deadline fm1 ws2 fb16 re500 we15000 #1as antic_expire 50 #1noop rq_affinity 2 #1deadline fm1 ws2 fb16 re150 we5000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default read_ahead 32 #1cfq MD0 rq_affinity 1 #1SSD_e4_swraid512noop default read_ahead 64 #1noop default read_ahead 0 #1noop default read_ahead 16 #1cfq quantum 16 #1noop default nr_requests 32 #1as write_batch_expire 1000 #148121620SE +/- 0.16, N = 4SE +/- 0.13, N = 4SE +/- 0.10, N = 4SE +/- 0.18, N = 4SE +/- 0.06, N = 4SE +/- 0.11, N = 4SE +/- 0.14, N = 4SE +/- 0.20, N = 4SE +/- 0.12, N = 4SE +/- 0.17, N = 4SE +/- 0.10, N = 4SE +/- 0.15, N = 4SE +/- 0.22, N = 4SE +/- 0.16, N = 4SE +/- 0.12, N = 4SE +/- 0.11, N = 4SE +/- 0.22, N = 4SE +/- 0.05, N = 4SE +/- 0.07, N = 4SE +/- 0.06, N = 4SE +/- 0.04, N = 4SE +/- 0.07, N = 4SE +/- 0.02, N = 4SE +/- 0.02, N = 4SE +/- 0.08, N = 4SE +/- 0.13, N = 4SE +/- 0.09, N = 4SE +/- 0.05, N = 4SE +/- 0.14, N = 4SE +/- 0.06, N = 4SE +/- 0.21, N = 5SE +/- 0.02, N = 4SE +/- 0.10, N = 4SE +/- 0.11, N = 4SE +/- 0.11, N = 4SE +/- 0.06, N = 414.0714.1014.1314.1914.2014.2114.2214.2314.2414.2614.2714.3114.3514.3514.3614.3714.3714.3814.3814.3914.4114.4314.4714.5114.5114.5114.5214.5314.5314.5314.5614.5714.5714.5814.6214.69

PostMark

Disk Transaction Performance

OpenBenchmarking.orgTPS, More Is BetterPostMark 1.51Disk Transaction Performancecfq slice_idle 0 #1cfq low_latency 50 #1cfq MD0 rq_affinity 2 #1as antic_expire 0 #1cfq quantum 16 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm0 ws8 fb48 re150 we15000 #1noop default read_ahead 64 #1noop rq_affinity 0 #1noop rq_affinity 2 #1as antic_expire 50 #1as write_batch_expire 1000 #2cfq slice_idle 64 #1noop default nr_requests 512 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1as write_batch_expire 1000 #1SSD_e4_swraid5122012-01-01 19:02as default #1cfq MD0 rq_affinity 1 #1cfq MD0 read_ahead 768 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 0 #1cfq default #1deadline fm1 ws8 fb48 re150 we15000 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline fm1 ws2 fb16 re150 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm0 ws2 fb16 re500 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default nr_requests 32 #1noop default read_ahead 32 #1noop default read_ahead 16 #1noop default read_ahead 0 #1noop default #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #19001800270036004500SE +/- 0.00, N = 5SE +/- 79.33, N = 10SE +/- 79.33, N = 10SE +/- 90.89, N = 10SE +/- 97.16, N = 10SE +/- 99.17, N = 10SE +/- 97.16, N = 10SE +/- 97.16, N = 10SE +/- 90.89, N = 10SE +/- 79.33, N = 10SE +/- 79.33, N = 10SE +/- 97.27, N = 10SE +/- 59.50, N = 10SE +/- 59.50, N = 10SE +/- 59.50, N = 10SE +/- 121.23, N = 10SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5416640474047398839283869380938093750369036903645363136313631361135713571357135713571357135713571357135713571357135713571357135713571357135713571

Flexible IO Tester

Test: Example Network Job

OpenBenchmarking.orgSeconds (Run Time), Fewer Is BetterFlexible IO Tester 1.57Test: Example Network Jobas ae 50 re 200 we 400 rbe 600 wbe 1000 #1noop default #1cfq quantum 16 #1cfq MD0 read_ahead 768 #1cfq MD0 read_ahead 128 #1cfq slice_idle 0 #1cfq low_latency 50 #1cfq slice_idle 64 #1as default #1cfq MD0 read_ahead 0 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1deadline fm1 ws2 fb48 re500 we5000 #1noop default nr_requests 32 #1noop rq_affinity 2 #1deadline fm1 ws2 fb16 re150 we5000 #1as antic_expire 0 #1as antic_expire 50 #1cfq MD0 rq_affinity 2 #1cfq default #1noop default read_ahead 0 #1noop default read_ahead 16 #1noop default read_ahead 64 #1deadline fm0 ws2 fb16 re500 we5000 #1noop default nr_requests 512 #1noop rq_affinity 0 #1cfq MD0 rq_affinity 1 #1deadline fm1 ws8 fb48 re150 we15000 #1deadline default fm1 ws2 fb16 re500 we5000 #1as write_batch_expire 1000 #1SSD_e4_swraid512noop default read_ahead 32 #1deadline fm0 ws8 fb48 re150 we15000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re500 we15000 #13691215SE +/- 0.36, N = 6SE +/- 0.09, N = 3SE +/- 0.13, N = 3SE +/- 0.07, N = 3SE +/- 0.25, N = 6SE +/- 0.14, N = 3SE +/- 0.10, N = 3SE +/- 0.31, N = 6SE +/- 0.22, N = 6SE +/- 0.20, N = 6SE +/- 0.24, N = 6SE +/- 0.18, N = 3SE +/- 0.30, N = 6SE +/- 0.26, N = 6SE +/- 0.18, N = 3SE +/- 0.27, N = 6SE +/- 0.21, N = 6SE +/- 0.20, N = 6SE +/- 0.28, N = 6SE +/- 0.28, N = 6SE +/- 0.40, N = 6SE +/- 0.21, N = 6SE +/- 0.23, N = 6SE +/- 0.25, N = 6SE +/- 0.25, N = 6SE +/- 0.21, N = 6SE +/- 0.23, N = 6SE +/- 0.11, N = 3SE +/- 0.04, N = 3SE +/- 0.54, N = 6SE +/- 0.13, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.79, N = 68.538.808.818.838.919.019.109.199.229.279.289.409.459.479.479.489.489.529.619.679.679.689.719.759.769.859.859.949.9910.0210.0810.1910.3010.65

Flexible IO Tester

Test: Intel IOMeter File Server Access Pattern

OpenBenchmarking.orgSeconds (Run Time), Fewer Is BetterFlexible IO Tester 1.57Test: Intel IOMeter File Server Access PatternSSD_e4_swraid512as antic_expire 0 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1as write_batch_expire 1000 #1noop rq_affinity 0 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as antic_expire 50 #1as default #1deadline fm1 ws8 fb48 re150 we15000 #1deadline fm1 ws8 fb16 re500 we5000 #1noop rq_affinity 2 #1deadline fm0 ws2 fb16 re500 we5000 #1noop default read_ahead 16 #1deadline fm1 ws2 fb16 re500 we15000 #1noop default nr_requests 512 #1noop default read_ahead 0 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default #1noop default read_ahead 32 #1deadline fm0 ws8 fb48 re150 we15000 #1noop default read_ahead 64 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm1 ws2 fb16 re150 we5000 #1noop default nr_requests 32 #1cfq MD0 read_ahead 0 #1cfq low_latency 50 #1cfq slice_idle 0 #1cfq MD0 rq_affinity 2 #1cfq MD0 rq_affinity 1 #1cfq slice_idle 64 #1cfq quantum 16 #1cfq MD0 read_ahead 768 #1cfq MD0 read_ahead 128 #1cfq default #18001600240032004000SE +/- 0.18, N = 3SE +/- 17.60, N = 3SE +/- 12.63, N = 3SE +/- 8.76, N = 3SE +/- 17.24, N = 3SE +/- 71.45, N = 3SE +/- 8.87, N = 3SE +/- 18.55, N = 3SE +/- 7.67, N = 3SE +/- 12.51, N = 3SE +/- 4.73, N = 3SE +/- 52.56, N = 3SE +/- 53.06, N = 3SE +/- 8.66, N = 3SE +/- 22.17, N = 3SE +/- 76.60, N = 3SE +/- 64.06, N = 3SE +/- 88.07, N = 3SE +/- 113.22, N = 3SE +/- 125.63, N = 3SE +/- 112.89, N = 3SE +/- 118.46, N = 3SE +/- 94.37, N = 3SE +/- 225.81, N = 3SE +/- 4.54, N = 3SE +/- 14.58, N = 3SE +/- 13.91, N = 3SE +/- 48.12, N = 3SE +/- 20.54, N = 3SE +/- 11.04, N = 3SE +/- 5.74, N = 3SE +/- 229.05, N = 3SE +/- 230.56, N = 3SE +/- 218.27, N = 344.692860.402864.742866.622877.702883.412884.412895.602908.072909.112917.662923.972936.632944.052956.373024.833028.663035.493046.243054.353055.423057.393069.743120.503350.713364.083374.513378.743380.503385.023425.633569.053589.393623.54

FS-Mark

Test: 4000 Files, 32 Sub Dirs, 1MB Size

OpenBenchmarking.orgFiles/s, More Is BetterFS-Mark 3.3Test: 4000 Files, 32 Sub Dirs, 1MB SizeSSD_e4_swraid512as write_batch_expire 1000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as antic_expire 50 #1noop default #1noop rq_affinity 0 #1noop default read_ahead 0 #1as default #1noop default read_ahead 16 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws8 fb48 re150 we15000 #1noop default nr_requests 512 #1noop rq_affinity 2 #1as antic_expire 0 #1deadline fm0 ws2 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re150 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re500 we15000 #1noop default read_ahead 32 #1noop default read_ahead 64 #1noop default nr_requests 32 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm0 ws8 fb48 re150 we15000 #1cfq default #1cfq quantum 16 #1cfq MD0 read_ahead 768 #1cfq MD0 read_ahead 0 #1cfq MD0 read_ahead 128 #1cfq slice_idle 0 #1cfq MD0 rq_affinity 1 #1cfq slice_idle 64 #1cfq MD0 rq_affinity 2 #1cfq low_latency 50 #120406080100SE +/- 1.23, N = 3SE +/- 0.06, N = 3SE +/- 0.24, N = 3SE +/- 0.26, N = 3SE +/- 0.23, N = 3SE +/- 0.12, N = 3SE +/- 0.46, N = 3SE +/- 0.12, N = 3SE +/- 0.20, N = 3SE +/- 0.12, N = 3SE +/- 0.20, N = 3SE +/- 0.25, N = 3SE +/- 0.38, N = 4SE +/- 0.35, N = 3SE +/- 0.29, N = 3SE +/- 0.26, N = 3SE +/- 0.17, N = 3SE +/- 0.17, N = 3SE +/- 0.32, N = 3SE +/- 0.15, N = 3SE +/- 0.36, N = 4SE +/- 0.13, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 3SE +/- 0.17, N = 3SE +/- 0.15, N = 3SE +/- 0.19, N = 3SE +/- 0.18, N = 3SE +/- 0.37, N = 3SE +/- 0.35, N = 3SE +/- 0.29, N = 3SE +/- 0.23, N = 3SE +/- 0.18, N = 3SE +/- 0.30, N = 6101.8027.2027.1326.6726.6026.0725.7025.4325.4024.9324.8724.8024.7824.7324.7024.6024.5724.5324.2324.2324.2024.1724.1024.0322.2021.2021.0721.0721.0320.9020.9020.8320.7720.50

FS-Mark

Test: 5000 Files, 1MB Size, 4 Threads

OpenBenchmarking.orgFiles/s, More Is BetterFS-Mark 3.3Test: 5000 Files, 1MB Size, 4 ThreadsSSD_e4_swraid512as write_batch_expire 1000 #1as antic_expire 50 #1noop rq_affinity 2 #1noop default #1as default #1noop rq_affinity 0 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as antic_expire 0 #1noop default read_ahead 0 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default read_ahead 32 #1noop default read_ahead 16 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm0 ws8 fb48 re150 we15000 #1cfq quantum 16 #1deadline fm0 ws2 fb16 re500 we5000 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 768 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default read_ahead 64 #1cfq MD0 read_ahead 0 #1cfq default #1cfq slice_idle 0 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm1 ws2 fb16 re150 we5000 #1cfq MD0 rq_affinity 1 #1cfq MD0 rq_affinity 2 #1noop default nr_requests 512 #1deadline fm1 ws8 fb48 re150 we15000 #1noop default nr_requests 32 #1cfq low_latency 50 #1cfq slice_idle 64 #14080120160200SE +/- 0.62, N = 3SE +/- 1.72, N = 3SE +/- 1.62, N = 3SE +/- 2.20, N = 6SE +/- 3.19, N = 6SE +/- 1.57, N = 3SE +/- 2.94, N = 6SE +/- 3.69, N = 6SE +/- 2.52, N = 6SE +/- 3.34, N = 6SE +/- 2.91, N = 6SE +/- 2.19, N = 6SE +/- 2.66, N = 6SE +/- 2.09, N = 6SE +/- 3.27, N = 6SE +/- 2.24, N = 6SE +/- 0.86, N = 3SE +/- 2.91, N = 6SE +/- 1.51, N = 6SE +/- 1.17, N = 3SE +/- 3.09, N = 6SE +/- 2.57, N = 6SE +/- 2.56, N = 6SE +/- 3.23, N = 6SE +/- 1.59, N = 3SE +/- 3.03, N = 6SE +/- 3.37, N = 6SE +/- 1.23, N = 3SE +/- 0.32, N = 3SE +/- 3.21, N = 6SE +/- 2.81, N = 6SE +/- 2.99, N = 6SE +/- 1.42, N = 4SE +/- 1.36, N = 3167.3798.4795.7795.5593.3893.0392.9390.2589.9589.9389.1289.0388.1387.7087.5886.8386.8386.6786.4786.3786.3086.2785.9285.7585.7085.6285.4785.4085.3085.2585.1885.1384.6883.80

FS-Mark

Test: 1000 Files, 1MB Size, No Sync/FSync

OpenBenchmarking.orgFiles/s, More Is BetterFS-Mark 3.3Test: 1000 Files, 1MB Size, No Sync/FSyncnoop default read_ahead 32 #1cfq quantum 16 #1cfq slice_idle 64 #1deadline fm1 ws2 fb16 re500 we15000 #1cfq MD0 rq_affinity 2 #1deadline fm1 ws8 fb48 re150 we15000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1cfq MD0 rq_affinity 1 #1deadline fm0 ws2 fb16 re500 we5000 #1cfq slice_idle 0 #1noop default read_ahead 16 #1deadline fm1 ws2 fb48 re500 we5000 #1as antic_expire 0 #1deadline fm1 ws2 fb16 re150 we5000 #1noop default nr_requests 32 #1as antic_expire 50 #1noop default #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1cfq low_latency 50 #1noop default read_ahead 0 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default nr_requests 512 #1noop rq_affinity 0 #1cfq MD0 read_ahead 768 #1noop default read_ahead 64 #1cfq default #1deadline fm0 ws8 fb48 re150 we15000 #1as write_batch_expire 1000 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 0 #1deadline fm1 ws8 fb16 re500 we5000 #1noop rq_affinity 2 #1as default #1SSD_e4_swraid51230060090012001500SE +/- 3.04, N = 3SE +/- 4.16, N = 3SE +/- 2.07, N = 3SE +/- 3.82, N = 3SE +/- 4.64, N = 3SE +/- 1.20, N = 3SE +/- 2.78, N = 3SE +/- 2.06, N = 3SE +/- 3.88, N = 3SE +/- 3.27, N = 3SE +/- 1.20, N = 3SE +/- 3.36, N = 3SE +/- 2.75, N = 3SE +/- 5.25, N = 3SE +/- 3.35, N = 3SE +/- 0.55, N = 3SE +/- 4.40, N = 3SE +/- 5.01, N = 3SE +/- 5.54, N = 3SE +/- 5.37, N = 3SE +/- 6.43, N = 3SE +/- 4.24, N = 3SE +/- 1.07, N = 3SE +/- 3.55, N = 3SE +/- 3.81, N = 3SE +/- 0.58, N = 3SE +/- 4.72, N = 3SE +/- 12.44, N = 3SE +/- 0.70, N = 3SE +/- 3.14, N = 3SE +/- 5.33, N = 3SE +/- 16.79, N = 3SE +/- 25.00, N = 6SE +/- 21.71, N = 51541.671537.331536.331536.071535.831534.971534.901534.431534.231534.201533.971533.771533.371531.401531.101530.971530.901530.471529.701529.631529.471528.371527.501526.671525.871525.631525.601525.401525.301525.071523.031515.101503.551456.04

FS-Mark

Test: 1000 Files, 1MB Size

OpenBenchmarking.orgFiles/s, More Is BetterFS-Mark 3.3Test: 1000 Files, 1MB SizeSSD_e4_swraid512as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default nr_requests 512 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1noop default read_ahead 32 #1as write_batch_expire 1000 #1noop rq_affinity 0 #1deadline fm0 ws2 fb16 re500 we5000 #1noop rq_affinity 2 #1noop default nr_requests 32 #1deadline fm1 ws8 fb48 re150 we15000 #1as antic_expire 0 #1as default #1noop default read_ahead 16 #1noop default #1noop default read_ahead 64 #1deadline fm1 ws2 fb16 re150 we5000 #1noop default read_ahead 0 #1deadline fm0 ws8 fb48 re150 we15000 #1deadline default fm1 ws2 fb16 re500 we5000 #1as antic_expire 50 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline fm1 ws2 fb48 re500 we5000 #1cfq MD0 read_ahead 128 #1cfq slice_idle 64 #1cfq low_latency 50 #1cfq MD0 read_ahead 0 #1cfq slice_idle 0 #1cfq quantum 16 #1cfq default #1cfq MD0 rq_affinity 1 #1cfq MD0 rq_affinity 2 #1cfq MD0 read_ahead 768 #120406080100SE +/- 0.79, N = 3SE +/- 0.51, N = 6SE +/- 0.53, N = 3SE +/- 0.34, N = 3SE +/- 0.24, N = 3SE +/- 0.46, N = 5SE +/- 0.56, N = 3SE +/- 0.15, N = 3SE +/- 0.99, N = 6SE +/- 1.09, N = 6SE +/- 0.53, N = 3SE +/- 0.40, N = 3SE +/- 0.61, N = 6SE +/- 0.39, N = 3SE +/- 0.52, N = 6SE +/- 0.70, N = 6SE +/- 0.29, N = 3SE +/- 1.00, N = 6SE +/- 1.05, N = 6SE +/- 1.01, N = 6SE +/- 0.12, N = 3SE +/- 0.82, N = 6SE +/- 0.65, N = 6SE +/- 0.35, N = 4SE +/- 0.15, N = 3SE +/- 0.23, N = 3SE +/- 0.57, N = 6SE +/- 0.65, N = 6SE +/- 0.25, N = 3SE +/- 0.35, N = 6SE +/- 0.40, N = 6SE +/- 0.45, N = 6SE +/- 0.13, N = 3SE +/- 0.28, N = 3104.8333.5031.3330.4330.2729.7429.6029.0028.4728.3328.0027.9027.7527.4327.2526.7326.6326.5225.9825.8025.2724.8024.5723.6822.8022.5022.1221.9321.8020.6720.4720.2019.6319.43

Threaded I/O Tester

Test: Random Read - Size Per Thread: 256MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Random Read - Size Per Thread: 256MB - Thread Count: 8noop default nr_requests 512 #1cfq quantum 16 #1deadline fm1 ws2 fb16 re150 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1deadline fm1 ws8 fb48 re150 we15000 #1noop default nr_requests 32 #1cfq MD0 read_ahead 0 #1cfq default #1as write_batch_expire 1000 #1cfq MD0 read_ahead 128 #1cfq slice_idle 64 #1as default #1noop default #1noop default read_ahead 0 #1deadline fm0 ws2 fb16 re500 we5000 #1noop rq_affinity 0 #1noop default read_ahead 32 #1deadline fm0 ws8 fb48 re150 we15000 #1noop default read_ahead 16 #1deadline fm1 ws2 fb48 re500 we5000 #1cfq low_latency 50 #1cfq MD0 rq_affinity 1 #1cfq MD0 read_ahead 768 #1as antic_expire 0 #1cfq MD0 rq_affinity 2 #1cfq slice_idle 0 #1as antic_expire 50 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default read_ahead 64 #1noop rq_affinity 2 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1deadline fm1 ws2 fb16 re500 we15000 #1SSD_e4_swraid5124K8K12K16K20KSE +/- 103.03, N = 4SE +/- 48.44, N = 4SE +/- 76.30, N = 4SE +/- 88.04, N = 4SE +/- 120.52, N = 4SE +/- 172.87, N = 4SE +/- 486.20, N = 8SE +/- 259.02, N = 8SE +/- 597.22, N = 8SE +/- 534.79, N = 8SE +/- 603.60, N = 8SE +/- 732.80, N = 8SE +/- 670.32, N = 8SE +/- 932.64, N = 8SE +/- 942.47, N = 8SE +/- 979.21, N = 8SE +/- 965.77, N = 8SE +/- 848.96, N = 8SE +/- 846.90, N = 8SE +/- 841.67, N = 8SE +/- 963.45, N = 8SE +/- 723.27, N = 8SE +/- 1065.96, N = 8SE +/- 954.32, N = 8SE +/- 1011.99, N = 8SE +/- 1003.35, N = 8SE +/- 880.97, N = 8SE +/- 1030.41, N = 8SE +/- 1132.15, N = 8SE +/- 1032.60, N = 8SE +/- 1091.90, N = 8SE +/- 1075.30, N = 8SE +/- 1157.73, N = 8SE +/- 381.13, N = 817438.0117154.2417152.5117150.4816776.6416725.6316505.3816493.7516368.7916309.3216027.9215982.1415808.0715799.4715642.9015587.4115437.5415395.7115307.9115169.7015117.4315112.2115049.1614970.6014945.8214942.3014725.8014390.7714295.5314259.8714253.6114128.9913252.4211200.57

Threaded I/O Tester

Test: Read - Size Per Thread: 256MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Read - Size Per Thread: 256MB - Thread Count: 8deadline fm0 ws8 fb48 re150 we15000 #1noop rq_affinity 0 #1cfq slice_idle 64 #1cfq low_latency 50 #1noop default read_ahead 64 #1noop default nr_requests 32 #1cfq MD0 rq_affinity 1 #1noop default nr_requests 512 #1deadline fm0 ws2 fb16 re500 we5000 #1as antic_expire 50 #1noop default read_ahead 32 #1noop default read_ahead 16 #1cfq slice_idle 0 #1as antic_expire 0 #1cfq default #1deadline fm1 ws2 fb16 re150 we5000 #1cfq MD0 read_ahead 128 #1noop rq_affinity 2 #1as write_batch_expire 1000 #1cfq quantum 16 #1deadline fm1 ws8 fb48 re150 we15000 #1as default #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline default fm1 ws2 fb16 re500 we5000 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1cfq MD0 read_ahead 0 #1noop default #1cfq MD0 read_ahead 768 #1cfq MD0 rq_affinity 2 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default read_ahead 0 #1SSD_e4_swraid5125K10K15K20K25KSE +/- 41.62, N = 4SE +/- 11.83, N = 4SE +/- 27.43, N = 4SE +/- 29.96, N = 4SE +/- 20.51, N = 4SE +/- 281.51, N = 4SE +/- 274.16, N = 4SE +/- 343.41, N = 5SE +/- 729.78, N = 8SE +/- 754.25, N = 8SE +/- 894.71, N = 8SE +/- 889.92, N = 8SE +/- 715.53, N = 8SE +/- 823.84, N = 8SE +/- 1211.30, N = 8SE +/- 1069.06, N = 8SE +/- 1018.69, N = 8SE +/- 1203.62, N = 8SE +/- 958.10, N = 8SE +/- 1416.55, N = 8SE +/- 1403.49, N = 8SE +/- 1395.27, N = 8SE +/- 1176.14, N = 8SE +/- 1197.28, N = 8SE +/- 1268.55, N = 8SE +/- 1278.42, N = 8SE +/- 1645.14, N = 8SE +/- 1248.22, N = 8SE +/- 1557.20, N = 8SE +/- 1396.09, N = 8SE +/- 1617.61, N = 8SE +/- 1543.51, N = 8SE +/- 1515.96, N = 8SE +/- 797.50, N = 822732.5522656.7722647.4522641.8422601.4222373.7422362.6222252.6321951.9121918.5221788.2921783.6121477.4521378.9121322.7821085.1521023.5220801.1920623.0520579.8920548.8220488.4820467.6720246.8520142.2920093.4519933.9019748.6718772.7418760.3918385.3017933.8117597.0114109.89

Threaded I/O Tester

Test: Random Write - Size Per Thread: 256MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Random Write - Size Per Thread: 256MB - Thread Count: 8SSD_e4_swraid512as write_batch_expire 1000 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as antic_expire 50 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1cfq MD0 read_ahead 0 #1cfq default #1as antic_expire 0 #1cfq MD0 rq_affinity 1 #1as default #1noop rq_affinity 2 #1noop rq_affinity 0 #1noop default #1deadline fm1 ws2 fb48 re500 we5000 #1cfq quantum 16 #1cfq MD0 read_ahead 128 #1cfq slice_idle 0 #1cfq low_latency 50 #1noop default read_ahead 16 #1cfq MD0 rq_affinity 2 #1deadline fm1 ws2 fb16 re150 we5000 #1cfq slice_idle 64 #1cfq MD0 read_ahead 768 #1noop default nr_requests 512 #1noop default read_ahead 0 #1deadline fm0 ws2 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default read_ahead 64 #1deadline fm1 ws8 fb48 re150 we15000 #1noop default read_ahead 32 #1deadline fm0 ws8 fb48 re150 we15000 #1noop default nr_requests 32 #120406080100SE +/- 3.81, N = 8SE +/- 0.04, N = 8SE +/- 0.03, N = 4SE +/- 0.04, N = 4SE +/- 0.03, N = 8SE +/- 0.03, N = 4SE +/- 0.02, N = 4SE +/- 0.04, N = 8SE +/- 0.02, N = 4SE +/- 0.04, N = 8SE +/- 0.03, N = 8SE +/- 0.02, N = 6SE +/- 0.02, N = 4SE +/- 0.02, N = 4SE +/- 0.04, N = 8SE +/- 0.07, N = 8SE +/- 0.03, N = 8SE +/- 0.02, N = 8SE +/- 0.03, N = 8SE +/- 0.03, N = 8SE +/- 0.01, N = 4SE +/- 0.03, N = 8SE +/- 0.04, N = 8SE +/- 0.02, N = 4SE +/- 0.03, N = 8SE +/- 0.01, N = 4SE +/- 0.04, N = 8SE +/- 0.02, N = 8SE +/- 0.03, N = 8SE +/- 0.03, N = 8SE +/- 0.03, N = 8SE +/- 0.02, N = 6SE +/- 0.02, N = 8SE +/- 0.04, N = 881.312.122.112.112.001.911.901.841.841.831.781.761.751.711.711.711.631.611.521.521.501.501.501.491.491.441.431.431.421.411.401.371.351.35

Threaded I/O Tester

Test: Write - Size Per Thread: 256MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Write - Size Per Thread: 256MB - Thread Count: 8SSD_e4_swraid512as write_batch_expire 1000 #1as antic_expire 0 #1as default #1as antic_expire 50 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1cfq default #1cfq slice_idle 64 #1cfq MD0 rq_affinity 1 #1cfq MD0 read_ahead 768 #1cfq quantum 16 #1cfq MD0 rq_affinity 2 #1cfq slice_idle 0 #1cfq low_latency 50 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 0 #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1noop default read_ahead 0 #1deadline fm1 ws2 fb16 re500 we15000 #1noop default read_ahead 32 #1deadline fm0 ws2 fb16 re500 we5000 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline fm1 ws8 fb48 re150 we15000 #1noop default nr_requests 512 #1noop default #1noop default read_ahead 16 #1deadline fm0 ws8 fb48 re150 we15000 #1deadline fm1 ws2 fb16 re150 we5000 #1noop default read_ahead 64 #1noop rq_affinity 2 #1noop rq_affinity 0 #1noop default nr_requests 32 #160120180240300SE +/- 3.60, N = 7SE +/- 2.88, N = 4SE +/- 1.95, N = 4SE +/- 2.86, N = 4SE +/- 2.69, N = 5SE +/- 1.33, N = 4SE +/- 1.46, N = 4SE +/- 1.15, N = 4SE +/- 1.26, N = 4SE +/- 1.64, N = 4SE +/- 1.39, N = 4SE +/- 2.75, N = 4SE +/- 1.96, N = 4SE +/- 1.02, N = 4SE +/- 2.32, N = 8SE +/- 2.19, N = 4SE +/- 2.01, N = 4SE +/- 2.37, N = 4SE +/- 0.43, N = 4SE +/- 2.51, N = 4SE +/- 1.26, N = 4SE +/- 1.47, N = 4SE +/- 2.29, N = 6SE +/- 1.39, N = 4SE +/- 2.04, N = 4SE +/- 1.70, N = 4SE +/- 1.04, N = 4SE +/- 2.65, N = 4SE +/- 1.31, N = 4SE +/- 1.20, N = 4SE +/- 1.22, N = 4SE +/- 1.42, N = 4SE +/- 1.65, N = 4SE +/- 1.82, N = 8293.05187.13185.80184.62184.59182.64179.81173.51172.04171.96170.77170.48170.13169.38169.26168.16166.82165.85163.82163.77163.56163.53163.50162.92162.81162.45162.06161.61161.49160.95160.69159.12157.50154.28

AIO-Stress

Test: Random Write

OpenBenchmarking.orgMB/s, More Is BetterAIO-Stress 0.21Test: Random Writeas antic_expire 50 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1cfq quantum 16 #1noop default nr_requests 32 #1cfq slice_idle 64 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re150 we5000 #1cfq default #1cfq slice_idle 0 #1noop default read_ahead 32 #1cfq MD0 rq_affinity 1 #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm1 ws2 fb48 re500 we5000 #1noop default read_ahead 0 #1deadline fm0 ws8 fb48 re150 we15000 #1as write_batch_expire 1000 #1as antic_expire 0 #1cfq low_latency 50 #1noop rq_affinity 0 #1deadline fm1 ws8 fb48 re150 we15000 #1noop default read_ahead 16 #1deadline fm0 ws2 fb16 re500 we5000 #1noop default read_ahead 64 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 0 #1noop rq_affinity 2 #1cfq MD0 rq_affinity 2 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1deadline fm1 ws2 fb16 re500 we15000 #1as default #1noop default nr_requests 512 #1cfq MD0 read_ahead 768 #1noop default #1SSD_e4_swraid5125001000150020002500SE +/- 4.35, N = 3SE +/- 8.30, N = 3SE +/- 9.37, N = 3SE +/- 6.77, N = 3SE +/- 5.59, N = 3SE +/- 4.64, N = 3SE +/- 9.60, N = 3SE +/- 5.16, N = 3SE +/- 4.92, N = 3SE +/- 19.64, N = 3SE +/- 3.80, N = 3SE +/- 5.48, N = 3SE +/- 8.40, N = 3SE +/- 6.70, N = 3SE +/- 10.39, N = 3SE +/- 5.79, N = 3SE +/- 6.26, N = 3SE +/- 8.72, N = 3SE +/- 10.45, N = 3SE +/- 11.73, N = 3SE +/- 4.23, N = 3SE +/- 2.63, N = 3SE +/- 8.82, N = 3SE +/- 4.34, N = 3SE +/- 1.57, N = 3SE +/- 11.17, N = 3SE +/- 7.58, N = 3SE +/- 17.95, N = 3SE +/- 7.26, N = 3SE +/- 13.23, N = 3SE +/- 20.90, N = 3SE +/- 21.41, N = 3SE +/- 39.96, N = 3SE +/- 82.91, N = 62175.562174.452174.142173.052171.592171.082170.942170.892170.022169.272168.742167.652167.552167.452166.412166.202166.012165.912165.152163.962163.932163.842162.992159.962158.212157.132155.252152.402152.132151.412151.302147.182123.861776.78

Threaded I/O Tester

Test: Random Read - Size Per Thread: 4096MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Random Read - Size Per Thread: 4096MB - Thread Count: 8SSD_e4_swraid512deadline fm1 ws8 fb48 re150 we15000 #1cfq MD0 read_ahead 768 #1noop default read_ahead 16 #1noop default read_ahead 32 #1cfq MD0 read_ahead 0 #1noop rq_affinity 0 #1noop default nr_requests 32 #1deadline fm1 ws2 fb16 re500 we15000 #1noop default read_ahead 64 #1noop default nr_requests 512 #1cfq MD0 rq_affinity 2 #1deadline fm1 ws2 fb48 re500 we5000 #1cfq MD0 read_ahead 128 #1deadline fm0 ws2 fb16 re500 we5000 #1cfq quantum 16 #1noop default read_ahead 0 #1cfq slice_idle 0 #1noop default #1cfq low_latency 50 #1noop rq_affinity 2 #1deadline fm0 ws8 fb48 re150 we15000 #1cfq MD0 rq_affinity 1 #1as antic_expire 0 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm1 ws2 fb16 re150 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1as write_batch_expire 1000 #1cfq slice_idle 64 #1as default #1cfq default #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as antic_expire 50 #120406080100SE +/- 7.78, N = 8SE +/- 0.12, N = 8SE +/- 0.04, N = 4SE +/- 0.10, N = 8SE +/- 0.13, N = 8SE +/- 0.10, N = 8SE +/- 0.10, N = 8SE +/- 0.06, N = 8SE +/- 0.13, N = 8SE +/- 0.10, N = 8SE +/- 0.06, N = 8SE +/- 0.07, N = 8SE +/- 0.07, N = 8SE +/- 0.08, N = 8SE +/- 0.15, N = 8SE +/- 0.06, N = 6SE +/- 0.04, N = 4SE +/- 0.13, N = 8SE +/- 0.05, N = 4SE +/- 0.09, N = 8SE +/- 0.04, N = 4SE +/- 0.08, N = 8SE +/- 0.25, N = 8SE +/- 0.18, N = 8SE +/- 0.15, N = 8SE +/- 0.10, N = 8SE +/- 0.14, N = 8SE +/- 0.06, N = 4SE +/- 0.14, N = 8SE +/- 0.05, N = 7SE +/- 0.24, N = 8SE +/- 0.12, N = 8SE +/- 0.05, N = 8SE +/- 0.05, N = 880.944.314.314.304.294.284.274.254.244.244.224.224.214.184.164.154.144.104.094.084.064.024.024.013.993.963.943.773.763.743.693.332.492.40

Threaded I/O Tester

Test: Read - Size Per Thread: 4096MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Read - Size Per Thread: 4096MB - Thread Count: 8SSD_e4_swraid512noop rq_affinity 0 #1deadline fm1 ws2 fb16 re500 we15000 #1noop default read_ahead 0 #1noop default read_ahead 64 #1noop default #1deadline fm1 ws2 fb16 re150 we5000 #1noop default read_ahead 16 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default read_ahead 32 #1as antic_expire 0 #1noop rq_affinity 2 #1deadline fm1 ws2 fb48 re500 we5000 #1noop default nr_requests 512 #1deadline fm1 ws8 fb48 re150 we15000 #1deadline fm0 ws8 fb48 re150 we15000 #1cfq slice_idle 0 #1noop default nr_requests 32 #1deadline fm0 ws2 fb16 re500 we5000 #1as default #1as write_batch_expire 1000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1cfq MD0 rq_affinity 1 #1cfq MD0 rq_affinity 2 #1as antic_expire 50 #1cfq low_latency 50 #1cfq quantum 16 #1cfq default #1cfq slice_idle 64 #1cfq MD0 read_ahead 768 #1cfq MD0 read_ahead 128 #1cfq MD0 read_ahead 0 #1110220330440550SE +/- 4.09, N = 4SE +/- 8.54, N = 8SE +/- 1.77, N = 4SE +/- 2.02, N = 4SE +/- 5.35, N = 4SE +/- 3.06, N = 4SE +/- 4.77, N = 4SE +/- 3.57, N = 4SE +/- 3.58, N = 4SE +/- 1.66, N = 4SE +/- 1.58, N = 4SE +/- 4.85, N = 4SE +/- 4.08, N = 6SE +/- 1.78, N = 4SE +/- 4.19, N = 6SE +/- 2.88, N = 4SE +/- 4.37, N = 5SE +/- 5.12, N = 8SE +/- 4.59, N = 4SE +/- 4.43, N = 4SE +/- 2.33, N = 4SE +/- 3.31, N = 4SE +/- 4.10, N = 4SE +/- 2.50, N = 4SE +/- 6.68, N = 8SE +/- 5.67, N = 8SE +/- 2.92, N = 8SE +/- 6.50, N = 8SE +/- 4.15, N = 8SE +/- 5.43, N = 8SE +/- 5.54, N = 8SE +/- 5.91, N = 8SE +/- 3.86, N = 8SE +/- 1.78, N = 8511.67355.29311.11306.98306.78305.94305.64304.49303.71303.63303.28302.29301.72299.15298.12297.52296.20294.76294.63294.51289.76285.98272.62260.45256.79244.53241.89239.56238.93235.84229.64212.0362.2145.68

Threaded I/O Tester

Test: Random Write - Size Per Thread: 4096MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Random Write - Size Per Thread: 4096MB - Thread Count: 8SSD_e4_swraid512cfq quantum 16 #1cfq MD0 read_ahead 0 #1cfq default #1cfq slice_idle 64 #1cfq slice_idle 0 #1as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as write_batch_expire 1000 #1cfq low_latency 50 #1cfq MD0 read_ahead 128 #1as antic_expire 0 #1cfq MD0 rq_affinity 2 #1as default #1noop default read_ahead 16 #1noop default read_ahead 0 #1as antic_expire 50 #1deadline fm1 ws8 fb48 re150 we15000 #1cfq MD0 rq_affinity 1 #1deadline fm0 ws8 fb48 re150 we15000 #1deadline fm1 ws2 fb48 re500 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1noop default nr_requests 512 #1noop default read_ahead 64 #1noop rq_affinity 0 #1noop rq_affinity 2 #1cfq MD0 read_ahead 768 #1deadline fm1 ws2 fb16 re500 we15000 #1deadline fm1 ws2 fb16 re150 we5000 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline fm0 ws2 fb16 re500 we5000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop default nr_requests 32 #1noop default #1noop default read_ahead 32 #120406080100SE +/- 8.52, N = 8SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 5SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 5SE +/- 0.01, N = 5SE +/- 0.01, N = 8SE +/- 0.01, N = 6SE +/- 0.01, N = 8SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 8SE +/- 0.01, N = 8SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 6SE +/- 0.00, N = 4SE +/- 0.01, N = 580.101.000.950.940.930.930.920.910.910.910.900.900.890.860.860.860.850.850.840.840.840.840.840.840.840.840.830.830.830.830.830.820.820.81

Threaded I/O Tester

Test: Write - Size Per Thread: 4096MB - Thread Count: 8

OpenBenchmarking.orgMB/s, More Is BetterThreaded I/O Tester 0.3.3Test: Write - Size Per Thread: 4096MB - Thread Count: 8SSD_e4_swraid512as ae 50 re 200 we 400 rbe 600 wbe 1000 #1as antic_expire 50 #1as default #1as write_batch_expire 1000 #1as antic_expire 0 #1noop rq_affinity 2 #1noop default read_ahead 0 #1deadline fm1 ws8 fb16 re500 we5000 #1deadline default fm1 ws2 fb16 re500 we5000 #1deadline fm0 ws2 fb16 re500 we5000 #1as ae 1 re 20 we 40 rbe 60 wbe 100 #1noop rq_affinity 0 #1deadline fm1 ws8 fb48 re150 we15000 #1deadline fm1 ws2 fb48 re500 we5000 #1noop default #1deadline fm0 ws8 fb48 re150 we15000 #1noop default nr_requests 512 #1noop default read_ahead 16 #1deadline fm1 ws2 fb16 re150 we5000 #1cfq slice_idle 0 #1noop default read_ahead 32 #1cfq MD0 rq_affinity 1 #1deadline fm1 ws2 fb16 re500 we15000 #1cfq default #1noop default read_ahead 64 #1cfq MD0 rq_affinity 2 #1cfq low_latency 50 #1cfq slice_idle 64 #1cfq MD0 read_ahead 0 #1cfq MD0 read_ahead 128 #1cfq quantum 16 #1noop default nr_requests 32 #1cfq MD0 read_ahead 768 #170140210280350SE +/- 9.38, N = 8SE +/- 1.20, N = 4SE +/- 0.32, N = 4SE +/- 0.62, N = 4SE +/- 1.67, N = 4SE +/- 1.68, N = 4SE +/- 1.57, N = 4SE +/- 2.30, N = 4SE +/- 0.83, N = 4SE +/- 1.68, N = 4SE +/- 0.82, N = 4SE +/- 0.82, N = 4SE +/- 0.86, N = 4SE +/- 0.97, N = 4SE +/- 1.20, N = 4SE +/- 0.99, N = 4SE +/- 1.32, N = 4SE +/- 0.58, N = 4SE +/- 1.20, N = 4SE +/- 1.13, N = 4SE +/- 1.44, N = 4SE +/- 1.12, N = 4SE +/- 1.75, N = 4SE +/- 1.06, N = 4SE +/- 0.89, N = 4SE +/- 1.11, N = 4SE +/- 1.05, N = 4SE +/- 0.58, N = 4SE +/- 1.44, N = 4SE +/- 0.98, N = 4SE +/- 1.46, N = 4SE +/- 1.26, N = 4SE +/- 0.69, N = 4SE +/- 1.76, N = 4322.50215.43214.62213.28212.50211.27208.97207.95206.73206.53206.51206.41206.29206.06206.03205.84205.74205.65205.03204.46204.38204.15203.80203.79203.59203.31203.31203.23203.01202.92202.70201.33201.31200.75


Phoronix Test Suite v10.8.4