The default maximum number of segments in indirect requests was 32, so I/O
operations with a bigger block size (>32*4k) would be split and performance
would start to drop.
Nowadays backend devices usually support a 512k max_sectors_kb on desktops,
and possibly larger on server machines with high-end storage systems.
The resulting default of 128k was not very appropriate, so this patch
increases the default maximum value to 128 (128*4k = 512k).
Signed-off-by: Bob Liu <[email protected]>
---
drivers/block/xen-blkfront.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 2236c6f..1bf2429 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -94,9 +94,9 @@ static const struct block_device_operations xlvbd_block_fops;
* by the backend driver.
*/
-static unsigned int xen_blkif_max_segments = 32;
+static unsigned int xen_blkif_max_segments = 128;
module_param_named(max, xen_blkif_max_segments, int, S_IRUGO);
-MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 32)");
+MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 128)");
#define BLK_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE)
--
1.7.10.4
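For reference, a minimal userspace sketch (illustrative only, not part of the
patch) of the arithmetic behind the old and new defaults, assuming the usual
4k page per segment; the module_param above also lets the default be
overridden at load time via the xen_blkfront.max parameter used in the
measurements later in this thread.

#include <stdio.h>

/*
 * Illustrative only, not kernel code.  Each indirect-request segment
 * covers one 4k page, so the segment limit bounds the largest I/O the
 * frontend can pass down without splitting it.
 */
int main(void)
{
	const unsigned int seg_kb = 4;        /* one 4k page per segment */
	const unsigned int old_segs = 32;     /* old default             */
	const unsigned int new_segs = 128;    /* new default             */

	printf("old: %u segs * %uk = %uk per request\n",
	       old_segs, seg_kb, old_segs * seg_kb);   /* 128k */
	printf("new: %u segs * %uk = %uk per request\n",
	       new_segs, seg_kb, new_segs * seg_kb);   /* 512k, matching a
	                                                  typical max_sectors_kb */
	return 0;
}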
On 16/12/14 at 11:11, Bob Liu wrote:
> The default maximum number of segments in indirect requests was 32, so I/O
> operations with a bigger block size (>32*4k) would be split and performance
> would start to drop.
>
> Nowadays backend devices usually support a 512k max_sectors_kb on desktops,
> and possibly larger on server machines with high-end storage systems.
> The resulting default of 128k was not very appropriate, so this patch
> increases the default maximum value to 128 (128*4k = 512k).
This looks fine; do you have any data/graphs to back up your reasoning?
I would also add to the commit message that this change implies we can
now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
default number of persistent grants blkback can handle, so the LRU in
blkback will kick in.
Roger.
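A rough sketch of where that 4128 figure comes from (illustrative assumptions:
the 32-slot ring implied by BLK_RING_SIZE with a 4k ring page, and a single 4k
indirect descriptor page being enough to hold all 128 segment entries):

#include <stdio.h>

/* Illustrative only: upper bound on grants one device can hold in flight. */
int main(void)
{
	const unsigned int ring_slots = 32;  /* requests in flight (BLK_RING_SIZE) */
	const unsigned int max_segs = 128;   /* new xen_blkfront.max default       */
	const unsigned int ind_pages = 1;    /* assumed: 128 segment entries fit
	                                        in one 4k indirect page            */

	/* one grant per data segment plus one per indirect page, per request */
	printf("in-flight grants: %u\n", ring_slots * (max_segs + ind_pages)); /* 4128 */
	return 0;
}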
On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> On 16/12/14 at 11:11, Bob Liu wrote:
>> The default maximum number of segments in indirect requests was 32, so I/O
>> operations with a bigger block size (>32*4k) would be split and performance
>> would start to drop.
>>
>> Nowadays backend devices usually support a 512k max_sectors_kb on desktops,
>> and possibly larger on server machines with high-end storage systems.
>> The resulting default of 128k was not very appropriate, so this patch
>> increases the default maximum value to 128 (128*4k = 512k).
>
> This looks fine; do you have any data/graphs to back up your reasoning?
>
I only have some results from a 1M block size FIO test, but I think
that's enough.
xen_blkfront.max Rate (MB/s) Percent of Dom-0
32 11.1 31.0%
48 15.3 42.7%
64 19.8 55.3%
80 19.9 55.6%
96 23.0 64.2%
112 23.7 66.2%
128 31.6 88.3%
The rates above are compared against the dom-0 rate of 35.8 MB/s.
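(The last column is just each rate divided by that 35.8 MB/s dom-0 baseline; a
trivial check, illustrative only:)

#include <stdio.h>

/* Reproduces the "Percent of Dom-0" column from the rates above. */
int main(void)
{
	const double dom0 = 35.8;   /* MB/s measured in dom-0 */
	const unsigned int max[] = { 32, 48, 64, 80, 96, 112, 128 };
	const double rate[]      = { 11.1, 15.3, 19.8, 19.9, 23.0, 23.7, 31.6 };

	for (unsigned int i = 0; i < sizeof(rate) / sizeof(rate[0]); i++)
		printf("max=%3u  %5.1f MB/s  %.1f%%\n",
		       max[i], rate[i], 100.0 * rate[i] / dom0);
	return 0;
}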
> I would also add to the commit message that this change implies we can
> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
The number could be even larger when using more pages for the
xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
extend interface to support multi-page ring"; that helped improve I/O
performance a lot on our system connected to high-end storage.
I'm preparing to resend the related patches.
> default number of persistent grants blkback can handle, so the LRU in
> blkback will kick in.
>
Sounds good.
--
Regards,
-Bob
On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
>
> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> > On 16/12/14 at 11:11, Bob Liu wrote:
> >> The default maximum number of segments in indirect requests was 32, so I/O
> >> operations with a bigger block size (>32*4k) would be split and performance
> >> would start to drop.
> >>
> >> Nowadays backend devices usually support a 512k max_sectors_kb on desktops,
> >> and possibly larger on server machines with high-end storage systems.
> >> The resulting default of 128k was not very appropriate, so this patch
> >> increases the default maximum value to 128 (128*4k = 512k).
> >
> > This looks fine; do you have any data/graphs to back up your reasoning?
> >
>
> I only have some results from a 1M block size FIO test, but I think
> that's enough.
>
> xen_blkfront.max Rate (MB/s) Percent of Dom-0
> 32 11.1 31.0%
> 48 15.3 42.7%
> 64 19.8 55.3%
> 80 19.9 55.6%
> 96 23.0 64.2%
> 112 23.7 66.2%
> 128 31.6 88.3%
>
> The rates above are compared against the dom-0 rate of 35.8 MB/s.
>
> > I would also add to the commit message that this change implies we can
> > now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
>
> The number could be even larger when using more pages for the
> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
> extend interface to support multi-page ring"; that helped improve I/O
> performance a lot on our system connected to high-end storage.
> I'm preparing to resend the related patches.
Or potentially making the request and response rings separate - with the
response ring entries not tied to the requests. As in, right now if we
have requests at, say, slots 1, 5, and 7, we expect the responses to be at
slots 1, 5, and 7 as well. If we made the response ring producer index
independent of the requests, we could put the responses in the first
available slots - say at 1, 2, and 3 (if all three responses came in at
the same time).
>
> > default number of persistent grants blkback can handle, so the LRU in
> > blkback will kick in.
> >
>
> Sounds good.
>
> --
> Regards,
> -Bob
>
On 17/12/14 16:13, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
>>
>> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
>>> On 16/12/14 at 11:11, Bob Liu wrote:
>>>> The default maximum number of segments in indirect requests was 32, so I/O
>>>> operations with a bigger block size (>32*4k) would be split and performance
>>>> would start to drop.
>>>>
>>>> Nowadays backend devices usually support a 512k max_sectors_kb on desktops,
>>>> and possibly larger on server machines with high-end storage systems.
>>>> The resulting default of 128k was not very appropriate, so this patch
>>>> increases the default maximum value to 128 (128*4k = 512k).
>>>
>>> This looks fine; do you have any data/graphs to back up your reasoning?
>>>
>>
>> I only have some results from a 1M block size FIO test, but I think
>> that's enough.
>>
>> xen_blkfront.max Rate (MB/s) Percent of Dom-0
>> 32 11.1 31.0%
>> 48 15.3 42.7%
>> 64 19.8 55.3%
>> 80 19.9 55.6%
>> 96 23.0 64.2%
>> 112 23.7 66.2%
>> 128 31.6 88.3%
>>
>> The rates above are compared against the dom-0 rate of 35.8 MB/s.
>>
>>> I would also add to the commit message that this change implies we can
>>> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
>>
>> The number could be even larger when using more pages for the
>> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
>> extend interface to support multi-page ring"; that helped improve I/O
>> performance a lot on our system connected to high-end storage.
>> I'm preparing to resend the related patches.
>
> Or potentially making the request and response rings separate - with the
> response ring entries not tied to the requests. As in, right now if we
> have requests at, say, slots 1, 5, and 7, we expect the responses to be at
> slots 1, 5, and 7 as well.
No. Responses are placed in the first available slot. The response is
associated with the original request by the ID field.
See make_response().
David
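A heavily simplified sketch of that pattern (all names and types here are made
up for illustration; the real code uses the blkif ring macros and
make_response() in xen-blkback): the backend writes each response at its own
producer index, and the frontend matches it back to the original request
purely via the id field.

#include <stdint.h>

/* Illustrative stand-ins for the shared-ring structures. */
struct demo_response {
	uint64_t id;        /* copied verbatim from the request it answers */
	int status;
};

struct demo_ring {
	struct demo_response rsp[32];
	unsigned int rsp_prod;          /* backend's private producer index */
};

/* Backend side: the response goes into the next free slot, regardless of
 * which ring slot the original request occupied. */
static void demo_make_response(struct demo_ring *ring, uint64_t req_id, int status)
{
	struct demo_response *rsp = &ring->rsp[ring->rsp_prod++ % 32];

	rsp->id = req_id;       /* the only link back to the request */
	rsp->status = status;
}

/* Frontend side: the id indexes the frontend's shadow of outstanding
 * requests, not the slot the response happened to land in. */
struct demo_shadow { void *request_private; };

static void *demo_lookup_request(struct demo_shadow *shadow,
				 const struct demo_response *rsp)
{
	return shadow[rsp->id].request_private;
}

int main(void)
{
	struct demo_ring ring = { .rsp_prod = 0 };
	struct demo_shadow shadow[8] = { [3] = { .request_private = (void *)0x1 } };

	/* Request with id 3 completes: response lands in slot 0, not slot 3. */
	demo_make_response(&ring, 3, 0);
	return demo_lookup_request(shadow, &ring.rsp[0]) != (void *)0x1;
}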
On Wed, Dec 17, 2014 at 04:34:41PM +0000, David Vrabel wrote:
> On 17/12/14 16:13, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
> >>
> >> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> >>> On 16/12/14 at 11:11, Bob Liu wrote:
> >>>> The default maximum number of segments in indirect requests was 32, so I/O
> >>>> operations with a bigger block size (>32*4k) would be split and performance
> >>>> would start to drop.
> >>>>
> >>>> Nowadays backend devices usually support a 512k max_sectors_kb on desktops,
> >>>> and possibly larger on server machines with high-end storage systems.
> >>>> The resulting default of 128k was not very appropriate, so this patch
> >>>> increases the default maximum value to 128 (128*4k = 512k).
> >>>
> >>> This looks fine; do you have any data/graphs to back up your reasoning?
> >>>
> >>
> >> I only have some results from a 1M block size FIO test, but I think
> >> that's enough.
> >>
> >> xen_blkfront.max Rate (MB/s) Percent of Dom-0
> >> 32 11.1 31.0%
> >> 48 15.3 42.7%
> >> 64 19.8 55.3%
> >> 80 19.9 55.6%
> >> 96 23.0 64.2%
> >> 112 23.7 66.2%
> >> 128 31.6 88.3%
> >>
> >> The rates above are compared against the dom-0 rate of 35.8 MB/s.
> >>
> >>> I would also add to the commit message that this change implies we can
> >>> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
> >>
> >> The number could be even larger when using more pages for the
> >> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
> >> extend interface to support multi-page ring"; that helped improve I/O
> >> performance a lot on our system connected to high-end storage.
> >> I'm preparing to resend the related patches.
> >
> > Or potentially making the request and response rings separate - with the
> > response ring entries not tied to the requests. As in, right now if we
> > have requests at, say, slots 1, 5, and 7, we expect the responses to be at
> > slots 1, 5, and 7 as well.
>
> No. Responses are placed in the first available slot. The response is
> associated with the original request by the ID field.
>
> See make_response().
You are right! Thank you for the clarification.
>
> David