Date: Thu, 18 Apr 2013 17:14:30 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jens Axboe
CC: Konrad Rzeszutek Wilk, martin.peterson@oracle.com,
    linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [PATCH v1 7/7] xen-block: implement indirect descriptors

On 18/04/13 16:26, Jens Axboe wrote:
>>>>>>> I've just set that to something that brings a performance benefit
>>>>>>> without having to map an insane number of persistent grants in
>>>>>>> blkback.
>>>>>>>
>>>>>>> Yes, the values are correct, but the device request queue (rq) is
>>>>>>> only able to provide read requests with 64 segments or write
>>>>>>> requests with 128 segments. I haven't been able to get larger
>>>>>>> requests, even when setting this to 512 or higher.
>>>>>>
>>>>>> What are you using to drive the requests? 'fio'?
>>>>>
>>>>> Yes, I've tried fio with several "bs=" values, but it doesn't seem
>>>>> to change the size of the underlying requests. Have you been able
>>>>> to get bigger requests?
>>>>
>>>> Martin, Jens,
>>>> Any way to drive more than 128 segments?
>>>
>>> If the driver is bio based, then there's a natural size constraint on
>>> the number of vecs in the bio. So to get truly large requests, the
>>> driver would need to merge incoming sequential IOs (similar to how
>>> it's done for rq based drivers).
>>
>> When you say rq based drivers, do you mean drivers with a request
>> queue?
>>
>> We are already using a request queue in blkfront, and I'm setting the
>> maximum number of segments per request using:
>>
>> blk_queue_max_segments(<rq>, <nr_segs>);
>>
>> But even when setting <nr_segs> to 256 or 512, I only get read
>> requests with 64 segments and write requests with 128 segments from
>> the queue.
>
> What kernel are you testing? The plugging is usually what will trigger
> a run of the queue, for rq based drivers. What does your fio job look
> like?

I'm currently testing on top of Konrad's for-jens-3.9 branch, which is
3.8.0-rc7. This is what my fio job looks like:

[read]
rw=read
size=900m
bs=4k
directory=/root/fio
loops=100000

I've tried several bs sizes: 4k, 16k, 128k, 1m, 10m, and as far as I
can see requests from the queue don't have more than 64 segments. Also
tried a simple dd with several bs sizes and got the same result.
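
For reference, a minimal sketch of the queue setup being discussed, in
the style of an rq-based driver of that era. The names (struct mydev,
mydev_request_fn, MY_MAX_SEGS) are illustrative stand-ins, not taken
from the xen-blkfront source:

#include <linux/blkdev.h>
#include <linux/spinlock.h>

#define MY_MAX_SEGS 256		/* desired segments per request */

struct mydev {
	struct request_queue *rq;
	spinlock_t lock;
};

static void mydev_request_fn(struct request_queue *q)
{
	/* fetch and service requests, e.g. via blk_fetch_request() */
}

static int mydev_init_queue(struct mydev *dev)
{
	spin_lock_init(&dev->lock);

	dev->rq = blk_init_queue(mydev_request_fn, &dev->lock);
	if (!dev->rq)
		return -ENOMEM;

	/* Upper bound on scatter-gather segments per request. */
	blk_queue_max_segments(dev->rq, MY_MAX_SEGS);

	/*
	 * Allow requests large enough to actually use that many
	 * segments: one 4k page per segment, 8 512-byte sectors
	 * per page.
	 */
	blk_queue_max_hw_sectors(dev->rq, MY_MAX_SEGS * 8);

	return 0;
}

Note that the segment cap is only an upper bound; as the thread shows,
the size of the requests actually dispatched also depends on how bios
get merged before they reach the driver.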
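And a sketch of the plugging Jens mentions, seen from the submitter's
side: bios issued under a plug are held back and merged, and the queue
is run when the plug is released. submit_batch() and its arguments are
hypothetical:

#include <linux/blkdev.h>
#include <linux/fs.h>

static void submit_batch(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);	/* defer queue runs while we batch */

	for (i = 0; i < nr; i++)
		submit_bio(READ, bios[i]);	/* 3.8-era (rw, bio) signature */

	blk_finish_plug(&plug);	/* unplug: merged requests reach request_fn */
}

Without the plug, each submit_bio() may kick the queue immediately, so
sequential 4k bios from a fio or dd run get less chance to merge into
the large multi-segment requests being chased here.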