Date: Wed, 17 Dec 2014 11:47:12 -0500
From: Konrad Rzeszutek Wilk
To: David Vrabel
Cc: Bob Liu, xen-devel@lists.xen.org, linux-kernel@vger.kernel.org,
	Roger Pau Monné
Subject: Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
Message-ID: <20141217164712.GG6414@laptop.dumpdata.com>
References: <1418724696-23922-1-git-send-email-bob.liu@oracle.com>
	<54900A3F.7070300@citrix.com> <54913C72.7090704@oracle.com>
	<20141217161323.GA6414@laptop.dumpdata.com>
	<5491B0A1.7080601@citrix.com>
In-Reply-To: <5491B0A1.7080601@citrix.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Dec 17, 2014 at 04:34:41PM +0000, David Vrabel wrote:
> On 17/12/14 16:13, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
> >>
> >> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> >>> On 16/12/14 at 11:11, Bob Liu wrote:
> >>>> The default maximum number of segments in indirect requests was 32;
> >>>> I/O operations with a bigger block size (>32*4k) would be split, and
> >>>> performance starts to drop.
> >>>>
> >>>> Nowadays backend devices usually support a 512k max_sectors_kb on
> >>>> desktops, and it may be larger on server machines with high-end
> >>>> storage systems.
> >>>> The default size of 128k was not appropriate, so this patch
> >>>> increases the default maximum value to 128 (128*4k = 512k).
> >>>
> >>> This looks fine; do you have any data/graphs to back up your reasoning?
> >>>
> >>
> >> I only have some results from a 1M block size FIO test, but I think
> >> that's enough.
> >>
> >> xen_blkfront.max   Rate (MB/s)   Percent of Dom-0
> >>  32                11.1          31.0%
> >>  48                15.3          42.7%
> >>  64                19.8          55.3%
> >>  80                19.9          55.6%
> >>  96                23.0          64.2%
> >> 112                23.7          66.2%
> >> 128                31.6          88.3%
> >>
> >> The rates above are compared against the dom-0 rate of 35.8 MB/s.
> >>
> >>> I would also add to the commit message that this change implies we can
> >>> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
> >>
> >> The number could be larger still if more pages are used for the
> >> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
> >> extend interface to support multi-page ring"; it helped improve I/O
> >> performance a lot on our system connected to high-end storage.
> >> I'm preparing to resend the related patches.
> >
> > Or potentially making the request and response rings separate - and the
> > response ring entries not tied to the requests. As it stands, if we
> > have requests at, say, slots 1, 5, and 7, we expect the responses to be
> > at slots 1, 5, and 7 as well.
>
> No. Responses are placed in the first available slot. The response is
> associated with the original request by the ID field.
>
> See make_response().

You are right! Thank you for the update.

> David
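The matching scheme David describes can be sketched roughly as below. This is a simplified illustration, not the actual Xen code: the names (`demo_req`, `demo_rsp`, `demo_ring`, `demo_make_response`, `demo_consume`) and the fixed ring size are invented for the example, and the real shared-ring machinery lives in Xen's `ring.h`/`blkif.h` with many more fields. The point it shows is that a response lands in the next free slot, and the `id` copied from the original request is what ties it back.

```c
#include <stdint.h>

#define DEMO_RING_SIZE 32

/* Hypothetical, stripped-down entries; the real blkif_request and
 * blkif_response also carry the operation, segments, status, etc. */
struct demo_req { uint64_t id; };
struct demo_rsp { uint64_t id; int status; };

struct demo_ring {
	struct demo_rsp rsp[DEMO_RING_SIZE];
	unsigned int rsp_prod;	/* next free response slot */
};

/* Backend side (cf. make_response() in xen-blkback): the response is
 * placed in the next free slot, *not* in the slot the request
 * occupied; the request's id is copied into the response. */
static void demo_make_response(struct demo_ring *ring, uint64_t id, int status)
{
	struct demo_rsp *rsp = &ring->rsp[ring->rsp_prod++ % DEMO_RING_SIZE];

	rsp->id = id;
	rsp->status = status;
}

/* Frontend side: consume responses in slot order and use id to find
 * which outstanding request each one completes. */
static uint64_t demo_consume(const struct demo_ring *ring, unsigned int idx,
			     int *status)
{
	const struct demo_rsp *rsp = &ring->rsp[idx % DEMO_RING_SIZE];

	*status = rsp->status;
	return rsp->id;
}
```

So even if requests were queued in ring slots 1, 5, and 7, their responses may come back in slots 0, 1, and 2; the frontend's completion path looks up the in-flight request by `id`, not by slot position.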