Date: Tue, 3 Nov 2015 20:51:05 -0500
From: Konrad Rzeszutek Wilk
To: Bob Liu
Cc: xen-devel@lists.xen.org, linux-kernel@vger.kernel.org, roger.pau@citrix.com,
	felipe.franciosi@citrix.com, axboe@fb.com, avanzini.arianna@gmail.com,
	rafal.mielniczuk@citrix.com, jonathan.davies@citrix.com, david.vrabel@citrix.com
Subject: Re: [PATCH v4 04/10] xen/blkfront: split per device io_lock
Message-ID: <20151104015104.GA3882@x230.dumpdata.com>
In-Reply-To: <56395A40.70204@oracle.com>

On Wed, Nov 04, 2015 at 09:07:12AM +0800, Bob Liu wrote:
> 
> On 11/04/2015 04:09 AM, Konrad Rzeszutek Wilk wrote:
> > On Mon, Nov 02, 2015 at 12:21:40PM +0800, Bob Liu wrote:
> >> The per device io_lock became a coarser grained lock after multi-queues/rings
> >> was introduced, this patch introduced a fine-grained ring_lock for each ring.
> > 
> > s/was introduced/was introduced (see commit titled XYZ)/
> > 
> > s/introduced/introduces/
> >> 
> >> The old io_lock was renamed to dev_lock and only protect the ->grants list
> > 
> > s/was/is/
> > s/protect/protects/
> > 
> >> which is shared by all rings.
> >> 
> >> Signed-off-by: Bob Liu
> >> ---
> >>  drivers/block/xen-blkfront.c | 57 ++++++++++++++++++++++++++------------------
> >>  1 file changed, 34 insertions(+), 23 deletions(-)
> >> 
> >> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> >> index eab78e7..8cc5995 100644
> >> --- a/drivers/block/xen-blkfront.c
> >> +++ b/drivers/block/xen-blkfront.c
> >> @@ -121,6 +121,7 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
> >>   */
> >>  struct blkfront_ring_info {
> >>  	struct blkif_front_ring ring;
> > 
> > Can you add a comment explaining the lock semantic? As in under what conditions
> > should it be taken? Like you have it below.
> > 
> >> +	spinlock_t ring_lock;
> >>  	unsigned int ring_ref[XENBUS_MAX_RING_PAGES];
> >>  	unsigned int evtchn, irq;
> >>  	struct work_struct work;
> >> @@ -138,7 +139,8 @@ struct blkfront_ring_info {
> >>   */
> >>  struct blkfront_info
> >>  {
> >> -	spinlock_t io_lock;
> >> +	/* Lock to proect info->grants list shared by multi rings */
> > 
> > s/proect/protect/
> > 
> > Missing full stop.
> > 
> >> +	spinlock_t dev_lock;
> > 
> > Shouldn't it be right next to what it is protecting?
> > 
> > That is right below (or above): 'struct list_head grants;'?
> > 
> >>  	struct mutex mutex;
> >>  	struct xenbus_device *xbdev;
> >>  	struct gendisk *gd;
> >> @@ -224,6 +226,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
> >>  	struct grant *gnt_list_entry, *n;
> >>  	int i = 0;
> >> 
> >> +	spin_lock_irq(&info->dev_lock);
> > 
> > Why there? Why not where you add it to the list?
> >>  	while(i < num) {
> >>  		gnt_list_entry = kzalloc(sizeof(struct grant), GFP_NOIO);
> >>  		if (!gnt_list_entry)
> >> @@ -242,6 +245,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
> >>  		list_add(&gnt_list_entry->node, &info->grants);
> > 
> > Right here that is?
> > 
> > You are holding the lock for the duration of 'kzalloc' and 'alloc_page'.
> > 
> > And more interestingly, GFP_NOIO translates to __GFP_WAIT which means
> > it can call 'schedule'. - And you have taken a spinlock. That should
> > have thrown lots of warnings?
> > 
> >>  		i++;
> >>  	}
> >> +	spin_unlock_irq(&info->dev_lock);
> >> 
> >>  	return 0;
> >> 
> >> @@ -254,6 +258,7 @@ out_of_memory:
> >>  		kfree(gnt_list_entry);
> >>  		i--;
> >>  	}
> >> +	spin_unlock_irq(&info->dev_lock);
> > 
> > Just do it around the 'list_del' operation. You are using a
> > 'safe'
> >>  	BUG_ON(i != 0);
> >>  	return -ENOMEM;
> >>  }
> >> @@ -265,6 +270,7 @@ static struct grant *get_grant(grant_ref_t *gref_head,
> >>  	struct grant *gnt_list_entry;
> >>  	unsigned long buffer_gfn;
> >> 
> >> +	spin_lock(&info->dev_lock);
> >>  	BUG_ON(list_empty(&info->grants));
> >>  	gnt_list_entry = list_first_entry(&info->grants, struct grant,
> >>  					  node);
> >> @@ -272,8 +278,10 @@ static struct grant *get_grant(grant_ref_t *gref_head,
> >> 
> >>  	if (gnt_list_entry->gref != GRANT_INVALID_REF) {
> >>  		info->persistent_gnts_c--;
> >> +		spin_unlock(&info->dev_lock);
> >>  		return gnt_list_entry;
> >>  	}
> >> +	spin_unlock(&info->dev_lock);
> > 
> > Just have one spin_unlock. Put it right before the 'if (gnt_list_entry->gref)..'.
> 
> That's used to protect info->persistent_gnts_c; will update all the other places.

But you don't mention that in the description - that the lock is supposed to
also protect persistent_gnts_c. Please update that.

> 
> Thanks,
> -Bob