Date: Fri, 18 Feb 2011 09:04:21 -0500
From: Mike Snitzer
To: NeilBrown
Cc: Vivek Goyal, Jens Axboe, linux-kernel@vger.kernel.org
Subject: Re: blk_throtl_exit taking q->queue_lock is problematic
Message-ID: <20110218140420.GA20275@redhat.com>
In-Reply-To: <20110218143325.5738e127@notabene.brown>

On Thu, Feb 17 2011 at 10:33pm -0500, NeilBrown wrote:

> On Thu, 17 Feb 2011 22:19:52 -0500 Mike Snitzer wrote:
>
> > On Thu, Feb 17, 2011 at 9:40 PM, NeilBrown wrote:
> > > On Thu, 17 Feb 2011 11:59:06 -0500 Vivek Goyal wrote:
> > >
> > >> So if we do this change for performance reasons, it still makes sense,
> > >> but doing it because md provided a q->queue_lock and then took that
> > >> lock away without notifying the block layer is still not the right
> > >> reason, IMHO.
> > >
> > > Well... I like that patch, as it makes my life easier...
> > >
> > > But I agree that md is doing something wrong.  Now that ->queue_lock is
> > > always initialised, it is wrong to leave it in a state where it is not
> > > defined.
> > >
> > > So maybe I'll apply this (after testing it a bit).  The only reason for
> > > taking queue_lock in a couple of places is to silence some warnings.
> > >
> > > Thanks,
> > > NeilBrown
> > >
> > >
> > > diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> > > index a23ffa3..909282d 100644
> > > --- a/drivers/md/raid1.c
> > > +++ b/drivers/md/raid1.c
> > > @@ -959,7 +961,9 @@ static int make_request(mddev_t *mddev, struct bio * bio)
> > >                 atomic_inc(&r1_bio->remaining);
> > >                 spin_lock_irqsave(&conf->device_lock, flags);
> > >                 bio_list_add(&conf->pending_bio_list, mbio);
> > > +               spin_lock(mddev->queue->queue_lock);
> > >                 blk_plug_device(mddev->queue);
> > > +               spin_unlock(mddev->queue->queue_lock);
> > >                 spin_unlock_irqrestore(&conf->device_lock, flags);
> > >         }
> > >         r1_bio_write_done(r1_bio, bio->bi_vcnt, behind_pages, behind_pages != NULL);
> >
> > Noticed an inconsistency: raid10.c's additional locking also protects
> > the bio_list_add() whereas raid1.c's doesn't.  Seems the additional
> > protection in raid10 isn't needed?
>
> Correct - not needed at all.
> I put it there because it felt a little cleaner keeping the two 'lock's
> together like the two 'unlock's.  Probably confusing though...
>
> My other thought is to stop using the block-layer plugging altogether like I
> have in RAID5 (which I needed to do to make it work with DM).  Then I
> wouldn't need to touch queue_lock at all - very tempting.

FYI, Jens has a considerable plugging overhaul staged in his tree for
2.6.39:
http://git.kernel.dk/?p=linux-2.6-block.git;a=shortlog;h=refs/heads/for-2.6.39/stack-plug

So if you do wean MD off of the block-layer's plugging, Jens will need to
adapt his patches that touch MD... not a big deal.
Mike