Date: Tue, 2 Apr 2013 05:31:06 -0400 (EDT)
From: CAI Qian
To: Jens Axboe
Cc: Dave Chinner, xfs@oss.sgi.com, LKML
Subject: Re: Loopback device hung [was Re: xfs deadlock on 3.9-rc5 running xfstests case #78]

----- Original Message -----
> From: "Jens Axboe"
> To: "CAI Qian"
> Cc: "Dave Chinner", xfs@oss.sgi.com, "LKML"
> Sent: Tuesday, April 2, 2013 5:00:47 PM
> Subject: Re: Loopback device hung [was Re: xfs deadlock on 3.9-rc5 running xfstests case #78]
>
> On Tue, Apr 02 2013, CAI Qian wrote:
> >
> > ----- Original Message -----
> > > From: "Jens Axboe"
> > > To: "Dave Chinner"
> > > Cc: "CAI Qian", xfs@oss.sgi.com, "LKML"
> > > Sent: Tuesday, April 2, 2013 3:30:35 PM
> > > Subject: Re: Loopback device hung [was Re: xfs deadlock on 3.9-rc5
> > > running xfstests case #78]
> > >
> > > On Tue, Apr 02 2013, Jens Axboe wrote:
> > > > On Tue, Apr 02 2013, Dave Chinner wrote:
> > > > > [Added Jens Axboe to CC]
> > > > >
> > > > > On Tue, Apr 02, 2013 at 02:08:49AM -0400, CAI Qian wrote:
> > > > > > Saw this on almost all the servers, ranging from x64, ppc64 and
> > > > > > s390x, with kernel 3.9-rc5 and xfsprogs-3.1.10. Never caught this
> > > > > > in 3.9-rc4, so it looks like something new broke this. Log is here
> > > > > > with sysrq debug info.
> > > > > > http://people.redhat.com/qcai/stable/log
> > > >
> > > > CAI Qian, can you try and back the below out and test again?
> > >
> > > Nevermind, it's clearly that one. The below should improve the
> > > situation, but it's not pretty. A better fix would be to allow
> > > auto-deletion even if PART_NO_SCAN is set.
> >
> > Jens, when I compiled the mainline (up to fefcdbe) with this patch,
> > it errored out.
>
> Looks like I sent the wrong one, updated below.

The patch works well. Thanks!
CAI Qian

> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index fe5f640..faa3afa 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1057,14 +1057,15 @@ static int loop_clr_fd(struct loop_device *lo)
>  		struct disk_part_iter piter;
>  		struct hd_struct *part;
>
> -		mutex_lock_nested(&bdev->bd_mutex, 1);
> -		invalidate_partition(bdev->bd_disk, 0);
> -		disk_part_iter_init(&piter, bdev->bd_disk,
> -				DISK_PITER_INCL_EMPTY);
> -		while ((part = disk_part_iter_next(&piter)))
> -			delete_partition(bdev->bd_disk, part->partno);
> -		disk_part_iter_exit(&piter);
> -		mutex_unlock(&bdev->bd_mutex);
> +		if (mutex_trylock(&bdev->bd_mutex)) {
> +			invalidate_partition(bdev->bd_disk, 0);
> +			disk_part_iter_init(&piter, bdev->bd_disk,
> +					DISK_PITER_INCL_EMPTY);
> +			while ((part = disk_part_iter_next(&piter)))
> +				delete_partition(bdev->bd_disk, part->partno);
> +			disk_part_iter_exit(&piter);
> +			mutex_unlock(&bdev->bd_mutex);
> +		}
>  	}
>
>  /*
>
> --
> Jens Axboe
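
The change above boils down to one pattern: instead of blocking unconditionally on bdev->bd_mutex with mutex_lock_nested() in the loop_clr_fd() teardown path, try to take the mutex and simply skip the partition cleanup when it is already held, so the teardown can never hang on that lock. Below is a minimal, self-contained userspace sketch of the same trylock pattern, using POSIX threads rather than kernel primitives; every name in it (bd_mutex_sketch, delete_stale_partitions, clr_fd_like_teardown) is an illustrative stand-in, not a symbol from drivers/block/loop.c.

#include <pthread.h>
#include <stdio.h>

/* Stand-in for bdev->bd_mutex in the patch above. */
static pthread_mutex_t bd_mutex_sketch = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the invalidate_partition()/delete_partition() loop. */
static void delete_stale_partitions(void)
{
	printf("partitions cleaned up\n");
}

/* Teardown path: try the lock, skip the optional cleanup if contended. */
static void clr_fd_like_teardown(void)
{
	/*
	 * pthread_mutex_trylock() returns 0 when the lock was taken and
	 * EBUSY when another thread holds it. Skipping the cleanup on
	 * contention trades completeness for never blocking here, which
	 * is the same trade-off the loop.c patch makes.
	 */
	if (pthread_mutex_trylock(&bd_mutex_sketch) == 0) {
		delete_stale_partitions();
		pthread_mutex_unlock(&bd_mutex_sketch);
	} else {
		printf("lock busy, skipping partition cleanup\n");
	}
}

int main(void)
{
	clr_fd_like_teardown();
	return 0;
}

The cost of this approach, which Jens flags as "not pretty", is that stale partitions can survive a clear when the lock is contended; the cleaner fix he suggests is to allow auto-deletion of those partitions even when PART_NO_SCAN is set.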