Date: Tue, 20 Oct 2009 04:33:37 -0400 (EDT)
From: Justin Piszcz
To: Dave Chinner
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz
Subject: Re: 2.6.31+2.6.31.4: XFS - All I/O locks up to D-state after 24-48 hours (sysrq-t+w available)

On Tue, 20 Oct 2009, Dave Chinner wrote:

> On Mon, Oct 19, 2009 at 06:18:58AM -0400, Justin Piszcz wrote:
>> On Mon, 19 Oct 2009, Dave Chinner wrote:
>>> On Sun, Oct 18, 2009 at 04:17:42PM -0400, Justin Piszcz wrote:
>>>> It has happened again, all sysrq-X output was saved this time.
>>> .....
>>>
>>> All pointing to log IO not completing.
>>>
> ....
>> So far I do not have a reproducible test case,
>
> Ok. What sort of load is being placed on the machine?

Hello, generally the load is low; the machine mainly serves out some Samba shares.

>> the only other thing not posted was the output of ps auxww during
>> the time of the lockup, not sure if it will help, but here it is:
>>
>> USER       PID %CPU %MEM   VSZ  RSS TTY STAT START  TIME COMMAND
>> root         1  0.0  0.0 10320  684 ?   Ss   Oct16  0:00 init [2]
> ....
>> root       371  0.0  0.0     0    0 ?   R<   Oct16  0:01 [xfslogd/0]
>> root       372  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfslogd/1]
>> root       373  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfslogd/2]
>> root       374  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfslogd/3]
>> root       375  0.0  0.0     0    0 ?   R<   Oct16  0:00 [xfsdatad/0]
>> root       376  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfsdatad/1]
>> root       377  0.0  0.0     0    0 ?   S<   Oct16  0:03 [xfsdatad/2]
>> root       378  0.0  0.0     0    0 ?   S<   Oct16  0:01 [xfsdatad/3]
>> root       379  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfsconvertd/0]
>> root       380  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfsconvertd/1]
>> root       381  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfsconvertd/2]
>> root       382  0.0  0.0     0    0 ?   S<   Oct16  0:00 [xfsconvertd/3]
> .....
>
> It appears that both the xfslogd and the xfsdatad on CPU 0 are in
> the running state but don't appear to be consuming any significant
> CPU time. If they remain like this then I think that means they are
> stuck waiting on the run queue. Do these XFS threads always appear
> like this when the hang occurs? If so, is there something else that
> is hogging CPU 0 preventing these threads from getting the CPU?

Yes, the XFS threads show up like this each time the machine locks up. So far 2.6.30.9 has survived 48+ hours without hanging, so the problem appears to have been introduced somewhere between 2.6.30.9 and 2.6.31.x. Any recommendations on how to catch this bug, e.g. which debug options to enable?
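For what it's worth, here is roughly what I plan to turn on for the next run on 2.6.31.x (just a sketch, assuming CONFIG_DETECT_HUNG_TASK and CONFIG_MAGIC_SYSRQ are built in; the knobs are described in Documentation/sysctl/kernel.txt):

   # Warn when a task sits in D state for too long
   # (requires CONFIG_DETECT_HUNG_TASK, available since 2.6.30)
   echo 60 > /proc/sys/kernel/hung_task_timeout_secs
   echo 0  > /proc/sys/kernel/hung_task_panic   # 1 = panic on detection, for kdump/netconsole capture

   # Keep sysrq-t/sysrq-w dumps available once it wedges
   echo 1 > /proc/sys/kernel/sysrq

   # Per your CPU 0 question: list every thread whose last CPU was 0,
   # to see what might be keeping xfslogd/0 and xfsdatad/0 off the CPU
   ps -eLo pid,tid,psr,stat,pcpu,comm | awk '$3 == 0'

If it wedges again with those in place, the hung-task warnings should at least show where the log I/O is stuck.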