Date: Thu, 28 Apr 2011 00:32:50 +0200 (CEST)
From: Thomas Gleixner
To: "Paul E. McKenney"
Cc: Bruno Prémont, Linus Torvalds, Ingo Molnar, Peter Zijlstra,
    Mike Frysinger, KOSAKI Motohiro, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Pekka Enberg
Subject: Re: 2.6.39-rc4+: Kernel leaking memory during FS scanning, regression?
In-Reply-To: <20110427222727.GU2135@linux.vnet.ibm.com>

On Wed, 27 Apr 2011, Paul E. McKenney wrote:
> On Thu, Apr 28, 2011 at 12:06:11AM +0200, Thomas Gleixner wrote:
> > On Wed, 27 Apr 2011, Bruno Prémont wrote:
> > > On Wed, 27 April 2011 Bruno Prémont wrote:
> > > Voluntary context switches stay constant from the time SLABs start piling up.
> > > (which makes sense as it doesn't get CPU slices anymore)
> > >
> > > > > Can you please enable CONFIG_SCHED_DEBUG and provide the output of
> > > > > /proc/sched_stat when the problem surfaces and a minute after the
> > > > > first snapshot?
> > >
> > > hm, did you mean CONFIG_SCHEDSTATS or /proc/sched_debug?
> > >
> > > I did use CONFIG_SCHED_DEBUG (and there is no /proc/sched_stat) so I took
> > > /proc/sched_debug which exists... (attached, taken about 7min and +1min
> > > after SLABs started piling up), though build processes were SIGSTOPped
> > > during the first minute.
> >
> > Oops. /proc/sched_debug is the right thing.
> >
> > > printk wrote (in case its timestamp is useful, more below):
> > > [  518.480103] sched: RT throttling activated
> >
> > Ok. Aside of the fact that the CPU time accounting is completely hosed,
> > this is pointing to the root cause of the problem.
> >
> > kthread_rcu seems to run in circles for whatever reason and the RT
> > throttler catches it. After that, things go down the drain completely,
> > as it should get back on the CPU again after that 50ms throttling break.
>
> Ah.  This could happen if there was a huge number of callbacks, in
> which case blimit would be set very large and kthread_rcu could then
> go CPU-bound.  And this workload was generating large numbers of
> callbacks due to filesystem operations, right?
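As a rough userspace sketch (not the kernel's RCU code) of the effect Paul
describes above: with a small batch limit the callback thread gives the CPU
away between batches, but once the limit is cranked up to "unlimited" it
drains the whole backlog in one pass and stays CPU-bound. All names below
(cb, batch_limit, process_callbacks) are made up for illustration.

	#include <sched.h>
	#include <limits.h>
	#include <stddef.h>

	struct cb {
		struct cb *next;
		void (*func)(struct cb *);
	};

	static struct cb *queue;		/* pending callbacks        */
	static long batch_limit = 10;		/* small limit by default   */

	static void process_callbacks(void)
	{
		long n = 0;

		/* Invoke at most batch_limit callbacks per pass. */
		while (queue && n < batch_limit) {
			struct cb *head = queue;

			queue = head->next;
			head->func(head);
			n++;
		}

		/*
		 * With batch_limit == LONG_MAX the loop above only stops once
		 * the whole backlog is gone, so under a callback flood this
		 * thread hogs the CPU until the scheduler throttles it.
		 */
		sched_yield();
	}

	static void noop(struct cb *c) { (void)c; }

	int main(void)
	{
		struct cb one = { NULL, noop };

		queue = &one;
		process_callbacks();	/* small limit: yields after the batch */

		batch_limit = LONG_MAX;	/* "huge number of callbacks" mode */
		process_callbacks();
		return 0;
	}
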
> So, perhaps I should kick kthread_rcu back to SCHED_NORMAL if blimit
> has been set high.  Or have some throttling of my own.  I must confess
> that throttling kthread_rcu for two hours seems a bit harsh.  ;-)

That's not the intended thing. See below.

> If this was just throttling kthread_rcu for a few hundred milliseconds,
> or even for a second or two, things would be just fine.
>
> Left to myself, I will put together a patch that puts callback processing
> down to SCHED_NORMAL in the case where there are huge numbers of
> callbacks to be processed.

Well, that's possibly going to paper over the problem at hand. I really
don't see why that thing would run for more than 950ms in a row even if
there is a large number of callbacks pending.

And then I don't have an explanation for the hosed CPU accounting and
why that thing does not get another 950ms of RT time when the 50ms
throttling break is over.

Thanks,

	tglx
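For reference on the 950ms/50ms figures above: by default the scheduler lets
RT tasks consume sched_rt_runtime_us (950000) out of every sched_rt_period_us
(1000000) microseconds, and throttles them for the remainder of the period.
A minimal sketch of that arithmetic, assuming the stock sysctl defaults:

	#include <stdio.h>

	int main(void)
	{
		/* Stock defaults of /proc/sys/kernel/sched_rt_{runtime,period}_us. */
		long rt_runtime_us = 950000;
		long rt_period_us  = 1000000;

		printf("RT tasks may run %ld ms out of every %ld ms\n",
		       rt_runtime_us / 1000, rt_period_us / 1000);
		printf("throttled window per period: %ld ms\n",
		       (rt_period_us - rt_runtime_us) / 1000);
		return 0;
	}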