From: Mel Gorman
Subject: Re: [BUG] fatal hang untarring 90GB file, possibly writeback related.
Date: Fri, 6 May 2011 08:42:24 +0100
Message-ID: <20110506074224.GB6591@suse.de>
References: <1304009778.2598.10.camel@mulgrave.site>
	<20110428171826.GZ4658@suse.de>
	<1304015436.2598.19.camel@mulgrave.site>
	<20110428192104.GA4658@suse.de>
	<1304020767.2598.21.camel@mulgrave.site>
	<1304025145.2598.24.camel@mulgrave.site>
	<1304030629.2598.42.camel@mulgrave.site>
	<20110503091320.GA4542@novell.com>
	<1304431982.2576.5.camel@mulgrave.site>
	<1304432553.2576.10.camel@mulgrave.site>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Cc: Mel Gorman, Jan Kara, colin.king@canonical.com, Chris Mason,
	linux-fsdevel, linux-mm, linux-kernel, linux-ext4
To: James Bottomley
Return-path:
Received: from cantor.suse.de ([195.135.220.2]:51032 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id
	S1754041Ab1EFHma (ORCPT); Fri, 6 May 2011 03:42:30 -0400
Content-Disposition: inline
In-Reply-To: <1304432553.2576.10.camel@mulgrave.site>
Sender: linux-ext4-owner@vger.kernel.org
List-ID:

On Tue, May 03, 2011 at 09:22:33AM -0500, James Bottomley wrote:
> On Tue, 2011-05-03 at 09:13 -0500, James Bottomley wrote:
> > I've got a ftrace output of kswapd ... it's 500k compressed, so I'll
> > send under separate cover.
> 
> Here it is ... it's under 2.6.38.4 vanilla, but the code is similar.
> 

I was quiet because I was off trying to reproduce this but not having much
luck. It doesn't seem directly related to filesystems or cgroups.
For example, here is what I see with ext4 without cgroups:

                    2.6.34-vanilla  2.6.37-vanilla  2.6.38-vanilla   rc6-vanilla
download tar          70 ( 0.00%)    68 ( 2.94%)     69 ( 1.45%)    70 ( 0.00%)
unpack tar           601 ( 0.00%)   605 (-0.66%)    604 (-0.50%)   605 (-0.66%)
copy source files    319 ( 0.00%)   321 (-0.62%)    320 (-0.31%)   332 (-3.92%)
create tarfile      1368 ( 0.00%)  1372 (-0.29%)   1371 (-0.22%)  1363 ( 0.37%)
delete source dirs    21 ( 0.00%)    21 ( 0.00%)     23 (-8.70%)    22 (-4.55%)
expand tar           263 ( 0.00%)   261 ( 0.77%)    257 ( 2.33%)   259 ( 1.54%)

(all results are in seconds)

When running in cgroups, the results are similar - a bit slower, but not
remarkably so. ext3 is slower, but not enough to count as the bug.

The trace you posted is very short, but kswapd is not going to sleep in it.
It's less than a second's worth on different CPUs, so it's hard to draw any
conclusion from it other than that sleeping_prematurely() is often deciding
that kswapd should not sleep. So let's consider what keeps it awake.

1. High-order allocations? Your machine is using i915 and RPC, something
   neither of my test machines uses. i915 is potentially a source of
   high-order allocations. I'm attaching a perl script. Please run it as

	./watch-highorder.pl --output /tmp/highorders.txt

   while you are running tar. When kswapd has been running for about 30
   seconds, interrupt it with ctrl+c twice in quick succession and post
   /tmp/highorders.txt.

2. All-unreclaimable is not being set, or we are not balancing at all. Can
   you post the output of sysrq+m while the machine is struggling, please?

3. Slab may not be shrinking for some reason. Can you run a shell script
   like this during the whole test and record its output, please?

	#!/bin/bash
	while [ 1 ]; do
		echo time: `date +%s`
		cat /proc/vmstat
		sleep 2
	done

   Similarly, if this is a slab issue, it'd be nice to know who it is, so:

	#!/bin/bash
	while [ 1 ]; do
		echo time: `date +%s`
		cat /proc/slabinfo
		sleep $MONITOR_UPDATE_FREQUENCY
	done

4.
Let's get a better look at what is going on in kswapd:

	echo 1 > /sys/kernel/debug/tracing/events/vmscan/enable
	cat /sys/kernel/debug/tracing/trace_pipe > vmscan-ftrace.txt

Out of curiosity, what is the exact machine you are testing on? I want to
see what sort of hardware combination triggers this problem. Is the tar
local, or is it copied over NFS? What is the configuration of the disk or
partition you are copying to?

Thanks

-- 
Mel Gorman
SUSE Labs
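The /proc/vmstat log that the step-3 monitoring loop records is easiest to
read as per-sample deltas of a single counter. Below is a minimal sketch of
such a filter; the counter name (`nr_slab_reclaimable`) and the inlined
two-sample log are illustrative examples only, though the log format matches
what the step-3 loop emits ("time: <secs>" headers followed by the contents
of /proc/vmstat).

```shell
#!/bin/bash
# Sketch: print the change in one vmstat counter between consecutive
# samples of a log in the format produced by the step-3 monitoring loop.
# Reads the log on stdin; counter name is the first argument.
vmstat_delta() {
	awk -v counter="$1" '
	$1 == "time:" { t = $2 }          # remember the current sample time
	$1 == counter {
		if (seen)
			printf "%s %+d %s\n", t, $2 - prev, counter
		prev = $2; seen = 1
	}'
}

# Two fake samples two seconds apart; real input would be the recorded log.
# prints: 1304500002 +400 nr_slab_reclaimable
vmstat_delta nr_slab_reclaimable <<'EOF'
time: 1304500000
nr_free_pages 1000
nr_slab_reclaimable 5000
time: 1304500002
nr_free_pages 900
nr_slab_reclaimable 5400
EOF
```

A steadily positive delta on the slab counters across the whole run would
point at the shrinkers, which is what step 3 is trying to establish.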
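The two tracing commands in step 4 can be wrapped in one small capture
script. This is only a sketch under assumptions not stated in the mail: it
assumes debugfs is mounted at /sys/kernel/debug, that the kernel exposes the
vmscan tracepoints there, and that it is run as root; the output filename is
an example.

```shell
#!/bin/bash
# Sketch: capture vmscan tracepoint activity while the workload runs.
# Assumes debugfs is mounted at /sys/kernel/debug; run as root.
TRACING=/sys/kernel/debug/tracing
OUT=${1:-vmscan-ftrace.txt}

if [ ! -d "$TRACING/events/vmscan" ]; then
	echo "vmscan tracepoints not available at $TRACING" >&2
	exit 1
fi

echo 1 > "$TRACING/events/vmscan/enable"
# Switch tracing off again when the capture is interrupted with ctrl+c.
trap 'echo 0 > "$TRACING/events/vmscan/enable"' EXIT

# trace_pipe blocks and streams events as they arrive; stop with ctrl+c
# once the tar workload has run long enough to show kswapd's behaviour.
cat "$TRACING/trace_pipe" > "$OUT"
```

Resetting the enable flag on exit matters because events left enabled keep
filling the trace buffers after the capture ends.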