Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S934094AbZGQGNX (ORCPT );
	Fri, 17 Jul 2009 02:13:23 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S933903AbZGQGNX (ORCPT );
	Fri, 17 Jul 2009 02:13:23 -0400
Received: from sparc.brc.ubc.ca ([137.82.2.12]:51444 "EHLO sparc.brc.ubc.ca"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S933901AbZGQGNW (ORCPT );
	Fri, 17 Jul 2009 02:13:22 -0400
Date: Thu, 16 Jul 2009 23:32:33 -0700 (PDT)
From: "Li, Ming Chun"
To: KOSAKI Motohiro
Cc: LKML , "linux-mm@kvack.org"
Subject: Re: [PATCH] mm: count only reclaimable lru pages
In-Reply-To: <20090717135632.A91A.A69D9226@jp.fujitsu.com>
Message-ID:
References: <23396.1247764286@redhat.com> <20090717135632.A91A.A69D9226@jp.fujitsu.com>
User-Agent: Alpine 1.00 (DEB 882 2007-12-20)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 5758
Lines: 100

On Fri, 17 Jul 2009, KOSAKI Motohiro wrote:

> > On Thu, 16 Jul 2009, David Howells wrote:
> >
> > > Rik van Riel wrote:
> > >
> > > > It's part of a series of patches, including the three posted by Kosaki-san
> > > > last night (to track the number of isolated pages) and the patch I posted
> > > > last night (to throttle reclaim when too many pages are isolated).
> > >
> > > Okay; Rik gave me a tarball of those patches, which I applied and re-ran the
> > > test. The first run of msgctl11 produced lots of:
> > >
> > > [root@andromeda ltp]# while ./testcases/bin/msgctl11; do :; done

> > I applied the series of patches on 2.6.31-rc3 and ran

> > while ./testcases/bin/msgctl11; do :; done

> > four times; I only got one OOM kill, in the first round, and the system was
> > quite responsive all the time.

> > # while ./testcases/bin/msgctl11; do :; done

> > ---
> > kernel: [ 735.507878] msgctl11 invoked oom-killer: gfp_mask=0x84d0, order=0, oom_adj=0
>
> GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO

Ah, ./scripts/gfp-translate 0x84d0 gives:

__GFP_WAIT | __GFP_IO | __GFP_FS | __GFP_REPEAT | __GFP_ZERO

> > kernel: [ 735.507884] msgctl11 cpuset=/ mems_allowed=0
> > kernel: [ 735.507888] Pid: 20631, comm: msgctl11 Not tainted 2.6.31-rc3-custom #1
> > kernel: [ 735.507891] Call Trace:
> > kernel: [ 735.507900] [] oom_kill_process+0x161/0x280
> > kernel: [ 735.507905] [] ? select_bad_process+0x63/0xd0
> > kernel: [ 735.507909] [] __out_of_memory+0x4e/0xb0
> > kernel: [ 735.507913] [] out_of_memory+0x52/0xa0
> > kernel: [ 735.507917] [] __alloc_pages_nodemask+0x4d7/0x4f0
> > kernel: [ 735.507922] [] __get_free_pages+0x17/0x30
> > kernel: [ 735.507927] [] pgd_alloc+0x36/0x250
> > kernel: [ 735.507932] [] ? dup_fd+0x23/0x340
> > kernel: [ 735.507936] [] ? dup_mm+0x47/0x350
> > kernel: [ 735.507939] [] mm_init+0xa9/0xe0
> > kernel: [ 735.507943] [] dup_mm+0x79/0x350
> > kernel: [ 735.507947] [] ? copy_fs_struct+0x22/0x90
> > kernel: [ 735.507951] [] ? copy_process+0xc75/0x1070
> > kernel: [ 735.507955] [] copy_process+0xa30/0x1070
> > kernel: [ 735.507959] [] ? schedule+0x494/0xa80
> > kernel: [ 735.507963] [] do_fork+0x6f/0x330
> > kernel: [ 735.507968] [] ? recalc_sigpending+0xe/0x40
> > kernel: [ 735.507972] [] sys_clone+0x36/0x40
> > kernel: [ 735.507976] [] sysenter_do_call+0x12/0x28
> > kernel: [ 735.507979] Mem-Info:
> > kernel: [ 735.507981] DMA per-cpu:
> > kernel: [ 735.507983] CPU 0: hi: 0, btch: 1 usd: 0
> > kernel: [ 735.507986] CPU 1: hi: 0, btch: 1 usd: 0
> > kernel: [ 735.507988] Normal per-cpu:
> > kernel: [ 735.507990] CPU 0: hi: 186, btch: 31 usd: 17
> > kernel: [ 735.507993] CPU 1: hi: 186, btch: 31 usd: 180
> > kernel: [ 735.507994] HighMem per-cpu:
> > kernel: [ 735.507997] CPU 0: hi: 42, btch: 7 usd: 22
> > kernel: [ 735.507999] CPU 1: hi: 42, btch: 7 usd: 0
> > kernel: [ 735.508008] active_anon:82389 inactive_anon:2043 isolated_anon:32
> > kernel: [ 735.508009] active_file:2201 inactive_file:5773 isolated_file:31
> > kernel: [ 735.508010] unevictable:0 dirty:4 writeback:0 unstable:0 buffer:19
> > kernel: [ 735.508011] free:1825 slab_reclaimable:655 slab_unreclaimable:19679
> > kernel: [ 735.508012] mapped:1309 shmem:113 pagetables:66757 bounce:0
>
> a lot of free pages, but...

> > kernel: [ 735.508020] DMA free:3520kB min:64kB low:80kB high:96kB active_anon:2240kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15832kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:132kB kernel_stack:120kB pagetables:2436kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> > kernel: [ 735.508026] lowmem_reserve[]: 0 867 998 998
> > kernel: [ 735.508035] Normal free:3632kB min:3732kB low:4664kB high:5596kB active_anon:269136kB inactive_anon:0kB active_file:56kB inactive_file:20kB unevictable:0kB isolated(anon):128kB isolated(file):124kB present:887976kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:2620kB slab_unreclaimable:78584kB kernel_stack:77328kB pagetables:227972kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:222 all_unreclaimable? no
> > kernel: [ 735.508042] lowmem_reserve[]: 0 0 1052 1052
>
> DMA and Normal zones don't have enough free pages.

Calculated this way? (worked out at the end of this mail)

DMA:    3520kB < 64kB + 998 * 4kB
Normal: 3632kB < 3732kB + 1052 * 4kB

>
> > kernel: [ 735.508051] HighMem free:148kB min:128kB low:268kB high:408kB active_anon:58180kB inactive_anon:8172kB active_file:8748kB inactive_file:23072kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:134688kB mlocked:0kB dirty:16kB writeback:0kB mapped:5232kB shmem:452kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:36620kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> > kernel: [ 735.508057] lowmem_reserve[]: 0 0 0 0
>
> Only the HighMem zone has enough free pages and reclaimable file cache pages.

So a GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO allocation could not access the
HighMem free pages? (a zone-selection sketch is also appended at the end of
this mail)

Vincent Li
Biomedical Research Center
University of British Columbia
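
A rough userspace check of the gfp-translate output above. The flag values are
hand-copied from the 2.6.31-era include/linux/gfp.h, so treat them (and this
little program) as an illustration rather than kernel code:

/*
 * Illustration only: reproduce the gfp-translate decoding of 0x84d0.
 * Flag values are hand-copied from 2.6.31-era include/linux/gfp.h and
 * are assumptions here, not authoritative.
 */
#include <stdio.h>

#define __GFP_WAIT	0x10u
#define __GFP_IO	0x40u
#define __GFP_FS	0x80u
#define __GFP_REPEAT	0x400u
#define __GFP_ZERO	0x8000u
#define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)

int main(void)
{
	unsigned int mask = GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO;

	/* Prints gfp_mask=0x84d0, matching the oom-killer line. */
	printf("gfp_mask=%#x\n", mask);
	return 0;
}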
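
The "Calculated this way?" numbers can be written out the same way. This is
only a back-of-the-envelope version of a zone_watermark_ok()-style comparison
(free < min + lowmem_reserve[classzone_idx] pages), assuming 4kB pages and
using the figures from the Mem-Info dump; it is not the kernel's exact check:

/*
 * Back-of-the-envelope watermark comparison: a zone looks exhausted to
 * this allocation when free < min + lowmem_reserve[classzone_idx] pages.
 * Numbers are copied from the Mem-Info dump above; 4kB pages assumed.
 */
#include <stdio.h>

struct zone_snap {
	const char *name;
	unsigned long free_kb;		/* "free:" from the dump */
	unsigned long min_kb;		/* "min:" from the dump */
	unsigned long reserve_pages;	/* lowmem_reserve[] entry used */
};

int main(void)
{
	const struct zone_snap zones[] = {
		{ "DMA",     3520,   64,  998 },
		{ "Normal",  3632, 3732, 1052 },
		{ "HighMem",  148,  128,    0 },
	};
	const unsigned long page_kb = 4;
	unsigned int i;

	for (i = 0; i < sizeof(zones) / sizeof(zones[0]); i++) {
		unsigned long need_kb = zones[i].min_kb +
					zones[i].reserve_pages * page_kb;

		printf("%-7s free %4lukB %s needed %4lukB\n",
		       zones[i].name, zones[i].free_kb,
		       zones[i].free_kb < need_kb ? "< " : ">=", need_kb);
	}
	return 0;
}

With these numbers, DMA (3520kB < 4056kB) and Normal (3632kB < 7940kB) both
come out short, while HighMem (148kB >= 128kB, empty lowmem_reserve[]) does not,
which matches the comments quoted above.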
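
On the closing question, a simplified sketch of how the gfp mask caps the
highest usable zone, loosely modelled on the 2.6.31-era gfp_zone(). The zone
modifier values and the reduced three-zone setup are assumptions for
illustration, not a copy of the kernel source:

/*
 * Simplified illustration of zone selection from the gfp mask,
 * loosely modelled on 2.6.31-era gfp_zone(); values are assumptions.
 */
#include <stdio.h>

#define __GFP_DMA	0x01u
#define __GFP_HIGHMEM	0x02u

enum zone_type { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM };

static enum zone_type highest_zone(unsigned int gfp_mask)
{
	if (gfp_mask & __GFP_DMA)
		return ZONE_DMA;
	if (gfp_mask & __GFP_HIGHMEM)
		return ZONE_HIGHMEM;
	return ZONE_NORMAL;	/* GFP_KERNEL-style masks stop here */
}

int main(void)
{
	/* 0x84d0 sets neither __GFP_DMA nor __GFP_HIGHMEM. */
	printf("highest zone for 0x84d0: %s\n",
	       highest_zone(0x84d0u) == ZONE_HIGHMEM ?
	       "ZONE_HIGHMEM" : "ZONE_NORMAL or below");
	return 0;
}

With neither __GFP_DMA nor __GFP_HIGHMEM set in 0x84d0, the allocation tops out
at ZONE_NORMAL, which would be consistent with the HighMem free pages not
helping this particular allocation.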