Date: Tue, 31 Mar 2009 09:18:43 +0900
From: Daisuke Nishimura
To: KOSAKI Motohiro
Cc: nishimura@mxp.nes.nec.co.jp, KAMEZAWA Hiroyuki, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Johannes Weiner, MinChan Kim, Andrew Morton,
	"Rafael J. Wysocki", Rik van Riel, Balbir Singh
Subject: Re: [PATCH] vmscan: memcg needs may_swap (Re: [patch] vmscan: rename sc.may_swap to may_unmap)
Message-Id: <20090331091843.a0599899.nishimura@mxp.nes.nec.co.jp>
In-Reply-To: <20090328214636.68FF.A69D9226@jp.fujitsu.com>
References: <20090327151926.f252fba7.nishimura@mxp.nes.nec.co.jp>
	<20090327153035.35498303.kamezawa.hiroyu@jp.fujitsu.com>
	<20090328214636.68FF.A69D9226@jp.fujitsu.com>
Organization: NEC Soft, Ltd.

On Mon, 30 Mar 2009 08:45:28 +0900 (JST), KOSAKI Motohiro wrote:
> > On Fri, 27 Mar 2009 15:19:26 +0900
> > Daisuke Nishimura wrote:
> > 
> > > Added
> > > Cc: KAMEZAWA Hiroyuki
> > > Cc: Balbir Singh
> > > 
> > > I'm sorry for replying to a very old mail.
> > > 
> > > > @@ -1713,7 +1713,7 @@ unsigned long try_to_free_mem_cgroup_pag
> > > >  {
> > > >  	struct scan_control sc = {
> > > >  		.may_writepage = !laptop_mode,
> > > > -		.may_swap = 1,
> > > > +		.may_unmap = 1,
> > > >  		.swap_cluster_max = SWAP_CLUSTER_MAX,
> > > >  		.swappiness = swappiness,
> > > >  		.order = 0,
> > > > @@ -1723,7 +1723,7 @@ unsigned long try_to_free_mem_cgroup_pag
> > > >  	struct zonelist *zonelist;
> > > > 
> > > >  	if (noswap)
> > > > -		sc.may_swap = 0;
> > > > +		sc.may_unmap = 0;
> > > > 
> > > >  	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
> > > >  			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
> > > 
> > > IIUC, memcg had used may_swap as a flag for "do we need to use swap?", as the name indicates.
> > > 
> > > Because, when mem+swap hits the limit, trying to swap pages out is meaningless,
> > > as it doesn't change the mem+swap usage.
> > > 
> > Good catch...sigh, I missed this discussion.
> > 
> > > What do you think of this patch?
> > > ===
> > > From: Daisuke Nishimura
> > > 
> > > vmscan-rename-scmay_swap-to-may_unmap.patch removed the may_swap flag,
> > > but memcg had used it as a flag for "do we need to use swap?", as the
> > > name indicates.
> > > 
> > > And in the current implementation, memcg cannot reclaim mapped file caches
> > > when mem+swap hits the limit.
> > > 
> > When mem+swap hits the limit, swapping out an anonymous page doesn't reduce
> > the mem+swap usage, so swap-out should be avoided.
> > 
> > > Re-introduce the may_swap flag and handle it at shrink_page_list.
> > > 
> > > This patch doesn't influence any scan_control users other than memcg.
> > > 
> > > Signed-off-by: Daisuke Nishimura
> > 
> > Seems good,
> > Reviewed-by: KAMEZAWA Hiroyuki
> > 
> > But hmm...maybe this lru scan works in the same way as the case
> > of !total_swap_pages (that is, don't scan the anon LRU).
> > I'll revisit this later.
> 
> Well, how about the following patch?
> 
I think your patch looks better, because the vain scanning of the anon list is avoided.
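
To make the difference concrete, here is a rough sketch of the two approaches.
The shrink_page_list() check is only my earlier patch in spirit (the
PageAnon()/PageSwapBacked() test is my approximation, not the literal hunk);
the get_scan_ratio() part is what your patch below actually does:

	/*
	 * Approach 1 (my earlier patch, roughly): test the flag per page,
	 * deep in shrink_page_list().  The anon page has already been taken
	 * off the LRU by the time we get here, so it was scanned in vain
	 * just to be skipped.
	 */
	if (!sc->may_swap && PageAnon(page) && PageSwapBacked(page))
		goto keep_locked;

	/*
	 * Approach 2 (your patch): give the anon lists a 0% scan target up
	 * front, in get_scan_ratio(), so reclaim never looks at them at all.
	 */
	if (!sc->may_swap || (nr_swap_pages <= 0)) {
		percent[0] = 0;		/* anon */
		percent[1] = 100;	/* file */
		return;
	}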

Thanks,
Daisuke Nishimura.

> So, I have to agree my judgement of may_unmap was wrong.
> You explained that memcg can use may_swap instead of may_unmap, and I think
> the other may_unmap users (zone_reclaim and shrink_all_memory) can convert
> their may_unmap code to may_swap as well.
> 
> IOW, Nishimura-san, you explained that we can remove the may_unmap branch
> from shrink_page_list().
> It's a really good job, thanks!
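
(For the record, the reason percent[0] == 0 leaves the anon lists completely
alone is that shrink_zone() scales each LRU's scan target by the ratio that
get_scan_ratio() returns.  Roughly like this; this is simplified from the
current shrink_zone(), the real code has extra cases such as priority == 0
and the global-LRU bookkeeping:)

	get_scan_ratio(zone, sc, percent);

	for_each_evictable_lru(l) {
		int file = is_file_lru(l);
		unsigned long scan = zone_nr_pages(zone, sc, l);

		scan >>= priority;
		/* percent[0] == 0 makes the scan target of both anon lists 0 */
		scan = (scan * percent[file]) / 100;
		nr[l] = scan;
	}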
> 
> 
> ========
> Subject: vmscan: reintroduce sc->may_swap
> 
> vmscan-rename-scmay_swap-to-may_unmap.patch removed the may_swap flag,
> but memcg had used it as a flag for "do we need to use swap?", as the
> name indicates.
> 
> And in the current implementation, memcg cannot reclaim mapped file caches
> when mem+swap hits the limit.
> 
> Re-introduce the may_swap flag and handle it at get_scan_ratio().
> This patch doesn't influence any scan_control users other than memcg.
> 
> Signed-off-by: KOSAKI Motohiro
> Signed-off-by: Daisuke Nishimura
> ---
>  mm/vmscan.c |   12 ++++++++++--
>  1 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 3be6157..00ea4a1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -63,6 +63,9 @@ struct scan_control {
>  	/* Can mapped pages be reclaimed? */
>  	int may_unmap;
>  
> +	/* Can pages be swapped as part of reclaim? */
> +	int may_swap;
> +
>  	/* This context's SWAP_CLUSTER_MAX. If freeing memory for
>  	 * suspend, we effectively ignore SWAP_CLUSTER_MAX.
>  	 * In this context, it doesn't matter that we scan the
> @@ -1379,7 +1382,7 @@ static void get_scan_ratio(struct zone *zone, struct scan_control *sc,
>  	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
>  
>  	/* If we have no swap space, do not bother scanning anon pages. */
> -	if (nr_swap_pages <= 0) {
> +	if (!sc->may_swap || (nr_swap_pages <= 0)) {
>  		percent[0] = 0;
>  		percent[1] = 100;
>  		return;
> @@ -1695,6 +1698,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>  		.may_writepage = !laptop_mode,
>  		.swap_cluster_max = SWAP_CLUSTER_MAX,
>  		.may_unmap = 1,
> +		.may_swap = 1,
>  		.swappiness = vm_swappiness,
>  		.order = order,
>  		.mem_cgroup = NULL,
> @@ -1714,6 +1718,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
>  	struct scan_control sc = {
>  		.may_writepage = !laptop_mode,
>  		.may_unmap = 1,
> +		.may_swap = 1,
>  		.swap_cluster_max = SWAP_CLUSTER_MAX,
>  		.swappiness = swappiness,
>  		.order = 0,
> @@ -1723,7 +1728,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
>  	struct zonelist *zonelist;
>  
>  	if (noswap)
> -		sc.may_unmap = 0;
> +		sc.may_swap = 0;
>  
>  	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
>  			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
> @@ -1763,6 +1768,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order)
>  	struct scan_control sc = {
>  		.gfp_mask = GFP_KERNEL,
>  		.may_unmap = 1,
> +		.may_swap = 1,
>  		.swap_cluster_max = SWAP_CLUSTER_MAX,
>  		.swappiness = vm_swappiness,
>  		.order = order,
> @@ -2109,6 +2115,7 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
>  	struct scan_control sc = {
>  		.gfp_mask = GFP_KERNEL,
>  		.may_unmap = 0,
> +		.may_swap = 1,
>  		.swap_cluster_max = nr_pages,
>  		.may_writepage = 1,
>  		.isolate_pages = isolate_pages_global,
> @@ -2289,6 +2296,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
>  	struct scan_control sc = {
>  		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
>  		.may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
> +		.may_swap = 1,
>  		.swap_cluster_max = max_t(unsigned long, nr_pages,
>  					SWAP_CLUSTER_MAX),
>  		.gfp_mask = gfp_mask,
> 
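
Just to spell out the memcg side once more: the only caller that clears the
flag is the memcg charge path, when it is the mem+swap (memsw) counter that
went over its limit.  An illustrative sketch of that caller follows; it is
not the actual mm/memcontrol.c code, and memsw_limit_hit is a made-up name
here, only the try_to_free_mem_cgroup_pages() call matches the patch above:

	/*
	 * Illustrative only.  Swapping anon pages out cannot lower the
	 * mem+swap usage, so when the memsw charge is what failed we ask
	 * reclaim to skip the anon lists entirely via noswap
	 * (-> sc.may_swap = 0).
	 */
	bool noswap = memsw_limit_hit;

	nr_reclaimed = try_to_free_mem_cgroup_pages(mem, gfp_mask,
						    noswap, swappiness);

File pages, mapped or not, remain reclaimable in that case, which is exactly
the behaviour the changelog above asks for.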