Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932829AbXHIVGy (ORCPT ); Thu, 9 Aug 2007 17:06:54 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1757609AbXHIVGm (ORCPT ); Thu, 9 Aug 2007 17:06:42 -0400
Received: from gir.skynet.ie ([193.1.99.77]:38853 "EHLO gir.skynet.ie"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1756088AbXHIVGk (ORCPT ); Thu, 9 Aug 2007 17:06:40 -0400
From: Mel Gorman
To: Lee.Schermerhorn@hp.com, ak@suse.de, clameter@sgi.com
Cc: Mel Gorman , linux-kernel@vger.kernel.org, linux-mm@kvack.org
Message-Id: <20070809210636.14702.86478.sendpatchset@skynet.skynet.ie>
In-Reply-To: <20070809210616.14702.73376.sendpatchset@skynet.skynet.ie>
References: <20070809210616.14702.73376.sendpatchset@skynet.skynet.ie>
Subject: [PATCH 1/4] Use zonelists instead of zones when direct reclaiming pages
Date: Thu, 9 Aug 2007 22:06:36 +0100 (IST)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 4029
Lines: 87

The allocator deals with zonelists, which indicate the order in which zones
should be targeted for an allocation. Similarly, direct reclaim of pages
iterates over an array of zones. For consistency, this patch converts direct
reclaim to use a zonelist. No functionality is changed by this patch. This
simplifies the zonelist iterators in the next patch.
Signed-off-by: Mel Gorman
---
 include/linux/swap.h |    2 +-
 mm/page_alloc.c      |    2 +-
 mm/vmscan.c          |    9 ++++++---
 3 files changed, 8 insertions(+), 5 deletions(-)

diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.23-rc1-mm2-clean/include/linux/swap.h linux-2.6.23-rc1-mm2-005_freepages_zonelist/include/linux/swap.h
--- linux-2.6.23-rc1-mm2-clean/include/linux/swap.h	2007-08-07 14:45:11.000000000 +0100
+++ linux-2.6.23-rc1-mm2-005_freepages_zonelist/include/linux/swap.h	2007-08-08 17:01:12.000000000 +0100
@@ -189,7 +189,7 @@ extern int rotate_reclaimable_page(struc
 extern void swap_setup(void);
 
 /* linux/mm/vmscan.c */
-extern unsigned long try_to_free_pages(struct zone **zones, int order,
+extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 				gfp_t gfp_mask);
 extern unsigned long shrink_all_memory(unsigned long nr_pages);
 extern int vm_swappiness;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.23-rc1-mm2-clean/mm/page_alloc.c linux-2.6.23-rc1-mm2-005_freepages_zonelist/mm/page_alloc.c
--- linux-2.6.23-rc1-mm2-clean/mm/page_alloc.c	2007-08-07 14:45:11.000000000 +0100
+++ linux-2.6.23-rc1-mm2-005_freepages_zonelist/mm/page_alloc.c	2007-08-08 17:01:12.000000000 +0100
@@ -1644,7 +1644,7 @@ nofail_alloc:
 		reclaim_state.reclaimed_slab = 0;
 		p->reclaim_state = &reclaim_state;
 
-		did_some_progress = try_to_free_pages(zonelist->zones, order, gfp_mask);
+		did_some_progress = try_to_free_pages(zonelist, order, gfp_mask);
 
 		p->reclaim_state = NULL;
 		p->flags &= ~PF_MEMALLOC;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.23-rc1-mm2-clean/mm/vmscan.c linux-2.6.23-rc1-mm2-005_freepages_zonelist/mm/vmscan.c
--- linux-2.6.23-rc1-mm2-clean/mm/vmscan.c	2007-08-07 14:45:11.000000000 +0100
+++ linux-2.6.23-rc1-mm2-005_freepages_zonelist/mm/vmscan.c	2007-08-09 10:18:59.000000000 +0100
@@ -1086,10 +1086,11 @@ static unsigned long shrink_zone(int pri
  * If a zone is deemed to be full of pinned pages then just give it a light
  * scan then give up on it.
  */
-static unsigned long shrink_zones(int priority, struct zone **zones,
+static unsigned long shrink_zones(int priority, struct zonelist *zonelist,
 					struct scan_control *sc)
 {
 	unsigned long nr_reclaimed = 0;
+	struct zone **zones = zonelist->zones;
 	int i;
 
 	sc->all_unreclaimable = 1;
@@ -1127,7 +1128,8 @@ static unsigned long shrink_zones(int pr
 * holds filesystem locks which prevent writeout this might not work, and the
 * allocation attempt will fail.
 */
-unsigned long try_to_free_pages(struct zone **zones, int order, gfp_t gfp_mask)
+unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
+				gfp_t gfp_mask)
 {
 	int priority;
 	int ret = 0;
@@ -1135,6 +1137,7 @@ unsigned long try_to_free_pages(struct z
 	unsigned long nr_reclaimed = 0;
 	struct reclaim_state *reclaim_state = current->reclaim_state;
 	unsigned long lru_pages = 0;
+	struct zone **zones = zonelist->zones;
 	int i;
 	struct scan_control sc = {
 		.gfp_mask = gfp_mask,
@@ -1162,7 +1165,7 @@ unsigned long try_to_free_pages(struct z
 		sc.nr_scanned = 0;
 		if (!priority)
 			disable_swap_token();
-		nr_reclaimed += shrink_zones(priority, zones, &sc);
+		nr_reclaimed += shrink_zones(priority, zonelist, &sc);
 		shrink_slab(sc.nr_scanned, gfp_mask, lru_pages);
 		if (reclaim_state) {
 			nr_reclaimed += reclaim_state->reclaimed_slab;
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/