Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755565Ab3HLVyp (ORCPT );
	Mon, 12 Aug 2013 17:54:45 -0400
Received: from relay3.sgi.com ([192.48.152.1]:59874 "EHLO relay.sgi.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1754866Ab3HLVyo (ORCPT );
	Mon, 12 Aug 2013 17:54:44 -0400
From: Nathan Zimmer
To: hpa@zytor.com, mingo@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, holt@sgi.com,
	nzimmer@sgi.com, rob@landley.net, travis@sgi.com,
	daniel@numascale-asia.com, akpm@linux-foundation.org,
	gregkh@linuxfoundation.org, yinghai@kernel.org, mgorman@suse.de
Subject: [RFC v3 3/5] Move page initialization into a separate function.
Date: Mon, 12 Aug 2013 16:54:38 -0500
Message-Id: <1376344480-156708-4-git-send-email-nzimmer@sgi.com>
X-Mailer: git-send-email 1.8.2.1
In-Reply-To: <1376344480-156708-1-git-send-email-nzimmer@sgi.com>
References: <1375465467-40488-1-git-send-email-nzimmer@sgi.com>
	<1376344480-156708-1-git-send-email-nzimmer@sgi.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 5097
Lines: 144

From: Robin Holt

Currently, memmap_init_zone() has all the smarts for initializing a
single page.  When we convert to initializing pages in 2MiB chunks, we
will need to do this equivalent work from two separate places, so we
are breaking out a helper function.

Signed-off-by: Robin Holt
Signed-off-by: Nathan Zimmer
To: "H. Peter Anvin"
To: Ingo Molnar
Cc: Linux Kernel
Cc: Linux MM
Cc: Rob Landley
Cc: Mike Travis
Cc: Daniel J Blueman
Cc: Andrew Morton
Cc: Greg KH
Cc: Yinghai Lu
Cc: Mel Gorman
---
 mm/mm_init.c    |  2 +-
 mm/page_alloc.c | 73 +++++++++++++++++++++++++++++++--------------------------
 2 files changed, 41 insertions(+), 34 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index c280a02..be8a539 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -128,7 +128,7 @@ void __init mminit_verify_pageflags_layout(void)
 	BUG_ON(or_mask != add_mask);
 }
 
-void __meminit mminit_verify_page_links(struct page *page, enum zone_type zone,
+void mminit_verify_page_links(struct page *page, enum zone_type zone,
 			unsigned long nid, unsigned long pfn)
 {
 	BUG_ON(page_to_nid(page) != nid);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5adf81e..df3ec13 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -697,6 +697,45 @@ static void free_one_page(struct zone *zone, struct page *page, int order,
 	spin_unlock(&zone->lock);
 }
 
+static void __init_single_page(unsigned long pfn, unsigned long zone, int nid)
+{
+	struct page *page = pfn_to_page(pfn);
+	struct zone *z = &NODE_DATA(nid)->node_zones[zone];
+
+	set_page_links(page, zone, nid, pfn);
+	mminit_verify_page_links(page, zone, nid, pfn);
+	init_page_count(page);
+	page_mapcount_reset(page);
+	page_nid_reset_last(page);
+	SetPageReserved(page);
+
+	/*
+	 * Mark the block movable so that blocks are reserved for
+	 * movable at startup. This will force kernel allocations
+	 * to reserve their blocks rather than leaking throughout
+	 * the address space during boot when many long-lived
+	 * kernel allocations are made. Later some blocks near
+	 * the start are marked MIGRATE_RESERVE by
+	 * setup_zone_migrate_reserve()
+	 *
+	 * bitmap is created for zone's valid pfn range. but memmap
+	 * can be created for invalid pages (for alignment)
+	 * check here not to call set_pageblock_migratetype() against
+	 * pfn out of zone.
+	 */
+	if ((z->zone_start_pfn <= pfn)
+	    && (pfn < zone_end_pfn(z))
+	    && !(pfn & (pageblock_nr_pages - 1)))
+		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+
+	INIT_LIST_HEAD(&page->lru);
+#ifdef WANT_PAGE_VIRTUAL
+	/* The shift won't overflow because ZONE_NORMAL is below 4G. */
+	if (!is_highmem_idx(zone))
+		set_page_address(page, __va(pfn << PAGE_SHIFT));
+#endif
+}
+
 static bool free_pages_prepare(struct page *page, unsigned int order)
 {
 	int i;
@@ -3951,7 +3990,6 @@ static void setup_zone_migrate_reserve(struct zone *zone)
 void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		unsigned long start_pfn, enum memmap_context context)
 {
-	struct page *page;
 	unsigned long end_pfn = start_pfn + size;
 	unsigned long pfn;
 	struct zone *z;
@@ -3972,38 +4010,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			if (!early_pfn_in_nid(pfn, nid))
 				continue;
 		}
-		page = pfn_to_page(pfn);
-		set_page_links(page, zone, nid, pfn);
-		mminit_verify_page_links(page, zone, nid, pfn);
-		init_page_count(page);
-		page_mapcount_reset(page);
-		page_nid_reset_last(page);
-		SetPageReserved(page);
-		/*
-		 * Mark the block movable so that blocks are reserved for
-		 * movable at startup. This will force kernel allocations
-		 * to reserve their blocks rather than leaking throughout
-		 * the address space during boot when many long-lived
-		 * kernel allocations are made. Later some blocks near
-		 * the start are marked MIGRATE_RESERVE by
-		 * setup_zone_migrate_reserve()
-		 *
-		 * bitmap is created for zone's valid pfn range. but memmap
-		 * can be created for invalid pages (for alignment)
-		 * check here not to call set_pageblock_migratetype() against
-		 * pfn out of zone.
-		 */
-		if ((z->zone_start_pfn <= pfn)
-		    && (pfn < zone_end_pfn(z))
-		    && !(pfn & (pageblock_nr_pages - 1)))
-			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-
-		INIT_LIST_HEAD(&page->lru);
-#ifdef WANT_PAGE_VIRTUAL
-		/* The shift won't overflow because ZONE_NORMAL is below 4G. */
-		if (!is_highmem_idx(zone))
-			set_page_address(page, __va(pfn << PAGE_SHIFT));
-#endif
+		__init_single_page(pfn, zone, nid);
 	}
 }
 
-- 
1.8.2.1
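
As a rough illustration of the "two separate places" the changelog refers to, the
sketch below shows how a later patch in this direction might drive the new
__init_single_page() helper over a 2MiB-aligned pfn range.  This is a minimal
sketch under stated assumptions: memmap_init_chunk() is a hypothetical name
invented here, and only __init_single_page() is actually introduced by this patch.

/*
 * Illustrative sketch only -- not part of this patch.  A hypothetical
 * second caller in mm/page_alloc.c that initializes the memmap for one
 * 2MiB-aligned chunk of pfns by reusing __init_single_page(), which is
 * the kind of reuse the changelog anticipates.
 */
static void __meminit memmap_init_chunk(unsigned long start_pfn,
					unsigned long end_pfn,
					int nid, unsigned long zone)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (!pfn_valid(pfn))
			continue;
		/* All per-page setup stays in the shared helper. */
		__init_single_page(pfn, zone, nid);
	}
}

Keeping every bit of per-page setup inside the one helper means memmap_init_zone()
and any future chunked initializer cannot drift apart as the per-page work changes.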