Subject: Re: [RFC 3/4] Separate page initialization into a separate function.
From: Yinghai Lu
To: Robin Holt
Cc: "H. Peter Anvin", Ingo Molnar, Nate Zimmer, Linux Kernel, Linux MM,
    Rob Landley, Mike Travis, Daniel J Blueman, Andrew Morton, Greg KH,
    Mel Gorman
Date: Fri, 12 Jul 2013 20:06:52 -0700

On Thu, Jul 11, 2013 at 7:03 PM, Robin Holt wrote:
> Currently, memmap_init_zone() has all the smarts for initializing a
> single page.  When we convert to initializing pages in a 2MiB chunk,
> we will need to do the equivalent work from two separate places, so
> break the per-page work out into a helper function.
>
> Signed-off-by: Robin Holt
> Signed-off-by: Nate Zimmer
> To: "H. Peter Anvin"
> To: Ingo Molnar
> Cc: Linux Kernel
> Cc: Linux MM
> Cc: Rob Landley
> Cc: Mike Travis
> Cc: Daniel J Blueman
> Cc: Andrew Morton
> Cc: Greg KH
> Cc: Yinghai Lu
> Cc: Mel Gorman
> ---
>  mm/mm_init.c    |  2 +-
>  mm/page_alloc.c | 75 +++++++++++++++++++++++++++++++++------------------------
>  2 files changed, 45 insertions(+), 32 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index c280a02..be8a539 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -128,7 +128,7 @@ void __init mminit_verify_pageflags_layout(void)
>  	BUG_ON(or_mask != add_mask);
>  }
>
> -void __meminit mminit_verify_page_links(struct page *page, enum zone_type zone,
> +void mminit_verify_page_links(struct page *page, enum zone_type zone,
>  			unsigned long nid, unsigned long pfn)
>  {
>  	BUG_ON(page_to_nid(page) != nid);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c3edb62..635b131 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -697,6 +697,49 @@ static void free_one_page(struct zone *zone, struct page *page, int order,
>  	spin_unlock(&zone->lock);
>  }
>
> +static void __init_single_page(struct page *page, unsigned long zone, int nid, int reserved)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +	struct zone *z = &NODE_DATA(nid)->node_zones[zone];
> +
> +	set_page_links(page, zone, nid, pfn);
> +	mminit_verify_page_links(page, zone, nid, pfn);
> +	init_page_count(page);
> +	page_mapcount_reset(page);
> +	page_nid_reset_last(page);
> +	if (reserved) {
> +		SetPageReserved(page);
> +	} else {
> +		ClearPageReserved(page);
> +		set_page_count(page, 0);
> +	}
> +	/*
> +	 * Mark the block movable so that blocks are reserved for
> +	 * movable at startup. This will force kernel allocations
> +	 * to reserve their blocks rather than leaking throughout
> +	 * the address space during boot when many long-lived
> +	 * kernel allocations are made. Later some blocks near
> +	 * the start are marked MIGRATE_RESERVE by
> +	 * setup_zone_migrate_reserve()
> +	 *
> +	 * bitmap is created for zone's valid pfn range. but memmap
> +	 * can be created for invalid pages (for alignment)
> +	 * check here not to call set_pageblock_migratetype() against
> +	 * pfn out of zone.
> +	 */
> +	if ((z->zone_start_pfn <= pfn)
> +	    && (pfn < zone_end_pfn(z))
> +	    && !(pfn & (pageblock_nr_pages - 1)))
> +		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> +
> +	INIT_LIST_HEAD(&page->lru);
> +#ifdef WANT_PAGE_VIRTUAL
> +	/* The shift won't overflow because ZONE_NORMAL is below 4G. */
> +	if (!is_highmem_idx(zone))
> +		set_page_address(page, __va(pfn << PAGE_SHIFT));
> +#endif
> +}
> +
>  static bool free_pages_prepare(struct page *page, unsigned int order)
>  {
>  	int i;
> @@ -3934,37 +3977,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  			continue;
>  		}
>  		page = pfn_to_page(pfn);
> -		set_page_links(page, zone, nid, pfn);
> -		mminit_verify_page_links(page, zone, nid, pfn);
> -		init_page_count(page);
> -		page_mapcount_reset(page);
> -		page_nid_reset_last(page);
> -		SetPageReserved(page);
> -		/*
> -		 * Mark the block movable so that blocks are reserved for
> -		 * movable at startup. This will force kernel allocations
> -		 * to reserve their blocks rather than leaking throughout
> -		 * the address space during boot when many long-lived
> -		 * kernel allocations are made. Later some blocks near
> -		 * the start are marked MIGRATE_RESERVE by
> -		 * setup_zone_migrate_reserve()
> -		 *
> -		 * bitmap is created for zone's valid pfn range. but memmap
> -		 * can be created for invalid pages (for alignment)
> -		 * check here not to call set_pageblock_migratetype() against
> -		 * pfn out of zone.
> -		 */
> -		if ((z->zone_start_pfn <= pfn)
> -		    && (pfn < zone_end_pfn(z))
> -		    && !(pfn & (pageblock_nr_pages - 1)))
> -			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> -
> -		INIT_LIST_HEAD(&page->lru);
> -#ifdef WANT_PAGE_VIRTUAL
> -		/* The shift won't overflow because ZONE_NORMAL is below 4G. */
> -		if (!is_highmem_idx(zone))
> -			set_page_address(page, __va(pfn << PAGE_SHIFT));
> -#endif
> +		__init_single_page(page, zone, nid, 1);

Can you move page = pfn_to_page(pfn) into __init_single_page() and pass
the pfn in directly?
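Something like the sketch below: the body is just lifted from your patch,
with the pfn -> page lookup moved inside. This is not compiled or tested,
so treat the exact signature as an assumption.

static void __init_single_page(unsigned long pfn, unsigned long zone,
			       int nid, int reserved)
{
	/* Untested sketch: same body as the patch, lookup moved inside. */
	struct page *page = pfn_to_page(pfn);
	struct zone *z = &NODE_DATA(nid)->node_zones[zone];

	set_page_links(page, zone, nid, pfn);
	mminit_verify_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	page_nid_reset_last(page);
	if (reserved) {
		SetPageReserved(page);
	} else {
		ClearPageReserved(page);
		set_page_count(page, 0);
	}
	/* Same out-of-zone pageblock check as in the patch above. */
	if ((z->zone_start_pfn <= pfn)
	    && (pfn < zone_end_pfn(z))
	    && !(pfn & (pageblock_nr_pages - 1)))
		set_pageblock_migratetype(page, MIGRATE_MOVABLE);

	INIT_LIST_HEAD(&page->lru);
#ifdef WANT_PAGE_VIRTUAL
	/* The shift won't overflow because ZONE_NORMAL is below 4G. */
	if (!is_highmem_idx(zone))
		set_page_address(page, __va(pfn << PAGE_SHIFT));
#endif
}

Then the call site in memmap_init_zone() becomes just:

	__init_single_page(pfn, zone, nid, 1);

Yinghai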