Date: Wed, 8 Oct 2008 15:57:20 +0100
From: Mel Gorman
To: Andy Whitcroft
Cc: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org,
	Jon Tollefson, Nick Piggin
Subject: Re: [PATCH 1/1] hugetlb: pull gigantic page initialisation out of the default path
Message-ID: <20081008145720.GB13816@csn.ul.ie>
In-Reply-To: <1223458499-12752-1-git-send-email-apw@shadowen.org>

On (08/10/08 10:34), Andy Whitcroft didst pronounce:
> As we can determine exactly when a gigantic page is in use we can optimise
> the common regular page cases by pulling out gigantic page initialisation
> into its own function. As gigantic pages are never released to buddy we
> do not need a destructor. This effectively reverts the previous change
> to the main buddy allocator. It also adds a paranoid check to ensure we
> never release gigantic pages from hugetlbfs to the main buddy.
>
> Signed-off-by: Andy Whitcroft
> Cc: Nick Piggin

Acked-by: Mel Gorman

> ---
>  mm/hugetlb.c    |    4 +++-
>  mm/internal.h   |    1 +
>  mm/page_alloc.c |   26 +++++++++++++++++++-------
>  3 files changed, 23 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index bb5cf81..716b151 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -460,6 +460,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>  {
>  	int i;
>
> +	BUG_ON(h->order >= MAX_ORDER);
> +
>  	h->nr_huge_pages--;
>  	h->nr_huge_pages_node[page_to_nid(page)]--;
>  	for (i = 0; i < pages_per_huge_page(h); i++) {
> @@ -984,7 +986,7 @@ static void __init gather_bootmem_prealloc(void)
>  		struct hstate *h = m->hstate;
>  		__ClearPageReserved(page);
>  		WARN_ON(page_count(page) != 1);
> -		prep_compound_page(page, h->order);
> +		prep_compound_gigantic_page(page, h->order);
>  		prep_new_huge_page(h, page, page_to_nid(page));
>  	}
>  }
> diff --git a/mm/internal.h b/mm/internal.h
> index 08b8dea..92729ea 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -17,6 +17,7 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>  		unsigned long floor, unsigned long ceiling);
>
>  extern void prep_compound_page(struct page *page, unsigned long order);
> +extern void prep_compound_gigantic_page(struct page *page, unsigned long order);
>
>  static inline void set_page_count(struct page *page, int v)
>  {
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 27b8681..dbeb3f8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -268,14 +268,28 @@ void prep_compound_page(struct page *page, unsigned long order)
>  {
>  	int i;
>  	int nr_pages = 1 << order;
> +
> +	set_compound_page_dtor(page, free_compound_page);
> +	set_compound_order(page, order);
> +	__SetPageHead(page);
> +	for (i = 1; i < nr_pages; i++) {
> +		struct page *p = page + i;
> +
> +		__SetPageTail(p);
> +		p->first_page = page;
> +	}
> +}
> +
> +void prep_compound_gigantic_page(struct page *page, unsigned long order)
> +{
> +	int i;
> +	int nr_pages = 1 << order;
>  	struct page *p = page + 1;
>
>  	set_compound_page_dtor(page, free_compound_page);
>  	set_compound_order(page, order);
>  	__SetPageHead(page);
> -	for (i = 1; i < nr_pages; i++, p++) {
> -		if (unlikely((i & (MAX_ORDER_NR_PAGES - 1)) == 0))
> -			p = pfn_to_page(page_to_pfn(page) + i);
> +	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
>  		__SetPageTail(p);
>  		p->first_page = page;
>  	}
> @@ -285,7 +299,6 @@ static void destroy_compound_page(struct page *page, unsigned long order)
>  {
>  	int i;
>  	int nr_pages = 1 << order;
> -	struct page *p = page + 1;
>
>  	if (unlikely(compound_order(page) != order))
>  		bad_page(page);
> @@ -293,9 +306,8 @@ static void destroy_compound_page(struct page *page, unsigned long order)
>  	if (unlikely(!PageHead(page)))
>  		bad_page(page);
>  	__ClearPageHead(page);
> -	for (i = 1; i < nr_pages; i++, p++) {
> -		if (unlikely((i & (MAX_ORDER_NR_PAGES - 1)) == 0))
> -			p = pfn_to_page(page_to_pfn(page) + i);
> +	for (i = 1; i < nr_pages; i++) {
> +		struct page *p = page + i;
>
> 		if (unlikely(!PageTail(p) |
> 				(p->first_page != page)))
> --
> 1.6.0.1.451.gc8d31
>

--
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab