Date: Fri, 3 Feb 2012 13:53:20 +0000
From: Mel Gorman
To: Marek Szyprowski
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-media@vger.kernel.org, linux-mm@kvack.org,
	linaro-mm-sig@lists.linaro.org, Michal Nazarewicz, Kyungmin Park,
	Russell King, Andrew Morton, KAMEZAWA Hiroyuki, Daniel Walker,
	Arnd Bergmann, Jesse Barker, Jonathan Corbet, Shariq Hasnain,
	Chunsang Jeong, Dave Hansen, Benjamin Gaignard, Rob Clark,
	Ohad Ben-Cohen
Subject: Re: [PATCH 08/15] mm: mmzone: MIGRATE_CMA migration type added
Message-ID: <20120203135320.GF5796@csn.ul.ie>
References: <1328271538-14502-1-git-send-email-m.szyprowski@samsung.com>
	<1328271538-14502-9-git-send-email-m.szyprowski@samsung.com>
In-Reply-To: <1328271538-14502-9-git-send-email-m.szyprowski@samsung.com>

On Fri, Feb 03, 2012 at 01:18:51PM +0100, Marek Szyprowski wrote:
> From: Michal Nazarewicz
> 
> The MIGRATE_CMA migration type has two main characteristics:
> (i) only movable pages can be allocated from MIGRATE_CMA
> pageblocks and (ii) the page allocator will never change the
> migration type of MIGRATE_CMA pageblocks.
> 
> This guarantees (to some degree) that a page in a MIGRATE_CMA
> pageblock can always be migrated somewhere else (unless there's
> no memory left in the system).
> 
> It is designed to be used for allocating big chunks (eg. 10MiB)
> of physically contiguous memory. Once a driver requests
> contiguous memory, pages from MIGRATE_CMA pageblocks may be
> migrated away to create a contiguous block.
> 
> To minimise the number of migrations, the MIGRATE_CMA migration
> type is the last type tried when the page allocator falls back
> to migration types other than the one requested.
> 
> Signed-off-by: Michal Nazarewicz
> Signed-off-by: Marek Szyprowski
> Signed-off-by: Kyungmin Park
> Tested-by: Rob Clark
> Tested-by: Ohad Ben-Cohen
> Tested-by: Benjamin Gaignard

Minor comments that can be handled as a follow-up but otherwise

Acked-by: Mel Gorman

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 238fcec..993c375 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -750,6 +750,26 @@ void __meminit __free_pages_bootmem(struct page *page, unsigned int order)
>  	__free_pages(page, order);
>  }
>  
> +#ifdef CONFIG_CMA
> +/*
> + * Free whole pageblock and set its migration type to MIGRATE_CMA.
> + */
> +void __init init_cma_reserved_pageblock(struct page *page)
> +{
> +	unsigned i = pageblock_nr_pages;
> +	struct page *p = page;
> +
> +	do {
> +		__ClearPageReserved(p);
> +		set_page_count(p, 0);
> +	} while (++p, --i);
> +
> +	set_page_refcounted(page);
> +	set_pageblock_migratetype(page, MIGRATE_CMA);
> +	__free_pages(page, pageblock_order);
> +	totalram_pages += pageblock_nr_pages;
> +}
> +#endif
> 

This hunk is not used by this patch. Usually a hunk like this would be
part of the patch that uses it. Functionally it looks ok, but I see that
the function that calls it is *not* __init. That should trigger a
section warning. Do a

	make CONFIG_DEBUG_SECTION_MISMATCH=y

and clean it up if necessary.
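As a rough illustration of that warning (the caller below is a made-up
name used only for illustration, not a function from this series): an
__init function is placed in .init.text and discarded once boot
completes, so modpost complains if code that survives boot still
references it.

	#include <linux/init.h>
	#include <linux/mm.h>

	/* From the hunk above: placed in .init.text, freed after boot. */
	void __init init_cma_reserved_pageblock(struct page *page);

	/*
	 * Hypothetical non-__init caller. With CONFIG_DEBUG_SECTION_MISMATCH=y,
	 * modpost reports a reference from .text to .init.text here.
	 */
	static void cma_activate_example(struct page *first_page)
	{
		init_cma_reserved_pageblock(first_page);
	}

	/*
	 * Marking the caller __init (or __ref with a comment explaining why
	 * the call can only happen during boot) resolves the mismatch.
	 */
	static void __init cma_activate_example_fixed(struct page *first_page)
	{
		init_cma_reserved_pageblock(first_page);
	}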
>  /*
>   * The order of subdivision here is critical for the IO subsystem.
> @@ -875,10 +895,15 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>   * This array describes the order lists are fallen back to when
>   * the free lists for the desirable migrate type are depleted
>   */
> -static int fallbacks[MIGRATE_TYPES][3] = {
> -	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_RESERVE },
> -	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_RESERVE },
> -	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
> +static int fallbacks[MIGRATE_TYPES][4] = {
> +	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,     MIGRATE_RESERVE },
> +	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,     MIGRATE_RESERVE },
> +#ifdef CONFIG_CMA
> +	[MIGRATE_MOVABLE]     = { MIGRATE_CMA,         MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
> +	[MIGRATE_CMA]         = { MIGRATE_RESERVE }, /* Never used */
> +#else
> +	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
> +#endif
>  	[MIGRATE_RESERVE]     = { MIGRATE_RESERVE }, /* Never used */
>  	[MIGRATE_ISOLATE]     = { MIGRATE_RESERVE }, /* Never used */
>  };
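As an aside, for anyone less familiar with this table: __rmqueue_fallback()
walks the row for the requested migratetype in order and stops at
MIGRATE_RESERVE, so the position of each entry is its fallback priority.
A simplified sketch of that walk (not the actual allocator loop, which also
scans orders and steals whole pageblocks):

	int i, migratetype;

	for (i = 0;; i++) {
		migratetype = fallbacks[start_migratetype][i];

		/* MIGRATE_RESERVE terminates every row of the table */
		if (migratetype == MIGRATE_RESERVE)
			break;

		/* ...try to take a page of this fallback type... */
	}

With CONFIG_CMA, MIGRATE_CMA appears only in the MIGRATE_MOVABLE row,
which is what keeps unmovable and reclaimable allocations out of
MIGRATE_CMA pageblocks.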
> @@ -995,11 +1020,18 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
>  			 * pages to the preferred allocation list. If falling
>  			 * back for a reclaimable kernel allocation, be more
>  			 * aggressive about taking ownership of free pages
> +			 *
> +			 * On the other hand, never change migration
> +			 * type of MIGRATE_CMA pageblocks nor move CMA
> +			 * pages on different free lists. We don't
> +			 * want unmovable pages to be allocated from
> +			 * MIGRATE_CMA areas.
>  			 */
> -			if (unlikely(current_order >= (pageblock_order >> 1)) ||
> -					start_migratetype == MIGRATE_RECLAIMABLE ||
> -					page_group_by_mobility_disabled) {
> -				unsigned long pages;
> +			if (!is_migrate_cma(migratetype) &&
> +			    (unlikely(current_order >= pageblock_order / 2) ||
> +			     start_migratetype == MIGRATE_RECLAIMABLE ||
> +			     page_group_by_mobility_disabled)) {
> +				int pages;
>  				pages = move_freepages_block(zone, page,
>  							start_migratetype);
>  
> @@ -1017,11 +1049,14 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
>  			rmv_page_order(page);
>  
>  			/* Take ownership for orders >= pageblock_order */
> -			if (current_order >= pageblock_order)
> +			if (current_order >= pageblock_order &&
> +			    !is_migrate_cma(migratetype))
>  				change_pageblock_range(page, current_order,
>  							start_migratetype);
>  
> -			expand(zone, page, order, current_order, area, migratetype);
> +			expand(zone, page, order, current_order, area,
> +			       is_migrate_cma(migratetype)
> +			     ? migratetype : start_migratetype);
>  
>  			trace_mm_page_alloc_extfrag(page, order, current_order,
>  				start_migratetype, migratetype);
> @@ -1072,7 +1107,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  			unsigned long count, struct list_head *list,
>  			int migratetype, int cold)
>  {
> -	int i;
> +	int mt = migratetype, i;
>  
>  	spin_lock(&zone->lock);
>  	for (i = 0; i < count; ++i) {
> @@ -1093,7 +1128,12 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  			list_add(&page->lru, list);
>  		else
>  			list_add_tail(&page->lru, list);
> -		set_page_private(page, migratetype);
> +#ifdef CONFIG_CMA
> +		mt = get_pageblock_migratetype(page);
> +		if (!is_migrate_cma(mt) && mt != MIGRATE_ISOLATE)
> +			mt = migratetype;
> +#endif
> +		set_page_private(page, mt);

Ok, I'm not happy with the CONFIG_CMA bit here, but so be it for now.
There are a few things I would like to see with it in the future though.

1. Add a comment explaining why it is necessary only for CONFIG_CMA.
   Put all the ugliness in the changelog if you like; it's already been
   written up in a mail so you can cut and paste it into the changelog.

2. If there exists a second hunk that has this sort of ugliness,
   consider doing something like

	#ifdef CONFIG_CMA
	#define CMA_BUILD 1
	#else
	#define CMA_BUILD 0
	#endif

	if (CMA_BUILD) {
		int mt = get_pageblock_migratetype(page);
		if (!is_migrate_cma(mt) && mt != MIGRATE_ISOLATE)
			mt = migratetype;
	}

   That can be slightly tidier and easier to follow while still getting
   optimised out for !CONFIG_CMA.

3. Consider trying to get rid of the CONFIG_CMA part entirely. Do this
   by having a __read_mostly static variable that is *only* set while
   MIGRATE_ISOLATE pageblocks exist. If they exist, then unconditionally
   do this paranoid check, documenting that both memory hotplug and CMA
   benefit from it. The advantage is that you get the careful checking
   that you want but incur the cost in the page allocator *only* when
   you are actively trying to allocate with CMA. e.g.

	static int nr_migrate_isolate __read_mostly;

	int set_migratetype_isolate(struct page *page)
	{
		....
		nr_migrate_isolate++;
		....
	}

	void unset_migratetype_isolate(struct page *page)
	{
		...
		nr_migrate_isolate--;
		...
	}

   Be careful to get the accounting of nr_migrate_isolate right during
   memory hot-remove if unset_migratetype_isolate is not called because
   the memory is offlined.

   In rmqueue_bulk() then do

	/*
	 * During memory hot-remove and during CMA allocation, be
	 * careful that pages that were added to the per-cpu lists
	 * before the pageblock was marked MIGRATE_ISOLATE and not
	 * moved are accounted for properly
	 */
	if (nr_migrate_isolate) {
		int mt = get_pageblock_migratetype(page);
		if (!is_migrate_cma(mt) && mt != MIGRATE_ISOLATE)
			mt = migratetype;
	}

Make sense?

-- 
Mel Gorman
SUSE Labs