Date: Mon, 02 Feb 2015 09:27:42 +0100
From: Vlastimil Babka
To: Joonsoo Kim, Andrew Morton
Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Zhang Yanfei, Joonsoo Kim
Subject: Re: [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage
Message-ID: <54CF34FE.6020204@suse.cz>
In-Reply-To: <1422861348-5117-1-git-send-email-iamjoonsoo.kim@lge.com>

On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
> Freepages with MIGRATE_CMA can be used only for MIGRATE_MOVABLE
> allocations, and they should not be expanded onto other migratetypes'
> buddy lists, to protect them from unmovable/reclaimable allocations.
> Implementing these requirements in __rmqueue_fallback(), which finds
> the largest possible block of freepages, has the bad side effect that
> high-order MIGRATE_CMA freepages are broken up continually even when
> a CMA freepage of suitable order exists. The reason is that the
> remainders are not expanded onto other migratetypes' buddy lists, so
> the next __rmqueue_fallback() invocation again finds the largest
> block and breaks it up as well. Therefore, the MIGRATE_CMA fallback
> should be handled separately. This patch introduces
> __rmqueue_cma_fallback(), which is just a wrapper around
> __rmqueue_smallest(), and calls it before __rmqueue_fallback()
> if migratetype == MIGRATE_MOVABLE.
>
> This results in the behaviour change that MIGRATE_CMA freepages are
> now always used before the other migratetypes as a movable
> allocation's fallback. But, as mentioned above, MIGRATE_CMA can be
> used only for MIGRATE_MOVABLE, so it is better to use MIGRATE_CMA
> freepages first as much as possible. Otherwise, we would needlessly
> use up precious freepages of other migratetypes and increase the
> chance of fragmentation.

This makes a lot of sense to me. We could go as far as having
__rmqueue_smallest() consider both MOVABLE and CMA simultaneously and
pick the smallest block available between the two. But that would make
the fast path more complex, so this could be enough.
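
For the record, a rough sketch of that alternative (mine, untested; the
list_del()/expand()/nr_free bookkeeping done by the real
__rmqueue_smallest() is elided, and it assumes CONFIG_CMA=y so that
MIGRATE_CMA exists):

/*
 * Illustrative sketch only: scan both the MOVABLE and CMA free lists
 * at each order, taking whichever has a block at the lowest order and
 * preferring MOVABLE when both do.
 */
static struct page *__rmqueue_smallest_movable(struct zone *zone,
					       unsigned int order)
{
	unsigned int current_order;

	for (current_order = order; current_order < MAX_ORDER; current_order++) {
		struct free_area *area = &zone->free_area[current_order];
		int migratetype = MIGRATE_MOVABLE;

		/* Fall back to CMA only if MOVABLE is empty at this order */
		if (list_empty(&area->free_list[migratetype]))
			migratetype = MIGRATE_CMA;
		if (list_empty(&area->free_list[migratetype]))
			continue;

		/* ... list_del(), expand(), nr_free-- as in the real thing ... */
		return list_first_entry(&area->free_list[migratetype],
					struct page, lru);
	}

	return NULL;
}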
Hope it survives the scrutiny of CMA success testing :)

> Signed-off-by: Joonsoo Kim

Acked-by: Vlastimil Babka

> ---
>  mm/page_alloc.c | 36 +++++++++++++++++++-----------------
>  1 file changed, 19 insertions(+), 17 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8d52ab1..e64b260 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1029,11 +1029,9 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>  static int fallbacks[MIGRATE_TYPES][4] = {
>  	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,     MIGRATE_RESERVE },
>  	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,     MIGRATE_RESERVE },
> +	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
>  #ifdef CONFIG_CMA
> -	[MIGRATE_MOVABLE]     = { MIGRATE_CMA,         MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
>  	[MIGRATE_CMA]         = { MIGRATE_RESERVE }, /* Never used */
> -#else
> -	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
>  #endif
>  	[MIGRATE_RESERVE]     = { MIGRATE_RESERVE }, /* Never used */
>  #ifdef CONFIG_MEMORY_ISOLATION
> @@ -1041,6 +1039,17 @@ static int fallbacks[MIGRATE_TYPES][4] = {
>  #endif
>  };
>
> +#ifdef CONFIG_CMA
> +static struct page *__rmqueue_cma_fallback(struct zone *zone,
> +					unsigned int order)
> +{
> +	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
> +}
> +#else
> +static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
> +					unsigned int order) { return NULL; }
> +#endif
> +
>  /*
>   * Move the free pages in a range to the free lists of the requested type.
>   * Note that start_page and end_pages are not aligned on a pageblock
> @@ -1192,19 +1201,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  					struct page, lru);
>  			area->nr_free--;
>
> -			if (!is_migrate_cma(migratetype)) {
> -				try_to_steal_freepages(zone, page,
> -							start_migratetype,
> -							migratetype);
> -			} else {
> -				/*
> -				 * When borrowing from MIGRATE_CMA, we need to
> -				 * release the excess buddy pages to CMA
> -				 * itself, and we do not try to steal extra
> -				 * free pages.
> -				 */
> -				buddy_type = migratetype;
> -			}
> +			try_to_steal_freepages(zone, page, start_migratetype,
> +						migratetype);
>
>  			/* Remove the page from the freelists */
>  			list_del(&page->lru);
> @@ -1246,7 +1244,11 @@ retry_reserve:
>  	page = __rmqueue_smallest(zone, order, migratetype);
>
>  	if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
> -		page = __rmqueue_fallback(zone, order, migratetype);
> +		if (migratetype == MIGRATE_MOVABLE)
> +			page = __rmqueue_cma_fallback(zone, order);
> +
> +		if (!page)
> +			page = __rmqueue_fallback(zone, order, migratetype);
>
>  		/*
>  		 * Use MIGRATE_RESERVE rather than fail an allocation. goto
> --
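
One usage note (illustrative, not part of the patch): only allocations
whose gfp flags map to MIGRATE_MOVABLE can reach the new
__rmqueue_cma_fallback() path, e.g.:

	/*
	 * GFP_HIGHUSER_MOVABLE includes __GFP_MOVABLE, so
	 * gfpflags_to_migratetype() yields MIGRATE_MOVABLE and the
	 * allocator may now fall back to MIGRATE_CMA before stealing
	 * from the RECLAIMABLE/UNMOVABLE free lists.
	 */
	struct page *page = alloc_pages(GFP_HIGHUSER_MOVABLE, 0);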