Date: Wed, 14 Nov 2012 14:58:48 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: linux-mm@kvack.org, linaro-mm-sig@lists.linaro.org,
    linux-kernel@vger.kernel.org, Kyungmin Park, Mel Gorman,
    Michal Nazarewicz, Minchan Kim, Bartlomiej Zolnierkiewicz
Subject: Re: [PATCH] mm: cma: allocate pages from CMA if NR_FREE_PAGES approaches low water mark
Message-Id: <20121114145848.8224e8b0.akpm@linux-foundation.org>
In-Reply-To: <1352710782-25425-1-git-send-email-m.szyprowski@samsung.com>
References: <1352710782-25425-1-git-send-email-m.szyprowski@samsung.com>

On Mon, 12 Nov 2012 09:59:42 +0100 Marek Szyprowski <m.szyprowski@samsung.com> wrote:

> It has been observed that the system tends to keep a lot of free CMA
> pages even in very high memory pressure use cases. The CMA fallback
> for movable pages is used very rarely, only when the system is
> completely drained of MOVABLE pages, which usually means that an
> out-of-memory event will be triggered very soon. To avoid such a
> situation and make better use of CMA pages, a heuristic is introduced
> which turns on the CMA fallback for movable pages when the real number
> of free pages (excluding free CMA pages) approaches the low water
> mark.
>
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> Reviewed-by: Kyungmin Park
> CC: Michal Nazarewicz
> ---
>  mm/page_alloc.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index fcb9719..90b51f3 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1076,6 +1076,15 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
>  {
>  	struct page *page;
>
> +#ifdef CONFIG_CMA
> +	unsigned long nr_free = zone_page_state(zone, NR_FREE_PAGES);
> +	unsigned long nr_cma_free = zone_page_state(zone, NR_FREE_CMA_PAGES);
> +
> +	if (migratetype == MIGRATE_MOVABLE && nr_cma_free &&
> +	    nr_free - nr_cma_free < 2 * low_wmark_pages(zone))
> +		migratetype = MIGRATE_CMA;
> +#endif /* CONFIG_CMA */
> +
>  retry_reserve:
>  	page = __rmqueue_smallest(zone, order, migratetype);

erk, this is right on the page allocator hotpath.  Bad.

At the very least, we could code it so it is not quite so dreadfully
inefficient:

	if (migratetype == MIGRATE_MOVABLE) {
		unsigned long nr_cma_free;

		nr_cma_free = zone_page_state(zone, NR_FREE_CMA_PAGES);
		if (nr_cma_free) {
			unsigned long nr_free;

			nr_free = zone_page_state(zone, NR_FREE_PAGES);
			if (nr_free - nr_cma_free < 2 * low_wmark_pages(zone))
				migratetype = MIGRATE_CMA;
		}
	}

but it still looks pretty bad.
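
For readers who want to poke at the numbers, here is a minimal
userspace model of the proposed check. The zone_model struct and every
value in it are invented stand-ins for the kernel's zone counters and
low_wmark_pages(); the sketch only shows where the switch to
MIGRATE_CMA would flip on as free pages drain:

	/*
	 * Userspace model of the proposed heuristic -- not kernel code.
	 * All names and numbers are hypothetical.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct zone_model {
		unsigned long nr_free;      /* models NR_FREE_PAGES */
		unsigned long nr_cma_free;  /* models NR_FREE_CMA_PAGES */
		unsigned long low_wmark;    /* models low_wmark_pages(zone) */
	};

	/* True when a MOVABLE allocation would fall back to MIGRATE_CMA. */
	static bool use_cma_fallback(const struct zone_model *z)
	{
		/*
		 * Non-CMA free pages are total free minus CMA free; fall
		 * back once that drops below twice the low watermark.
		 */
		return z->nr_cma_free &&
		       z->nr_free - z->nr_cma_free < 2 * z->low_wmark;
	}

	int main(void)
	{
		/* 4096 CMA pages free, low watermark 1024 (invented). */
		struct zone_model z = { .nr_cma_free = 4096,
					.low_wmark = 1024 };

		for (z.nr_free = 16384; z.nr_free >= 4096; z.nr_free -= 4096)
			printf("nr_free=%5lu -> fallback=%d\n",
			       z.nr_free, use_cma_fallback(&z));
		return 0;
	}

With these numbers the fallback stays off until nr_free drops to 4096
(non-CMA free = 0 < 2048), which is the behaviour the commit message
describes. Note also the ordering in the rewritten snippet above: the
migratetype test costs nothing, and the NR_FREE_PAGES read is taken
only after the cheaper NR_FREE_CMA_PAGES check passes, so the common
case touches at most one vmstat counter.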
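
If even that is too much for the hot path, one generic pattern -- a
sketch only, with hypothetical names; real page-allocator code would
need per-zone (and likely per-cpu) state rather than file-scope
statics -- is to recompute the condition on every N-th call and cache
the result in a flag that the hot path can test for free:

	#include <stdbool.h>

	#define RECHECK_INTERVAL 64	/* invented value */

	struct zone_model {
		unsigned long nr_free;
		unsigned long nr_cma_free;
		unsigned long low_wmark;
	};

	static bool cached_fallback;
	/* Start past the interval so the first call computes the flag. */
	static unsigned int calls_since_recheck = RECHECK_INTERVAL;

	static bool should_use_cma_fallback(const struct zone_model *z)
	{
		if (++calls_since_recheck > RECHECK_INTERVAL) {
			calls_since_recheck = 0;
			cached_fallback = z->nr_cma_free &&
				z->nr_free - z->nr_cma_free <
					2 * z->low_wmark;
		}
		return cached_fallback;
	}

The trade-off is staleness: the allocator may keep using (or avoiding)
the CMA fallback for up to RECHECK_INTERVAL allocations after the
counters cross the threshold, which may or may not be acceptable for a
watermark heuristic like this one.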