Date: Wed, 25 Feb 2015 10:00:11 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, David Rientjes, Rik van Riel,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Zhang Yanfei
Subject: Re: [PATCH v4 3/3] mm/compaction: enhance compaction finish condition
Message-ID: <20150225010011.GB16796@js1304-P5Q-DELUXE>
In-Reply-To: <20150218160405.526418167249559d0ed3efc5@linux-foundation.org>

On Wed, Feb 18, 2015 at 04:04:05PM -0800, Andrew Morton wrote:
> On Thu, 12 Feb 2015 16:15:05 +0900 Joonsoo Kim wrote:
>
> > Compaction has an anti-fragmentation algorithm: if no freepage is
> > found in the requested migratetype's buddy list, compaction may only
> > finish once a free page of at least pageblock order exists. This
> > mitigates fragmentation, but it does not consider migratetypes and is
> > far stricter than the page allocator's own anti-fragmentation logic.
> >
> > Not considering the migratetype can finish compaction prematurely.
> > For example, if an allocation request is for the unmovable
> > migratetype, a free page of CMA migratetype does not help that
> > allocation, so compaction should not be stopped. The current logic,
> > however, treats this situation as "compaction no longer needed" and
> > finishes.
> >
> > Secondly, the condition is far stricter than the page allocator's
> > logic: the page allocator can steal a freepage from another
> > migratetype and change the pageblock's migratetype under much more
> > relaxed conditions. That mechanism is itself designed to prevent
> > fragmentation, so we can reuse it here. Imposing the hard constraint
> > only on compaction does not help much, since the page allocator would
> > cause fragmentation again anyway.
> >
> > To solve these problems, this patch borrows the anti-fragmentation
> > logic from the page allocator. It reduces premature compaction finish
> > in some cases and avoids excessive compaction work.
> >
> > The stress-highalloc test in mmtests with non-movable order-7
> > allocations shows a considerable increase in compaction success rate:
> >
> > Compaction success rate (Compaction success * 100 / Compaction stalls, %)
> > 31.82 : 42.20
> >
> > I ran the stress-highalloc benchmark five times without rebooting and
> > found no degradation in allocation success rate compared to before,
> > which roughly means this patch does not cause more fragmentation.
> >
> > Vlastimil suggested the additional idea of testing for fallbacks only
> > when the migration scanner has scanned a whole pageblock. That looked
> > good for fragmentation, since making more free pages within one
> > pageblock increases the chance of stealing it. I tested it, but it
> > decreased the compaction success rate to roughly 38.00. I guess that
> > under low-memory conditions the watermark check can fail for lack of
> > order-0 free pages, so we sometimes never reach the fallback check
> > even though migrate_pfn is aligned to pageblock_nr_pages.
> > I could insert code to cope with this situation, but it would make
> > the code more complicated, so I have not included his idea in this
> > patch.
> >
> > ...
> >
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -1170,13 +1170,23 @@ static int __compact_finished(struct zone *zone, struct compact_control *cc,
> >  	/* Direct compactor: Is a suitable page free? */
> >  	for (order = cc->order; order < MAX_ORDER; order++) {
> >  		struct free_area *area = &zone->free_area[order];
> > +		bool can_steal;
> >
> >  		/* Job done if page is free of the right migratetype */
> >  		if (!list_empty(&area->free_list[migratetype]))
> >  			return COMPACT_PARTIAL;
> >
> > -		/* Job done if allocation would set block type */
> > -		if (order >= pageblock_order && area->nr_free)
> > +		/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
> > +		if (migratetype == MIGRATE_MOVABLE &&
> > +			!list_empty(&area->free_list[MIGRATE_CMA]))
> > +			return COMPACT_PARTIAL;
>
> MIGRATE_CMA isn't defined if CONFIG_CMA=n.
>
> --- a/mm/compaction.c~mm-compaction-enhance-compaction-finish-condition-fix
> +++ a/mm/compaction.c
> @@ -1180,11 +1180,12 @@ static int __compact_finished(struct zon
>  	if (!list_empty(&area->free_list[migratetype]))
>  		return COMPACT_PARTIAL;
>
> +#ifdef CONFIG_CMA
>  	/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
>  	if (migratetype == MIGRATE_MOVABLE &&
>  	    !list_empty(&area->free_list[MIGRATE_CMA]))
>  		return COMPACT_PARTIAL;
> -
> +#endif
>  	/*
>  	 * Job done if allocation would steal freepages from
>  	 * other migratetype buddy lists.
>
> Please review the rest of the patchset for the CONFIG_CMA=n case (is it
> all necessary?), runtime test it and let me know?

Okay. Below is an updated version that fixes the compile error for
CONFIG_CMA=n and includes the minor clean-up pointed out by Vlastimil.
I ran runtime tests for both CONFIG_CMA=y and CONFIG_CMA=n and both
were fine. :)

Thanks.
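(Editor's aside: for readers without the tree at hand, the finish-condition
change discussed above can be modeled in self-contained user-space C. The
toy_* types and helpers below are simplified stand-ins, not the kernel API;
the real __compact_finished() also iterates over orders and checks the
buddy free lists and watermarks.)

```c
#include <assert.h>
#include <stdbool.h>

/* Toy migratetypes; the kernel's enum is larger and CMA is optional. */
enum { MT_UNMOVABLE, MT_MOVABLE, MT_RECLAIMABLE, MT_CMA, MT_NR };

/* Toy free_area: nonempty[mt] says whether that buddy list has a page. */
struct toy_free_area {
	bool nonempty[MT_NR];
};

/*
 * Hypothetical stand-in for find_suitable_fallback(): return any
 * non-empty list other than the requested type and CMA, else -1.
 * (The real function also consults the fallback priority table.)
 */
static int toy_find_fallback(const struct toy_free_area *area, int migratetype)
{
	for (int mt = 0; mt < MT_NR; mt++) {
		if (mt == migratetype || mt == MT_CMA)
			continue;
		if (area->nonempty[mt])
			return mt;
	}
	return -1;
}

/*
 * Mirrors the patched finish check for a single order:
 * 1) a free page of the right migratetype exists     -> done
 * 2) MOVABLE requests may also use MIGRATE_CMA pages -> done
 * 3) a fallback we could steal from exists           -> done
 */
static bool toy_compact_done(const struct toy_free_area *area, int migratetype)
{
	if (area->nonempty[migratetype])
		return true;
	if (migratetype == MT_MOVABLE && area->nonempty[MT_CMA])
		return true;
	return toy_find_fallback(area, migratetype) != -1;
}
```

Note how this captures the commit message's first point: with only CMA
pages free, an unmovable request is not "done", while a movable one is.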
------->8----------
>From 2f38f08289618598d0bf981218dbb9d6fe7ffa7f Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Date: Mon, 9 Feb 2015 10:59:56 +0900
Subject: [PATCH v5 3/3] mm/compaction: enhance compaction finish condition

Compaction has an anti-fragmentation algorithm: if no freepage is found
in the requested migratetype's buddy list, compaction may only finish
once a free page of at least pageblock order exists. This mitigates
fragmentation, but it does not consider migratetypes and is far stricter
than the page allocator's own anti-fragmentation logic.

Not considering the migratetype can finish compaction prematurely. For
example, if an allocation request is for the unmovable migratetype, a
free page of CMA migratetype does not help that allocation, so
compaction should not be stopped. The current logic, however, treats
this situation as "compaction no longer needed" and finishes.

Secondly, the condition is far stricter than the page allocator's logic:
the page allocator can steal a freepage from another migratetype and
change the pageblock's migratetype under much more relaxed conditions.
That mechanism is itself designed to prevent fragmentation, so we can
reuse it here. Imposing the hard constraint only on compaction does not
help much, since the page allocator would cause fragmentation again
anyway.

To solve these problems, this patch borrows the anti-fragmentation logic
from the page allocator. It reduces premature compaction finish in some
cases and avoids excessive compaction work.

The stress-highalloc test in mmtests with non-movable order-7
allocations shows a considerable increase in compaction success rate:

Compaction success rate (Compaction success * 100 / Compaction stalls, %)
31.82 : 42.20

I ran the stress-highalloc benchmark five times without rebooting and
found no degradation in allocation success rate compared to before,
which roughly means this patch does not cause more fragmentation.
Vlastimil suggested the additional idea of testing for fallbacks only
when the migration scanner has scanned a whole pageblock. That looked
good for fragmentation, since making more free pages within one
pageblock increases the chance of stealing it. I tested it, but it
decreased the compaction success rate to roughly 38.00. I guess that
under low-memory conditions the watermark check can fail for lack of
order-0 free pages, so we sometimes never reach the fallback check even
though migrate_pfn is aligned to pageblock_nr_pages. I could insert code
to cope with this situation, but it would make the code more
complicated, so I have not included his idea in this patch.

v5: fix compile error if !CONFIG_CMA. minor cleanup.

Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
 mm/compaction.c | 16 ++++++++++++++--
 mm/internal.h   |  2 ++
 mm/page_alloc.c | 20 ++++++++++++--------
 3 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 8c0d945..ab8037b 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1174,13 +1174,25 @@ static int __compact_finished(struct zone *zone, struct compact_control *cc,
 	/* Direct compactor: Is a suitable page free? */
 	for (order = cc->order; order < MAX_ORDER; order++) {
 		struct free_area *area = &zone->free_area[order];
+		bool can_steal;

 		/* Job done if page is free of the right migratetype */
 		if (!list_empty(&area->free_list[migratetype]))
 			return COMPACT_PARTIAL;

-		/* Job done if allocation would set block type */
-		if (order >= pageblock_order && area->nr_free)
+#ifdef CONFIG_CMA
+		/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
+		if (migratetype == MIGRATE_MOVABLE &&
+		    !list_empty(&area->free_list[MIGRATE_CMA]))
+			return COMPACT_PARTIAL;
+#endif
+
+		/*
+		 * Job done if allocation would steal freepages from
+		 * other migratetype buddy lists.
+		 */
+		if (find_suitable_fallback(area, order, migratetype,
+						true, &can_steal) != -1)
 			return COMPACT_PARTIAL;
 	}

diff --git a/mm/internal.h b/mm/internal.h
index a96da5b..ba878bf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -200,6 +200,8 @@ isolate_freepages_range(struct compact_control *cc,
 unsigned long
 isolate_migratepages_range(struct compact_control *cc,
 			   unsigned long low_pfn, unsigned long end_pfn);
+int find_suitable_fallback(struct free_area *area, unsigned int order,
+			int migratetype, bool only_stealable, bool *can_steal);

 #endif

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c397379..ddf59bf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1194,9 +1194,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	set_pageblock_migratetype(page, start_type);
 }

-/* Check whether there is a suitable fallback freepage with requested order. */
-static int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool *can_steal)
+/*
+ * Check whether there is a suitable fallback freepage with requested order.
+ * If only_stealable is true, this function returns fallback_mt only if
+ * we can steal other freepages all together. This would help to reduce
+ * fragmentation due to mixed migratetype pages in one pageblock.
+ */
+int find_suitable_fallback(struct free_area *area, unsigned int order,
+			int migratetype, bool only_stealable, bool *can_steal)
 {
 	int i;
 	int fallback_mt;
@@ -1213,10 +1218,9 @@ static int find_suitable_fallback(struct free_area *area, unsigned int order,
 		if (list_empty(&area->free_list[fallback_mt]))
 			continue;

-		if (can_steal_fallback(order, migratetype))
-			*can_steal = true;
-
-		return fallback_mt;
+		*can_steal = can_steal_fallback(order, migratetype);
+		if (!only_stealable || *can_steal)
+			return fallback_mt;
 	}

 	return -1;
@@ -1238,7 +1242,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 			--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &can_steal);

 		if (fallback_mt == -1)
 			continue;
--
1.9.1
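(Editor's aside: the only_stealable contract the patch adds to
find_suitable_fallback() can be sketched in self-contained user-space C.
The toy_* names and structures below are hypothetical simplifications, not
the kernel implementation; the real function walks the per-migratetype
fallback table and calls can_steal_fallback().)

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_NR_MT 3  /* toy migratetypes: 0, 1, 2 */

struct toy_area {
	bool nonempty[TOY_NR_MT];   /* free list non-empty per migratetype? */
	bool stealable[TOY_NR_MT];  /* would can_steal_fallback() say yes? */
};

/*
 * Toy version of the patched find_suitable_fallback(): walk candidate
 * fallbacks; with only_stealable=true, skip candidates we could allocate
 * from but not steal wholesale. Compaction passes true so that it only
 * finishes when a whole-pageblock steal is possible, which limits
 * migratetype mixing within a pageblock.
 */
static int toy_find_suitable_fallback(const struct toy_area *area,
				      int migratetype, bool only_stealable,
				      bool *can_steal)
{
	*can_steal = false;
	for (int mt = 0; mt < TOY_NR_MT; mt++) {
		if (mt == migratetype || !area->nonempty[mt])
			continue;
		*can_steal = area->stealable[mt];
		if (!only_stealable || *can_steal)
			return mt;
	}
	return -1;
}
```

As in the patch, __rmqueue_fallback()'s behavior corresponds to
only_stealable=false (any non-empty fallback list is acceptable), while
compaction's finish check corresponds to only_stealable=true.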