From: Joonsoo Kim
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Rik van Riel, David Rientjes, Minchan Kim,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Joonsoo Kim
Subject: [PATCH v3 1/7] mm/compaction: skip useless pfn when updating cached pfn
Date: Thu, 3 Dec 2015 16:11:15 +0900
Message-Id: <1449126681-19647-2-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1449126681-19647-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1449126681-19647-1-git-send-email-iamjoonsoo.kim@lge.com>

The cached pfn is used to determine the scanner's start position on the
next compaction run. Currently, the cached pfn points at the skipped
pageblock itself, so on the next run we uselessly re-check whether that
pageblock is valid for compaction and whether its skip bit is set. If we
instead set the scanner's cached pfn to the first pfn past the skipped
pageblock, this check is no longer needed.

This patch also moves the update_pageblock_skip() calls into
isolate_freepages() and isolate_migratepages(). Updating the pageblock
skip information isn't relevant to CMA, so these callers are the more
appropriate place for it.

Signed-off-by: Joonsoo Kim
---
 mm/compaction.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)
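For reference, below is a rough userspace sketch of the idea, not part of
the patch; toy_zone, PAGEBLOCK_NR_PAGES and the other identifiers are
invented for illustration. Caching the first pfn past the skipped
pageblock means the next run no longer restarts on a block whose skip bit
is already known to be set:

/* Standalone illustration only, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* toy pageblock size */

struct toy_zone {
	unsigned long cached_pfn;	/* where the scanner restarts */
	bool skip[8];			/* per-pageblock skip bits */
};

/* Old behaviour: cache the start of the pageblock we just skipped. */
static void cache_skipped_block(struct toy_zone *z, unsigned long pfn)
{
	z->cached_pfn = pfn & ~(PAGEBLOCK_NR_PAGES - 1);
}

/* New behaviour: cache the first pfn of the next pageblock. */
static void cache_past_skipped_block(struct toy_zone *z, unsigned long pfn)
{
	z->cached_pfn = (pfn & ~(PAGEBLOCK_NR_PAGES - 1)) + PAGEBLOCK_NR_PAGES;
}

int main(void)
{
	struct toy_zone z = { .skip = { [2] = true } };
	unsigned long pfn = 2 * PAGEBLOCK_NR_PAGES + 17;	/* inside block 2 */

	cache_skipped_block(&z, pfn);
	/* Next run restarts in block 2 and re-checks its skip bit: wasted work. */
	printf("old: restart at pfn %lu, skip bit %d\n", z.cached_pfn,
	       z.skip[z.cached_pfn / PAGEBLOCK_NR_PAGES]);

	cache_past_skipped_block(&z, pfn);
	/* Next run restarts in block 3, past the block already known useless. */
	printf("new: restart at pfn %lu, skip bit %d\n", z.cached_pfn,
	       z.skip[z.cached_pfn / PAGEBLOCK_NR_PAGES]);
	return 0;
}

The sketch only shows the upward-moving migrate scanner case; in the patch
the free scanner walks downward, which is why isolate_freepages() passes
block_start_pfn - pageblock_nr_pages while isolate_migratepages() passes
end_pfn.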
diff --git a/mm/compaction.c b/mm/compaction.c
index 01b1e5e..564047c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -256,10 +256,9 @@ void reset_isolation_suitable(pg_data_t *pgdat)
  */
 static void update_pageblock_skip(struct compact_control *cc,
 			struct page *page, unsigned long nr_isolated,
-			bool migrate_scanner)
+			unsigned long pfn, bool migrate_scanner)
 {
 	struct zone *zone = cc->zone;
-	unsigned long pfn;
 
 	if (cc->ignore_skip_hint)
 		return;
@@ -272,8 +271,6 @@ static void update_pageblock_skip(struct compact_control *cc,
 
 	set_pageblock_skip(page);
 
-	pfn = page_to_pfn(page);
-
 	/* Update where async and sync compaction should restart */
 	if (migrate_scanner) {
 		if (pfn > zone->compact_cached_migrate_pfn[0])
@@ -295,7 +292,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
 
 static void update_pageblock_skip(struct compact_control *cc,
 			struct page *page, unsigned long nr_isolated,
-			bool migrate_scanner)
+			unsigned long pfn, bool migrate_scanner)
 {
 }
 #endif /* CONFIG_COMPACTION */
@@ -527,10 +524,6 @@ isolate_fail:
 	if (locked)
 		spin_unlock_irqrestore(&cc->zone->lock, flags);
 
-	/* Update the pageblock-skip if the whole pageblock was scanned */
-	if (blockpfn == end_pfn)
-		update_pageblock_skip(cc, valid_page, total_isolated, false);
-
 	count_compact_events(COMPACTFREE_SCANNED, nr_scanned);
 	if (total_isolated)
 		count_compact_events(COMPACTISOLATED, total_isolated);
@@ -832,13 +825,6 @@ isolate_success:
 	if (locked)
 		spin_unlock_irqrestore(&zone->lru_lock, flags);
 
-	/*
-	 * Update the pageblock-skip information and cached scanner pfn,
-	 * if the whole pageblock was scanned without isolating any page.
-	 */
-	if (low_pfn == end_pfn)
-		update_pageblock_skip(cc, valid_page, nr_isolated, true);
-
 	trace_mm_compaction_isolate_migratepages(start_pfn, low_pfn,
 						nr_scanned, nr_isolated);
 
@@ -947,6 +933,7 @@ static void isolate_freepages(struct compact_control *cc)
 	unsigned long block_end_pfn;	/* end of current pageblock */
 	unsigned long low_pfn;	     /* lowest pfn scanner is able to scan */
 	struct list_head *freelist = &cc->freepages;
+	unsigned long nr_isolated;
 
 	/*
 	 * Initialise the free scanner. The starting point is where we last
@@ -998,10 +985,18 @@ static void isolate_freepages(struct compact_control *cc)
 			continue;
 
 		/* Found a block suitable for isolating free pages from. */
-		isolate_freepages_block(cc, &isolate_start_pfn,
+		nr_isolated = isolate_freepages_block(cc, &isolate_start_pfn,
 					block_end_pfn, freelist, false);
 
 		/*
+		 * Update the pageblock-skip if the whole pageblock
+		 * was scanned
+		 */
+		if (isolate_start_pfn == block_end_pfn)
+			update_pageblock_skip(cc, page, nr_isolated,
+				block_start_pfn - pageblock_nr_pages, false);
+
+		/*
 		 * If we isolated enough freepages, or aborted due to async
 		 * compaction being contended, terminate the loop.
 		 * Remember where the free scanner should restart next time,
@@ -1172,6 +1167,14 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		cc->last_migrated_pfn = isolate_start_pfn;
 
 		/*
+		 * Update the pageblock-skip if the whole pageblock
+		 * was scanned without isolating any page.
+		 */
+		if (low_pfn == end_pfn)
+			update_pageblock_skip(cc, page, cc->nr_migratepages,
+				end_pfn, true);
+
+		/*
 		 * Either we isolated something and proceed with migration. Or
 		 * we failed and compact_zone should decide if we should
 		 * continue or not.
--
1.9.1