From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: David Rientjes, Andrea Arcangeli, Vlastimil Babka,
    Linux List Kernel Mailing, Linux-MM, Mel Gorman
Subject: [PATCH 10/22] mm, compaction: Keep migration source private to a single compaction instance
Date: Fri, 18 Jan 2019 17:51:24 +0000
Message-Id: <20190118175136.31341-11-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190118175136.31341-1-mgorman@techsingularity.net>
References: <20190118175136.31341-1-mgorman@techsingularity.net>

Due to either a fast search of the free list or a linear scan, it is
possible for multiple compaction instances to pick the same pageblock
for migration. This is fortunate for one scanner but means increased
scanning for all the others. It also allows a race between requests
over which one first allocates the resulting free block.

This patch tests and updates the pageblock skip bit for the migration
scanner carefully. When isolating a block, the scanner first checks
whether the block is already in use and skips it if so. Once the zone
lock is acquired, the skip bit is rechecked so that only one scanner
can set it and claim the pageblock for exclusive use. Any contending
scanner continues with a linear scan. The skip bit is still set if no
pages can be isolated in a range. While this may result in redundant
scanning, it avoids unnecessarily acquiring the zone lock when there
are no suitable migration sources.

1-socket thpscale
                                      5.0.0-rc1              5.0.0-rc1
                                  findmig-v3r15          isolmig-v3r15
Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
Amean     fault-both-3      3390.40 (   0.00%)     3024.41 (  10.80%)
Amean     fault-both-5      5082.28 (   0.00%)     4749.30 (   6.55%)
Amean     fault-both-7      7012.51 (   0.00%)     6454.95 (   7.95%)
Amean     fault-both-12    11346.63 (   0.00%)    10324.83 (   9.01%)
Amean     fault-both-18    15324.19 (   0.00%)    12896.82 *  15.84%*
Amean     fault-both-24    16088.50 (   0.00%)    13470.60 *  16.27%*
Amean     fault-both-30    18723.42 (   0.00%)    17143.99 (   8.44%)
Amean     fault-both-32    18612.01 (   0.00%)    17743.91 (   4.66%)

                                      5.0.0-rc1              5.0.0-rc1
                                  findmig-v3r15          isolmig-v3r15
Percentage huge-3            89.83 (   0.00%)       92.96 (   3.48%)
Percentage huge-5            91.96 (   0.00%)       93.26 (   1.41%)
Percentage huge-7            92.85 (   0.00%)       93.63 (   0.84%)
Percentage huge-12           92.74 (   0.00%)       92.80 (   0.07%)
Percentage huge-18           91.71 (   0.00%)       91.62 (  -0.10%)
Percentage huge-24           92.13 (   0.00%)       91.50 (  -0.69%)
Percentage huge-30           93.79 (   0.00%)       92.73 (  -1.13%)
Percentage huge-32           91.27 (   0.00%)       91.94 (   0.74%)

This shows a reasonable reduction in latency, as multiple compaction
scanners no longer operate on the same blocks, while the allocation
success rate remains similar.

Compaction migrate scanned    41093126    25646769

Migration scan rates are reduced by 38%.

Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
---
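For anyone unfamiliar with the skip hint, the claim semantics added here
boil down to a test-and-set: the first scanner to set a pageblock's skip
bit owns the block, and every later scanner backs off to another block.
Below is a minimal userspace sketch of that idea, not kernel code:
toy_test_and_set_skip(), skip_bit[] and NR_PAGEBLOCKS are invented for
this illustration, and the real recheck happens on the pageblock skip
hint under the zone lock rather than with C11 atomics.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGEBLOCKS 8

/* One "skip" bit per pageblock, standing in for the zone's pageblock flags. */
static atomic_bool skip_bit[NR_PAGEBLOCKS];

/*
 * Returns true if the block was already claimed by another scanner;
 * otherwise sets the bit and returns false, giving the caller exclusive
 * use of the block.  This mirrors the role of test_and_set_skip().
 */
static bool toy_test_and_set_skip(int block)
{
	return atomic_exchange(&skip_bit[block], true);
}

int main(void)
{
	int block = 3;

	/* Two scanners racing for the same pageblock: only one wins. */
	if (!toy_test_and_set_skip(block))
		printf("scanner A claimed pageblock %d\n", block);
	if (toy_test_and_set_skip(block))
		printf("scanner B backs off, pageblock %d already claimed\n", block);

	return 0;
}

The losing scanner in the patch does not wait for the winner; it simply
continues with the linear scan and moves on to another pageblock, so
contention costs a little extra scanning rather than serialising the
compaction instances.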
 mm/compaction.c | 124 ++++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 99 insertions(+), 25 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 92d10eb3d1c7..7c4c9cce7907 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -285,13 +285,52 @@ void reset_isolation_suitable(pg_data_t *pgdat)
 	}
 }
 
+/*
+ * Sets the pageblock skip bit if it was clear. Note that this is a hint as
+ * locks are not required for readers/writers. Returns true if it was already set.
+ */
+static bool test_and_set_skip(struct compact_control *cc, struct page *page,
+					unsigned long pfn)
+{
+	bool skip;
+
+	/* Do not update if skip hint is being ignored */
+	if (cc->ignore_skip_hint)
+		return false;
+
+	if (!IS_ALIGNED(pfn, pageblock_nr_pages))
+		return false;
+
+	skip = get_pageblock_skip(page);
+	if (!skip && !cc->no_set_skip_hint)
+		set_pageblock_skip(page);
+
+	return skip;
+}
+
+static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
+{
+	struct zone *zone = cc->zone;
+
+	pfn = pageblock_end_pfn(pfn);
+
+	/* Set for isolation rather than compaction */
+	if (cc->no_set_skip_hint)
+		return;
+
+	if (pfn > zone->compact_cached_migrate_pfn[0])
+		zone->compact_cached_migrate_pfn[0] = pfn;
+	if (cc->mode != MIGRATE_ASYNC &&
+	    pfn > zone->compact_cached_migrate_pfn[1])
+		zone->compact_cached_migrate_pfn[1] = pfn;
+}
+
 /*
  * If no pages were isolated then mark this pageblock to be skipped in the
  * future. The information is later cleared by __reset_isolation_suitable().
  */
 static void update_pageblock_skip(struct compact_control *cc,
-			struct page *page, unsigned long nr_isolated,
-			bool migrate_scanner)
+			struct page *page, unsigned long nr_isolated)
 {
 	struct zone *zone = cc->zone;
 	unsigned long pfn;
@@ -310,16 +349,8 @@ static void update_pageblock_skip(struct compact_control *cc,
 	pfn = page_to_pfn(page);
 
 	/* Update where async and sync compaction should restart */
-	if (migrate_scanner) {
-		if (pfn > zone->compact_cached_migrate_pfn[0])
-			zone->compact_cached_migrate_pfn[0] = pfn;
-		if (cc->mode != MIGRATE_ASYNC &&
-		    pfn > zone->compact_cached_migrate_pfn[1])
-			zone->compact_cached_migrate_pfn[1] = pfn;
-	} else {
-		if (pfn < zone->compact_cached_free_pfn)
-			zone->compact_cached_free_pfn = pfn;
-	}
+	if (pfn < zone->compact_cached_free_pfn)
+		zone->compact_cached_free_pfn = pfn;
 }
 #else
 static inline bool isolation_suitable(struct compact_control *cc,
@@ -334,10 +365,19 @@ static inline bool pageblock_skip_persistent(struct page *page)
 }
 
 static inline void update_pageblock_skip(struct compact_control *cc,
-			struct page *page, unsigned long nr_isolated,
-			bool migrate_scanner)
+			struct page *page, unsigned long nr_isolated)
 {
 }
+
+static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
+{
+}
+
+static bool test_and_set_skip(struct compact_control *cc, struct page *page,
+							unsigned long pfn)
+{
+	return false;
+}
 #endif /* CONFIG_COMPACTION */
 
 /*
@@ -567,7 +607,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 	/* Update the pageblock-skip if the whole pageblock was scanned */
 	if (blockpfn == end_pfn)
-		update_pageblock_skip(cc, valid_page, total_isolated, false);
+		update_pageblock_skip(cc, valid_page, total_isolated);
 
 	cc->total_free_scanned += nr_scanned;
 	if (total_isolated)
@@ -702,6 +742,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	unsigned long start_pfn = low_pfn;
 	bool skip_on_failure = false;
 	unsigned long next_skip_pfn = 0;
+	bool skip_updated = false;
 
 	/*
 	 * Ensure that there are not too many pages isolated from the LRU
@@ -768,8 +809,19 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		page = pfn_to_page(low_pfn);
 
-		if (!valid_page)
+		/*
+		 * Check if the pageblock has already been marked skipped.
+		 * Only the aligned PFN is checked as the caller isolates
+		 * COMPACT_CLUSTER_MAX at a time so the second call must
+		 * not falsely conclude that the block should be skipped.
+		 */
+		if (!valid_page && IS_ALIGNED(low_pfn, pageblock_nr_pages)) {
+			if (!cc->ignore_skip_hint && get_pageblock_skip(page)) {
+				low_pfn = end_pfn;
+				goto isolate_abort;
+			}
 			valid_page = page;
+		}
 
 		/*
 		 * Skip if free. We read page order here without zone lock
@@ -850,8 +902,19 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!locked) {
 			locked = compact_trylock_irqsave(zone_lru_lock(zone),
 								&flags, cc);
-			if (!locked)
+
+			/* Allow future scanning if the lock is contended */
+			if (!locked) {
+				clear_pageblock_skip(page);
 				break;
+			}
+
+			/* Try to get exclusive access under lock */
+			if (!skip_updated) {
+				skip_updated = true;
+				if (test_and_set_skip(cc, page, low_pfn))
+					goto isolate_abort;
+			}
 
 			/* Recheck PageLRU and PageCompound under lock */
 			if (!PageLRU(page))
@@ -929,15 +992,20 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	if (unlikely(low_pfn > end_pfn))
 		low_pfn = end_pfn;
 
+isolate_abort:
 	if (locked)
 		spin_unlock_irqrestore(zone_lru_lock(zone), flags);
 
 	/*
-	 * Update the pageblock-skip information and cached scanner pfn,
-	 * if the whole pageblock was scanned without isolating any page.
+	 * Update the cached scanner pfn if the pageblock was scanned
+	 * without isolating a page. The pageblock may not be marked
+	 * skipped already if there were no LRU pages in the block.
 	 */
-	if (low_pfn == end_pfn)
-		update_pageblock_skip(cc, valid_page, nr_isolated, true);
+	if (low_pfn == end_pfn && !nr_isolated) {
+		if (valid_page && !skip_updated)
+			set_pageblock_skip(valid_page);
+		update_cached_migrate(cc, low_pfn);
+	}
 
 	trace_mm_compaction_isolate_migratepages(start_pfn, low_pfn,
 						nr_scanned, nr_isolated);
@@ -1321,8 +1389,6 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 			nr_scanned++;
 			free_pfn = page_to_pfn(freepage);
 			if (free_pfn < high_pfn) {
-				update_fast_start_pfn(cc, free_pfn);
-
 				/*
 				 * Avoid if skipped recently. Ideally it would
 				 * move to the tail but even safe iteration of
@@ -1339,6 +1405,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 
 				/* Reorder to so a future search skips recent pages */
 				move_freelist_tail(freelist, freepage);
+				update_fast_start_pfn(cc, free_pfn);
 				pfn = pageblock_start_pfn(free_pfn);
 				cc->fast_search_fail = 0;
 				set_pageblock_skip(freepage);
@@ -1427,8 +1494,15 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		if (!page)
 			continue;
 
-		/* If isolation recently failed, do not retry */
-		if (!isolation_suitable(cc, page) && !fast_find_block)
+		/*
+		 * If isolation recently failed, do not retry. Only check the
+		 * pageblock once. COMPACT_CLUSTER_MAX causes a pageblock
+		 * to be visited multiple times. Assume skip was checked
+		 * before making it "skip" so other compaction instances do
+		 * not scan the same block.
+		 */
+		if (IS_ALIGNED(low_pfn, pageblock_nr_pages) &&
+		    !fast_find_block && !isolation_suitable(cc, page))
 			continue;
 
 		/*
-- 
2.16.4