From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
 lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
 richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com,
 rong.a.chen@intel.com
Subject: [PATCH v17 14/21] mm/compaction: do page isolation first in compaction
Date: Sat, 25 Jul 2020 20:59:51 +0800
Message-Id: <1595681998-19193-15-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>

Currently, compaction takes the lru_lock and then does page isolation, which
works fine with pgdat->lru_lock, since any page isolation has to compete for
that lock. If we want to change to a per-memcg lru_lock, we have to isolate
the page before taking the lru_lock; isolation then blocks the page's memcg
from changing, since a memcg change itself relies on page isolation. After
that, we can safely use a per-memcg lru_lock later.

The new page isolation uses the previously introduced TestClearPageLRU()
plus pgdat lru locking, which will be changed to the memcg lru lock later.

Hugh Dickins fixed the following bugs in an early version of this patch:

Fix lots of crashes under compaction load: isolate_migratepages_block()
must clean up appropriately when rejecting a page, setting PageLRU again
if it had been cleared; and a put_page() after get_page_unless_zero()
cannot safely be done while holding locked_lruvec -- it may turn out to
be the final put_page(), which will take an lruvec lock when PageLRU.
And move __isolate_lru_page_prepare back after get_page_unless_zero to
make trylock_page() safe: trylock_page() is not safe to use at this time;
its setting of PG_locked can race with the page being freed or allocated
("Bad page"), and can also erase flags being set by one of those "sole
owners" of a freshly allocated page who use non-atomic __SetPageFlag().
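Putting the pieces together, the isolation fast path implemented below is
roughly the following (a simplified sketch, not the exact code from the
diff; the fail/fail_put labels stand in for the patch's isolate_fail and
isolate_fail_put):

	/*
	 * New ordering: pin the page and clear PageLRU *before* taking
	 * the lru_lock, so that the lock (later per-memcg) no longer
	 * guards the isolation itself.
	 */
	if (unlikely(!get_page_unless_zero(page)))
		goto fail;		/* page is being freed elsewhere */

	if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
		goto fail_put;		/* isolate mode disallows this page */

	if (!TestClearPageLRU(page))
		goto fail_put;		/* another path isolated it first */

	/* only now take pgdat->lru_lock and take the page off its list */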
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/swap.h |  2 +-
 mm/compaction.c      | 42 +++++++++++++++++++++++++++++++++---------
 mm/vmscan.c          | 46 ++++++++++++++++++++++++++--------------------
 3 files changed, 60 insertions(+), 30 deletions(-)
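The put_page() rule Hugh describes corresponds to the isolate_fail_put
path in the mm/compaction.c hunks below; condensed here for emphasis
(same logic as the diff, not an extra change):

isolate_fail_put:
	/*
	 * Never hold the lru_lock across put_page(): the final
	 * put_page() of a PageLRU page takes an lruvec lock itself,
	 * which may be the very lock we are holding.
	 */
	if (locked) {
		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
		locked = false;
	}
	put_page(page);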
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2c29399b29a0..6d23d3beeff7 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -358,7 +358,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
					gfp_t gfp_mask, nodemask_t *mask);
-extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
+extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
						  unsigned long nr_pages,
						  gfp_t gfp_mask,
diff --git a/mm/compaction.c b/mm/compaction.c
index f14780fc296a..2da2933fe56b 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -869,6 +869,7 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		if (!valid_page && IS_ALIGNED(low_pfn, pageblock_nr_pages)) {
 			if (!cc->ignore_skip_hint && get_pageblock_skip(page)) {
 				low_pfn = end_pfn;
+				page = NULL;
 				goto isolate_abort;
 			}
 			valid_page = page;
@@ -950,6 +951,21 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
 			goto isolate_fail;
 
+		/*
+		 * Be careful not to clear PageLRU until after we're
+		 * sure the page is not being freed elsewhere -- the
+		 * page release code relies on it.
+		 */
+		if (unlikely(!get_page_unless_zero(page)))
+			goto isolate_fail;
+
+		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
+			goto isolate_fail_put;
+
+		/* Try isolate the page */
+		if (!TestClearPageLRU(page))
+			goto isolate_fail_put;
+
 		/* If we already hold the lock, we can skip some rechecking */
 		if (!locked) {
 			locked = compact_lock_irqsave(&pgdat->lru_lock,
@@ -962,10 +978,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
 				goto isolate_abort;
 			}
 
-			/* Recheck PageLRU and PageCompound under lock */
-			if (!PageLRU(page))
-				goto isolate_fail;
-
 			/*
 			 * Page become compound since the non-locked check,
 			 * and it's on LRU. It can only be a THP so the order
@@ -973,16 +985,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
 			 */
 			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
 				low_pfn += compound_nr(page) - 1;
-				goto isolate_fail;
+				SetPageLRU(page);
+				goto isolate_fail_put;
 			}
 		}
 
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
-		/* Try isolate the page */
-		if (__isolate_lru_page(page, isolate_mode) != 0)
-			goto isolate_fail;
-
 		/* The whole page is taken off the LRU; skip the tail pages. */
 		if (PageCompound(page))
 			low_pfn += compound_nr(page) - 1;
@@ -1011,6 +1020,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		}
 
 		continue;
+
+isolate_fail_put:
+		/* Avoid potential deadlock in freeing page under lru_lock */
+		if (locked) {
+			spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+			locked = false;
+		}
+		put_page(page);
+
 isolate_fail:
 		if (!skip_on_failure)
 			continue;
@@ -1047,9 +1065,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
 	if (unlikely(low_pfn > end_pfn))
 		low_pfn = end_pfn;
 
+	page = NULL;
+
 isolate_abort:
 	if (locked)
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+	if (page) {
+		SetPageLRU(page);
+		put_page(page);
+	}
 
 	/*
 	 * Updated the cached scanner pfn once the pageblock has been scanned
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4183ae6b54b5..f77748adc340 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1544,20 +1544,20 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
  *
  * returns 0 on success, -ve errno on failure.
  */
-int __isolate_lru_page(struct page *page, isolate_mode_t mode)
+int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 {
 	int ret = -EINVAL;
 
-	/* Only take pages on the LRU. */
-	if (!PageLRU(page))
-		return ret;
-
 	/* Compaction should not handle unevictable pages but CMA can do so */
 	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
 		return ret;
 
 	ret = -EBUSY;
 
+	/* Only take pages on the LRU. */
+	if (!PageLRU(page))
+		return ret;
+
 	/*
 	 * To minimise LRU disruption, the caller can indicate that it only
 	 * wants to isolate pages it will be able to operate on without
@@ -1598,20 +1598,9 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
 		return ret;
 
-	if (likely(get_page_unless_zero(page))) {
-		/*
-		 * Be careful not to clear PageLRU until after we're
-		 * sure the page is not being freed elsewhere -- the
-		 * page release code relies on it.
-		 */
-		ClearPageLRU(page);
-		ret = 0;
-	}
-
-	return ret;
+	return 0;
 }
 
-
 /*
  * Update LRU sizes after isolating pages. The LRU size updates must
  * be complete before mem_cgroup_update_lru_size due to a sanity check.
@@ -1691,17 +1680,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		 * only when the page is being freed somewhere else.
 		 */
 		scan += nr_pages;
-		switch (__isolate_lru_page(page, mode)) {
+		switch (__isolate_lru_page_prepare(page, mode)) {
 		case 0:
+			/*
+			 * Be careful not to clear PageLRU until after we're
+			 * sure the page is not being freed elsewhere -- the
+			 * page release code relies on it.
+			 */
+			if (unlikely(!get_page_unless_zero(page)))
+				goto busy;
+
+			if (!TestClearPageLRU(page)) {
+				/*
+				 * This page may be in another isolation path,
+				 * but we still hold the lru_lock.
+				 */
+				put_page(page);
+				goto busy;
+			}
+
 			nr_taken += nr_pages;
 			nr_zone_taken[page_zonenum(page)] += nr_pages;
 			list_move(&page->lru, dst);
 			break;
-
+busy:
 		case -EBUSY:
 			/* else it is being freed elsewhere */
 			list_move(&page->lru, src);
-			continue;
+			break;
 
 		default:
 			BUG();
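The rule shared by both call sites above, condensed (a sketch, not code
from the patch; the -EBUSY returns stand in for each caller's own failure
handling):

	/*
	 * A reference first, then the atomic TestClearPageLRU():
	 * exactly one racing isolator can win the flag; everyone
	 * else sees !PageLRU and backs off.
	 */
	if (unlikely(!get_page_unless_zero(page)))
		return -EBUSY;		/* page is being freed elsewhere */

	if (!TestClearPageLRU(page)) {
		put_page(page);		/* lost the race; drop our reference */
		return -EBUSY;
	}

	/* exclusive owner of the page's LRU state from here on */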
-- 
1.8.3.1