From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
    iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com
Cc: Andrea Arcangeli
Subject: [PATCH v21 04/19] mm/thp: narrow lru locking
Date: Thu, 5 Nov 2020 16:55:34 +0800
Message-Id: <1604566549-62481-5-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com>

lru_lock and page cache xa_lock have no obvious reason to be taken one
way round or the other: until now, lru_lock has been taken before page
cache xa_lock, when splitting a THP; but nothing else takes them
together.

Reverse that ordering: let's narrow the lru locking - but leave
local_irq_disable() to block interrupts throughout, like before.

Hugh Dickins' point: split_huge_page_to_list() was already silly to be
using the _irqsave variant: it's just been taking sleeping locks, so
would already be broken if entered with interrupts enabled. So we can
save passing the flags argument down to __split_huge_page().

Why change the lock ordering here? That was hard to decide. One reason:
when this series reaches per-memcg lru locking, it relies on the THP's
memcg to be stable when taking the lru_lock: that is now done after the
THP's refcount has been frozen, which ensures page memcg cannot change.

Another reason: previously, lock_page_memcg()'s move_lock was presumed
to nest inside lru_lock; but now lru_lock must nest inside (page cache
lock inside) move_lock, so it becomes possible to use lock_page_memcg()
to stabilize page memcg before taking its lru_lock. That is not the
mechanism used in this series, but it is an option we want to keep open.

[Hugh Dickins: rewrite commit log]

Signed-off-by: Alex Shi
Reviewed-by: Kirill A. Shutemov
Acked-by: Hugh Dickins
Cc: Hugh Dickins
Cc: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
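A sketch of the lock ordering change, for review convenience. This is
illustrative only, not code from the patch: "split tail pages" stands
in for the __split_huge_page_tail() loop, and pgdat/mapping/flags are
assumed to be in scope as in the functions below.

	/*
	 * Before this patch: lru_lock is the outer lock, taken with the
	 * _irqsave variant, and the page cache xa_lock nests inside it
	 * for the whole of the split.
	 */
	spin_lock_irqsave(&pgdat->lru_lock, flags);
	xa_lock(&mapping->i_pages);
	/* ... split tail pages ... */
	xa_unlock(&mapping->i_pages);
	spin_unlock_irqrestore(&pgdat->lru_lock, flags);

	/*
	 * After this patch: interrupts stay disabled across the same
	 * region, but lru_lock is narrowed to the tail-splitting loop
	 * and now nests inside the page cache xa_lock.
	 */
	local_irq_disable();
	xa_lock(&mapping->i_pages);
	spin_lock(&pgdat->lru_lock);
	/* ... split tail pages ... */
	spin_unlock(&pgdat->lru_lock);
	xa_unlock(&mapping->i_pages);
	local_irq_enable();

Note the irq-disabled region is unchanged; only the lru_lock critical
section shrinks, which is what the per-memcg lru locking later in this
series relies on.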
 mm/huge_memory.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 79318d7f7d5d..b70ec0c6076b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2435,7 +2435,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned long flags)
+		pgoff_t end)
 {
 	struct page *head = compound_head(page);
 	pg_data_t *pgdat = page_pgdat(head);
@@ -2445,8 +2445,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unsigned int nr = thp_nr_pages(head);
 	int i;
 
-	lruvec = mem_cgroup_page_lruvec(head, pgdat);
-
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(head);
 
@@ -2458,6 +2456,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
+	/* prevent PageLRU to go away from under us, and freeze lru stats */
+	spin_lock(&pgdat->lru_lock);
+
+	lruvec = mem_cgroup_page_lruvec(head, pgdat);
+
 	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
@@ -2477,6 +2480,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	ClearPageCompound(head);
+	spin_unlock(&pgdat->lru_lock);
+	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, nr);
 
@@ -2494,8 +2499,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		page_ref_add(head, 2);
 		xa_unlock(&head->mapping->i_pages);
 	}
-
-	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+	local_irq_enable();
 
 	remap_page(head, nr);
 
@@ -2641,12 +2645,10 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct page *head = compound_head(page);
-	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	unsigned long flags;
 	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
@@ -2707,9 +2709,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(&pgdata->lru_lock, flags);
-
+	/* block interrupt reentry in xa_lock and spinlock */
+	local_irq_disable();
 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
 
@@ -2739,7 +2740,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 				__dec_lruvec_page_state(head, NR_FILE_THPS);
 		}
 
-		__split_huge_page(page, list, end, flags);
+		__split_huge_page(page, list, end);
 		ret = 0;
 	} else {
 		if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
@@ -2753,7 +2754,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
 fail:		if (mapping)
 			xa_unlock(&mapping->i_pages);
-		spin_unlock_irqrestore(&pgdata->lru_lock, flags);
+		local_irq_enable();
 		remap_page(head, thp_nr_pages(head));
 		ret = -EBUSY;
 	}
-- 
1.8.3.1