From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
    iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com, aaron.lwe@gmail.com
Cc: Andrea Arcangeli
Subject: [PATCH v19 06/20] mm/thp: narrow lru locking
Date: Thu, 24 Sep 2020 11:28:21 +0800
Message-Id: <1600918115-22007-7-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1600918115-22007-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1600918115-22007-1-git-send-email-alex.shi@linux.alibaba.com>

lru_lock and page cache xa_lock have no obvious reason to be taken one
way round or the other: until now, lru_lock has been taken before page
cache xa_lock when splitting a THP, but nothing else takes them
together. Reverse that ordering: let's narrow the lru locking, but
leave local_irq_disable() to block interrupts throughout, like before.

Hugh Dickins points out that split_huge_page_to_list() was already
silly to be using the _irqsave variant: by this point it has just been
taking sleeping locks, so it would already be broken if entered with
interrupts disabled. Interrupts are therefore known to be enabled here,
and we can save passing the flags argument down to
__split_huge_page().

Why change the lock ordering here? That was hard to decide. One
reason: when this series reaches per-memcg lru locking, it relies on
the THP's memcg being stable when taking the lru_lock: that is now
done after the THP's refcount has been frozen, which ensures the
page's memcg cannot change.

Another reason: previously, lock_page_memcg()'s move_lock was presumed
to nest inside lru_lock; but now lru_lock must nest inside (the page
cache lock, inside) move_lock, so it becomes possible to use
lock_page_memcg() to stabilize a page's memcg before taking its
lru_lock. That is not the mechanism used in this series, but it is an
option we want to keep open.
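To make the ordering change concrete, here is a simplified sketch of
the locking in the split path before and after this patch (condensed
from the diff below; the swap cache lock and the failure path are
omitted):

	/* Before: lru_lock outermost, taken with the _irqsave variant */
	spin_lock_irqsave(&pgdata->lru_lock, flags);
	xa_lock(&mapping->i_pages);
	/* ... freeze refcount, split tail pages ... */
	xa_unlock(&mapping->i_pages);
	spin_unlock_irqrestore(&pgdata->lru_lock, flags);

	/* After: irqs disabled explicitly; lru_lock narrowed to the
	 * tail-page loop, nested inside the page cache lock */
	local_irq_disable();
	xa_lock(&mapping->i_pages);
	spin_lock(&pgdat->lru_lock);
	/* ... split tail pages ... */
	spin_unlock(&pgdat->lru_lock);
	xa_unlock(&mapping->i_pages);
	local_irq_enable();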
[Hugh Dickins: rewrite commit log]
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Reviewed-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Andrea Arcangeli
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/huge_memory.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8b92cd197218..63af7611afaf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2402,7 +2402,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned long flags)
+		pgoff_t end)
 {
 	struct page *head = compound_head(page);
 	pg_data_t *pgdat = page_pgdat(head);
@@ -2411,8 +2411,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unsigned long offset = 0;
 	int i;
 
-	lruvec = mem_cgroup_page_lruvec(head, pgdat);
-
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(head);
 
@@ -2424,6 +2422,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
+	/* prevent PageLRU to go away from under us, and freeze lru stats */
+	spin_lock(&pgdat->lru_lock);
+
+	lruvec = mem_cgroup_page_lruvec(head, pgdat);
+
 	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
@@ -2443,6 +2446,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	ClearPageCompound(head);
+	spin_unlock(&pgdat->lru_lock);
+	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, HPAGE_PMD_ORDER);
 
@@ -2460,8 +2465,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		page_ref_add(head, 2);
 		xa_unlock(&head->mapping->i_pages);
 	}
-
-	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+	local_irq_enable();
 
 	remap_page(head);
 
@@ -2600,12 +2604,10 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct page *head = compound_head(page);
-	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	unsigned long flags;
 	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
@@ -2666,9 +2668,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(&pgdata->lru_lock, flags);
-
+	/* block interrupt reentry in xa_lock and spinlock */
+	local_irq_disable();
 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
 
@@ -2698,7 +2699,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			__dec_node_page_state(head, NR_FILE_THPS);
 	}
 
-	__split_huge_page(page, list, end, flags);
+	__split_huge_page(page, list, end);
 	if (PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
 
@@ -2717,7 +2718,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
 fail:		if (mapping)
 			xa_unlock(&mapping->i_pages);
-	spin_unlock_irqrestore(&pgdata->lru_lock, flags);
+	local_irq_enable();
 	remap_page(head);
 	ret = -EBUSY;
 }
-- 
1.8.3.1