From: Alex Shi <alex.shi@linux.alibaba.com>
To: cgroups@vger.kernel.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com
Cc: Alex Shi, "Kirill A. Shutemov", Andrea Arcangeli, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v9 06/20] mm/thp: narrow lru locking
Date: Mon, 2 Mar 2020 19:00:16 +0800
Message-Id: <1583146830-169516-7-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1583146830-169516-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1583146830-169516-1-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: linux-kernel@vger.kernel.org

The lru lock only needs to guard the lru list and the subpages' Mlocked
state.  Covering other work under it does not help; it only delays the
lock release.  So narrow the locking region, which releases the lock
earlier and makes the code's intent clearer.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Johannes Weiner
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/huge_memory.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 599367d25fca..3835f87d03fd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2542,13 +2542,14 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned long flags)
+		pgoff_t end)
 {
 	struct page *head = compound_head(page);
 	pg_data_t *pgdat = page_pgdat(head);
 	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
+	unsigned long flags;
 	int i;
 
 	lruvec = mem_cgroup_page_lruvec(head, pgdat);
@@ -2564,6 +2565,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
+	/* Lru list would be changed, don't care head's LRU bit */
+	spin_lock_irqsave(&pgdat->lru_lock, flags);
+
 	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
@@ -2581,6 +2585,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 					head + i, 0);
 		}
 	}
+	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 
 	ClearPageCompound(head);
@@ -2601,8 +2606,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_unlock(&head->mapping->i_pages);
 	}
 
-	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
-
 	remap_page(head);
 
 	for (i = 0; i < HPAGE_PMD_NR; i++) {
@@ -2740,13 +2743,11 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct page *head = compound_head(page);
-	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
 	bool mlocked;
-	unsigned long flags;
 	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
@@ -2812,9 +2813,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (mlocked)
 		lru_add_drain();
 
-	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(&pgdata->lru_lock, flags);
-
 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
@@ -2844,7 +2842,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			__dec_node_page_state(head, NR_FILE_THPS);
 	}
 
-	__split_huge_page(page, list, end, flags);
+	__split_huge_page(page, list, end);
 
 	if (PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
@@ -2863,7 +2861,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
fail:		if (mapping)
 			xa_unlock(&mapping->i_pages);
-		spin_unlock_irqrestore(&pgdata->lru_lock, flags);
 		remap_page(head);
 		ret = -EBUSY;
 	}
-- 
1.8.3.1