From: Alex Shi <alex.shi@linux.alibaba.com>
To: cgroups@vger.kernel.org, akpm@linux-foundation.org,
    mgorman@techsingularity.net, tj@kernel.org, hughd@google.com,
    khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
    lkp@intel.com
Cc: Alex Shi, Michal Hocko, Vladimir Davydov, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v9 02/20] mm/memcg: fold lock_page_lru into commit_charge
Date: Mon, 2 Mar 2020 19:00:12 +0800
Message-Id: <1583146830-169516-3-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1583146830-169516-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1583146830-169516-1-git-send-email-alex.shi@linux.alibaba.com>

As Konstantin Khlebnikov mentioned:

  Also I don't like these functions:
  - called lock/unlock but actually also isolates
  - used just once
  - pgdat evaluated twice

Clean up and fold these two helpers into commit_charge(). This also
shortens the lru_lock hold time in the lrucare && !PageLRU case, since
the lock is now dropped as soon as the page is found to be off the LRU.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Konstantin Khlebnikov
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/memcontrol.c | 57 ++++++++++++++++++++-------------------------------
 1 file changed, 20 insertions(+), 37 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d09776cd6e10..875e2aebcde7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2572,41 +2572,11 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 	css_put_many(&memcg->css, nr_pages);
 }
 
-static void lock_page_lru(struct page *page, int *isolated)
-{
-	pg_data_t *pgdat = page_pgdat(page);
-
-	spin_lock_irq(&pgdat->lru_lock);
-	if (PageLRU(page)) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
-		*isolated = 1;
-	} else
-		*isolated = 0;
-}
-
-static void unlock_page_lru(struct page *page, int isolated)
-{
-	pg_data_t *pgdat = page_pgdat(page);
-
-	if (isolated) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		SetPageLRU(page);
-		add_page_to_lru_list(page, lruvec, page_lru(page));
-	}
-	spin_unlock_irq(&pgdat->lru_lock);
-}
-
 static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 			  bool lrucare)
 {
-	int isolated;
+	struct lruvec *lruvec = NULL;
+	pg_data_t *pgdat;
 
 	VM_BUG_ON_PAGE(page->mem_cgroup, page);
 
@@ -2614,9 +2584,17 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 	 * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
 	 * may already be on some other mem_cgroup's LRU. Take care of it.
 	 */
-	if (lrucare)
-		lock_page_lru(page, &isolated);
-
+	if (lrucare) {
+		pgdat = page_pgdat(page);
+		spin_lock_irq(&pgdat->lru_lock);
+
+		if (PageLRU(page)) {
+			lruvec = mem_cgroup_page_lruvec(page, pgdat);
+			ClearPageLRU(page);
+			del_page_from_lru_list(page, lruvec, page_lru(page));
+		} else
+			spin_unlock_irq(&pgdat->lru_lock);
+	}
 	/*
 	 * Nobody should be changing or seriously looking at
 	 * page->mem_cgroup at this point:
@@ -2633,8 +2611,13 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 	 */
 	page->mem_cgroup = memcg;
 
-	if (lrucare)
-		unlock_page_lru(page, isolated);
+	if (lrucare && lruvec) {
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		VM_BUG_ON_PAGE(PageLRU(page), page);
+		SetPageLRU(page);
+		add_page_to_lru_list(page, lruvec, page_lru(page));
+		spin_unlock_irq(&pgdat->lru_lock);
+	}
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-- 
1.8.3.1
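
For reviewers' convenience, this is roughly how commit_charge() reads
once the two helpers are folded in. It is a sketch reconstructed from
the hunks above, not the applied tree, so surrounding context may
differ slightly:

static void commit_charge(struct page *page, struct mem_cgroup *memcg,
			  bool lrucare)
{
	struct lruvec *lruvec = NULL;
	pg_data_t *pgdat;

	VM_BUG_ON_PAGE(page->mem_cgroup, page);

	/*
	 * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
	 * may already be on some other mem_cgroup's LRU. Take care of it.
	 */
	if (lrucare) {
		pgdat = page_pgdat(page);
		spin_lock_irq(&pgdat->lru_lock);

		if (PageLRU(page)) {
			lruvec = mem_cgroup_page_lruvec(page, pgdat);
			ClearPageLRU(page);
			del_page_from_lru_list(page, lruvec, page_lru(page));
		} else
			/* not on the LRU: drop lru_lock right away */
			spin_unlock_irq(&pgdat->lru_lock);
	}

	/*
	 * Nobody should be changing or seriously looking at
	 * page->mem_cgroup at this point.
	 */
	page->mem_cgroup = memcg;

	if (lrucare && lruvec) {
		/* put the isolated page back, then release lru_lock */
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		VM_BUG_ON_PAGE(PageLRU(page), page);
		SetPageLRU(page);
		add_page_to_lru_list(page, lruvec, page_lru(page));
		spin_unlock_irq(&pgdat->lru_lock);
	}
}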