From: Alex Shi <alex.shi@linux.alibaba.com>
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com, hannes@cmpxchg.org
Cc: Michal Hocko, Vladimir Davydov
Subject: [PATCH v7 02/10] mm/memcg: fold lru_lock in lock_page_lru
Date: Wed, 25 Dec 2019 17:04:18 +0800
Message-Id: <1577264666-246071-3-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1577264666-246071-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1577264666-246071-1-git-send-email-alex.shi@linux.alibaba.com>
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

From commit_charge()'s explanation and the mem_cgroup_commit_charge() comments, as well as the call path when lrucare is true, the lru_lock is only there to guard against task migration (which would lead to move_account()). So the lock isn't needed when !PageLRU, and it is better folded into the PageLRU branch to reduce lock contention.

Signed-off-by: Alex Shi
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Matthew Wilcox
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/memcontrol.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c5b5f74cfd4d..0ad10caabc3d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2572,12 +2572,11 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 
 static void lock_page_lru(struct page *page, int *isolated)
 {
-	pg_data_t *pgdat = page_pgdat(page);
-
-	spin_lock_irq(&pgdat->lru_lock);
 	if (PageLRU(page)) {
+		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;
 
+		spin_lock_irq(&pgdat->lru_lock);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_lru(page));
@@ -2588,17 +2587,17 @@ static void lock_page_lru(struct page *page, int *isolated)
 
 static void unlock_page_lru(struct page *page, int isolated)
 {
-	pg_data_t *pgdat = page_pgdat(page);
 
 	if (isolated) {
+		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;
 
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		SetPageLRU(page);
 		add_page_to_lru_list(page, lruvec, page_lru(page));
+		spin_unlock_irq(&pgdat->lru_lock);
 	}
-	spin_unlock_irq(&pgdat->lru_lock);
 }
 
 static void commit_charge(struct page *page, struct mem_cgroup *memcg,
-- 
1.8.3.1