From: Tim Chen
To: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov
Cc: Tim Chen, Dave Hansen, Ying Huang, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates
Date: Tue, 9 Feb 2021 12:29:47 -0800
Message-Id: <3b6e4e9aa8b3ee1466269baf23ed82d90a8f791c.1612902157.git.tim.c.chen@linux.intel.com>
On each node, the mem cgroup soft limit tree tracks how much each cgroup has exceeded its soft limit and sorts the cgroups by their excess usage. On page release, the trees are not updated right away; updates are deferred until we have gathered a batch of pages belonging to the same cgroup. This reduces the frequency of updating the soft limit tree and of locking the tree and the associated cgroup. However, the batch of pages could contain pages from multiple nodes, while only the soft limit tree of one node would get updated.

Change the logic so that we update the tree one batch of pages at a time, with all pages in a batch belonging to the same mem cgroup and memory node. Whenever we encounter a page belonging to a different node, an update is issued for the batch of pages collected so far for the previous node.

Reviewed-by: Ying Huang
Signed-off-by: Tim Chen
---
 mm/memcontrol.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d72449eeb85a..f5a4a0e4e2ec 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6804,6 +6804,7 @@ struct uncharge_gather {
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
 	struct page *dummy_page;
+	int nid;
 };
 
 static inline void uncharge_gather_clear(struct uncharge_gather *ug)
@@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	 * exclusive access to the page.
 	 */
 
-	if (ug->memcg != page_memcg(page)) {
+	if (ug->memcg != page_memcg(page) ||
+	    /* uncharge batch update soft limit tree on a node basis */
+	    (ug->dummy_page && ug->nid != page_to_nid(page))) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
@@ -6869,6 +6872,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	ug->pgpgout++;
 
 	ug->dummy_page = page;
+	ug->nid = page_to_nid(page);
 	page->memcg_data = 0;
 	css_put(&ug->memcg->css);
 }
-- 
2.20.1
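For review, the batching rule the patch introduces (flush the gathered batch whenever the cgroup *or* the node changes, plus a final flush at the end) can be sketched in user space. This is a simplified stand-in, not kernel code: `page_stub`, `uncharge_gather_stub`, `release_pages_stub`, and the `nr_flushes` counter are hypothetical names invented for illustration, and an integer cgroup id replaces the `struct mem_cgroup` pointer.

```c
/* User-space sketch of per-(memcg, node) uncharge batching.
 * All names here are illustrative stand-ins for the kernel's
 * uncharge_gather / uncharge_page machinery. */
#include <assert.h>
#include <stddef.h>

struct page_stub {
	int memcg;	/* owning cgroup, as an id (0 = none) */
	int nid;	/* node the page lives on */
};

struct uncharge_gather_stub {
	int memcg;		/* 0 = no batch in progress */
	int nid;		/* node of the current batch */
	unsigned long nr_pages;
	unsigned long nr_flushes; /* times a batch update (the kernel's
				     uncharge_batch()) would have run */
};

static void gather_clear(struct uncharge_gather_stub *ug)
{
	/* nr_flushes deliberately survives: it counts across batches */
	ug->memcg = 0;
	ug->nid = -1;
	ug->nr_pages = 0;
}

/* Mirrors the patched check: start a new batch when either the cgroup
 * or the node changes, so each batch maps to exactly one node's
 * soft limit tree.  ug->memcg != 0 plays the role of ug->dummy_page. */
static void uncharge_page_stub(struct page_stub *page,
			       struct uncharge_gather_stub *ug)
{
	if (ug->memcg != page->memcg ||
	    (ug->memcg && ug->nid != page->nid)) {
		if (ug->memcg)
			ug->nr_flushes++;	/* flush previous batch */
		gather_clear(ug);
		ug->memcg = page->memcg;
	}
	ug->nr_pages++;
	ug->nid = page->nid;
}

/* Releases all pages; returns how many batch flushes were issued. */
static unsigned long release_pages_stub(struct page_stub *pages, size_t n)
{
	struct uncharge_gather_stub ug = {0};
	size_t i;

	gather_clear(&ug);
	for (i = 0; i < n; i++)
		uncharge_page_stub(&pages[i], &ug);
	if (ug.memcg)
		ug.nr_flushes++;	/* flush the final batch */
	return ug.nr_flushes;
}
```

With the pre-patch check (cgroup only), three pages from cgroup 1 on nodes {0, 0, 1} would be one batch and node 1's tree would miss its update; with the node check added, they become two batches, one per node.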