From: Tim Chen
To: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov
Cc: Tim Chen, Dave Hansen, Ying Huang, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/3] mm: Fix missing mem cgroup soft limit tree updates
Date: Wed, 17 Feb 2021 12:41:36 -0800

On a per-node basis, the mem cgroup soft limit tree tracks how much a
cgroup has exceeded its soft memory limit and sorts cgroups by their
excess usage.

On page release, the trees are not updated right away; instead, updates
are deferred until we have gathered a batch of pages belonging to the
same cgroup. This reduces how often the soft limit tree and the
associated cgroup have to be updated and locked. However, such a batch
could contain pages from multiple nodes, while only the soft limit tree
of a single node would get updated.

Change the logic so that the tree is updated in batches of pages that
all belong to the same mem cgroup and the same memory node. Whenever we
encounter a page belonging to a different node, an update is issued for
the batch of pages collected so far.

Note that this same-node batching logic is only relevant for a v1
cgroup that has a memory soft limit.

Reviewed-by: Ying Huang
Signed-off-by: Tim Chen
---
 mm/memcontrol.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d72449eeb85a..8bddee75f5cb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6804,6 +6804,7 @@ struct uncharge_gather {
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
 	struct page *dummy_page;
+	int nid;
 };
 
 static inline void uncharge_gather_clear(struct uncharge_gather *ug)
@@ -6849,7 +6850,13 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	 * exclusive access to the page.
 	 */
 
-	if (ug->memcg != page_memcg(page)) {
+	if (ug->memcg != page_memcg(page) ||
+	    /*
+	     * Update soft limit tree used in v1 cgroup in page batch for
+	     * the same node. Relevant only to v1 cgroup with a soft limit.
+	     */
+	    (ug->dummy_page && ug->nid != page_to_nid(page) &&
+	     ug->memcg->soft_limit != PAGE_COUNTER_MAX)) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
@@ -6869,6 +6876,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	ug->pgpgout++;
 
 	ug->dummy_page = page;
+	ug->nid = page_to_nid(page);
 	page->memcg_data = 0;
 	css_put(&ug->memcg->css);
 }
-- 
2.20.1
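
P.S. For anyone who wants to play with the batching rule outside the
kernel, here is a minimal standalone C sketch of the condition the
patch introduces. All names below (struct gather, struct page_info,
needs_new_batch, have_page) are illustrative stand-ins, not the real
kernel definitions: a new batch is started when the owning cgroup
changes, or, for a v1 cgroup with a soft limit set, when the node
changes, so the per-node soft limit tree of the previous node gets its
update.

#include <limits.h>
#include <stdbool.h>

/* Illustrative stand-ins for the kernel types; not the real definitions. */
struct memcg {
	unsigned long soft_limit;	/* ULONG_MAX means "no soft limit set" */
};

struct page_info {
	struct memcg *memcg;		/* cgroup the page is charged to */
	int nid;			/* NUMA node the page lives on */
};

struct gather {				/* models struct uncharge_gather */
	struct memcg *memcg;		/* cgroup of the pages batched so far */
	int nid;			/* node of the pages batched so far */
	bool have_page;			/* models ug->dummy_page != NULL */
};

/*
 * Mirrors the patched condition in uncharge_page(): start a new batch
 * (flushing the old one first if it is non-empty) when the cgroup
 * changes, or when the node changes while the cgroup is a v1 cgroup
 * with a soft limit set. On the very first page this is trivially
 * true and there is nothing yet to flush.
 */
static bool needs_new_batch(const struct gather *ug,
			    const struct page_info *page)
{
	if (ug->memcg != page->memcg)
		return true;
	return ug->have_page && ug->nid != page->nid &&
	       ug->memcg->soft_limit != ULONG_MAX;
}

A caller would invoke needs_new_batch() for each released page and, on
true, flush and reinitialize the gather before recording the page's
cgroup and node. Note the two guards in the node test: the have_page
(ug->dummy_page) check keeps an empty gather from comparing against an
uninitialized nid, and the soft_limit test preserves the old, larger
batches on cgroup v2, which has no soft limit, so only v1 users with a
soft limit pay for the extra flushes.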