From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton, Tejun Heo
Cc: Michal Hocko, Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 7/7] mm: memcontrol: consolidate lruvec stat flushing
Date: Tue, 2 Feb 2021 13:47:46 -0500
Message-Id: <20210202184746.119084-8-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210202184746.119084-1-hannes@cmpxchg.org>
References: <20210202184746.119084-1-hannes@cmpxchg.org>

There are two functions that flush the per-cpu data of an lruvec into
the rest of the cgroup tree: one when the cgroup is being freed, and
one when a CPU disappears during hotplug. The difference is whether
all CPUs or just one is being collected, but the rest of the flushing
code is the same. Merge them into one function and share the common
code.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 88 +++++++++++++++++++++++--------------------------
 1 file changed, 42 insertions(+), 46 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b205b2413186..88e8afc49a46 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2410,39 +2410,56 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 	mutex_unlock(&percpu_charge_mutex);
 }
 
-static int memcg_hotplug_cpu_dead(unsigned int cpu)
+static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg, int cpu)
 {
-	struct memcg_stock_pcp *stock;
-	struct mem_cgroup *memcg;
-
-	stock = &per_cpu(memcg_stock, cpu);
-	drain_stock(stock);
+	int nid;
 
-	for_each_mem_cgroup(memcg) {
+	for_each_node(nid) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
+		unsigned long stat[NR_VM_NODE_STAT_ITEMS] = { 0, };
+		struct batched_lruvec_stat *lstatc;
 		int i;
 
-		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
-			int nid;
-
-			for_each_node(nid) {
-				struct batched_lruvec_stat *lstatc;
-				struct mem_cgroup_per_node *pn;
-				long x;
-
-				pn = memcg->nodeinfo[nid];
+		if (cpu == -1) {
+			int cpui;
+			/*
+			 * The memcg is about to be freed, collect all
+			 * CPUs, no need to zero anything out.
+			 */
+			for_each_online_cpu(cpui) {
+				lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpui);
+				for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+					stat[i] += lstatc->count[i];
+			}
+		} else {
+			/*
+			 * The CPU has gone away, collect and zero out
+			 * its stats, it may come back later.
+			 */
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
 				lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
-
-				x = lstatc->count[i];
+				stat[i] = lstatc->count[i];
 				lstatc->count[i] = 0;
-
-				if (x) {
-					do {
-						atomic_long_add(x, &pn->lruvec_stat[i]);
-					} while ((pn = parent_nodeinfo(pn, nid)));
-				}
 			}
 		}
+
+		do {
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pn->lruvec_stat[i]);
+		} while ((pn = parent_nodeinfo(pn, nid)));
 	}
+}
+
+static int memcg_hotplug_cpu_dead(unsigned int cpu)
+{
+	struct memcg_stock_pcp *stock;
+	struct mem_cgroup *memcg;
+
+	stock = &per_cpu(memcg_stock, cpu);
+	drain_stock(stock);
+
+	for_each_mem_cgroup(memcg)
+		memcg_flush_lruvec_page_state(memcg, cpu);
 
 	return 0;
 }
@@ -3636,27 +3653,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 	}
 }
 
-static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg)
-{
-	int node;
-
-	for_each_node(node) {
-		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
-		unsigned long stat[NR_VM_NODE_STAT_ITEMS] = {0, };
-		struct mem_cgroup_per_node *pi;
-		int cpu, i;
-
-		for_each_online_cpu(cpu)
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				stat[i] += per_cpu(
-					pn->lruvec_stat_cpu->count[i], cpu);
-
-		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
-	}
-}
-
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -5197,7 +5193,7 @@ static void mem_cgroup_free(struct mem_cgroup *memcg)
 	 * Flush percpu lruvec stats to guarantee the value
 	 * correctness on parent's and all ancestor levels.
 	 */
-	memcg_flush_lruvec_page_state(memcg);
+	memcg_flush_lruvec_page_state(memcg, -1);
 	__mem_cgroup_free(memcg);
 }
 
-- 
2.30.0