From: Roman Gushchin
To: linux-mm@kvack.org, kernel-team@fb.com
Cc: linux-kernel@vger.kernel.org, Tejun Heo, Rik van Riel, Johannes Weiner, Michal Hocko, Roman Gushchin
Subject: [PATCH v2 2/6] mm: prepare to premature release of per-node lruvec_stat_cpu
Date: Tue, 12 Mar 2019 15:33:59 -0700
Message-Id: <20190312223404.28665-3-guro@fb.com>
In-Reply-To: <20190312223404.28665-1-guro@fb.com>
References: <20190312223404.28665-1-guro@fb.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Similar to the memcg's vmstats_percpu, the per-memcg per-node stats consist of percpu and atomic counterparts, and we expect both to coexist during the whole life-cycle of the memcg.

To prepare for a premature release of the percpu per-node data, let's pretend that lruvec_stat_cpu is an RCU-protected pointer, which can be NULL. This patch adds the corresponding checks wherever required.
Signed-off-by: Roman Gushchin
Acked-by: Johannes Weiner
---
 include/linux/memcontrol.h | 21 +++++++++++++++------
 mm/memcontrol.c            | 14 +++++++++++---
 2 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 05ca77767c6a..8ac04632002a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -126,7 +126,7 @@ struct memcg_shrinker_map {
 struct mem_cgroup_per_node {
 	struct lruvec		lruvec;
 
-	struct lruvec_stat __percpu *lruvec_stat_cpu;
+	struct lruvec_stat __rcu /* __percpu */ *lruvec_stat_cpu;
 	atomic_long_t		lruvec_stat[NR_VM_NODE_STAT_ITEMS];
 
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
@@ -682,6 +682,7 @@ static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
 static inline void __mod_lruvec_state(struct lruvec *lruvec,
 				      enum node_stat_item idx, int val)
 {
+	struct lruvec_stat __percpu *lruvec_stat_cpu;
 	struct mem_cgroup_per_node *pn;
 	long x;
 
@@ -697,12 +698,20 @@ static inline void __mod_lruvec_state(struct lruvec *lruvec,
 	__mod_memcg_state(pn->memcg, idx, val);
 
 	/* Update lruvec */
-	x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]);
-	if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
-		atomic_long_add(x, &pn->lruvec_stat[idx]);
-		x = 0;
+	rcu_read_lock();
+	lruvec_stat_cpu = (struct lruvec_stat __percpu *)
+		rcu_dereference(pn->lruvec_stat_cpu);
+	if (likely(lruvec_stat_cpu)) {
+		x = val + __this_cpu_read(lruvec_stat_cpu->count[idx]);
+		if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
+			atomic_long_add(x, &pn->lruvec_stat[idx]);
+			x = 0;
+		}
+		__this_cpu_write(lruvec_stat_cpu->count[idx], x);
+	} else {
+		atomic_long_add(val, &pn->lruvec_stat[idx]);
 	}
-	__this_cpu_write(pn->lruvec_stat_cpu->count[idx], x);
+	rcu_read_unlock();
 }
 
 static inline void mod_lruvec_state(struct lruvec *lruvec,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 803c772f354b..5ef4098f3f8d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2122,6 +2122,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 static int memcg_hotplug_cpu_dead(unsigned int cpu)
 {
 	struct memcg_vmstats_percpu __percpu *vmstats_percpu;
+	struct lruvec_stat __percpu *lruvec_stat_cpu;
 	struct memcg_stock_pcp *stock;
 	struct mem_cgroup *memcg;
 
@@ -2152,7 +2153,12 @@ static int memcg_hotplug_cpu_dead(unsigned int cpu)
 			struct mem_cgroup_per_node *pn;
 
 			pn = mem_cgroup_nodeinfo(memcg, nid);
-			x = this_cpu_xchg(pn->lruvec_stat_cpu->count[i], 0);
+
+			lruvec_stat_cpu = (struct lruvec_stat __percpu *)
+				rcu_dereference(pn->lruvec_stat_cpu);
+			if (!lruvec_stat_cpu)
+				continue;
+			x = this_cpu_xchg(lruvec_stat_cpu->count[i], 0);
 			if (x)
 				atomic_long_add(x, &pn->lruvec_stat[i]);
 		}
@@ -4414,6 +4420,7 @@ struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
 static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 {
+	struct lruvec_stat __percpu *lruvec_stat_cpu;
 	struct mem_cgroup_per_node *pn;
 	int tmp = node;
 	/*
@@ -4430,11 +4437,12 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 	if (!pn)
 		return 1;
 
-	pn->lruvec_stat_cpu = alloc_percpu(struct lruvec_stat);
-	if (!pn->lruvec_stat_cpu) {
+	lruvec_stat_cpu = alloc_percpu(struct lruvec_stat);
+	if (!lruvec_stat_cpu) {
 		kfree(pn);
 		return 1;
 	}
+	rcu_assign_pointer(pn->lruvec_stat_cpu, lruvec_stat_cpu);
 
 	lruvec_init(&pn->lruvec);
 	pn->usage_in_excess = 0;
-- 
2.20.1