From: Roman Gushchin <guro@fb.com>
To: linux-mm@kvack.org, kernel-team@fb.com
Cc: linux-kernel@vger.kernel.org, Tejun Heo, Rik van Riel,
	Johannes Weiner, Michal Hocko, Roman Gushchin
Subject: [PATCH 2/5] mm: prepare for premature release of per-node lruvec_stat_cpu
Date: Thu, 7 Mar 2019 15:00:30 -0800
Message-Id: <20190307230033.31975-3-guro@fb.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190307230033.31975-1-guro@fb.com>
References: <20190307230033.31975-1-guro@fb.com>

Similar to the memcg's vmstats_percpu, the per-memcg per-node stats
consist of percpu and atomic
counterparts, and both are expected
to coexist during the whole life-cycle of the memcg.

To prepare for a premature release of the percpu per-node data, let's
pretend that lruvec_stat_cpu is an RCU-protected pointer, which can be
NULL. This patch adds the corresponding checks wherever required.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h | 21 +++++++++++++++------
 mm/memcontrol.c            | 11 +++++++++--
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 05ca77767c6a..8ac04632002a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -126,7 +126,7 @@ struct memcg_shrinker_map {
 
 struct mem_cgroup_per_node {
 	struct lruvec		lruvec;
-	struct lruvec_stat __percpu *lruvec_stat_cpu;
+	struct lruvec_stat __rcu /* __percpu */ *lruvec_stat_cpu;
 	atomic_long_t		lruvec_stat[NR_VM_NODE_STAT_ITEMS];
 
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
@@ -682,6 +682,7 @@ static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
 static inline void __mod_lruvec_state(struct lruvec *lruvec,
 				      enum node_stat_item idx, int val)
 {
+	struct lruvec_stat __percpu *lruvec_stat_cpu;
 	struct mem_cgroup_per_node *pn;
 	long x;
 
@@ -697,12 +698,20 @@ static inline void __mod_lruvec_state(struct lruvec *lruvec,
 	__mod_memcg_state(pn->memcg, idx, val);
 
 	/* Update lruvec */
-	x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]);
-	if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
-		atomic_long_add(x, &pn->lruvec_stat[idx]);
-		x = 0;
+	rcu_read_lock();
+	lruvec_stat_cpu = (struct lruvec_stat __percpu *)
+		rcu_dereference(pn->lruvec_stat_cpu);
+	if (likely(lruvec_stat_cpu)) {
+		x = val + __this_cpu_read(lruvec_stat_cpu->count[idx]);
+		if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
+			atomic_long_add(x, &pn->lruvec_stat[idx]);
+			x = 0;
+		}
+		__this_cpu_write(lruvec_stat_cpu->count[idx], x);
+	} else {
+		atomic_long_add(val, &pn->lruvec_stat[idx]);
 	}
-	__this_cpu_write(pn->lruvec_stat_cpu->count[idx], x);
+	rcu_read_unlock();
 }
 
 static inline void mod_lruvec_state(struct lruvec *lruvec,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 803c772f354b..8f3cac02221a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2122,6 +2122,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 static int memcg_hotplug_cpu_dead(unsigned int cpu)
 {
 	struct memcg_vmstats_percpu __percpu *vmstats_percpu;
+	struct lruvec_stat __percpu *lruvec_stat_cpu;
 	struct memcg_stock_pcp *stock;
 	struct mem_cgroup *memcg;
 
@@ -2152,7 +2153,12 @@ static int memcg_hotplug_cpu_dead(unsigned int cpu)
 			struct mem_cgroup_per_node *pn;
 
 			pn = mem_cgroup_nodeinfo(memcg, nid);
-			x = this_cpu_xchg(pn->lruvec_stat_cpu->count[i], 0);
+
+			lruvec_stat_cpu = (struct lruvec_stat __percpu*)
+				rcu_dereference(pn->lruvec_stat_cpu);
+			if (!lruvec_stat_cpu)
+				continue;
+			x = this_cpu_xchg(lruvec_stat_cpu->count[i], 0);
 			if (x)
 				atomic_long_add(x, &pn->lruvec_stat[i]);
 		}
@@ -4430,7 +4436,8 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 	if (!pn)
 		return 1;
 
-	pn->lruvec_stat_cpu = alloc_percpu(struct lruvec_stat);
+	rcu_assign_pointer(pn->lruvec_stat_cpu,
+			   alloc_percpu(struct lruvec_stat));
 	if (!pn->lruvec_stat_cpu) {
 		kfree(pn);
 		return 1;
-- 
2.20.1
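
For reference, the idea the patch relies on can be shown in isolation:
a percpu counter hidden behind a pointer that is pretended to be
RCU-protected, with the atomic counterpart as a fallback once the
percpu half is gone. The sketch below only illustrates that pattern and
is not code from this series; the names (counter_pack, counter_init,
__counter_add, counter_release_pcpu) and the COUNTER_BATCH threshold
are hypothetical, and only the RCU/percpu calls are the kernel's own.

/*
 * Minimal sketch of an RCU-protected percpu counter with an atomic
 * fallback, mirroring the lruvec_stat_cpu handling above.
 */
#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>

#define COUNTER_BATCH	32	/* hypothetical flush threshold */

struct counter_pack {
	/* Really __percpu; pretends to be __rcu, as in the patch. */
	long __rcu /* __percpu */ *pcpu;
	atomic_long_t total;
};

static int counter_init(struct counter_pack *c)
{
	atomic_long_set(&c->total, 0);
	/* Publish the percpu half; readers may use it immediately. */
	rcu_assign_pointer(c->pcpu, alloc_percpu(long));
	return rcu_access_pointer(c->pcpu) ? 0 : -ENOMEM;
}

/* Caller must have preemption disabled, as with __mod_lruvec_state(). */
static void __counter_add(struct counter_pack *c, long val)
{
	long __percpu *pcpu;
	long x;

	rcu_read_lock();
	pcpu = (long __percpu *)rcu_dereference(c->pcpu);
	if (likely(pcpu)) {
		/* Fast path: batch updates in the percpu counter. */
		x = val + __this_cpu_read(*pcpu);
		if (unlikely(abs(x) > COUNTER_BATCH)) {
			atomic_long_add(x, &c->total);
			x = 0;
		}
		__this_cpu_write(*pcpu, x);
	} else {
		/* Percpu half is gone: fall back to the atomic. */
		atomic_long_add(val, &c->total);
	}
	rcu_read_unlock();
}

/* Release the percpu half early, folding its deltas into the atomic. */
static void counter_release_pcpu(struct counter_pack *c)
{
	long __percpu *pcpu;
	int cpu;

	pcpu = (long __percpu *)rcu_dereference_protected(c->pcpu, true);
	RCU_INIT_POINTER(c->pcpu, NULL);
	if (!pcpu)
		return;

	/* Wait for readers that may still hold the old pointer. */
	synchronize_rcu();
	for_each_possible_cpu(cpu)
		atomic_long_add(*per_cpu_ptr(pcpu, cpu), &c->total);
	free_percpu(pcpu);
}

With this shape, an updater that loses the race with
counter_release_pcpu() simply lands in the atomic fallback, and the
release side only has to wait one grace period before folding the
percpu deltas into the atomic total, which is what the series arranges
for pn->lruvec_stat_cpu and pn->lruvec_stat.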