From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Roman Gushchin,
 Michal Hocko, Johannes Weiner, Vladimir Davydov, Andrew Morton,
 Linus Torvalds
Subject: [PATCH 5.2 148/162] mm: memcontrol: flush percpu vmstats before releasing memcg
Date: Tue, 27 Aug 2019 09:51:16 +0200
Message-Id: <20190827072743.890460259@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20190827072738.093683223@linuxfoundation.org>
References: <20190827072738.093683223@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Roman Gushchin

commit c350a99ea2b1b666c28948d74ab46c16913c28a7 upstream.

Percpu caching of local vmstats with conditional propagation up the
cgroup tree leads to an accumulation of errors on non-leaf levels.

Imagine two nested memory cgroups, A and A/B, and say a process
belonging to A/B allocates 100 pagecache pages on CPU 0. The percpu
cache will spill 3 times, so 32*3=96 pages will be accounted to the
A/B and A atomic vmstat counters, while 4 pages remain in the percpu
cache.

Now imagine A/B is close to memory.max, so that every subsequent
allocation triggers direct reclaim on the local CPU, and say each
such attempt frees 16 pages on a new CPU. Every percpu cache will
then hold -16 pages, except the first one, which holds 4 - 16 = -12.
The A/B and A atomic counters are not touched at all.

Now a user removes A/B. All percpu caches are freed and the
corresponding vmstat numbers are forgotten: A is left with 96 pages
more than expected.

As memory cgroups are created and destroyed, such errors accumulate;
even 1-2 page differences can add up to large numbers over time.

To fix this issue, accumulate and propagate the percpu vmstat values
before releasing the memory cgroup. At that point these numbers are
stable and cannot change anymore. Since percpu vmstats are flushed on
cpu hotplug anyway, it is sufficient to iterate only over online cpus.
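[Editor's note: to make the arithmetic above easy to check, here is a
minimal userspace C model of the batching behaviour. It is an
illustration only, not kernel code; the names (percpu, vmstat_a,
mod_stat, NR_CPUS, the two-level A / A/B hierarchy) are invented for
the example, and BATCH stands in for the kernel's MEMCG_CHARGE_BATCH
of 32.]

#include <stdio.h>

#define NR_CPUS 8
#define BATCH   32      /* mirrors MEMCG_CHARGE_BATCH */

static long percpu[NR_CPUS];    /* per-cpu cached deltas for A/B */
static long vmstat_ab;          /* modeled atomic counter of A/B */
static long vmstat_a;           /* modeled atomic counter of A   */

/* Charge @delta pages on @cpu; spill to the atomic counters in batches. */
static void mod_stat(int cpu, long delta)
{
        percpu[cpu] += delta;
        if (percpu[cpu] >= BATCH || percpu[cpu] <= -BATCH) {
                vmstat_ab += percpu[cpu];
                vmstat_a += percpu[cpu];        /* propagate to parent */
                percpu[cpu] = 0;
        }
}

int main(void)
{
        int cpu, i;

        /* 100 pagecache pages allocated on CPU 0: spills at 32, 64, 96. */
        for (i = 0; i < 100; i++)
                mod_stat(0, 1);

        /* Direct reclaim frees 16 pages on each CPU: no cache ever
         * reaches -32, so nothing spills. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                for (i = 0; i < 16; i++)
                        mod_stat(cpu, -1);

        /* A/B is removed: percpu caches are freed without flushing,
         * so the negative deltas are simply lost. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                percpu[cpu] = 0;

        printf("A's counter after A/B is gone: %ld\n", vmstat_a); /* 96 */
        return 0;
}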
Link: http://lkml.kernel.org/r/20190819202338.363363-2-guro@fb.com
Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
Signed-off-by: Roman Gushchin
Acked-by: Michal Hocko
Cc: Johannes Weiner
Cc: Vladimir Davydov
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/memcontrol.c |   40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3150,6 +3150,41 @@ static u64 mem_cgroup_read_u64(struct cg
 	}
 }
 
+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
+{
+	unsigned long stat[MEMCG_NR_STAT];
+	struct mem_cgroup *mi;
+	int node, cpu, i;
+
+	for (i = 0; i < MEMCG_NR_STAT; i++)
+		stat[i] = 0;
+
+	for_each_online_cpu(cpu)
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			atomic_long_add(stat[i], &mi->vmstats[i]);
+
+	for_each_node(node) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
+		struct mem_cgroup_per_node *pi;
+
+		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			stat[i] = 0;
+
+		for_each_online_cpu(cpu)
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				stat[i] += raw_cpu_read(
+					pn->lruvec_stat_cpu->count[i]);
+
+		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
+	}
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -4551,6 +4586,11 @@ static void __mem_cgroup_free(struct mem
 {
 	int node;
 
+	/*
+	 * Flush percpu vmstats to guarantee the value correctness
+	 * on parent's and all ancestor levels.
+	 */
+	memcg_flush_percpu_vmstats(memcg);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
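
[Editor's note: read against the toy model shown earlier,
memcg_flush_percpu_vmstats() performs the missing step at destruction
time: it walks the online CPUs, sums what is still cached, and adds
the sums to the atomic counters of the dying cgroup and all of its
ancestors, so the residue is no longer dropped. As a sketch, using
the same invented names as before:]

/* The fix, applied to the toy model: flush before freeing the caches.
 * (Illustration only; the real code walks parent_mem_cgroup() and
 * also flushes the per-node lruvec stats, as the diff above shows.) */
static void flush_percpu_on_destroy(void)
{
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                vmstat_ab += percpu[cpu];       /* dying cgroup ...  */
                vmstat_a += percpu[cpu];        /* ... and ancestors */
                percpu[cpu] = 0;
        }
}

[Called in main() before the caches are zeroed, this changes the
printed value from 96 to 96 + (-12 - 7*16) = -28, the model's true
net charge (100 pages allocated, 128 freed). The toy numbers are
deliberately unphysical, but the stale 96 pages at A are gone.]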