From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Roman Gushchin,
    Michal Hocko, Johannes Weiner, Vladimir Davydov, Andrew Morton,
    Linus Torvalds
Subject: [PATCH 5.2 149/162] mm: memcontrol: flush percpu vmevents before releasing memcg
Date: Tue, 27 Aug 2019 09:51:17 +0200
Message-Id: <20190827072743.933362060@linuxfoundation.org>
In-Reply-To: <20190827072738.093683223@linuxfoundation.org>
References: <20190827072738.093683223@linuxfoundation.org>

From: Roman Gushchin

commit bb65f89b7d3d305c14951f49860711fbcae70692 upstream.

Similar to vmstats, percpu caching of local vmevents leads to an
accumulation of errors on non-leaf levels.  This happens because some
leftovers may remain in percpu caches, so that they are never
propagated up the cgroup tree and simply vanish upon release of the
memory cgroup.

To fix this issue, let's accumulate and propagate percpu vmevents
values before releasing the memory cgroup, similar to what we're doing
with vmstats.  Since we already flush percpu vmstats on cpu hotplug, we
can iterate only over online cpus.

Link: http://lkml.kernel.org/r/20190819202338.363363-4-guro@fb.com
Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
Signed-off-by: Roman Gushchin
Acked-by: Michal Hocko
Cc: Johannes Weiner
Cc: Vladimir Davydov
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/memcontrol.c |   22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3185,6 +3185,25 @@ static void memcg_flush_percpu_vmstats(s
 	}
 }
 
+static void memcg_flush_percpu_vmevents(struct mem_cgroup *memcg)
+{
+	unsigned long events[NR_VM_EVENT_ITEMS];
+	struct mem_cgroup *mi;
+	int cpu, i;
+
+	for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
+		events[i] = 0;
+
+	for_each_online_cpu(cpu)
+		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
+			events[i] += raw_cpu_read(
+				memcg->vmstats_percpu->events[i]);
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
+			atomic_long_add(events[i], &mi->vmevents[i]);
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -4587,10 +4606,11 @@ static void __mem_cgroup_free(struct mem
 	int node;
 
 	/*
-	 * Flush percpu vmstats to guarantee the value correctness
+	 * Flush percpu vmstats and vmevents to guarantee the value correctness
 	 * on parent's and all ancestor levels.
 	 */
 	memcg_flush_percpu_vmstats(memcg);
+	memcg_flush_percpu_vmevents(memcg);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
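
[Editor's note: for readers outside the kernel tree, here is a minimal
standalone sketch of the accumulate-and-propagate pattern the patch
implements: sum the per-cpu leftovers once, then fold the totals into
every ancestor's atomic counters.  All names here (struct node, NCPUS,
NEVENTS, flush_percpu_events) are hypothetical simplifications, not the
kernel's memcg API, and it walks all cpus rather than online cpus.]

/*
 * Standalone model of memcg_flush_percpu_vmevents(): before a node is
 * freed, its unflushed per-cpu event counts are summed and added to
 * the hierarchical totals of the node and all of its ancestors.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS   4
#define NEVENTS 3

struct node {
	struct node *parent;
	/* per-cpu caches that may still hold unflushed leftovers */
	unsigned long percpu[NCPUS][NEVENTS];
	/* hierarchical totals, updated atomically */
	atomic_ulong events[NEVENTS];
};

static void flush_percpu_events(struct node *n)
{
	unsigned long sum[NEVENTS] = { 0 };
	struct node *mi;
	int cpu, i;

	/* Accumulate the per-cpu leftovers once... */
	for (cpu = 0; cpu < NCPUS; cpu++)
		for (i = 0; i < NEVENTS; i++)
			sum[i] += n->percpu[cpu][i];

	/* ...then propagate the totals up the whole ancestry. */
	for (mi = n; mi; mi = mi->parent)
		for (i = 0; i < NEVENTS; i++)
			atomic_fetch_add(&mi->events[i], sum[i]);
}

int main(void)
{
	struct node root = { 0 };
	struct node child = { .parent = &root };

	child.percpu[0][1] = 5;	/* pretend cpu0 cached 5 events */
	child.percpu[2][1] = 2;	/* and cpu2 cached 2 more */

	flush_percpu_events(&child);
	/* Without the flush, these 7 events would simply vanish when
	 * the child is freed; now both levels report them. */
	printf("child=%lu root=%lu\n",
	       atomic_load(&child.events[1]),
	       atomic_load(&root.events[1]));
	return 0;
}

[In the kernel itself, the per-cpu walk uses for_each_online_cpu() with
raw_cpu_read(), and the upward walk uses parent_mem_cgroup(), as the
diff above shows.]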