From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton, Tejun Heo
Cc: Michal Hocko, Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 1/7] mm: memcontrol: fix cpuhotplug statistics flushing
Date: Tue, 2 Feb 2021 13:47:40 -0500
Message-Id: <20210202184746.119084-2-hannes@cmpxchg.org>
In-Reply-To: <20210202184746.119084-1-hannes@cmpxchg.org>
References: <20210202184746.119084-1-hannes@cmpxchg.org>

The memcg hotunplug callback erroneously flushes counts on the local CPU, not the counts of the CPU going away; those counts will be lost.

Flush the CPU that is actually going away.

Also simplify the code a bit by using mod_memcg_state() and count_memcg_events() instead of open-coding the upward flush - this is comparable to how vmstat.c handles hotunplug flushing.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 35 +++++++++++++++++++++--------------
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ed5cc78a8dbf..8120d565dd79 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2411,45 +2411,52 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 static int memcg_hotplug_cpu_dead(unsigned int cpu)
 {
 	struct memcg_stock_pcp *stock;
-	struct mem_cgroup *memcg, *mi;
+	struct mem_cgroup *memcg;
 
 	stock = &per_cpu(memcg_stock, cpu);
 	drain_stock(stock);
 
 	for_each_mem_cgroup(memcg) {
+		struct memcg_vmstats_percpu *statc;
 		int i;
 
+		statc = per_cpu_ptr(memcg->vmstats_percpu, cpu);
+
 		for (i = 0; i < MEMCG_NR_STAT; i++) {
 			int nid;
-			long x;
 
-			x = this_cpu_xchg(memcg->vmstats_percpu->stat[i], 0);
-			if (x)
-				for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
-					atomic_long_add(x, &memcg->vmstats[i]);
+			if (statc->stat[i]) {
+				mod_memcg_state(memcg, i, statc->stat[i]);
+				statc->stat[i] = 0;
+			}
 
 			if (i >= NR_VM_NODE_STAT_ITEMS)
 				continue;
 
			for_each_node(nid) {
+				struct batched_lruvec_stat *lstatc;
 				struct mem_cgroup_per_node *pn;
+				long x;
 
 				pn = mem_cgroup_nodeinfo(memcg, nid);
-				x = this_cpu_xchg(pn->lruvec_stat_cpu->count[i], 0);
-				if (x)
+				lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
+
+				x = lstatc->count[i];
+				lstatc->count[i] = 0;
+
+				if (x) {
 					do {
 						atomic_long_add(x, &pn->lruvec_stat[i]);
 					} while ((pn = parent_nodeinfo(pn, nid)));
+				}
 			}
 		}
 
 		for (i = 0; i < NR_VM_EVENT_ITEMS; i++) {
-			long x;
-
-			x = this_cpu_xchg(memcg->vmstats_percpu->events[i], 0);
-			if (x)
-				for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
-					atomic_long_add(x, &memcg->vmevents[i]);
+			if (statc->events[i]) {
+				count_memcg_events(memcg, i, statc->events[i]);
+				statc->events[i] = 0;
+			}
 		}
 	}
 
 	return 0;
 }
-- 
2.30.0