From: Roman Gushchin
To: linux-mm@kvack.org, kernel-team@fb.com
Cc: linux-kernel@vger.kernel.org, Tejun Heo, Rik van Riel, Johannes Weiner, Michal Hocko, Roman Gushchin
Subject: [PATCH 5/5] mm: spill memcg percpu stats and events before releasing
Date: Thu, 7 Mar 2019 15:00:33 -0800
Message-Id: <20190307230033.31975-6-guro@fb.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190307230033.31975-1-guro@fb.com>
References: <20190307230033.31975-1-guro@fb.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Spill percpu stats and events data to the corresponding atomic counters before releasing percpu memory.
Although per-cpu stats are never exactly precise, dropping them on the floor regularly may lead to an accumulation of error. So it's safer to sync them before releasing.

To minimize the number of atomic updates, let's sum all stats/events on all cpus locally, and then make a single update per entry.

Signed-off-by: Roman Gushchin
---
 mm/memcontrol.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 18e863890392..b7eb6fac735e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4612,11 +4612,63 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	return 0;
 }
 
+/*
+ * Spill all per-cpu stats and events into atomics.
+ * Try to minimize the number of atomic writes by gathering data from
+ * all cpus locally, and then make one atomic update.
+ * No locking is required, because no one has an access to
+ * the offlined percpu data.
+ */
+static void mem_cgroup_spill_offlined_percpu(struct mem_cgroup *memcg)
+{
+	struct memcg_vmstats_percpu __percpu *vmstats_percpu;
+	struct lruvec_stat __percpu *lruvec_stat_cpu;
+	struct mem_cgroup_per_node *pn;
+	int cpu, i;
+	long x;
+
+	vmstats_percpu = memcg->vmstats_percpu_offlined;
+
+	for (i = 0; i < MEMCG_NR_STAT; i++) {
+		int nid;
+
+		x = 0;
+		for_each_possible_cpu(cpu)
+			x += per_cpu(vmstats_percpu->stat[i], cpu);
+		if (x)
+			atomic_long_add(x, &memcg->vmstats[i]);
+
+		if (i >= NR_VM_NODE_STAT_ITEMS)
+			continue;
+
+		for_each_node(nid) {
+			pn = mem_cgroup_nodeinfo(memcg, nid);
+			lruvec_stat_cpu = pn->lruvec_stat_cpu_offlined;
+
+			x = 0;
+			for_each_possible_cpu(cpu)
+				x += per_cpu(lruvec_stat_cpu->count[i], cpu);
+			if (x)
+				atomic_long_add(x, &pn->lruvec_stat[i]);
+		}
+	}
+
+	for (i = 0; i < NR_VM_EVENT_ITEMS; i++) {
+		x = 0;
+		for_each_possible_cpu(cpu)
+			x += per_cpu(vmstats_percpu->events[i], cpu);
+		if (x)
+			atomic_long_add(x, &memcg->vmevents[i]);
+	}
+}
+
 static void mem_cgroup_free_percpu(struct rcu_head *rcu)
 {
 	struct mem_cgroup *memcg = container_of(rcu, struct mem_cgroup, rcu);
 	int node;
 
+	mem_cgroup_spill_offlined_percpu(memcg);
+
 	for_each_node(node) {
 		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
-- 
2.20.1
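
For readers outside the kernel tree, here is a minimal userspace sketch of the "gather locally, then one atomic update per entry" pattern the changelog describes. It is illustration only, not the patch's code: NR_CPUS_DEMO, NR_STATS, percpu_stat, global_stat and spill_percpu_stats are hypothetical names, and C11 atomics stand in for the kernel's per_cpu()/atomic_long_add() helpers.

#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS_DEMO 4
#define NR_STATS     3

static long percpu_stat[NR_CPUS_DEMO][NR_STATS];  /* per-cpu copies */
static atomic_long global_stat[NR_STATS];         /* shared atomic totals */

static void spill_percpu_stats(void)
{
	int cpu, i;
	long x;

	for (i = 0; i < NR_STATS; i++) {
		/* Gather locally first: plain loads, no atomics, no locking. */
		x = 0;
		for (cpu = 0; cpu < NR_CPUS_DEMO; cpu++)
			x += percpu_stat[cpu][i];

		/* At most one atomic write per statistic, skipped if empty. */
		if (x)
			atomic_fetch_add(&global_stat[i], x);
	}
}

int main(void)
{
	percpu_stat[0][1] = 5;
	percpu_stat[3][1] = -2;

	spill_percpu_stats();

	printf("stat[1] = %ld\n", atomic_load(&global_stat[1]));  /* prints 3 */
	return 0;
}

The point of the local summation is that the shared counters see one read-modify-write per statistic instead of one per cpu, which keeps cache-line contention on the atomics bounded regardless of the number of possible cpus.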