From: Dennis Zhou <dennisszhou@gmail.com>
To: Andrew Morton, Tejun Heo, Johannes Weiner, Christoph Lameter, Roman Gushchin
Cc: kernel-team@fb.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Dennis Zhou (Facebook)"
Subject: [PATCH] proc: add percpu populated pages count to meminfo
Date: Mon, 6 Aug 2018 17:56:07 -0700
Message-Id: <20180807005607.53950-1-dennisszhou@gmail.com>

From: "Dennis Zhou (Facebook)" <dennisszhou@gmail.com>

Currently, percpu memory only exposes allocation and utilization
information via debugfs. This is really only useful for understanding
fragmentation and allocation behavior at a per-chunk level, with a few
global counters. It is also gated behind a config option
(CONFIG_PERCPU_STATS).
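For reference, that existing per-chunk view requires a kernel built
with CONFIG_PERCPU_STATS=y and, assuming debugfs is mounted in the
usual place, is read with:

  $ cat /sys/kernel/debug/percpu_stats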
BPF and cgroup, for example, have seen increased adoption, and with it
increased use of percpu memory. Let's make it easier to identify how
much memory is being used.

This patch adds a PercpuPopulated stat to meminfo to make it easy to
look up how much percpu memory is in use. This number accounts for all
backing pages, not just a per-unit, per-chunk view. The stat counts
only the pages used to back the chunks themselves, excluding metadata.
I think excluding metadata is fair because the backing memory scales
with the number of cpus and can quickly outweigh the metadata. It also
keeps the calculation light.

Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
---
 fs/proc/meminfo.c      |  2 ++
 include/linux/percpu.h |  2 ++
 mm/percpu.c            | 29 +++++++++++++++++++++++++++++
 3 files changed, 33 insertions(+)

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 2fb04846ed11..ddd5249692e9 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -7,6 +7,7 @@
 #include <linux/mman.h>
 #include <linux/mmzone.h>
 #include <linux/proc_fs.h>
+#include <linux/percpu.h>
 #include <linux/quicklist.h>
 #include <linux/seq_file.h>
 #include <linux/swap.h>
@@ -121,6 +122,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		   (unsigned long)VMALLOC_TOTAL >> 10);
 	show_val_kb(m, "VmallocUsed:    ", 0ul);
 	show_val_kb(m, "VmallocChunk:   ", 0ul);
+	show_val_kb(m, "PercpuPopulated:", pcpu_nr_populated_pages());
 
 #ifdef CONFIG_MEMORY_FAILURE
 	seq_printf(m, "HardwareCorrupted: %5lu kB\n",
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 296bbe49d5d1..1c80be42822c 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -149,4 +149,6 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
 	(typeof(type) __percpu *)__alloc_percpu(sizeof(type),		\
 						__alignof__(type))
 
+extern int pcpu_nr_populated_pages(void);
+
 #endif /* __LINUX_PERCPU_H */
diff --git a/mm/percpu.c b/mm/percpu.c
index 0b6480979ac7..08a4341f30c5 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -169,6 +169,14 @@ static LIST_HEAD(pcpu_map_extend_chunks);
  */
 int pcpu_nr_empty_pop_pages;
 
+/*
+ * The number of populated pages in use by the allocator, protected by
+ * pcpu_lock. This number is kept per a unit per chunk (i.e. when a page gets
+ * allocated/deallocated, it is allocated/deallocated in all units of a chunk
+ * and increments/decrements this count by 1).
+ */
+static int pcpu_nr_populated;
+
 /*
  * Balance work is used to populate or destroy chunks asynchronously. We
  * try to keep the number of populated free pages between
@@ -1232,6 +1240,7 @@ static void pcpu_chunk_populated(struct pcpu_chunk *chunk, int page_start,
 
 	bitmap_set(chunk->populated, page_start, nr);
 	chunk->nr_populated += nr;
+	pcpu_nr_populated += nr;
 
 	if (!for_alloc) {
 		chunk->nr_empty_pop_pages += nr;
@@ -1260,6 +1269,7 @@ static void pcpu_chunk_depopulated(struct pcpu_chunk *chunk,
 	chunk->nr_populated -= nr;
 	chunk->nr_empty_pop_pages -= nr;
 	pcpu_nr_empty_pop_pages -= nr;
+	pcpu_nr_populated -= nr;
 }
 
 /*
@@ -2176,6 +2186,9 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 	pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
 	pcpu_chunk_relocate(pcpu_first_chunk, -1);
 
+	/* include all regions of the first chunk */
+	pcpu_nr_populated += PFN_DOWN(size_sum);
+
 	pcpu_stats_chunk_alloc();
 	trace_percpu_create_chunk(base_addr);
 
@@ -2745,6 +2758,22 @@ void __init setup_per_cpu_areas(void)
 
 #endif /* CONFIG_SMP */
 
+/*
+ * pcpu_nr_populated_pages - calculate total number of populated backing pages
+ *
+ * This reflects the number of pages populated to back the chunks.
+ * Metadata is excluded in the number exposed in meminfo as the number of
+ * backing pages scales with the number of cpus and can quickly outweigh the
+ * memory used for metadata. It also keeps this calculation nice and simple.
+ *
+ * RETURNS:
+ * Total number of populated backing pages in use by the allocator.
+ */
+int pcpu_nr_populated_pages(void)
+{
+	return pcpu_nr_populated * pcpu_nr_units;
+}
+
 /*
  * Percpu allocator is initialized early during boot when neither slab or
  * workqueue is available. Plug async management until everything is up
-- 
2.17.1
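A usage sketch, not part of the patch above: with this change applied,
the new field appears in /proc/meminfo, and show_val_kb() reports it
in kB. Because pcpu_nr_populated is kept per unit, the total is
pcpu_nr_populated * pcpu_nr_units pages; e.g. with 4 units, 4 KiB
pages, and pcpu_nr_populated == 10, the line would read roughly
"PercpuPopulated: 160 kB". A minimal userspace reader, assuming only
the "PercpuPopulated:" spelling used above:

/*
 * Minimal sketch: print the PercpuPopulated line from /proc/meminfo.
 * Requires a kernel carrying this patch; on older kernels the loop
 * simply matches nothing.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* field name as emitted by show_val_kb() above */
		if (strncmp(line, "PercpuPopulated:", 16) == 0)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}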