From: Roman Gushchin
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Johannes Weiner, kernel-team@fb.com, Andrew Morton, linux-kernel@vger.kernel.org, Roman Gushchin
Subject: [PATCH v2 3/3] mm: show number of vmalloc pages in /proc/meminfo
Date: Tue, 12 Feb 2019 09:56:48 -0800
Message-Id: <20190212175648.28738-4-guro@fb.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190212175648.28738-1-guro@fb.com>
References: <20190212175648.28738-1-guro@fb.com>

Vmalloc() is getting used more and more these days (kernel stacks, bpf and
the percpu allocator are new top users), and the total share of memory
consumed by vmalloc() can be significant and changes dynamically.
/proc/meminfo is the best place to display this information: its primary
goal is to show the top consumers of memory.
Since the VmallocUsed field in /proc/meminfo has not been in use for quite
a long time (it has been hardcoded to 0 since commit a5ad88ce8c7f ("mm: get
rid of 'vmalloc_info' from /proc/meminfo")), let's reuse it to show the
actual physical memory consumption of vmalloc().

Signed-off-by: Roman Gushchin
Cc: Johannes Weiner
Cc: Matthew Wilcox
---
 fs/proc/meminfo.c       |  2 +-
 include/linux/vmalloc.h |  2 ++
 mm/vmalloc.c            | 16 ++++++++++++++++
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 568d90e17c17..465ea0153b2a 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -120,7 +120,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "Committed_AS:   ", committed);
 	seq_printf(m, "VmallocTotal:   %8lu kB\n",
 		   (unsigned long)VMALLOC_TOTAL >> 10);
-	show_val_kb(m, "VmallocUsed:    ", 0ul);
+	show_val_kb(m, "VmallocUsed:    ", vmalloc_nr_pages());
 	show_val_kb(m, "VmallocChunk:   ", 0ul);
 	show_val_kb(m, "Percpu:         ", pcpu_nr_pages());
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..0b497408272b 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -63,10 +63,12 @@ extern void vm_unmap_aliases(void);
 
 #ifdef CONFIG_MMU
 extern void __init vmalloc_init(void);
+extern unsigned long vmalloc_nr_pages(void);
 #else
 static inline void vmalloc_init(void)
 {
 }
+static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
 extern void *vmalloc(unsigned long size);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f1f19d1105c4..8dd490d8d191 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -340,6 +340,19 @@ static unsigned long cached_align;
 
 static unsigned long vmap_area_pcpu_hole;
 
+static DEFINE_PER_CPU(unsigned long, nr_vmalloc_pages);
+
+unsigned long vmalloc_nr_pages(void)
+{
+	unsigned long pages = 0;
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		pages += per_cpu(nr_vmalloc_pages, cpu);
+
+	return pages;
+}
+
 static struct vmap_area *__find_vmap_area(unsigned long addr)
 {
 	struct rb_node *n = vmap_area_root.rb_node;
@@ -1566,6 +1579,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
 			BUG_ON(!page);
 			__free_pages(page, 0);
 		}
+		this_cpu_sub(nr_vmalloc_pages, area->nr_pages);
 
 		kvfree(area->pages);
 	}
@@ -1742,12 +1756,14 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		if (unlikely(!page)) {
 			/* Successfully allocated i pages, free them in __vunmap() */
 			area->nr_pages = i;
+			this_cpu_add(nr_vmalloc_pages, area->nr_pages);
 			goto fail;
 		}
 		area->pages[i] = page;
 		if (gfpflags_allow_blocking(gfp_mask|highmem_mask))
 			cond_resched();
 	}
+	this_cpu_add(nr_vmalloc_pages, area->nr_pages);
 
 	if (map_vm_area(area, prot, pages))
 		goto fail;
-- 
2.20.1