2019-02-12 17:59:04

by Roman Gushchin

Subject: [PATCH v2 0/3] vmalloc enhancements

The patchset contains a few changes to the vmalloc code, which lead
to some performance gains and code simplification.

Also, it exports the number of pages used by vmalloc()
in /proc/meminfo.

Patch (1) removes some redundancy in __vunmap().
Patch (2) separates memory allocation and data initialization
in alloc_vmap_area().
Patch (3) adds a vmalloc counter to /proc/meminfo.

v2->v1:
- rebased on top of current mm tree
- switch from atomic to percpu vmalloc page counter

RFC->v1:
- removed bogus empty lines (suggested by Matthew Wilcox)
- made nr_vmalloc_pages static (suggested by Matthew Wilcox)
- dropped patch 3 from RFC patchset, will post later with
some other changes
- dropped RFC

Roman Gushchin (3):
mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
mm: separate memory allocation and actual work in alloc_vmap_area()
mm: show number of vmalloc pages in /proc/meminfo

fs/proc/meminfo.c | 2 +-
include/linux/vmalloc.h | 2 +
mm/vmalloc.c | 113 +++++++++++++++++++++++++++-------------
3 files changed, 79 insertions(+), 38 deletions(-)

--
2.20.1



2019-02-12 17:57:51

by Roman Gushchin

Subject: [PATCH v2 2/3] mm: separate memory allocation and actual work in alloc_vmap_area()

alloc_vmap_area() allocates memory for the vmap_area structure, and
also performs the actual lookup of the vm area and the vmap_area
initialization.

This prevents us from using pre-allocated memory for the vmap_area
structure, which could be used in some cases to minimize the number
of required memory allocations.

Let's keep the memory allocation part in alloc_vmap_area() and
separate everything else into init_vmap_area().
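
For illustration, with the split in place a hypothetical caller could
pre-allocate the vmap_area in a convenient context and do the actual
KVA lookup later (a sketch only, not part of this patch; size and
align are assumed to come from the caller):

	struct vmap_area *va;
	int ret;

	/* allocate early, where allocation is cheap or guaranteed */
	va = kmalloc(sizeof(struct vmap_area), GFP_KERNEL);
	if (!va)
		return -ENOMEM;

	/* ... later: the actual KVA lookup and initialization */
	ret = init_vmap_area(va, size, align, VMALLOC_START, VMALLOC_END,
			     NUMA_NO_NODE, GFP_KERNEL);
	if (ret) {
		kfree(va);
		return ret;
	}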

Signed-off-by: Roman Gushchin <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Reviewed-by: Matthew Wilcox <[email protected]>
---
mm/vmalloc.c | 50 +++++++++++++++++++++++++++++++++-----------------
1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8f0179895fb5..f1f19d1105c4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -395,16 +395,10 @@ static void purge_vmap_area_lazy(void);

static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);

-/*
- * Allocate a region of KVA of the specified size and alignment, within the
- * vstart and vend.
- */
-static struct vmap_area *alloc_vmap_area(unsigned long size,
- unsigned long align,
- unsigned long vstart, unsigned long vend,
- int node, gfp_t gfp_mask)
+static int init_vmap_area(struct vmap_area *va, unsigned long size,
+ unsigned long align, unsigned long vstart,
+ unsigned long vend, int node, gfp_t gfp_mask)
{
- struct vmap_area *va;
struct rb_node *n;
unsigned long addr;
int purged = 0;
@@ -416,11 +410,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,

might_sleep();

- va = kmalloc_node(sizeof(struct vmap_area),
- gfp_mask & GFP_RECLAIM_MASK, node);
- if (unlikely(!va))
- return ERR_PTR(-ENOMEM);
-
/*
* Only scan the relevant parts containing pointers to other objects
* to avoid false negatives.
@@ -516,7 +505,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
BUG_ON(va->va_start < vstart);
BUG_ON(va->va_end > vend);

- return va;
+ return 0;

overflow:
spin_unlock(&vmap_area_lock);
@@ -538,8 +527,35 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit())
pr_warn("vmap allocation for size %lu failed: use vmalloc=<size> to increase size\n",
size);
- kfree(va);
- return ERR_PTR(-EBUSY);
+
+ return -EBUSY;
+}
+
+/*
+ * Allocate a region of KVA of the specified size and alignment, within the
+ * vstart and vend.
+ */
+static struct vmap_area *alloc_vmap_area(unsigned long size,
+ unsigned long align,
+ unsigned long vstart,
+ unsigned long vend,
+ int node, gfp_t gfp_mask)
+{
+ struct vmap_area *va;
+ int ret;
+
+ va = kmalloc_node(sizeof(struct vmap_area),
+ gfp_mask & GFP_RECLAIM_MASK, node);
+ if (unlikely(!va))
+ return ERR_PTR(-ENOMEM);
+
+ ret = init_vmap_area(va, size, align, vstart, vend, node, gfp_mask);
+ if (ret) {
+ kfree(va);
+ return ERR_PTR(ret);
+ }
+
+ return va;
}

int register_vmap_purge_notifier(struct notifier_block *nb)
--
2.20.1


2019-02-12 17:59:43

by Roman Gushchin

Subject: [PATCH v2 3/3] mm: show number of vmalloc pages in /proc/meminfo

vmalloc() is used more and more these days (kernel stacks, bpf and
the percpu allocator are new top users), and the total percentage
of memory consumed by vmalloc() can be pretty significant
and changes dynamically.

/proc/meminfo is the best place to display this information:
its top goal is to show the top consumers of memory.

Since the VmallocUsed field in /proc/meminfo has not been in use
for quite a long time (it has been set to 0 by the
commit a5ad88ce8c7f ("mm: get rid of 'vmalloc_info' from
/proc/meminfo")), let's reuse it for showing the actual
physical memory consumption of vmalloc().
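
With this change, the vmalloc footprint becomes directly visible in
/proc/meminfo, e.g. (values below are purely illustrative):

	$ grep Vmalloc /proc/meminfo
	VmallocTotal:   34359738367 kB
	VmallocUsed:          29468 kB
	VmallocChunk:             0 kB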

Signed-off-by: Roman Gushchin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Matthew Wilcox <[email protected]>
---
fs/proc/meminfo.c | 2 +-
include/linux/vmalloc.h | 2 ++
mm/vmalloc.c | 16 ++++++++++++++++
3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 568d90e17c17..465ea0153b2a 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -120,7 +120,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
show_val_kb(m, "Committed_AS: ", committed);
seq_printf(m, "VmallocTotal: %8lu kB\n",
(unsigned long)VMALLOC_TOTAL >> 10);
- show_val_kb(m, "VmallocUsed: ", 0ul);
+ show_val_kb(m, "VmallocUsed: ", vmalloc_nr_pages());
show_val_kb(m, "VmallocChunk: ", 0ul);
show_val_kb(m, "Percpu: ", pcpu_nr_pages());

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..0b497408272b 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -63,10 +63,12 @@ extern void vm_unmap_aliases(void);

#ifdef CONFIG_MMU
extern void __init vmalloc_init(void);
+extern unsigned long vmalloc_nr_pages(void);
#else
static inline void vmalloc_init(void)
{
}
+static inline unsigned long vmalloc_nr_pages(void) { return 0; }
#endif

extern void *vmalloc(unsigned long size);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f1f19d1105c4..8dd490d8d191 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -340,6 +340,19 @@ static unsigned long cached_align;

static unsigned long vmap_area_pcpu_hole;

+static DEFINE_PER_CPU(unsigned long, nr_vmalloc_pages);
+
+unsigned long vmalloc_nr_pages(void)
+{
+ unsigned long pages = 0;
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ pages += per_cpu(nr_vmalloc_pages, cpu);
+
+ return pages;
+}
+
static struct vmap_area *__find_vmap_area(unsigned long addr)
{
struct rb_node *n = vmap_area_root.rb_node;
@@ -1566,6 +1579,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
BUG_ON(!page);
__free_pages(page, 0);
}
+ this_cpu_sub(nr_vmalloc_pages, area->nr_pages);

kvfree(area->pages);
}
@@ -1742,12 +1756,14 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
if (unlikely(!page)) {
/* Successfully allocated i pages, free them in __vunmap() */
area->nr_pages = i;
+ this_cpu_add(nr_vmalloc_pages, area->nr_pages);
goto fail;
}
area->pages[i] = page;
if (gfpflags_allow_blocking(gfp_mask|highmem_mask))
cond_resched();
}
+ this_cpu_add(nr_vmalloc_pages, area->nr_pages);

if (map_vm_area(area, prot, pages))
goto fail;
--
2.20.1


2019-02-12 17:59:50

by Roman Gushchin

Subject: [PATCH v2 1/3] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()

__vunmap() calls find_vm_area() twice without an obvious reason:
first directly, to get the area pointer, and second indirectly, by
calling remove_vm_area(), which searches for the area again.

To remove this redundancy, let's split remove_vm_area() into
__remove_vm_area(struct vmap_area *), which performs the actual area
removal, and a remove_vm_area(const void *addr) wrapper, which can
be used everywhere it has been used before.

On my test setup, I've got a 5-10% speedup when vfree()'ing 1000000
4-page vmalloc blocks.
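
The redundancy is visible in the old call chain (sketched):

	__vunmap(addr)
	    find_vm_area(addr)		/* first lookup */
	    remove_vm_area(addr)
		find_vmap_area(addr)	/* second lookup of the same area */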

Signed-off-by: Roman Gushchin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Matthew Wilcox <[email protected]>
---
mm/vmalloc.c | 47 +++++++++++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b7455d4c8c12..8f0179895fb5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1477,6 +1477,24 @@ struct vm_struct *find_vm_area(const void *addr)
return NULL;
}

+static struct vm_struct *__remove_vm_area(struct vmap_area *va)
+{
+ struct vm_struct *vm = va->vm;
+
+ might_sleep();
+
+ spin_lock(&vmap_area_lock);
+ va->vm = NULL;
+ va->flags &= ~VM_VM_AREA;
+ va->flags |= VM_LAZY_FREE;
+ spin_unlock(&vmap_area_lock);
+
+ kasan_free_shadow(vm);
+ free_unmap_vmap_area(va);
+
+ return vm;
+}
+
/**
* remove_vm_area - find and remove a continuous kernel virtual area
* @addr: base address
@@ -1489,31 +1507,20 @@ struct vm_struct *find_vm_area(const void *addr)
*/
struct vm_struct *remove_vm_area(const void *addr)
{
+ struct vm_struct *vm = NULL;
struct vmap_area *va;

- might_sleep();
-
va = find_vmap_area((unsigned long)addr);
- if (va && va->flags & VM_VM_AREA) {
- struct vm_struct *vm = va->vm;
-
- spin_lock(&vmap_area_lock);
- va->vm = NULL;
- va->flags &= ~VM_VM_AREA;
- va->flags |= VM_LAZY_FREE;
- spin_unlock(&vmap_area_lock);
-
- kasan_free_shadow(vm);
- free_unmap_vmap_area(va);
+ if (va && va->flags & VM_VM_AREA)
+ vm = __remove_vm_area(va);

- return vm;
- }
- return NULL;
+ return vm;
}

static void __vunmap(const void *addr, int deallocate_pages)
{
struct vm_struct *area;
+ struct vmap_area *va;

if (!addr)
return;
@@ -1522,17 +1529,18 @@ static void __vunmap(const void *addr, int deallocate_pages)
addr))
return;

- area = find_vm_area(addr);
- if (unlikely(!area)) {
+ va = find_vmap_area((unsigned long)addr);
+ if (unlikely(!va || !(va->flags & VM_VM_AREA))) {
WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
addr);
return;
}

+ area = va->vm;
debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
debug_check_no_obj_freed(area->addr, get_vm_area_size(area));

- remove_vm_area(addr);
+ __remove_vm_area(va);
if (deallocate_pages) {
int i;

@@ -1547,7 +1555,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
}

kfree(area);
- return;
}

static inline void __vfree_deferred(const void *addr)
--
2.20.1


2019-02-12 19:46:29

by Johannes Weiner

Subject: Re: [PATCH v2 0/3] vmalloc enhancements

On Tue, Feb 12, 2019 at 09:56:45AM -0800, Roman Gushchin wrote:
> The patchset contains a few changes to the vmalloc code, which lead
> to some performance gains and code simplification.
>
> Also, it exports the number of pages used by vmalloc()
> in /proc/meminfo.
>
> Patch (1) removes some redundancy in __vunmap().
> Patch (2) separates memory allocation and data initialization
> in alloc_vmap_area().
> Patch (3) adds a vmalloc counter to /proc/meminfo.
>
> v2->v1:
> - rebased on top of current mm tree
> - switch from atomic to percpu vmalloc page counter

I don't understand what prompted this change to percpu counters.

All writers already write vmap_area_lock and vmap_area_list, so it's
not really saving much. The for_each_possible_cpu() for /proc/meminfo
on the other hand is troublesome.

2019-02-12 21:43:41

by Andrew Morton

Subject: Re: [PATCH v2 0/3] vmalloc enhancements

On Tue, 12 Feb 2019 13:47:24 -0500 Johannes Weiner <[email protected]> wrote:

> On Tue, Feb 12, 2019 at 09:56:45AM -0800, Roman Gushchin wrote:
> > The patchset contains a few changes to the vmalloc code, which lead
> > to some performance gains and code simplification.
> >
> > Also, it exports the number of pages used by vmalloc()
> > in /proc/meminfo.
> >
> > Patch (1) removes some redundancy in __vunmap().
> > Patch (2) separates memory allocation and data initialization
> > in alloc_vmap_area().
> > Patch (3) adds a vmalloc counter to /proc/meminfo.
> >
> > v2->v1:
> > - rebased on top of current mm tree
> > - switch from atomic to percpu vmalloc page counter
>
> I don't understand what prompted this change to percpu counters.
>
> All writers already write vmap_area_lock and vmap_area_list, so it's
> not really saving much. The for_each_possible_cpu() for /proc/meminfo
> on the other hand is troublesome.

percpu_counters would fit here. They have probably-unneeded locking
but I expect that will be acceptable.

And they address the issues with for_each_possible_cpu() avoidance, CPU
hotplug and transient negative values.
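
Roughly, a sketch with the generic <linux/percpu_counter.h> API
(illustrative only, not a tested patch):

	static struct percpu_counter nr_vmalloc_pages;

	/* at init time */
	percpu_counter_init(&nr_vmalloc_pages, 0, GFP_KERNEL);

	/* on the allocation/free paths */
	percpu_counter_add(&nr_vmalloc_pages, area->nr_pages);
	percpu_counter_sub(&nr_vmalloc_pages, area->nr_pages);

	/* in meminfo: clamped, so transient negatives are never shown */
	show_val_kb(m, "VmallocUsed: ",
		    percpu_counter_read_positive(&nr_vmalloc_pages));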

2019-02-13 02:04:30

by Roman Gushchin

Subject: Re: [PATCH v2 0/3] vmalloc enhancements

On Tue, Feb 12, 2019 at 12:34:09PM -0800, Andrew Morton wrote:
> On Tue, 12 Feb 2019 13:47:24 -0500 Johannes Weiner <[email protected]> wrote:
>
> > On Tue, Feb 12, 2019 at 09:56:45AM -0800, Roman Gushchin wrote:
> > > The patchset contains a few changes to the vmalloc code, which lead
> > > to some performance gains and code simplification.
> > >
> > > Also, it exports the number of pages used by vmalloc()
> > > in /proc/meminfo.
> > >
> > > Patch (1) removes some redundancy in __vunmap().
> > > Patch (2) separates memory allocation and data initialization
> > > in alloc_vmap_area().
> > > Patch (3) adds a vmalloc counter to /proc/meminfo.
> > >
> > > v2->v1:
> > > - rebased on top of current mm tree
> > > - switch from atomic to percpu vmalloc page counter
> >
> > I don't understand what prompted this change to percpu counters.

I *think* I see some performance difference, but it's barely measurable
in my setup. Also, as I remember, Matthew was asking why not percpu here.
So if everybody prefers a global atomic, I'm fine with either.

> >
> > All writers already write vmap_area_lock and vmap_area_list, so it's
> > not really saving much. The for_each_possible_cpu() for /proc/meminfo
> > on the other hand is troublesome.
>
> percpu_counters would fit here. They have probably-unneeded locking
> but I expect that will be acceptable.
>
> And they address the issues with for_each_possible_cpu() avoidance, CPU
> hotplug and transient negative values.

Not sure, because percpu_counters are based on dynamic percpu
allocations, which use vmalloc under the hood.
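
Roughly, the dependency chain is:

	percpu_counter_init()
	    -> alloc_percpu()
		-> pcpu_alloc()	/* percpu chunks are mapped in vmalloc space */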

Thanks!

2019-02-14 08:17:55

by Matthew Wilcox

Subject: Re: [PATCH v2 0/3] vmalloc enhancements

On Tue, Feb 12, 2019 at 10:36:12PM +0000, Roman Gushchin wrote:
> On Tue, Feb 12, 2019 at 12:34:09PM -0800, Andrew Morton wrote:
> > On Tue, 12 Feb 2019 13:47:24 -0500 Johannes Weiner <[email protected]> wrote:
> > > I don't understand what prompted this change to percpu counters.
>
> I *think* I see some performance difference, but it's barely measurable
> in my setup. Also, as I remember, Matthew was asking why not percpu here.
> So if everybody prefers a global atomic, I'm fine with either.

I was asking why you were using an accessor instead of a direct reference
to the atomic_long_t.
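
I.e., something like this (a sketch):

	static atomic_long_t nr_vmalloc_pages = ATOMIC_LONG_INIT(0);

	/* on the allocation/free paths */
	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
	atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);

	/* in meminfo, no accessor needed */
	show_val_kb(m, "VmallocUsed: ", atomic_long_read(&nr_vmalloc_pages));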


2019-02-14 10:07:46

by Roman Gushchin

Subject: Re: [PATCH v2 0/3] vmalloc enhancements

On Tue, Feb 12, 2019 at 12:34:09PM -0800, Andrew Morton wrote:
> On Tue, 12 Feb 2019 13:47:24 -0500 Johannes Weiner <[email protected]> wrote:
>
> > On Tue, Feb 12, 2019 at 09:56:45AM -0800, Roman Gushchin wrote:
> > > The patchset contains a few changes to the vmalloc code, which lead
> > > to some performance gains and code simplification.
> > >
> > > Also, it exports the number of pages used by vmalloc()
> > > in /proc/meminfo.
> > >
> > > Patch (1) removes some redundancy in __vunmap().
> > > Patch (2) separates memory allocation and data initialization
> > > in alloc_vmap_area().
> > > Patch (3) adds a vmalloc counter to /proc/meminfo.
> > >
> > > v2->v1:
> > > - rebased on top of current mm tree
> > > - switch from atomic to percpu vmalloc page counter
> >
> > I don't understand what prompted this change to percpu counters.
> >
> > All writers already write vmap_area_lock and vmap_area_list, so it's
> > not really saving much. The for_each_possible_cpu() for /proc/meminfo
> > on the other hand is troublesome.
>
> percpu_counters would fit here. They have probably-unneeded locking
> but I expect that will be acceptable.
>
> And they address the issues with for_each_possible_cpu() avoidance, CPU
> hotplug and transient negative values.

Using the existing vmap_area_lock (as Johannes suggested) is also
problematic, due to the different life-cycles of vmap_areas and vmalloc
pages. A special flag would be required to decrease the counter during
the lazy deletion of vmap_areas. The allocation path would require
passing a bool flag through too many nested functions. Also, it would
be only semi-accurate, which is probably tolerable. So it's doable,
but doesn't look nice to me.

So, using a simple per-cpu counter still seems to be the best option.
A transient negative value is a valid concern, but it's easily fixable.
Are there any others? What's the problem with for_each_possible_cpu()?
Reading /proc/meminfo is not that hot, no?

Thanks!