2015-07-02 13:08:22

by Igor Mammedov

Subject: [PATCH v4 0/2] vhost: support more than 64 memory regions

changes since v3:
* rebased on top of vhost-next branch
changes since v2:
* drop cache patches for now as suggested
* add max_mem_regions module parameter instead of unconditionally
increasing limit
* drop bsearch patch since it's already queued

References to previous versions:
v2: https://lkml.org/lkml/2015/6/17/276
v1: http://www.spinics.net/lists/kvm/msg117654.html

This series allows tweaking vhost's memory region count limit.

It fixes the VM crashing on memory hotplug, caused by vhost refusing
to accept more than 64 memory regions, once max_mem_regions is set to
more than 262 slots in the default QEMU configuration.

Igor Mammedov (2):
vhost: extend memory regions allocation to vmalloc
vhost: add max_mem_regions module parameter

drivers/vhost/vhost.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)

--
1.8.3.1


2015-07-02 13:08:55

by Igor Mammedov

Subject: [PATCH v4 1/2] vhost: extend memory regions allocation to vmalloc

With a large number of memory regions we could end up with
high-order allocations, and kmalloc could fail if the
host is under memory pressure.
Considering that the memory regions array is used on a hot
path, try harder to allocate with kmalloc and, if that fails,
fall back to vmalloc.
That is still better than just failing vhost_set_memory() and
crashing the guest when new memory is hotplugged into it.

I'll still look at a QEMU-side solution that reduces the number
of memory regions it feeds to vhost to make things even better,
but it doesn't hurt for the kernel to behave smarter and not
crash older QEMUs which may use a large number of memory
regions.
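
For context, a minimal userspace-side sketch of how a VMM hands vhost
a memory map via the VHOST_SET_MEM_TABLE ioctl; each hotplugged DIMM
typically shows up as one more region, which is why nregions can grow
well past 64. The struct and ioctl names come from <linux/vhost.h>,
while the helper name and the simplified error handling are
illustrative assumptions:

#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Build a vhost_memory table with 'nregions' entries and push it to vhost. */
static int set_mem_table(int vhost_fd, const struct vhost_memory_region *regs,
                         unsigned int nregions)
{
        struct vhost_memory *mem;
        size_t sz = sizeof(*mem) + nregions * sizeof(mem->regions[0]);
        int ret;

        mem = calloc(1, sz);
        if (!mem)
                return -1;

        mem->nregions = nregions;
        memcpy(mem->regions, regs, nregions * sizeof(mem->regions[0]));

        /* The kernel copies this table into its own allocation in vhost_set_memory(). */
        ret = ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
        free(mem);
        return ret;
}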

Signed-off-by: Igor Mammedov <[email protected]>
---
drivers/vhost/vhost.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 71bb468..6488011 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -544,7 +544,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
fput(dev->log_file);
dev->log_file = NULL;
/* No one will access memory at this point */
- kfree(dev->memory);
+ kvfree(dev->memory);
dev->memory = NULL;
WARN_ON(!list_empty(&dev->work_list));
if (dev->worker) {
@@ -674,6 +674,18 @@ static int vhost_memory_reg_sort_cmp(const void *p1, const void *p2)
return 0;
}

+static void *vhost_kvzalloc(unsigned long size)
+{
+ void *n = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+
+ if (!n) {
+ n = vzalloc(size);
+ if (!n)
+ return ERR_PTR(-ENOMEM);
+ }
+ return n;
+}
+
static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
{
struct vhost_memory mem, *newmem, *oldmem;
@@ -686,7 +698,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
return -EOPNOTSUPP;
if (mem.nregions > VHOST_MEMORY_MAX_NREGIONS)
return -E2BIG;
- newmem = kmalloc(size + mem.nregions * sizeof *m->regions, GFP_KERNEL);
+ newmem = vhost_kvzalloc(size + mem.nregions * sizeof(*m->regions));
if (!newmem)
return -ENOMEM;

@@ -700,7 +712,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
vhost_memory_reg_sort_cmp, NULL);

if (!memory_access_ok(d, newmem, 0)) {
- kfree(newmem);
+ kvfree(newmem);
return -EFAULT;
}
oldmem = d->memory;
@@ -712,7 +724,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
d->vqs[i]->memory = newmem;
mutex_unlock(&d->vqs[i]->mutex);
}
- kfree(oldmem);
+ kvfree(oldmem);
return 0;
}

--
1.8.3.1

2015-07-02 13:08:42

by Igor Mammedov

Subject: [PATCH v4 2/2] vhost: add max_mem_regions module parameter

It became possible to use a larger number of memory
slots, which memory hotplug uses for registering
hotplugged memory. However, QEMU crashes if it's used
with more than ~60 pc-dimm devices and vhost-net enabled,
since the host kernel's vhost-net module refuses to
accept more than 64 memory regions.

Allow tweaking the limit via the max_mem_regions module
parameter, with the default value set to 64 slots.
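
As a usage sketch, the limit would then be picked at module load time,
e.g. something like "modprobe vhost max_mem_regions=509" or an options
line under /etc/modprobe.d; the module name and value here are only
illustrative, chosen to cover however many DIMM slots the VMs are
expected to hotplug.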

Signed-off-by: Igor Mammedov <[email protected]>
---
drivers/vhost/vhost.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 6488011..9a68e2e 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -29,8 +29,12 @@

#include "vhost.h"

+static ushort max_mem_regions = 64;
+module_param(max_mem_regions, ushort, 0444);
+MODULE_PARM_DESC(max_mem_regions,
+ "Maximum number of memory regions in memory map. (default: 64)");
+
enum {
- VHOST_MEMORY_MAX_NREGIONS = 64,
VHOST_MEMORY_F_LOG = 0x1,
};

@@ -696,7 +700,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
return -EFAULT;
if (mem.padding)
return -EOPNOTSUPP;
- if (mem.nregions > VHOST_MEMORY_MAX_NREGIONS)
+ if (mem.nregions > max_mem_regions)
return -E2BIG;
newmem = vhost_kvzalloc(size + mem.nregions * sizeof(*m->regions));
if (!newmem)
--
1.8.3.1

2015-07-08 12:51:33

by Igor Mammedov

Subject: Re: [PATCH v4 0/2] vhost: support more than 64 memory regions

On Thu, 2 Jul 2015 15:08:09 +0200
Igor Mammedov <[email protected]> wrote:

> changes since v3:
> * rebased on top of vhost-next branch
> changes since v2:
> * drop cache patches for now as suggested
> * add max_mem_regions module parameter instead of unconditionally
> increasing limit
> * drop bsearch patch since it's already queued
>
> References to previous versions:
> v2: https://lkml.org/lkml/2015/6/17/276
> v1: http://www.spinics.net/lists/kvm/msg117654.html
>
> This series allows tweaking vhost's memory region count limit.
>
> It fixes the VM crashing on memory hotplug, caused by vhost refusing
> to accept more than 64 memory regions, once max_mem_regions is set to
> more than 262 slots in the default QEMU configuration.
>
> Igor Mammedov (2):
> vhost: extend memory regions allocation to vmalloc
> vhost: add max_mem_regions module parameter
>
> drivers/vhost/vhost.c | 28 ++++++++++++++++++++++------
> 1 file changed, 22 insertions(+), 6 deletions(-)
>

ping

2015-07-13 18:15:34

by Igor Mammedov

Subject: [PATCH] vhost: fix build failure on SPARC

While on the x86 target vmalloc.h is included indirectly through
other headers, it's not included on SPARC.
Fix the issue by including vmalloc.h directly from vhost.c,
like it's done in vhost/net.c.

Signed-off-by: Igor Mammedov <[email protected]>
---
drivers/vhost/vhost.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 9a68e2e..a9fe859 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -22,6 +22,7 @@
#include <linux/file.h>
#include <linux/highmem.h>
#include <linux/slab.h>
+#include <linux/vmalloc.h>
#include <linux/kthread.h>
#include <linux/cgroup.h>
#include <linux/module.h>
--
1.8.3.1

2015-07-13 20:20:04

by Michael S. Tsirkin

Subject: Re: [PATCH] vhost: fix build failure on SPARC

On Mon, Jul 13, 2015 at 08:15:30PM +0200, Igor Mammedov wrote:
> While on the x86 target vmalloc.h is included indirectly through
> other headers, it's not included on SPARC.
> Fix the issue by including vmalloc.h directly from vhost.c,
> like it's done in vhost/net.c.
>
> Signed-off-by: Igor Mammedov <[email protected]>

It's your vmalloc patch that introduces the issue, right?
I squashed this fix into that one.

You can just send fixup! patches in cases like this
in the future, or at least mention the commit that
breaks the build.
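
(A fixup! patch here means one whose subject line is e.g.
"fixup! vhost: extend memory regions allocation to vmalloc", so the
maintainer can squash it automatically with git rebase --autosquash.)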


> ---
> drivers/vhost/vhost.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 9a68e2e..a9fe859 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -22,6 +22,7 @@
> #include <linux/file.h>
> #include <linux/highmem.h>
> #include <linux/slab.h>
> +#include <linux/vmalloc.h>
> #include <linux/kthread.h>
> #include <linux/cgroup.h>
> #include <linux/module.h>
> --
> 1.8.3.1

2015-07-13 21:41:24

by Igor Mammedov

Subject: Re: [PATCH] vhost: fix build failure on SPARC

On Mon, 13 Jul 2015 23:19:59 +0300
"Michael S. Tsirkin" <[email protected]> wrote:

> On Mon, Jul 13, 2015 at 08:15:30PM +0200, Igor Mammedov wrote:
> > While on the x86 target vmalloc.h is included indirectly through
> > other headers, it's not included on SPARC.
> > Fix the issue by including vmalloc.h directly from vhost.c,
> > like it's done in vhost/net.c.
> >
> > Signed-off-by: Igor Mammedov <[email protected]>
>
> It's your vmalloc patch that introduces the issue, right?
> I squashed this fix into that one.
Thanks!
Yep, that's right. I sent this patch as a reply to the vmalloc
patch that introduced the issue, but forgot to mention the
commit that introduced it.

>
> You can just send fixup! patches in cases like this
> in the future, or at least mention the commit that
> breaks the build.
ok, I'll do it next time.

>
>
> > ---
> > drivers/vhost/vhost.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > index 9a68e2e..a9fe859 100644
> > --- a/drivers/vhost/vhost.c
> > +++ b/drivers/vhost/vhost.c
> > @@ -22,6 +22,7 @@
> > #include <linux/file.h>
> > #include <linux/highmem.h>
> > #include <linux/slab.h>
> > +#include <linux/vmalloc.h>
> > #include <linux/kthread.h>
> > #include <linux/cgroup.h>
> > #include <linux/module.h>
> > --
> > 1.8.3.1

2015-07-16 11:05:46

by Igor Mammedov

Subject: Re: [PATCH v4 2/2] vhost: add max_mem_regions module parameter

On Thu, 2 Jul 2015 15:08:11 +0200
Igor Mammedov <[email protected]> wrote:

> It became possible to use a larger number of memory
> slots, which memory hotplug uses for registering
> hotplugged memory. However, QEMU crashes if it's used
> with more than ~60 pc-dimm devices and vhost-net enabled,
> since the host kernel's vhost-net module refuses to
> accept more than 64 memory regions.
>
> Allow tweaking the limit via the max_mem_regions module
> parameter, with the default value set to 64 slots.
Michael,

What was the reason not to raise the default?
As much as I think about it, I can't come up with one.

Making it a module option doesn't make much sense,
since old userspace will crash on new kernels anyway
until admins learn that there is a module option
to raise the limit.

Raising the default limit on par with KVM's would help
avoid those crashes and would allow dropping the
unnecessary option, reducing user confusion.

>
> Signed-off-by: Igor Mammedov <[email protected]>
> ---
> drivers/vhost/vhost.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 6488011..9a68e2e 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -29,8 +29,12 @@
>
> #include "vhost.h"
>
> +static ushort max_mem_regions = 64;
> +module_param(max_mem_regions, ushort, 0444);
> +MODULE_PARM_DESC(max_mem_regions,
> + "Maximum number of memory regions in memory map. (default: 64)");
> +
> enum {
> - VHOST_MEMORY_MAX_NREGIONS = 64,
> VHOST_MEMORY_F_LOG = 0x1,
> };
>
> @@ -696,7 +700,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
> return -EFAULT;
> if (mem.padding)
> return -EOPNOTSUPP;
> - if (mem.nregions > VHOST_MEMORY_MAX_NREGIONS)
> + if (mem.nregions > max_mem_regions)
> return -E2BIG;
> newmem = vhost_kvzalloc(size + mem.nregions * sizeof(*m->regions));
> if (!newmem)