From: Igor Mammedov
To: linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, mst@redhat.com, kvm@vger.kernel.org
Subject: [PATCH v3 1/2] vhost: extend memory regions allocation to vmalloc
Date: Wed, 1 Jul 2015 11:07:09 +0200
Message-Id: <1435741630-257939-2-git-send-email-imammedo@redhat.com>
In-Reply-To: <1435741630-257939-1-git-send-email-imammedo@redhat.com>
References: <1435741630-257939-1-git-send-email-imammedo@redhat.com>

With a large number of memory regions we can end up with high-order
allocations, and kmalloc may fail when the host is under memory
pressure. Since the memory regions array is used on the hot path, try
hard to allocate it with kmalloc first, and fall back to vmalloc only
if that fails. This is still better than failing vhost_set_memory()
outright and crashing the guest when new memory is hotplugged into it.

I'll still look at a QEMU-side solution that reduces the number of
memory regions it feeds to vhost to make things even better, but it
doesn't hurt for the kernel to behave smarter and not crash older
QEMUs that may use a large number of memory regions.

Signed-off-by: Igor Mammedov
---
 drivers/vhost/vhost.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index f1e07b8..99931a0 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -471,7 +471,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
 		fput(dev->log_file);
 	dev->log_file = NULL;
 	/* No one will access memory at this point */
-	kfree(dev->memory);
+	kvfree(dev->memory);
 	dev->memory = NULL;
 	WARN_ON(!list_empty(&dev->work_list));
 	if (dev->worker) {
@@ -601,6 +601,15 @@ static int vhost_memory_reg_sort_cmp(const void *p1, const void *p2)
 	return 0;
 }
 
+static void *vhost_kvzalloc(unsigned long size)
+{
+	void *n = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+
+	if (!n)
+		n = vzalloc(size);
+	return n;
+}
+
 static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
 {
 	struct vhost_memory mem, *newmem, *oldmem;
@@ -613,21 +622,21 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
 		return -EOPNOTSUPP;
 	if (mem.nregions > VHOST_MEMORY_MAX_NREGIONS)
 		return -E2BIG;
-	newmem = kmalloc(size + mem.nregions * sizeof *m->regions, GFP_KERNEL);
+	newmem = vhost_kvzalloc(size + mem.nregions * sizeof(*m->regions));
 	if (!newmem)
 		return -ENOMEM;
 
 	memcpy(newmem, &mem, size);
 	if (copy_from_user(newmem->regions, m->regions,
 			   mem.nregions * sizeof *m->regions)) {
-		kfree(newmem);
+		kvfree(newmem);
 		return -EFAULT;
 	}
 	sort(newmem->regions, newmem->nregions, sizeof(*newmem->regions),
 	     vhost_memory_reg_sort_cmp, NULL);
 
 	if (!memory_access_ok(d, newmem, 0)) {
-		kfree(newmem);
+		kvfree(newmem);
 		return -EFAULT;
 	}
 	oldmem = d->memory;
@@ -639,7 +648,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
 		d->vqs[i]->memory = newmem;
 		mutex_unlock(&d->vqs[i]->mutex);
 	}
-	kfree(oldmem);
+	kvfree(oldmem);
 	return 0;
 }
-- 
1.8.3.1
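
[Editor's sketch, not part of the patch: for reference, the allocation
being converted above is sized by the memory table that userspace
pushes down with the VHOST_SET_MEM_TABLE ioctl from <linux/vhost.h>.
A rough userspace illustration of that path follows; the function name
push_mem_table, the fd, and the region count are hypothetical.]

/* Illustrative only: how a VMM such as QEMU hands its memory map to
 * vhost.  Each hotplugged DIMM adds a region, so nregions -- and with
 * it the table the kernel copies in vhost_set_memory() -- can grow
 * past what a low-order kmalloc can satisfy under memory pressure.
 */
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int push_mem_table(int vhost_fd, unsigned int nregions)
{
	struct vhost_memory *mem;
	size_t sz = sizeof(*mem) + nregions * sizeof(mem->regions[0]);
	int ret;

	mem = calloc(1, sz);
	if (!mem)
		return -1;
	mem->nregions = nregions;
	/* fill mem->regions[i]: guest_phys_addr, memory_size and
	 * userspace_addr for each RAM block ... */

	ret = ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
	free(mem);
	return ret;
}

The fallback in vhost_kvzalloc() keeps the common small-table case on
physically contiguous kmalloc memory while letting an unusually large
table survive host memory fragmentation via vmalloc.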
line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/