From: Igor Mammedov <imammedo@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: mst@redhat.com, pbonzini@redhat.com, kvm@vger.kernel.org
Subject: [PATCH 1/2] vhost: add ioctl to query nregions upper limit
Date: Wed, 29 Jul 2015 16:29:22 +0200
Message-Id: <1438180163-275465-2-git-send-email-imammedo@redhat.com>
In-Reply-To: <1438180163-275465-1-git-send-email-imammedo@redhat.com>
References: <1438180163-275465-1-git-send-email-imammedo@redhat.com>

From: "Michael S. Tsirkin" <mst@redhat.com>

Userspace currently simply hands vhost as many regions as it happens to
have, but the memory table is only known once a large part of the VM
has been initialized, so failing gracefully at that point is very hard
to support. The result is that userspace tends to fail catastrophically.

Instead, add a new ioctl so userspace can find out, up front, how many
regions the kernel supports. This returns a positive value that we
commit to.

Also, document our contract with legacy userspace: when running on an
old kernel, you get -1 and you can assume at least 64 slots. Since the
value 0 is left unused, let's make it mean that the current userspace
behaviour (trial and error) is required, just in case we want it back.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
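Note (not part of the patch): a minimal sketch of how a VMM might
consume the new ioctl's contract, assuming a vhost-net backend opened
at /dev/vhost-net; the query_max_regions() helper and the fallback
macro definitions for pre-patch headers are illustrative only, while
the ioctl number and constants come from the patch below.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

/* Fallback definitions when building against pre-patch headers. */
#ifndef VHOST_GET_MEM_MAX_NREGIONS
#define VHOST_GET_MEM_MAX_NREGIONS _IO(VHOST_VIRTIO, 0x23)
#define VHOST_MEM_MAX_NREGIONS_NONE 0
#define VHOST_MEM_MAX_NREGIONS_DEFAULT 64
#endif

/* Hypothetical helper: returns the number of regions userspace may
 * rely on, or 0 (VHOST_MEM_MAX_NREGIONS_NONE) if it has to fall back
 * to trial and error. */
static int query_max_regions(int vhost_fd)
{
	int r = ioctl(vhost_fd, VHOST_GET_MEM_MAX_NREGIONS);

	if (r < 0)
		/* Old kernel: the ioctl is unknown (-1/ENOTTY), but the
		 * documented contract guarantees at least 64 slots. */
		return VHOST_MEM_MAX_NREGIONS_DEFAULT;
	return r;
}

int main(void)
{
	int fd = open("/dev/vhost-net", O_RDWR);

	if (fd < 0) {
		perror("open /dev/vhost-net");
		return 1;
	}
	printf("max nregions: %d\n", query_max_regions(fd));
	close(fd);
	return 0;
}

The point of the contract is that this query can run before any guest
memory is mapped, so the VMM can size its memory layout without risking
a late VHOST_SET_MEM_TABLE failure.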
 drivers/vhost/vhost.c      |  7 ++++++-
 include/uapi/linux/vhost.h | 17 ++++++++++++++++-
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index eec2f11..76dc0cf 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -30,7 +30,7 @@
 
 #include "vhost.h"
 
-static ushort max_mem_regions = 64;
+static ushort max_mem_regions = VHOST_MEM_MAX_NREGIONS_DEFAULT;
 module_param(max_mem_regions, ushort, 0444);
 MODULE_PARM_DESC(max_mem_regions,
 	"Maximum number of memory regions in memory map. (default: 64)");
@@ -944,6 +944,11 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp
 	long r;
 	int i, fd;
 
+	if (ioctl == VHOST_GET_MEM_MAX_NREGIONS) {
+		r = max_mem_regions;
+		goto done;
+	}
+
 	/* If you are not the owner, you can become one */
 	if (ioctl == VHOST_SET_OWNER) {
 		r = vhost_dev_set_owner(d);
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index ab373191..2511954 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -80,7 +80,7 @@ struct vhost_memory {
  * Allows subsequent call to VHOST_OWNER_SET to succeed. */
 #define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
 
-/* Set up/modify memory layout */
+/* Set up/modify memory layout: see also VHOST_GET_MEM_MAX_NREGIONS below. */
 #define VHOST_SET_MEM_TABLE _IOW(VHOST_VIRTIO, 0x03, struct vhost_memory)
 
 /* Write logging setup. */
@@ -127,6 +127,21 @@ struct vhost_memory {
 /* Set eventfd to signal an error */
 #define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
 
+/* Query upper limit on nregions in VHOST_SET_MEM_TABLE arguments.
+ * Returns:
+ *	0 < value <= INT_MAX - gives the upper limit, higher values will fail
+ *	0 - there's no static limit: try and see if it works
+ *	-1 - on failure
+ */
+#define VHOST_GET_MEM_MAX_NREGIONS _IO(VHOST_VIRTIO, 0x23)
+
+/* Returned by VHOST_GET_MEM_MAX_NREGIONS to mean there's no static limit:
+ * try and it'll work if you are lucky. */
+#define VHOST_MEM_MAX_NREGIONS_NONE 0
+/* We support at least this many nregions in VHOST_SET_MEM_TABLE,
+ * for use on legacy kernels without VHOST_GET_MEM_MAX_NREGIONS support. */
+#define VHOST_MEM_MAX_NREGIONS_DEFAULT 64
+
 /* VHOST_NET specific defines */
 
 /* Attach virtio net ring to a raw socket, or tap device.
-- 
1.8.3.1