From: Yongji Xie
Date: Wed, 7 Jul 2021 12:04:26 +0800
Subject: Re: [PATCH 1/2] vdpa: support per virtqueue max queue size
To: Jason Wang
Cc: "Michael S. Tsirkin", virtualization, linux-kernel, kvm,
 netdev@vger.kernel.org, Stefan Hajnoczi
In-Reply-To: <20210705071910.31965-1-jasowang@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jul 5, 2021 at 3:19 PM Jason Wang wrote:
>
> The Virtio spec allows the device to specify a per-virtqueue max queue
> size. vDPA needs to adapt to this flexibility. E.g. QEMU advertises a
> small control virtqueue for virtio-net.
>
> So this patch adds an index parameter to the get_vq_num_max bus
> operation so that the device can report its per-virtqueue max queue
> size.
>
> Both VHOST_VDPA_GET_VRING_NUM and VDPA_ATTR_DEV_MAX_VQ_SIZE assume a
> global maximum size, so in this case we iterate over all the
> virtqueues and return the minimal size. Actually,
> VHOST_VDPA_GET_VRING_NUM is not a must for userspace. Userspace may
> instead use VHOST_SET_VRING_NUM to probe or validate the maximum
> virtqueue size. Anyway, we can introduce a per-vq version of
> VHOST_VDPA_GET_VRING_NUM in the future if necessary.
>
> Signed-off-by: Jason Wang
> ---
>  drivers/vdpa/ifcvf/ifcvf_main.c   |  2 +-
>  drivers/vdpa/mlx5/net/mlx5_vnet.c |  2 +-
>  drivers/vdpa/vdpa.c               | 22 +++++++++++++++++++++-
>  drivers/vdpa/vdpa_sim/vdpa_sim.c  |  2 +-
>  drivers/vdpa/virtio_pci/vp_vdpa.c |  2 +-
>  drivers/vhost/vdpa.c              |  9 ++++++---
>  drivers/virtio/virtio_vdpa.c      |  2 +-
>  include/linux/vdpa.h              |  5 ++++-
>  8 files changed, 36 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
> index ab0ab5cf0f6e..646b340db2af 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_main.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_main.c
> @@ -254,7 +254,7 @@ static void ifcvf_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status)
>  	ifcvf_set_status(vf, status);
>  }
>
> -static u16 ifcvf_vdpa_get_vq_num_max(struct vdpa_device *vdpa_dev)
> +static u16 ifcvf_vdpa_get_vq_num_max(struct vdpa_device *vdpa_dev, u16 qid)
>  {
>  	return IFCVF_QUEUE_MAX;
>  }
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index dda5dc6f7737..afd6114d07b0 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1584,7 +1584,7 @@ static void mlx5_vdpa_set_config_cb(struct vdpa_device *vdev, struct vdpa_callba
>  }
>
>  #define MLX5_VDPA_MAX_VQ_ENTRIES 256
> -static u16 mlx5_vdpa_get_vq_num_max(struct vdpa_device *vdev)
> +static u16 mlx5_vdpa_get_vq_num_max(struct vdpa_device *vdev, u16 idx)
>  {
>  	return MLX5_VDPA_MAX_VQ_ENTRIES;
>  }
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index bb3f1d1f0422..d77d59811389 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -239,6 +239,26 @@ void vdpa_unregister_driver(struct vdpa_driver *drv)
>  }
>  EXPORT_SYMBOL_GPL(vdpa_unregister_driver);
>
> +/**
> + * vdpa_get_vq_num_max - get the maximum virtqueue size
> + * @vdev: vdpa device
> + */
> +u16 vdpa_get_vq_num_max(struct vdpa_device *vdev)
> +{
> +	const struct vdpa_config_ops *ops = vdev->config;
> +	u16 s, size = ops->get_vq_num_max(vdev, 0);
> +	int i;
> +
> +	for (i = 1; i < vdev->nvqs; i++) {
> +		s = ops->get_vq_num_max(vdev, i);
> +		if (s && s < size)
> +			size = s;
> +	}
> +
> +	return size;
> +}
> +EXPORT_SYMBOL_GPL(vdpa_get_vq_num_max);
> +
>  /**
>   * vdpa_mgmtdev_register - register a vdpa management device
>   *
> @@ -502,7 +522,7 @@ vdpa_dev_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid, u32 seq
>
>  	device_id = vdev->config->get_device_id(vdev);
>  	vendor_id = vdev->config->get_vendor_id(vdev);
> -	max_vq_size = vdev->config->get_vq_num_max(vdev);
> +	max_vq_size = vdpa_get_vq_num_max(vdev);
>
>  	err = -EMSGSIZE;
>  	if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev)))
> diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
> index 98f793bc9376..49e29056f164 100644
> --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
> +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
> @@ -422,7 +422,7 @@ static void vdpasim_set_config_cb(struct vdpa_device *vdpa,
>  	/* We don't support config interrupt */
>  }
>
> -static u16 vdpasim_get_vq_num_max(struct vdpa_device *vdpa)
> +static u16 vdpasim_get_vq_num_max(struct vdpa_device *vdpa, u16 idx)
>  {
>  	return VDPASIM_QUEUE_MAX;
>  }
> diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> index c76ebb531212..2926641fb586 100644
> --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> @@ -195,7 +195,7 @@ static void vp_vdpa_set_status(struct vdpa_device *vdpa, u8 status)
>  	vp_vdpa_free_irq(vp_vdpa);
>  }
>
> -static u16 vp_vdpa_get_vq_num_max(struct vdpa_device *vdpa)
> +static u16 vp_vdpa_get_vq_num_max(struct vdpa_device *vdpa, u16 qid)
>  {
>  	return VP_VDPA_QUEUE_MAX;
>  }
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index fb41db3da611..c9ec395b8c42 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -289,11 +289,14 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
>
>  static long vhost_vdpa_get_vring_num(struct vhost_vdpa *v, u16 __user *argp)
>  {
> -	struct vdpa_device *vdpa = v->vdpa;
> -	const struct vdpa_config_ops *ops = vdpa->config;
>  	u16 num;
>
> -	num = ops->get_vq_num_max(vdpa);
> +	/*
> +	 * VHOST_VDPA_GET_VRING_NUM asssumes a global max virtqueue

s/asssumes/assumes

Otherwise, looks good to me.

Thanks,
Yongji
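
P.S. For readers following the thread, a minimal sketch of the userspace
flow the commit message mentions (probing/validating the queue size via
VHOST_SET_VRING_NUM rather than relying on VHOST_VDPA_GET_VRING_NUM).
This is not part of the patch; the device node path and the candidate
size 256 are illustrative assumptions.

/*
 * Sketch: query the global maximum (with this patch, the minimum across
 * all vqs), then probe a per-vq size by letting VHOST_SET_VRING_NUM
 * accept or reject it.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	struct vhost_vring_state state = { .index = 0, .num = 256 };
	uint16_t num;
	int fd;

	fd = open("/dev/vhost-vdpa-0", O_RDWR);	/* illustrative path */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Take ownership of the device before issuing vring ioctls. */
	if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0)
		perror("VHOST_SET_OWNER");

	/* Global maximum as reported by VHOST_VDPA_GET_VRING_NUM. */
	if (ioctl(fd, VHOST_VDPA_GET_VRING_NUM, &num) == 0)
		printf("global max queue size: %u\n", num);

	/* Probe vq 0 with a candidate size; an error means "unsupported". */
	if (ioctl(fd, VHOST_SET_VRING_NUM, &state) < 0)
		perror("vq 0 rejected size 256");

	return 0;
}

Whether VHOST_SET_VRING_NUM actually rejects an oversized request
depends on the parent driver, so treat this only as an illustration of
the probing idea, not a guaranteed interface contract.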