References: <20220616132725.50599-1-elic@nvidia.com> <20220616132725.50599-3-elic@nvidia.com>
In-Reply-To: <20220616132725.50599-3-elic@nvidia.com>
From: Jason Wang
Date: Mon, 20 Jun 2022 16:47:03 +0800
Subject: Re: [PATCH RFC 2/3] vdpa/mlx5: Support different address spaces for control and data
To: Eli Cohen
Cc: eperezma, mst, virtualization, linux-kernel, Si-Wei Liu, Parav Pandit

On Thu, Jun 16, 2022 at 9:27 PM Eli Cohen wrote:
>
> Partition virtqueues to two different address spaces: oce for control

Typo, should be "one"

> virtqueue which is implemented in software, and one for data virtqueus.

And should be "virtqueues".

Other than this.

Acked-by: Jason Wang

>
> Signed-off-by: Eli Cohen
> ---
>  drivers/vdpa/mlx5/core/mlx5_vdpa.h |  11 ++++
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 101 +++++++++++++++++++++++++----
>  2 files changed, 101 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> index 44104093163b..6af9fdbb86b7 100644
> --- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> +++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> @@ -70,6 +70,16 @@ struct mlx5_vdpa_wq_ent {
>          struct mlx5_vdpa_dev *mvdev;
>  };
>
> +enum {
> +        MLX5_VDPA_DATAVQ_GROUP,
> +        MLX5_VDPA_CVQ_GROUP,
> +        MLX5_VDPA_NUMVQ_GROUPS
> +};
> +
> +enum {
> +        MLX5_VDPA_NUM_AS = MLX5_VDPA_NUMVQ_GROUPS
> +};
> +
>  struct mlx5_vdpa_dev {
>          struct vdpa_device vdev;
>          struct mlx5_core_dev *mdev;
> @@ -85,6 +95,7 @@ struct mlx5_vdpa_dev {
>          struct mlx5_vdpa_mr mr;
>          struct mlx5_control_vq cvq;
>          struct workqueue_struct *wq;
> +        unsigned int group2asid[MLX5_VDPA_NUMVQ_GROUPS];
>  };
>
>  int mlx5_vdpa_alloc_pd(struct mlx5_vdpa_dev *dev, u32 *pdn, u16 uid);
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index ea4bc8a0cd25..34bd81cb697c 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2125,9 +2125,14 @@ static u32 mlx5_vdpa_get_vq_align(struct vdpa_device *vdev)
>          return PAGE_SIZE;
>  }
>
> -static u32 mlx5_vdpa_get_vq_group(struct vdpa_device *vdpa, u16 idx)
> +static u32 mlx5_vdpa_get_vq_group(struct vdpa_device *vdev, u16 idx)
>  {
> -        return 0;
> +        struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> +
> +        if (is_ctrl_vq_idx(mvdev, idx))
> +                return MLX5_VDPA_CVQ_GROUP;
> +
> +        return MLX5_VDPA_DATAVQ_GROUP;
>  }
>
>  enum { MLX5_VIRTIO_NET_F_GUEST_CSUM = 1 << 9,
> @@ -2541,6 +2546,15 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
>          up_write(&ndev->reslock);
>  }
>
> +static void init_group_to_asid_map(struct mlx5_vdpa_dev *mvdev)
> +{
> +        int i;
> +
> +        /* default mapping all groups are mapped to asid 0 */
> +        for (i = 0; i < MLX5_VDPA_NUMVQ_GROUPS; i++)
> +                mvdev->group2asid[i] = 0;
> +}
> +
>  static int mlx5_vdpa_reset(struct vdpa_device *vdev)
>  {
>          struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> @@ -2559,7 +2573,9 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
>          ndev->mvdev.cvq.completed_desc = 0;
>          memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
>          ndev->mvdev.actual_features = 0;
> +        init_group_to_asid_map(mvdev);
>          ++mvdev->generation;
> +
>          if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
>                  if (mlx5_vdpa_create_mr(mvdev, NULL))
>                          mlx5_vdpa_warn(mvdev, "create MR failed\n");
> @@ -2597,26 +2613,76 @@ static u32 mlx5_vdpa_get_generation(struct vdpa_device *vdev)
>          return mvdev->generation;
>  }
>
> -static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
> -                             struct vhost_iotlb *iotlb)
> +static u32 get_group(struct mlx5_vdpa_dev *mvdev, unsigned int asid)
> +{
> +        u32 group;
> +
> +        for (group = 0; group < MLX5_VDPA_NUMVQ_GROUPS; group++) {
> +                if (mvdev->group2asid[group] == asid)
> +                        return group;
> +        }
> +        return -EINVAL;
> +}
> +
> +static int set_map_control(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +{
> +        u64 start = 0ULL, last = 0ULL - 1;
> +        struct vhost_iotlb_map *map;
> +        int err = 0;
> +
> +        spin_lock(&mvdev->cvq.iommu_lock);
> +        vhost_iotlb_reset(mvdev->cvq.iotlb);
> +
> +        for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
> +             map = vhost_iotlb_itree_next(map, start, last)) {
> +                err = vhost_iotlb_add_range(mvdev->cvq.iotlb, map->start,
> +                                            map->last, map->addr, map->perm);
> +                if (err)
> +                        goto out;
> +        }
> +
> +out:
> +        spin_unlock(&mvdev->cvq.iommu_lock);
> +        return err;
> +}
> +
> +static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
>  {
> -        struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> -        struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
>          bool change_map;
>          int err;
>
> -        down_write(&ndev->reslock);
> -
>          err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
>          if (err) {
>                  mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
> -                goto err;
> +                return err;
>          }
>
>          if (change_map)
>                  err = mlx5_vdpa_change_map(mvdev, iotlb);
>
> -err:
> +        return err;
> +}
> +
> +static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
> +                             struct vhost_iotlb *iotlb)
> +{
> +        struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> +        struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
> +        u32 group;
> +        int err;
> +
> +        down_write(&ndev->reslock);
> +        group = get_group(mvdev, asid);
> +        switch (group) {
> +        case MLX5_VDPA_DATAVQ_GROUP:
> +                err = set_map_data(mvdev, iotlb);
> +                break;
> +        case MLX5_VDPA_CVQ_GROUP:
> +                err = set_map_control(mvdev, iotlb);
> +                break;
> +        default:
> +                err = -EINVAL;
> +        }
>          up_write(&ndev->reslock);
>          return err;
>  }
> @@ -2796,6 +2862,18 @@ static int mlx5_vdpa_suspend(struct vdpa_device *vdev, bool suspend)
>          return 0;
>  }
>
> +static int mlx5_set_group_asid(struct vdpa_device *vdev, u32 group,
> +                               unsigned int asid)
> +{
> +        struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> +
> +        if (group >= MLX5_VDPA_NUMVQ_GROUPS)
> +                return -EINVAL;
> +
> +        mvdev->group2asid[group] = asid;
> +        return 0;
> +}
> +
>  static const struct vdpa_config_ops mlx5_vdpa_ops = {
>          .set_vq_address = mlx5_vdpa_set_vq_address,
>          .set_vq_num = mlx5_vdpa_set_vq_num,
> @@ -2825,6 +2903,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
>          .set_config = mlx5_vdpa_set_config,
>          .get_generation = mlx5_vdpa_get_generation,
>          .set_map = mlx5_vdpa_set_map,
> +        .set_group_asid = mlx5_set_group_asid,
>          .free = mlx5_vdpa_free,
>          .suspend = mlx5_vdpa_suspend,
>  };
> @@ -3047,7 +3126,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>          }
>
>          ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
> -                                 1, 1, name, false);
> +                                 MLX5_VDPA_NUMVQ_GROUPS, MLX5_VDPA_NUM_AS, name, false);
>          if (IS_ERR(ndev))
>                  return PTR_ERR(ndev);
>
> --
> 2.35.1
>
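
[Editor's note, not part of the thread] For readers new to the virtqueue-group/ASID split: once the control VQ sits in its own group, a vhost-vdpa client can bind that group to a separate address space. Below is a minimal userspace sketch, assuming the vhost-vdpa group/ASID ioctls (VHOST_VDPA_GET_VRING_GROUP, VHOST_VDPA_SET_GROUP_ASID) from the companion uAPI work; the device node path and the control-VQ index used here are illustrative assumptions only, not something this patch defines.

/*
 * Sketch: move the control-VQ group into ASID 1, leaving the data VQs
 * in the default ASID 0. Device path and VQ index are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
        struct vhost_vring_state s;
        int fd = open("/dev/vhost-vdpa-0", O_RDWR);     /* hypothetical node */

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Ask which group the control virtqueue (index 2 here) belongs to. */
        s.index = 2;
        if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &s)) {
                perror("VHOST_VDPA_GET_VRING_GROUP");
                return 1;
        }

        /* Bind that group to address space 1. */
        s.index = s.num;        /* group id returned above */
        s.num = 1;              /* target ASID */
        if (ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &s)) {
                perror("VHOST_VDPA_SET_GROUP_ASID");
                return 1;
        }

        return 0;
}

With such a binding in place, mappings installed for ASID 1 would be routed by mlx5_vdpa_set_map() to set_map_control() and land only in the control VQ's software IOTLB, while ASID 0 mappings keep going through set_map_data() and mlx5_vdpa_handle_set_map() for the data virtqueues.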