Date: Thu, 4 Feb 2021 09:06:21 +0200
From: Eli Cohen
To: Si-Wei Liu
CC: Si-Wei Liu
Subject: Re: [PATCH] vdpa/mlx5: Restore the hardware used index after change map
Message-ID: <20210204070621.GB236663@mtl-vdi-166.wap.labs.mlnx>
References: <20210202142901.7131-1-elic@nvidia.com> <20210203064812.GA33072@mtl-vdi-166.wap.labs.mlnx>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.9.5 (bf161cf53efb) (2018-04-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 03, 2021 at 12:33:26PM -0800, Si-Wei Liu wrote:
> On Tue, Feb 2, 2021 at 10:48 PM Eli Cohen wrote:
> >
> > On Tue, Feb 02, 2021 at 09:14:02AM -0800, Si-Wei Liu wrote:
> > > On Tue, Feb 2, 2021 at 6:34 AM Eli Cohen wrote:
> > > >
> > > > When a change of memory map occurs, the hardware resources are destroyed
> > > > and then re-created with the new memory map. In such a case, we need
> > > > to restore the hardware available and used indices. The driver failed to
> > > > restore the used index, which is added here.
> > > >
> > > > Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> > > > Signed-off-by: Eli Cohen
> > > > ---
> > > > This patch is being sent again as a single patch that fixes hot memory
> > > > addition to a qemu process.
> > > >
> > > >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > >  1 file changed, 7 insertions(+)
> > > >
> > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > index 88dde3455bfd..839f57c64a6f 100644
> > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > @@ -87,6 +87,7 @@ struct mlx5_vq_restore_info {
> > > >  	u64 device_addr;
> > > >  	u64 driver_addr;
> > > >  	u16 avail_index;
> > > > +	u16 used_index;
> > > >  	bool ready;
> > > >  	struct vdpa_callback cb;
> > > >  	bool restore;
> > > > @@ -121,6 +122,7 @@ struct mlx5_vdpa_virtqueue {
> > > >  	u32 virtq_id;
> > > >  	struct mlx5_vdpa_net *ndev;
> > > >  	u16 avail_idx;
> > > > +	u16 used_idx;
> > > >  	int fw_state;
> > > >
> > > >  	/* keep last in the struct */
> > > > @@ -804,6 +806,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
> > > >
> > > >  	obj_context = MLX5_ADDR_OF(create_virtio_net_q_in, in, obj_context);
> > > >  	MLX5_SET(virtio_net_q_object, obj_context, hw_available_index, mvq->avail_idx);
> > > > +	MLX5_SET(virtio_net_q_object, obj_context, hw_used_index, mvq->used_idx);
> > >
> > > The saved indexes will apply to the new virtqueue object whenever it
> > > is created. In the virtio spec, these indexes reset back to zero when
> > > the virtio device is reset. But I don't see how that's done today. IOW,
> > > I don't see where avail_idx and used_idx get cleared from the mvq on
> > > device reset via set_status().
> > >
> >
> > Right, but this is not strictly related to this patch. I will post
> > another patch to fix this.
>
> Better to post these two patches as a series. Otherwise it may cause a VM
> reboot problem, since that is where the device gets reset. The avail_index
> is not a problem, as the correct value will be written to it by the driver
> right after, but the used_idx introduced by this patch is supplied by the
> device, hence this patch alone would introduce a regression.
>

OK, will do.
> >
> > BTW, can you describe a scenario that would cause a reset (through
> > calling set_status()) that happens after the VQ has been used?
>
> You can try rebooting the guest, that'll be the easy way to test.
>
> Thanks!
> -Siwei
>
> > >
> > > -Siwei
> > >
> > > >
> > > >  	MLX5_SET(virtio_net_q_object, obj_context, queue_feature_bit_mask_12_3,
> > > >  		 get_features_12_3(ndev->mvdev.actual_features));
> > > >  	vq_ctx = MLX5_ADDR_OF(virtio_net_q_object, obj_context, virtio_q_context);
> > > > @@ -1022,6 +1025,7 @@ static int connect_qps(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
> > > >  struct mlx5_virtq_attr {
> > > >  	u8 state;
> > > >  	u16 available_index;
> > > > +	u16 used_index;
> > > >  };
> > > >
> > > >  static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq,
> > > > @@ -1052,6 +1056,7 @@ static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueu
> > > >  	memset(attr, 0, sizeof(*attr));
> > > >  	attr->state = MLX5_GET(virtio_net_q_object, obj_context, state);
> > > >  	attr->available_index = MLX5_GET(virtio_net_q_object, obj_context, hw_available_index);
> > > > +	attr->used_index = MLX5_GET(virtio_net_q_object, obj_context, hw_used_index);
> > > >  	kfree(out);
> > > >  	return 0;
> > > >
> > > > @@ -1610,6 +1615,7 @@ static int save_channel_info(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqu
> > > >  		return err;
> > > >
> > > >  	ri->avail_index = attr.available_index;
> > > > +	ri->used_index = attr.used_index;
> > > >  	ri->ready = mvq->ready;
> > > >  	ri->num_ent = mvq->num_ent;
> > > >  	ri->desc_addr = mvq->desc_addr;
> > > > @@ -1654,6 +1660,7 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
> > > >  			continue;
> > > >
> > > >  		mvq->avail_idx = ri->avail_index;
> > > > +		mvq->used_idx = ri->used_index;
> > > >  		mvq->ready = ri->ready;
> > > >  		mvq->num_ent = ri->num_ent;
> > > >  		mvq->desc_addr = ri->desc_addr;
> > > > --
> > > > 2.29.2
> > > >