2024-02-28 09:14:27

by brookxu.cn

Subject: [PATCH] nvme: fix reconnection fail due to reserved tag allocation

From: Chunguang Xu <[email protected]>

We hit an issue in a production environment while using NVMe
over RDMA: admin_q failed to reconnect forever even though the
remote target and the network were fine. After digging into it,
we found that it appears to be caused by an ABBA-style deadlock
on tag allocation. In our case the tag was held by a keep-alive
request waiting inside admin_q; since admin_q is quiesced while
the controller is reset, the request is marked idle and will not
be processed until the reset succeeds. Because fabrics_q shares
its tagset with admin_q, reconnecting to the remote target needs
a tag for the connect command, but the only reserved tag is held
by the keep-alive command waiting inside admin_q. As a result,
we fail to reconnect admin_q forever.

To work around this issue, I think we should not retry the
keep-alive request while the controller is reconnecting: we have
already stopped keep-alive when resetting the controller and will
start it again once initialization finishes, so it should be safe
to drop it.

Signed-off-by: Chunguang Xu <[email protected]>
---
drivers/nvme/host/core.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0a96362912ce..07ed2b6a75fb 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -371,6 +371,8 @@ enum nvme_disposition {
 
 static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
 {
+	struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
+
 	if (likely(nvme_req(req)->status == 0))
 		return COMPLETE;
 
@@ -382,6 +384,12 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
 	    nvme_req(req)->retries >= nvme_max_retries)
 		return COMPLETE;
 
+	if (nvme_ctrl_state(ctrl) != NVME_CTRL_LIVE) {
+		if (nvme_req(req)->cmd->common.opcode ==
+		    nvme_admin_keep_alive)
+			return COMPLETE;
+	}
+
 	if (req->cmd_flags & REQ_NVME_MPATH) {
 		if (nvme_is_path_error(nvme_req(req)->status) ||
 		    blk_queue_dying(req->q))
@@ -1296,8 +1304,7 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
 	ctrl->ka_last_check_time = jiffies;
 	ctrl->comp_seen = false;
 	spin_lock_irqsave(&ctrl->lock, flags);
-	if (ctrl->state == NVME_CTRL_LIVE ||
-	    ctrl->state == NVME_CTRL_CONNECTING)
+	if (ctrl->state == NVME_CTRL_LIVE)
 		startka = true;
 	spin_unlock_irqrestore(&ctrl->lock, flags);
 	if (startka)
--
2.25.1
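
For readers mapping the description above onto the code: the hang is
reserved-tag starvation on the tag set shared by admin_q and fabrics_q.
A rough, simplified sketch of the two allocation sites involved is
below; it is not the literal driver code path (in particular, the
connect command really goes through the fabrics connect helpers, and
connect_cmd here is only a placeholder):

/*
 * Both queues are created from the same admin tag set, which after
 * commit ed01fee283a0 reserves only a single tag.
 */

/* Keep-alive path (nvme_keep_alive_work): takes the one reserved tag,
 * then the request sits in the quiesced admin_q until the controller
 * reset/reconnect finishes.
 */
rq = blk_mq_alloc_request(ctrl->admin_q, nvme_req_op(&ctrl->ka_cmd),
			  BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);

/* Reconnect path (sketch): the fabrics connect command on
 * ctrl->fabrics_q also needs a reserved tag from the same set.  With
 * the only reserved tag pinned by the idle keep-alive request, the
 * connect command can never be allocated, so the reconnect that would
 * eventually free the keep-alive tag never completes.
 */
rq = blk_mq_alloc_request(ctrl->fabrics_q, nvme_req_op(&connect_cmd),
			  BLK_MQ_REQ_RESERVED);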



2024-03-07 09:36:37

by Sagi Grimberg

Subject: Re: [PATCH] nvme: fix reconnection fail due to reserved tag allocation



On 28/02/2024 11:14, brookxu.cn wrote:
> From: Chunguang Xu <[email protected]>
>
> We hit an issue in a production environment while using NVMe
> over RDMA: admin_q failed to reconnect forever even though the
> remote target and the network were fine. After digging into it,
> we found that it appears to be caused by an ABBA-style deadlock
> on tag allocation. In our case the tag was held by a keep-alive
> request waiting inside admin_q; since admin_q is quiesced while
> the controller is reset, the request is marked idle and will not
> be processed until the reset succeeds. Because fabrics_q shares
> its tagset with admin_q, reconnecting to the remote target needs
> a tag for the connect command, but the only reserved tag is held
> by the keep-alive command waiting inside admin_q. As a result,
> we fail to reconnect admin_q forever.
>
> To work around this issue, I think we should not retry the
> keep-alive request while the controller is reconnecting: we have
> already stopped keep-alive when resetting the controller and will
> start it again once initialization finishes, so it should be safe
> to drop it.

This is the wrong fix.
First we should note that this is a regression caused by:
ed01fee283a0 ("nvme-fabrics: only reserve a single tag")

Then, you need to restore reserving two tags for the admin
tagset.

2024-03-07 10:35:22

by Sagi Grimberg

Subject: Re: [PATCH] nvme: fix reconnection fail due to reserved tag allocation



On 07/03/2024 12:32, 许春光 wrote:
> Thanks for the review. It seems we should revert ed01fee283a0;
> it looks like a standalone 'optimization'. If there is no
> objection, I will send another patch.

Not a revert, but a fix with a Fixes tag. Just use
NVMF_ADMIN_RESERVED_TAGS and NVMF_IO_RESERVED_TAGS.


>
> Thanks
>
> Sagi Grimberg <[email protected]> wrote on Thu, 7 Mar 2024 at 17:36:
>>
>>
>> On 28/02/2024 11:14, brookxu.cn wrote:
>>> From: Chunguang Xu <[email protected]>
>>>
>>> We hit an issue in a production environment while using NVMe
>>> over RDMA: admin_q failed to reconnect forever even though the
>>> remote target and the network were fine. After digging into it,
>>> we found that it appears to be caused by an ABBA-style deadlock
>>> on tag allocation. In our case the tag was held by a keep-alive
>>> request waiting inside admin_q; since admin_q is quiesced while
>>> the controller is reset, the request is marked idle and will not
>>> be processed until the reset succeeds. Because fabrics_q shares
>>> its tagset with admin_q, reconnecting to the remote target needs
>>> a tag for the connect command, but the only reserved tag is held
>>> by the keep-alive command waiting inside admin_q. As a result,
>>> we fail to reconnect admin_q forever.
>>>
>>> To work around this issue, I think we should not retry the
>>> keep-alive request while the controller is reconnecting: we have
>>> already stopped keep-alive when resetting the controller and will
>>> start it again once initialization finishes, so it should be safe
>>> to drop it.
>> This is the wrong fix.
>> First we should note that this is a regression caused by:
>> ed01fee283a0 ("nvme-fabrics: only reserve a single tag")
>>
>> Then, you need to restore reserving two tags for the admin
>> tagset.
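
For context, the direction suggested here would look roughly like the
sketch below: split the reservation into one constant for the admin tag
set (connect plus keep-alive) and one for the I/O tag sets (connect
only). The macro names come from Sagi's reply; their values, comments
and placement are assumptions for illustration, not the final merged
patch:

/* drivers/nvme/host/fabrics.h (assumed placement) */

/* the admin queue needs a reserved tag for the connect command and
 * another for keep-alive, which may still be outstanding when a
 * reconnect starts
 */
#define NVMF_ADMIN_RESERVED_TAGS	2

/* I/O queues only need a reserved tag for the connect command */
#define NVMF_IO_RESERVED_TAGS		1

/* in nvme_alloc_admin_tag_set() (sketch) */
	set->reserved_tags = NVMF_ADMIN_RESERVED_TAGS;

/* in the fabrics branch of nvme_alloc_io_tag_set() (sketch) */
	set->reserved_tags = NVMF_IO_RESERVED_TAGS;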


2024-03-07 10:39:54

by brookxu.cn

Subject: Re: [PATCH] nvme: fix reconnection fail due to reserved tag allocation

Thanks for the review. It seems we should revert ed01fee283a0;
it looks like a standalone 'optimization'. If there is no
objection, I will send another patch.

Thanks

Sagi Grimberg <[email protected]> wrote on Thu, 7 Mar 2024 at 17:36:
>
>
>
> On 28/02/2024 11:14, brookxu.cn wrote:
> > From: Chunguang Xu <[email protected]>
> >
> > We hit an issue in a production environment while using NVMe
> > over RDMA: admin_q failed to reconnect forever even though the
> > remote target and the network were fine. After digging into it,
> > we found that it appears to be caused by an ABBA-style deadlock
> > on tag allocation. In our case the tag was held by a keep-alive
> > request waiting inside admin_q; since admin_q is quiesced while
> > the controller is reset, the request is marked idle and will not
> > be processed until the reset succeeds. Because fabrics_q shares
> > its tagset with admin_q, reconnecting to the remote target needs
> > a tag for the connect command, but the only reserved tag is held
> > by the keep-alive command waiting inside admin_q. As a result,
> > we fail to reconnect admin_q forever.
> >
> > To work around this issue, I think we should not retry the
> > keep-alive request while the controller is reconnecting: we have
> > already stopped keep-alive when resetting the controller and will
> > start it again once initialization finishes, so it should be safe
> > to drop it.
>
> This is the wrong fix.
> First we should note that this is a regression caused by:
> ed01fee283a0 ("nvme-fabrics: only reserve a single tag")
>
> Then, you need to restore reserving two tags for the admin
> tagset.