2018-11-30 00:00:22

by Jaesoo Lee

Subject: [PATCH] nvme-rdma: complete requests from ->timeout

After f6e7d48 (block: remove BLK_EH_HANDLED), the low-level device driver
is responsible for completing the timed-out request, and a series of changes
was subsequently submitted so that the various LLDDs complete requests from
->timeout. However, adding the completion code to the NVMe driver was skipped
based on the assumption made in the NVMe LLDD change below.

Commit message of db8c48e (nvme: return BLK_EH_DONE from ->timeout):
NVMe always completes the request before returning from ->timeout, either
by polling for it, or by disabling the controller. Return BLK_EH_DONE so
that the block layer doesn't even try to complete it again.

This does not hold, at least for the NVMe RDMA host driver. An example
scenario is when the RDMA connection is gone while the controller is
being deleted. In this case, the nvmf_reg_write32() sending the shutdown
admin command from the delete_work could hang forever if the command is
not completed by the timeout handler.

Stack trace of hang looks like:
kworker/u66:2 D 0 21098 2 0x80000080
Workqueue: nvme-delete-wq nvme_delete_ctrl_work
Call Trace:
__schedule+0x2ab/0x880
schedule+0x36/0x80
schedule_timeout+0x161/0x300
? __next_timer_interrupt+0xe0/0xe0
io_schedule_timeout+0x1e/0x50
wait_for_completion_io_timeout+0x130/0x1a0
? wake_up_q+0x80/0x80
blk_execute_rq+0x6e/0xa0
__nvme_submit_sync_cmd+0x6e/0xe0
nvmf_reg_write32+0x6c/0xc0 [nvme_fabrics]
nvme_shutdown_ctrl+0x56/0x110
nvme_rdma_shutdown_ctrl+0xf8/0x100 [nvme_rdma]
nvme_rdma_delete_ctrl+0x1a/0x20 [nvme_rdma]
nvme_delete_ctrl_work+0x66/0x90
process_one_work+0x179/0x390
worker_thread+0x1da/0x3e0
kthread+0x105/0x140
? max_active_store+0x80/0x80
? kthread_bind+0x20/0x20
ret_from_fork+0x35/0x40

Signed-off-by: Jaesoo Lee <[email protected]>
---
drivers/nvme/host/rdma.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index d181caf..25319b7 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1688,6 +1688,7 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
 	/* fail with DNR on cmd timeout */
 	nvme_req(rq)->status = NVME_SC_ABORT_REQ | NVME_SC_DNR;
 
+	blk_mq_complete_request(rq);
 	return BLK_EH_DONE;
 }
 
--
1.9.1



2018-11-30 01:31:17

by Sagi Grimberg

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout


> This does not hold, at least for the NVMe RDMA host driver. An example
> scenario is when the RDMA connection is gone while the controller is
> being deleted. In this case, the nvmf_reg_write32() sending the shutdown
> admin command from the delete_work could hang forever if the command is
> not completed by the timeout handler.

If the queue is gone, this means that the queue has already been flushed
and any commands that were in flight have completed with a flush error
completion...

Can you describe the scenario that caused this hang? When did the queue
become "gone", and when did the shutdown command execute?

2018-11-30 01:56:58

by Jaesoo Lee

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

Not the queue, but the RDMA connections.

Let me describe the scenario.

1. connected to an nvme-rdma target with 500 namespaces
: this makes nvme_remove_namespaces() take a long time to complete and
opens the window for this bug
2. the host takes the code path below for nvme_delete_ctrl_work and sends
a normal shutdown in nvme_shutdown_ctrl()
- nvme_stop_ctrl
- nvme_stop_keep_alive --> keep-alive stopped
- nvme_remove_namespaces --> takes too long, over 10~15s
- nvme_rdma_shutdown_ctrl
- nvme_rdma_teardown_io_queues
- nvme_shutdown_ctrl
- nvmf_reg_write32
- __nvme_submit_sync_cmd --> nvme_delete_ctrl_work blocked here
- nvme_rdma_teardown_admin_queue
- nvme_uninit_ctrl
- nvme_put_ctrl
3. the RDMA connection is disconnected by the nvme-rdma target
: in our case, this is triggered by the target-side timeout mechanism
: I did not try it, but I think this could also happen if we lost the RoCE link
4. the shutdown notification command times out and the work gets stuck,
leaving the controller in the NVME_CTRL_DELETING state
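
For reference, the normal shutdown in step 2 is not an MMIO register write on
fabrics; it becomes an admin command on the same admin queue. A rough sketch
(simplified from memory, not the exact kernel code):

/*
 * Simplified sketch of nvme_shutdown_ctrl() (not the exact code): on fabrics,
 * ->reg_write32 is nvmf_reg_write32(), which sends a Property Set admin
 * command through __nvme_submit_sync_cmd() and waits for it -- the call that
 * is blocked in the path above.
 */
int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl)
{
	ctrl->ctrl_config &= ~NVME_CC_SHN_MASK;
	ctrl->ctrl_config |= NVME_CC_SHN_NORMAL;

	if (ctrl->ops->reg_write32(ctrl, NVME_REG_CC, ctrl->ctrl_config))
		return -EIO;

	/* the real function then polls NVME_REG_CSTS until shutdown completes */
	return 0;
}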

Thanks,

Jaesoo Lee.


On Thu, Nov 29, 2018 at 5:30 PM Sagi Grimberg <[email protected]> wrote:
>
>
> > This does not hold, at least for the NVMe RDMA host driver. An example
> > scenario is when the RDMA connection is gone while the controller is
> > being deleted. In this case, the nvmf_reg_write32() sending the shutdown
> > admin command from the delete_work could hang forever if the command is
> > not completed by the timeout handler.
>
> If the queue is gone, this means that the queue has already been flushed
> and any commands that were in flight have completed with a flush error
> completion...
>
> Can you describe the scenario that caused this hang? When did the queue
> become "gone", and when did the shutdown command execute?

2018-12-07 00:19:47

by Jaesoo Lee

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

Could you please take a look at this bug and review the code?

We are seeing more instances of this bug and found that reconnect_work
could hang as well, as can be seen from the stack trace below.

Workqueue: nvme-wq nvme_rdma_reconnect_ctrl_work [nvme_rdma]
Call Trace:
__schedule+0x2ab/0x880
schedule+0x36/0x80
schedule_timeout+0x161/0x300
? __next_timer_interrupt+0xe0/0xe0
io_schedule_timeout+0x1e/0x50
wait_for_completion_io_timeout+0x130/0x1a0
? wake_up_q+0x80/0x80
blk_execute_rq+0x6e/0xa0
__nvme_submit_sync_cmd+0x6e/0xe0
nvmf_connect_admin_queue+0x128/0x190 [nvme_fabrics]
? wait_for_completion_interruptible_timeout+0x157/0x1b0
nvme_rdma_start_queue+0x5e/0x90 [nvme_rdma]
nvme_rdma_setup_ctrl+0x1b4/0x730 [nvme_rdma]
nvme_rdma_reconnect_ctrl_work+0x27/0x70 [nvme_rdma]
process_one_work+0x179/0x390
worker_thread+0x4f/0x3e0
kthread+0x105/0x140
? max_active_store+0x80/0x80
? kthread_bind+0x20/0x20

This bug is reproduced by setting the MTU of the RoCE interface to '568'
for testing while running I/O traffic.

Thanks,

Jaesoo Lee.

On Thu, Nov 29, 2018 at 5:54 PM Jaesoo Lee <[email protected]> wrote:
>
> Not the queue, but the RDMA connections.
>
> Let me describe the scenario.
>
> 1. connected to an nvme-rdma target with 500 namespaces
> : this makes nvme_remove_namespaces() take a long time to complete and
> opens the window for this bug
> 2. the host takes the code path below for nvme_delete_ctrl_work and sends
> a normal shutdown in nvme_shutdown_ctrl()
> - nvme_stop_ctrl
> - nvme_stop_keep_alive --> keep-alive stopped
> - nvme_remove_namespaces --> takes too long, over 10~15s
> - nvme_rdma_shutdown_ctrl
> - nvme_rdma_teardown_io_queues
> - nvme_shutdown_ctrl
> - nvmf_reg_write32
> - __nvme_submit_sync_cmd --> nvme_delete_ctrl_work blocked here
> - nvme_rdma_teardown_admin_queue
> - nvme_uninit_ctrl
> - nvme_put_ctrl
> 3. the RDMA connection is disconnected by the nvme-rdma target
> : in our case, this is triggered by the target-side timeout mechanism
> : I did not try it, but I think this could also happen if we lost the RoCE link
> 4. the shutdown notification command times out and the work gets stuck,
> leaving the controller in the NVME_CTRL_DELETING state
>
> Thanks,
>
> Jaesoo Lee.
>
>
> On Thu, Nov 29, 2018 at 5:30 PM Sagi Grimberg <[email protected]> wrote:
> >
> >
> > > This does not hold, at least for the NVMe RDMA host driver. An example
> > > scenario is when the RDMA connection is gone while the controller is
> > > being deleted. In this case, the nvmf_reg_write32() sending the shutdown
> > > admin command from the delete_work could hang forever if the command is
> > > not completed by the timeout handler.
> >
> > If the queue is gone, this means that the queue has already been flushed
> > and any commands that were in flight have completed with a flush error
> > completion...
> >
> > Can you describe the scenario that caused this hang? When did the queue
> > become "gone", and when did the shutdown command execute?

2018-12-07 20:06:32

by Sagi Grimberg

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout


> Could you please take a look at this bug and review the code?
>
> We are seeing more instances of this bug and found that reconnect_work
> could hang as well, as can be seen from the stack trace below.
>
> Workqueue: nvme-wq nvme_rdma_reconnect_ctrl_work [nvme_rdma]
> Call Trace:
> __schedule+0x2ab/0x880
> schedule+0x36/0x80
> schedule_timeout+0x161/0x300
> ? __next_timer_interrupt+0xe0/0xe0
> io_schedule_timeout+0x1e/0x50
> wait_for_completion_io_timeout+0x130/0x1a0
> ? wake_up_q+0x80/0x80
> blk_execute_rq+0x6e/0xa0
> __nvme_submit_sync_cmd+0x6e/0xe0
> nvmf_connect_admin_queue+0x128/0x190 [nvme_fabrics]
> ? wait_for_completion_interruptible_timeout+0x157/0x1b0
> nvme_rdma_start_queue+0x5e/0x90 [nvme_rdma]
> nvme_rdma_setup_ctrl+0x1b4/0x730 [nvme_rdma]
> nvme_rdma_reconnect_ctrl_work+0x27/0x70 [nvme_rdma]
> process_one_work+0x179/0x390
> worker_thread+0x4f/0x3e0
> kthread+0x105/0x140
> ? max_active_store+0x80/0x80
> ? kthread_bind+0x20/0x20
>
> This bug is reproduced by setting the MTU of the RoCE interface to '568'
> for testing while running I/O traffic.

I think that with the latest changes from Keith we can no longer rely
on blk-mq to barrier racing completions. We will probably need
to barrier ourselves in nvme-rdma...

2018-12-08 02:05:47

by Keith Busch

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

On Fri, Dec 07, 2018 at 12:05:37PM -0800, Sagi Grimberg wrote:
>
> > Could you please take a look at this bug and review the code?
> >
> > We are seeing more instances of this bug and found that reconnect_work
> > could hang as well, as can be seen from the stack trace below.
> >
> > Workqueue: nvme-wq nvme_rdma_reconnect_ctrl_work [nvme_rdma]
> > Call Trace:
> > __schedule+0x2ab/0x880
> > schedule+0x36/0x80
> > schedule_timeout+0x161/0x300
> > ? __next_timer_interrupt+0xe0/0xe0
> > io_schedule_timeout+0x1e/0x50
> > wait_for_completion_io_timeout+0x130/0x1a0
> > ? wake_up_q+0x80/0x80
> > blk_execute_rq+0x6e/0xa0
> > __nvme_submit_sync_cmd+0x6e/0xe0
> > nvmf_connect_admin_queue+0x128/0x190 [nvme_fabrics]
> > ? wait_for_completion_interruptible_timeout+0x157/0x1b0
> > nvme_rdma_start_queue+0x5e/0x90 [nvme_rdma]
> > nvme_rdma_setup_ctrl+0x1b4/0x730 [nvme_rdma]
> > nvme_rdma_reconnect_ctrl_work+0x27/0x70 [nvme_rdma]
> > process_one_work+0x179/0x390
> > worker_thread+0x4f/0x3e0
> > kthread+0x105/0x140
> > ? max_active_store+0x80/0x80
> > ? kthread_bind+0x20/0x20
> >
> > This bug is reproduced by setting the MTU of the RoCE interface to '568'
> > for testing while running I/O traffic.
>
> I think that with the latest changes from Keith we can no longer rely
> on blk-mq to barrier racing completions. We will probably need
> to barrier ourselves in nvme-rdma...

You really need to do that anyway. If you were relying on blk-mq to save
you from double completions by ending a request in the nvme driver while
the lower half can still complete the same one, the only thing preventing
data corruption is the probability the request wasn't reallocated for a
new command.
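
A rough sketch of what barriering yourselves could look like (hypothetical
only; the demo_* struct/field/helper names are made up and this is not an
actual patch): give the driver's per-request context an atomic flag so that
only one of the racing paths ever completes the block request:

#include <linux/atomic.h>
#include <linux/blk-mq.h>

/* Hypothetical per-request driver context, for illustration only. */
struct demo_rdma_request {
	struct request	*rq;
	atomic_t	finished;	/* 0 = nobody has completed rq yet */
};

/* Called from both the timeout handler and the RDMA completion path. */
static void demo_complete_once(struct demo_rdma_request *req)
{
	/* Whichever caller flips 0 -> 1 first owns the completion; the
	 * loser must not touch req->rq again, since the request may
	 * already have been reallocated for a new command. */
	if (atomic_cmpxchg(&req->finished, 0, 1) == 0)
		blk_mq_complete_request(req->rq);
}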

2018-12-08 06:30:31

by Jaesoo Lee

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

Now, I see that my patch is not safe and can cause double completions.
However, I am having a hard time finding a good solution to barrier the
racing completions.

Could you suggest where the fix should go and what it should look like?
We can provide more details on reproducing this issue if that helps.

On Fri, Dec 7, 2018 at 6:04 PM Keith Busch <[email protected]> wrote:
>
> On Fri, Dec 07, 2018 at 12:05:37PM -0800, Sagi Grimberg wrote:
> >
> > > Could you please take a look at this bug and review the code?
> > >
> > > We are seeing more instances of this bug and found that reconnect_work
> > > could hang as well, as can be seen from the stack trace below.
> > >
> > > Workqueue: nvme-wq nvme_rdma_reconnect_ctrl_work [nvme_rdma]
> > > Call Trace:
> > > __schedule+0x2ab/0x880
> > > schedule+0x36/0x80
> > > schedule_timeout+0x161/0x300
> > > ? __next_timer_interrupt+0xe0/0xe0
> > > io_schedule_timeout+0x1e/0x50
> > > wait_for_completion_io_timeout+0x130/0x1a0
> > > ? wake_up_q+0x80/0x80
> > > blk_execute_rq+0x6e/0xa0
> > > __nvme_submit_sync_cmd+0x6e/0xe0
> > > nvmf_connect_admin_queue+0x128/0x190 [nvme_fabrics]
> > > ? wait_for_completion_interruptible_timeout+0x157/0x1b0
> > > nvme_rdma_start_queue+0x5e/0x90 [nvme_rdma]
> > > nvme_rdma_setup_ctrl+0x1b4/0x730 [nvme_rdma]
> > > nvme_rdma_reconnect_ctrl_work+0x27/0x70 [nvme_rdma]
> > > process_one_work+0x179/0x390
> > > worker_thread+0x4f/0x3e0
> > > kthread+0x105/0x140
> > > ? max_active_store+0x80/0x80
> > > ? kthread_bind+0x20/0x20
> > >
> > > This bug is reproduced by setting the MTU of the RoCE interface to '568'
> > > for testing while running I/O traffic.
> >
> > I think that with the latest changes from Keith we can no longer rely
> > on blk-mq to barrier racing completions. We will probably need
> > to barrier ourselves in nvme-rdma...
>
> You really need to do that anyway. If you were relying on blk-mq to save
> you from double completions by ending a request in the nvme driver while
> the lower half can still complete the same one, the only thing preventing
> data corruption is the probability the request wasn't reallocated for a
> new command.

2018-12-09 14:23:57

by Nitzan Carmi

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

Hi,
We encountered a similar issue.
I think the problem is that error_recovery might not even be queued when
we are in the DELETING state (or the CONNECTING state, for that matter),
because we cannot move from those states to RESETTING.
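
Roughly, the host side looks like this (simplified sketch from memory, not
the exact nvme-rdma code), so in DELETING/CONNECTING nothing ever gets queued
that would cancel the hung command:

static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
{
	/* DELETING/CONNECTING -> RESETTING is not a legal transition, so
	 * this fails and we return without queueing err_work; nothing will
	 * ever cancel the inflight admin command. */
	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
		return;

	queue_work(nvme_wq, &ctrl->err_work);
}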

We prepared some patches which handle completions in case such a scenario
happens (which, in fact, might happen in numerous error flows).

Do they solve your problem?
Nitzan.


On 30/11/2018 03:30, Sagi Grimberg wrote:
>
>> This does not hold, at least for the NVMe RDMA host driver. An example
>> scenario is when the RDMA connection is gone while the controller is
>> being deleted. In this case, the nvmf_reg_write32() sending the shutdown
>> admin command from the delete_work could hang forever if the command is
>> not completed by the timeout handler.
>
> If the queue is gone, this means that the queue has already been flushed
> and any commands that were in flight have completed with a flush error
> completion...
>
> Can you describe the scenario that caused this hang? When did the queue
> become "gone", and when did the shutdown command execute?


Attachments:
0001-nvme-Introduce-nvme_is_aen_req-function.patch (2.28 kB)
0002-nvme-rdma-Handle-completions-if-error_recovery-fails.patch (7.00 kB)

2018-12-11 00:34:09

by Jaesoo Lee

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

It seems that your patch addresses the same bug. I will see if it works
for our failure scenarios.

Why don't you send it upstream?

On Sun, Dec 9, 2018 at 6:22 AM Nitzan Carmi <[email protected]> wrote:
>
> Hi,
> We encountered similar issue.
> I think that the problem is that error_recovery might not even be
> queued, in case we're in DELETING state (or CONNECTING state, for that
> matter), because we cannot move from those states to RESETTING.
>
> We prepared some patches which handle completions in case such scenario
> happens (which, in fact, might happen in numerous error flows).
>
> Does it solve your problem?
> Nitzan.
>
>
> On 30/11/2018 03:30, Sagi Grimberg wrote:
> >
> >> This does not hold, at least for the NVMe RDMA host driver. An example
> >> scenario is when the RDMA connection is gone while the controller is
> >> being deleted. In this case, the nvmf_reg_write32() sending the shutdown
> >> admin command from the delete_work could hang forever if the command is
> >> not completed by the timeout handler.
> >
> > If the queue is gone, this means that the queue has already been flushed
> > and any commands that were in flight have completed with a flush error
> > completion...
> >
> > Can you describe the scenario that caused this hang? When did the queue
> > become "gone", and when did the shutdown command execute?

2018-12-11 09:17:10

by Nitzan Carmi

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

I was just in the middle of sending this upstream when I saw your mail,
and I also thought that it addresses the same bug, although I see a
slightly different call trace than yours.

I would be happy if you could verify that this patch works for you too,
so we can push it upstream.

On 11/12/2018 01:40, Jaesoo Lee wrote:
> It seems that your patch is addressing the same bug. I will see if
> that works for our failure scenarios.
>
> Why don't you make it upstream?
>
> On Sun, Dec 9, 2018 at 6:22 AM Nitzan Carmi <[email protected]> wrote:
>>
>> Hi,
>> We encountered similar issue.
>> I think that the problem is that error_recovery might not even be
>> queued, in case we're in DELETING state (or CONNECTING state, for that
>> matter), because we cannot move from those states to RESETTING.
>>
>> We prepared some patches which handle completions in case such scenario
>> happens (which, in fact, might happen in numerous error flows).
>>
>> Does it solve your problem?
>> Nitzan.
>>
>>
>> On 30/11/2018 03:30, Sagi Grimberg wrote:
>>>
>>>> This does not hold, at least for the NVMe RDMA host driver. An example
>>>> scenario is when the RDMA connection is gone while the controller is
>>>> being deleted. In this case, the nvmf_reg_write32() sending the shutdown
>>>> admin command from the delete_work could hang forever if the command is
>>>> not completed by the timeout handler.
>>>
>>> If the queue is gone, this means that the queue has already been flushed
>>> and any commands that were in flight have completed with a flush error
>>> completion...
>>>
>>> Can you describe the scenario that caused this hang? When did the queue
>>> become "gone", and when did the shutdown command execute?

2018-12-11 23:19:28

by Jaesoo Lee

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

I cannot reproduce the bug with the patch applied; in my failure scenarios,
it seems that completing the request on errors in nvme_rdma_send_done
unblocks __nvme_submit_sync_cmd. Also, I think this is safe from double
completions.

However, it seems that the nvme_rdma_timeout code is still not free from
the double completion problem. So, it looks promising to me if you could
separate out the nvme_rdma_wr_error handling code as a new patch.
On Tue, Dec 11, 2018 at 1:14 AM Nitzan Carmi <[email protected]> wrote:
>
> I was just in the middle of sending this to upstream when I saw your
> mail, and thought too that it addresses the same bug, although I see a
> little different call trace than yours.
>
> I would be happy if you can verify that this patch works for you too,
> and we can push it to upstream.
>
> On 11/12/2018 01:40, Jaesoo Lee wrote:
> > It seems that your patch is addressing the same bug. I will see if
> > that works for our failure scenarios.
> >
> > Why don't you make it upstream?
> >
> > On Sun, Dec 9, 2018 at 6:22 AM Nitzan Carmi <[email protected]> wrote:
> >>
> >> Hi,
> >> We encountered similar issue.
> >> I think that the problem is that error_recovery might not even be
> >> queued, in case we're in DELETING state (or CONNECTING state, for that
> >> matter), because we cannot move from those states to RESETTING.
> >>
> >> We prepared some patches which handle completions in case such scenario
> >> happens (which, in fact, might happen in numerous error flows).
> >>
> >> Does it solve your problem?
> >> Nitzan.
> >>
> >>
> >> On 30/11/2018 03:30, Sagi Grimberg wrote:
> >>>
> >>>> This does not hold, at least for the NVMe RDMA host driver. An example
> >>>> scenario is when the RDMA connection is gone while the controller is
> >>>> being deleted. In this case, the nvmf_reg_write32() sending the shutdown
> >>>> admin command from the delete_work could hang forever if the command is
> >>>> not completed by the timeout handler.
> >>>
> >>> If the queue is gone, this means that the queue has already been flushed
> >>> and any commands that were in flight have completed with a flush error
> >>> completion...
> >>>
> >>> Can you describe the scenario that caused this hang? When did the queue
> >>> become "gone", and when did the shutdown command execute?

2018-12-11 23:40:13

by Sagi Grimberg

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

> I cannot reproduce the bug with the patch applied; in my failure scenarios,
> it seems that completing the request on errors in nvme_rdma_send_done
> unblocks __nvme_submit_sync_cmd. Also, I think this is safe from double
> completions.
>
> However, it seems that the nvme_rdma_timeout code is still not free from
> the double completion problem. So, it looks promising to me if you could
> separate out the nvme_rdma_wr_error handling code as a new patch.

Guys, can you please send proper patches so we can review properly?

2018-12-12 01:33:03

by Jaesoo Lee

Subject: Re: [PATCH] nvme-rdma: complete requests from ->timeout

Please drop this patch. However, I would be happy if this bug could be
fixed as soon as possible.

Nitzan, would you mind sending your patch for review?
On Tue, Dec 11, 2018 at 3:39 PM Sagi Grimberg <[email protected]> wrote:
>
> > I cannot reproduce the bug with the patch applied; in my failure scenarios,
> > it seems that completing the request on errors in nvme_rdma_send_done
> > unblocks __nvme_submit_sync_cmd. Also, I think this is safe from double
> > completions.
> >
> > However, it seems that the nvme_rdma_timeout code is still not free from
> > the double completion problem. So, it looks promising to me if you could
> > separate out the nvme_rdma_wr_error handling code as a new patch.
>
> Guys, can you please send proper patches so we can review properly?