When a stacked block device inserts a request into another block device
using blk_insert_cloned_request, the request's nr_phys_segments field gets
recalculated by a call to blk_recalc_rq_segments in
blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
handle multi-segment discards. For disk types which can handle
multi-segment discards, such as nvme, this results in discard requests which
claim a single segment when they should report several, triggering a warning
in nvme and causing nvme to fail the discard because of the invalid state.
WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
...
nvme_setup_cmd+0x217/0x270 [nvme_core]
nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
__blk_mq_try_issue_directly+0xe7/0x1b0
blk_mq_request_issue_directly+0x41/0x70
? blk_account_io_start+0x40/0x50
dm_mq_queue_rq+0x200/0x3e0
blk_mq_dispatch_rq_list+0x10a/0x7d0
? __sbitmap_queue_get+0x25/0x90
? elv_rb_del+0x1f/0x30
? deadline_remove_request+0x55/0xb0
? dd_dispatch_request+0x181/0x210
__blk_mq_do_dispatch_sched+0x144/0x290
? bio_attempt_discard_merge+0x134/0x1f0
__blk_mq_sched_dispatch_requests+0x129/0x180
blk_mq_sched_dispatch_requests+0x30/0x60
__blk_mq_run_hw_queue+0x47/0xe0
__blk_mq_delay_run_hw_queue+0x15b/0x170
blk_mq_sched_insert_requests+0x68/0xe0
blk_mq_flush_plug_list+0xf0/0x170
blk_finish_plug+0x36/0x50
xlog_cil_committed+0x19f/0x290 [xfs]
xlog_cil_process_committed+0x57/0x80 [xfs]
xlog_state_do_callback+0x1e0/0x2a0 [xfs]
xlog_ioend_work+0x2f/0x80 [xfs]
process_one_work+0x1b6/0x350
worker_thread+0x53/0x3e0
? process_one_work+0x350/0x350
kthread+0x11b/0x140
? __kthread_bind_mask+0x60/0x60
ret_from_fork+0x22/0x30
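For reference, the warning comes from a sanity check in nvme_setup_discard,
which compares the request's claimed discard segment count against the number
of discard bios actually chained on the request. A simplified sketch of that
check (fill_dsm_range() is a hypothetical placeholder; the real driver fills
an nvme_dsm_range array inline):

        unsigned short segments = blk_rq_nr_discard_segments(req);
        unsigned int n = 0;
        struct bio *bio;

        __rq_for_each_bio(bio, req) {
                if (n < segments)
                        fill_dsm_range(req, bio, n);    /* hypothetical helper */
                n++;
        }

        /* with nr_phys_segments miscounted as 1, n ends up > segments */
        if (WARN_ON_ONCE(n != segments))
                return BLK_STS_IOERR;

Since blk_recalc_rq_segments reports a single segment for the cloned request
while it still carries several discard bios, the check trips and the discard
is failed.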
This patch fixes blk_recalc_rq_segments to be aware of devices which can
have multi-segment discards. It calculates the correct discard segment
count by counting the number of bios, as each discard bio is considered its
own segment.
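For context, the segment count a driver expects for a multi-range discard
comes from the recalculated field via blk_rq_nr_discard_segments(), which is
essentially the following helper (shown here for illustration):

        static inline unsigned short blk_rq_nr_discard_segments(struct request *rq)
        {
                return max_t(unsigned short, rq->nr_phys_segments, 1);
        }

So an under-counted nr_phys_segments directly shrinks the number of discard
ranges the driver prepares for, while the bios attached to the cloned request
are unchanged; counting the bios restores the two to agreement.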
Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard bios into a single request")
Signed-off-by: David Jeffery <[email protected]>
Reviewed-by: Ming Lei <[email protected]>
Reviewed-by: Laurence Oberman <[email protected]>
---
V2 explicitly returns 1 instead of falling through in the no-multi case and
handles REQ_OP_SECURE_ERASE like REQ_OP_DISCARD.
block/blk-merge.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..756473295f19 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -383,6 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
switch (bio_op(rq->bio)) {
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
+ if (queue_max_discard_segments(rq->q) > 1) {
+ struct bio *bio = rq->bio;
+
+ for_each_bio(bio)
+ nr_phys_segs++;
+ return nr_phys_segs;
+ }
+ return 1;
case REQ_OP_WRITE_ZEROES:
return 0;
case REQ_OP_WRITE_SAME:
On Thu, Feb 11, 2021 at 10:48 PM David Jeffery <[email protected]> wrote:
>
> When a stacked block device inserts a request into another block device
> using blk_insert_cloned_request, the request's nr_phys_segments field gets
> recalculated by a call to blk_recalc_rq_segments in
> blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
> handle multi-segment discards.
>
> [...]
>
Hello Jens,
Any chance to merge this patch, which fixes a regression caused by
commit 1e739730c5b9?
--
Ming Lei
On 2/11/21 7:38 AM, David Jeffery wrote:
> When a stacked block device inserts a request into another block device
> using blk_insert_cloned_request, the request's nr_phys_segments field gets
> recalculated by a call to blk_recalc_rq_segments in
> blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
> handle multi-segment discards.
>
> [...]
Applied, thanks.
--
Jens Axboe