2021-02-01 16:53:39

by David Jeffery

Subject: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

When a stacked block device inserts a request into another block device
using blk_insert_cloned_request, the request's nr_phys_segments field gets
recalculated by a call to blk_recalc_rq_segments in
blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
handle multi-segment discards. For disk types that can handle
multi-segment discards, like nvme, this results in discard requests that
claim a single segment when they should report several, triggering a warning
in nvme and causing nvme to fail the discard due to the invalid state.
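
The warning fires in nvme_setup_discard(), which builds one discard range
per bio and then cross-checks that count against the request's segment
accounting. A rough sketch of that check (paraphrased, not the exact
upstream source; details vary by kernel version):

        unsigned short segments = blk_rq_nr_discard_segments(req);
        unsigned short n = 0;
        struct bio *bio;

        __rq_for_each_bio(bio, req) {
                /* fill range[n] with the bio's start sector and length */
                n++;
        }

        /* trips when nr_phys_segments disagrees with the bio count */
        if (WARN_ON_ONCE(n != segments))
                return BLK_STS_IOERR;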

WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
...
nvme_setup_cmd+0x217/0x270 [nvme_core]
nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
__blk_mq_try_issue_directly+0xe7/0x1b0
blk_mq_request_issue_directly+0x41/0x70
? blk_account_io_start+0x40/0x50
dm_mq_queue_rq+0x200/0x3e0
blk_mq_dispatch_rq_list+0x10a/0x7d0
? __sbitmap_queue_get+0x25/0x90
? elv_rb_del+0x1f/0x30
? deadline_remove_request+0x55/0xb0
? dd_dispatch_request+0x181/0x210
__blk_mq_do_dispatch_sched+0x144/0x290
? bio_attempt_discard_merge+0x134/0x1f0
__blk_mq_sched_dispatch_requests+0x129/0x180
blk_mq_sched_dispatch_requests+0x30/0x60
__blk_mq_run_hw_queue+0x47/0xe0
__blk_mq_delay_run_hw_queue+0x15b/0x170
blk_mq_sched_insert_requests+0x68/0xe0
blk_mq_flush_plug_list+0xf0/0x170
blk_finish_plug+0x36/0x50
xlog_cil_committed+0x19f/0x290 [xfs]
xlog_cil_process_committed+0x57/0x80 [xfs]
xlog_state_do_callback+0x1e0/0x2a0 [xfs]
xlog_ioend_work+0x2f/0x80 [xfs]
process_one_work+0x1b6/0x350
worker_thread+0x53/0x3e0
? process_one_work+0x350/0x350
kthread+0x11b/0x140
? __kthread_bind_mask+0x60/0x60
ret_from_fork+0x22/0x30

This patch fixes blk_recalc_rq_segments to be aware of devices that can
have multi-segment discards. It calculates the correct discard segment
count by counting the number of bios, since each discard bio is considered
its own segment.

Signed-off-by: David Jeffery <[email protected]>
Tested-by: Laurence Oberman <[email protected]>
---
block/blk-merge.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..fe7358bd5d09 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 
         switch (bio_op(rq->bio)) {
         case REQ_OP_DISCARD:
+                if (queue_max_discard_segments(rq->q) > 1) {
+                        struct bio *bio = rq->bio;
+                        for_each_bio(bio)
+                                nr_phys_segs++;
+                        return nr_phys_segs;
+                }
+                /* fall through */
         case REQ_OP_SECURE_ERASE:
         case REQ_OP_WRITE_ZEROES:
                 return 0;
--
2.29.2


2021-02-02 03:36:57

by Ming Lei

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> When a stacked block device inserts a request into another block device
> using blk_insert_cloned_request, the request's nr_phys_segments field gets
> recalculated by a call to blk_recalc_rq_segments in
> blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
> handle multi-segment discards. For disk types which can handle
> multi-segment discards like nvme, this results in discard requests which
> claim a single segment when it should report several, triggering a warning
> in nvme and causing nvme to fail the discard from the invalid state.
>
> WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
> ...
> nvme_setup_cmd+0x217/0x270 [nvme_core]
> nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> __blk_mq_try_issue_directly+0xe7/0x1b0
> blk_mq_request_issue_directly+0x41/0x70
> ? blk_account_io_start+0x40/0x50
> dm_mq_queue_rq+0x200/0x3e0
> blk_mq_dispatch_rq_list+0x10a/0x7d0
> ? __sbitmap_queue_get+0x25/0x90
> ? elv_rb_del+0x1f/0x30
> ? deadline_remove_request+0x55/0xb0
> ? dd_dispatch_request+0x181/0x210
> __blk_mq_do_dispatch_sched+0x144/0x290
> ? bio_attempt_discard_merge+0x134/0x1f0
> __blk_mq_sched_dispatch_requests+0x129/0x180
> blk_mq_sched_dispatch_requests+0x30/0x60
> __blk_mq_run_hw_queue+0x47/0xe0
> __blk_mq_delay_run_hw_queue+0x15b/0x170
> blk_mq_sched_insert_requests+0x68/0xe0
> blk_mq_flush_plug_list+0xf0/0x170
> blk_finish_plug+0x36/0x50
> xlog_cil_committed+0x19f/0x290 [xfs]
> xlog_cil_process_committed+0x57/0x80 [xfs]
> xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> xlog_ioend_work+0x2f/0x80 [xfs]
> process_one_work+0x1b6/0x350
> worker_thread+0x53/0x3e0
> ? process_one_work+0x350/0x350
> kthread+0x11b/0x140
> ? __kthread_bind_mask+0x60/0x60
> ret_from_fork+0x22/0x30
>
> This patch fixes blk_recalc_rq_segments to be aware of devices which can
> have multi-segment discards. It calculates the correct discard segment
> count by counting the number of bio as each discard bio is considered its
> own segment.
>
> Signed-off-by: David Jeffery <[email protected]>
> Tested-by: Laurence Oberman <[email protected]>
> ---
> block/blk-merge.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..fe7358bd5d09 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
>
> switch (bio_op(rq->bio)) {
> case REQ_OP_DISCARD:
> + if (queue_max_discard_segments(rq->q) > 1) {
> + struct bio *bio = rq->bio;
> + for_each_bio(bio)
> + nr_phys_segs++;
> + return nr_phys_segs;
> + }
> + /* fall through */
> case REQ_OP_SECURE_ERASE:

REQ_OP_SECURE_ERASE needs to be covered too, since the block layer treats
the two in a very similar way from the discard viewpoint.

Also, single-range discard should be fixed too, since the block layer
thinks a single-range discard request has one segment. Otherwise, the
warning in virtblk_setup_discard_write_zeroes() may still be triggered,
at least.
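
For reference, that single-range accounting comes from the helper below; a
paraphrased sketch of blk_rq_nr_discard_segments() from
include/linux/blkdev.h (the clamp to 1 is what makes a single-range discard
count as one segment):

        static inline unsigned short blk_rq_nr_discard_segments(struct request *rq)
        {
                /* a request with nr_phys_segments == 0 is reported as one segment */
                return max_t(unsigned short, rq->nr_phys_segments, 1);
        }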


--
Ming

2021-02-03 00:51:13

by David Jeffery

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Tue, Feb 02, 2021 at 11:33:43AM +0800, Ming Lei wrote:
>
> On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > When a stacked block device inserts a request into another block device
> > using blk_insert_cloned_request, the request's nr_phys_segments field gets
> > recalculated by a call to blk_recalc_rq_segments in
> > blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
> > handle multi-segment discards. For disk types which can handle
> > multi-segment discards like nvme, this results in discard requests which
> > claim a single segment when it should report several, triggering a warning
> > in nvme and causing nvme to fail the discard from the invalid state.
> >
> > WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > ...
> > nvme_setup_cmd+0x217/0x270 [nvme_core]
> > nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > __blk_mq_try_issue_directly+0xe7/0x1b0
> > blk_mq_request_issue_directly+0x41/0x70
> > ? blk_account_io_start+0x40/0x50
> > dm_mq_queue_rq+0x200/0x3e0
> > blk_mq_dispatch_rq_list+0x10a/0x7d0
> > ? __sbitmap_queue_get+0x25/0x90
> > ? elv_rb_del+0x1f/0x30
> > ? deadline_remove_request+0x55/0xb0
> > ? dd_dispatch_request+0x181/0x210
> > __blk_mq_do_dispatch_sched+0x144/0x290
> > ? bio_attempt_discard_merge+0x134/0x1f0
> > __blk_mq_sched_dispatch_requests+0x129/0x180
> > blk_mq_sched_dispatch_requests+0x30/0x60
> > __blk_mq_run_hw_queue+0x47/0xe0
> > __blk_mq_delay_run_hw_queue+0x15b/0x170
> > blk_mq_sched_insert_requests+0x68/0xe0
> > blk_mq_flush_plug_list+0xf0/0x170
> > blk_finish_plug+0x36/0x50
> > xlog_cil_committed+0x19f/0x290 [xfs]
> > xlog_cil_process_committed+0x57/0x80 [xfs]
> > xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > xlog_ioend_work+0x2f/0x80 [xfs]
> > process_one_work+0x1b6/0x350
> > worker_thread+0x53/0x3e0
> > ? process_one_work+0x350/0x350
> > kthread+0x11b/0x140
> > ? __kthread_bind_mask+0x60/0x60
> > ret_from_fork+0x22/0x30
> >
> > This patch fixes blk_recalc_rq_segments to be aware of devices which can
> > have multi-segment discards. It calculates the correct discard segment
> > count by counting the number of bio as each discard bio is considered its
> > own segment.
> >
> > Signed-off-by: David Jeffery <[email protected]>
> > Tested-by: Laurence Oberman <[email protected]>
> > ---
> > block/blk-merge.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > index 808768f6b174..fe7358bd5d09 100644
> > --- a/block/blk-merge.c
> > +++ b/block/blk-merge.c
> > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> >
> > switch (bio_op(rq->bio)) {
> > case REQ_OP_DISCARD:
> > + if (queue_max_discard_segments(rq->q) > 1) {
> > + struct bio *bio = rq->bio;
> > + for_each_bio(bio)
> > + nr_phys_segs++;
> > + return nr_phys_segs;
> > + }
> > + /* fall through */
> > case REQ_OP_SECURE_ERASE:
>
> REQ_OP_SECURE_ERASE needs to be covered since block layer treats
> the two in very similar way from discard viewpoint.
>
> Also single range discard should be fixed too, since block layer
> thinks single-range discard req segment is 1. Otherwise, the warning in
> virtblk_setup_discard_write_zeroes() still may be triggered, at least.
>
>
> --
> Ming
>

The return 0 does seem to be an old relic that no longer makes sense.
Moving REQ_OP_SECURE_ERASE to be handled with discard and removing the old
return 0, is this what you had in mind?


diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..68458aa01b05 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -383,8 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
         switch (bio_op(rq->bio)) {
         case REQ_OP_DISCARD:
         case REQ_OP_SECURE_ERASE:
+                if (queue_max_discard_segments(rq->q) > 1) {
+                        struct bio *bio = rq->bio;
+                        for_each_bio(bio)
+                                nr_phys_segs++;
+                        return nr_phys_segs;
+                }
+                /* fall through */
         case REQ_OP_WRITE_ZEROES:
-                return 0;
         case REQ_OP_WRITE_SAME:
                 return 1;
         }

--
David Jeffery

2021-02-03 02:40:53

by Ming Lei

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Tue, Feb 02, 2021 at 03:43:55PM -0500, David Jeffery wrote:
> On Tue, Feb 02, 2021 at 11:33:43AM +0800, Ming Lei wrote:
> >
> > On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > > When a stacked block device inserts a request into another block device
> > > using blk_insert_cloned_request, the request's nr_phys_segments field gets
> > > recalculated by a call to blk_recalc_rq_segments in
> > > blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
> > > handle multi-segment discards. For disk types which can handle
> > > multi-segment discards like nvme, this results in discard requests which
> > > claim a single segment when it should report several, triggering a warning
> > > in nvme and causing nvme to fail the discard from the invalid state.
> > >
> > > WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > > ...
> > > nvme_setup_cmd+0x217/0x270 [nvme_core]
> > > nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > > __blk_mq_try_issue_directly+0xe7/0x1b0
> > > blk_mq_request_issue_directly+0x41/0x70
> > > ? blk_account_io_start+0x40/0x50
> > > dm_mq_queue_rq+0x200/0x3e0
> > > blk_mq_dispatch_rq_list+0x10a/0x7d0
> > > ? __sbitmap_queue_get+0x25/0x90
> > > ? elv_rb_del+0x1f/0x30
> > > ? deadline_remove_request+0x55/0xb0
> > > ? dd_dispatch_request+0x181/0x210
> > > __blk_mq_do_dispatch_sched+0x144/0x290
> > > ? bio_attempt_discard_merge+0x134/0x1f0
> > > __blk_mq_sched_dispatch_requests+0x129/0x180
> > > blk_mq_sched_dispatch_requests+0x30/0x60
> > > __blk_mq_run_hw_queue+0x47/0xe0
> > > __blk_mq_delay_run_hw_queue+0x15b/0x170
> > > blk_mq_sched_insert_requests+0x68/0xe0
> > > blk_mq_flush_plug_list+0xf0/0x170
> > > blk_finish_plug+0x36/0x50
> > > xlog_cil_committed+0x19f/0x290 [xfs]
> > > xlog_cil_process_committed+0x57/0x80 [xfs]
> > > xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > > xlog_ioend_work+0x2f/0x80 [xfs]
> > > process_one_work+0x1b6/0x350
> > > worker_thread+0x53/0x3e0
> > > ? process_one_work+0x350/0x350
> > > kthread+0x11b/0x140
> > > ? __kthread_bind_mask+0x60/0x60
> > > ret_from_fork+0x22/0x30
> > >
> > > This patch fixes blk_recalc_rq_segments to be aware of devices which can
> > > have multi-segment discards. It calculates the correct discard segment
> > > count by counting the number of bio as each discard bio is considered its
> > > own segment.
> > >
> > > Signed-off-by: David Jeffery <[email protected]>
> > > Tested-by: Laurence Oberman <[email protected]>
> > > ---
> > > block/blk-merge.c | 7 +++++++
> > > 1 file changed, 7 insertions(+)
> > >
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..fe7358bd5d09 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> > >
> > > switch (bio_op(rq->bio)) {
> > > case REQ_OP_DISCARD:
> > > + if (queue_max_discard_segments(rq->q) > 1) {
> > > + struct bio *bio = rq->bio;
> > > + for_each_bio(bio)
> > > + nr_phys_segs++;
> > > + return nr_phys_segs;
> > > + }
> > > + /* fall through */
> > > case REQ_OP_SECURE_ERASE:
> >
> > REQ_OP_SECURE_ERASE needs to be covered since block layer treats
> > the two in very similar way from discard viewpoint.
> >
> > Also single range discard should be fixed too, since block layer
> > thinks single-range discard req segment is 1. Otherwise, the warning in
> > virtblk_setup_discard_write_zeroes() still may be triggered, at least.
> >
> >
> > --
> > Ming
> >
>
> The return 0 does seem to be an old relic that does not make sense anymore.
> Moving REQ_OP_SECURE_ERASE to be with discard and removing the old return 0,
> is this what you had in mind?
>
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..68458aa01b05 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -383,8 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> switch (bio_op(rq->bio)) {
> case REQ_OP_DISCARD:
> case REQ_OP_SECURE_ERASE:
> + if (queue_max_discard_segments(rq->q) > 1) {
> + struct bio *bio = rq->bio;
> + for_each_bio(bio)
> + nr_phys_segs++;
> + return nr_phys_segs;
> + }
> + /* fall through */
> case REQ_OP_WRITE_ZEROES:
> - return 0;
> case REQ_OP_WRITE_SAME:
> return 1;

WRITE_SAME uses the same buffer, so its nr_segments is still one;
WRITE_ZEROES doesn't need an extra payload, so its nr_segments is zero. See
blk_bio_write_zeroes_split(), blk_bio_write_same_split(), attempt_merge(),
and blk_rq_merge_ok().
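
The split helpers make those conventions explicit. Roughly, from
block/blk-merge.c (a paraphrased sketch; exact signatures vary by kernel
version):

        static struct bio *blk_bio_write_zeroes_split(struct request_queue *q,
                        struct bio *bio, struct bio_set *bs, unsigned *nsegs)
        {
                *nsegs = 0;     /* no payload, so no data segment */

                if (!q->limits.max_write_zeroes_sectors)
                        return NULL;
                if (bio_sectors(bio) <= q->limits.max_write_zeroes_sectors)
                        return NULL;

                return bio_split(bio, q->limits.max_write_zeroes_sectors,
                                 GFP_NOIO, bs);
        }

        static struct bio *blk_bio_write_same_split(struct request_queue *q,
                        struct bio *bio, struct bio_set *bs, unsigned *nsegs)
        {
                *nsegs = 1;     /* the single repeated-data buffer */

                if (!q->limits.max_write_same_sectors)
                        return NULL;
                if (bio_sectors(bio) <= q->limits.max_write_same_sectors)
                        return NULL;

                return bio_split(bio, q->limits.max_write_same_sectors,
                                 GFP_NOIO, bs);
        }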

--
Ming

2021-02-03 03:22:27

by Chaitanya Kulkarni

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On 2/2/21 18:39, Ming Lei wrote:
> + /* fall through */
> case REQ_OP_WRITE_ZEROES:
> - return 0;
I don't think returning 1 for write-zeroes is right.
Did you test this patch with a write-zeroes-enabled controller and a
filesystem that triggers this behavior?

2021-02-03 03:23:15

by Chaitanya Kulkarni

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On 2/2/21 18:39, Ming Lei wrote:
> + struct bio *bio = rq->bio;
> + for_each_bio(bio)
> + nr_phys_segs++;
> + return nr_phys_segs;
> + }
Also, you need to add a blank line after the declaration of bio in the
above code block.

2021-02-03 13:57:50

by Laurence Oberman

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Wed, 2021-02-03 at 03:15 +0000, Chaitanya Kulkarni wrote:
> On 2/2/21 18:39, Ming Lei wrote:
> > + /* fall through */
> > case REQ_OP_WRITE_ZEROES:
> > - return 0;
>
> I don't think returning 1 for write-zeroes is right,
> did you test this patch with write-zeores enabled controller with
> the right fs that triggers this behavior ?
>
>
I tested the first iteration of the patch fully, mounting an XFS file
system with -o discard and creating and deleting files.
That was the specific RHEL8 failure we were handling here with David's
first submission.

I can test his most recent version; I have not done that yet.
Again, please follow up with exactly what you want based on David's
patch and I can test that.

Regards
Laurence

2021-02-03 15:17:51

by Laurence Oberman

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Wed, 2021-02-03 at 08:50 -0500, Laurence Oberman wrote:
> On Wed, 2021-02-03 at 03:15 +0000, Chaitanya Kulkarni wrote:
> > On 2/2/21 18:39, Ming Lei wrote:
> > > + /* fall through */
> > > case REQ_OP_WRITE_ZEROES:
> > > - return 0;
> >
> > I don't think returning 1 for write-zeroes is right,
> > did you test this patch with write-zeores enabled controller with
> > the right fs that triggers this behavior ?
> >
> >
>
> I tested the first iteration of the patch fully mounting an XFS file
> system with -o discard and creating and deleting files.
> That was our specific RHEL8 failure we were handling here with
> David's
> first submission.
>
> I can test his most recent, I have not done that yet.
> Again, please follow up with exactly what you want based against
> David's patch and I can test that.
>
> Regards
> Laurence

So if I understand correctly what you were wanting, I will test this now.

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..a9bd958c07c4 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -383,6 +383,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
         switch (bio_op(rq->bio)) {
         case REQ_OP_DISCARD:
         case REQ_OP_SECURE_ERASE:
+                if (queue_max_discard_segments(rq->q) > 1) {
+                        struct bio *bio = rq->bio;
+
+                        for_each_bio(bio)
+                                nr_phys_segs++;
+                        return nr_phys_segs;
+                }
+                /* fall through */
         case REQ_OP_WRITE_ZEROES:
                 return 0;
         case REQ_OP_WRITE_SAME:

2021-02-03 16:26:57

by David Jeffery

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Wed, Feb 03, 2021 at 10:35:17AM +0800, Ming Lei wrote:
>
> On Tue, Feb 02, 2021 at 03:43:55PM -0500, David Jeffery wrote:
> > The return 0 does seem to be an old relic that does not make sense anymore.
> > Moving REQ_OP_SECURE_ERASE to be with discard and removing the old return 0,
> > is this what you had in mind?
> >
> >
> > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > index 808768f6b174..68458aa01b05 100644
> > --- a/block/blk-merge.c
> > +++ b/block/blk-merge.c
> > @@ -383,8 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> > switch (bio_op(rq->bio)) {
> > case REQ_OP_DISCARD:
> > case REQ_OP_SECURE_ERASE:
> > + if (queue_max_discard_segments(rq->q) > 1) {
> > + struct bio *bio = rq->bio;
> > + for_each_bio(bio)
> > + nr_phys_segs++;
> > + return nr_phys_segs;
> > + }
> > + /* fall through */
> > case REQ_OP_WRITE_ZEROES:
> > - return 0;
> > case REQ_OP_WRITE_SAME:
> > return 1;
>
> WRITE_SAME uses same buffer, so the nr_segment is still one; WRITE_ZERO
> doesn't need extra payload, so nr_segments is zero, see
> blk_bio_write_zeroes_split(), blk_bio_write_same_split, attempt_merge()
> and blk_rq_merge_ok().
>

I thought you mentioned virtio-blk because of how some drivers handle
zeroing and discarding similarly, and that you wanted to align the segment
count with discard behavior for WRITE_ZEROES too. (Though that would also
need an update to blk_bio_write_zeroes_split, as you pointed out.) So you
want me to leave the WRITE_ZEROES behavior alone and let
blk_rq_nr_discard_segments() keep hiding a 0 rq->nr_phys_segments as 1
segment in the case where WRITE_ZEROES is treated as a discard?

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..756473295f19 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -383,6 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
         switch (bio_op(rq->bio)) {
         case REQ_OP_DISCARD:
         case REQ_OP_SECURE_ERASE:
+                if (queue_max_discard_segments(rq->q) > 1) {
+                        struct bio *bio = rq->bio;
+
+                        for_each_bio(bio)
+                                nr_phys_segs++;
+                        return nr_phys_segs;
+                }
+                return 1;
         case REQ_OP_WRITE_ZEROES:
                 return 0;
         case REQ_OP_WRITE_SAME:

--
David Jeffery

2021-02-04 03:58:14

by Ming Lei

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Wed, Feb 03, 2021 at 11:23:37AM -0500, David Jeffery wrote:
> On Wed, Feb 03, 2021 at 10:35:17AM +0800, Ming Lei wrote:
> >
> > On Tue, Feb 02, 2021 at 03:43:55PM -0500, David Jeffery wrote:
> > > The return 0 does seem to be an old relic that does not make sense anymore.
> > > Moving REQ_OP_SECURE_ERASE to be with discard and removing the old return 0,
> > > is this what you had in mind?
> > >
> > >
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..68458aa01b05 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -383,8 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> > > switch (bio_op(rq->bio)) {
> > > case REQ_OP_DISCARD:
> > > case REQ_OP_SECURE_ERASE:
> > > + if (queue_max_discard_segments(rq->q) > 1) {
> > > + struct bio *bio = rq->bio;
> > > + for_each_bio(bio)
> > > + nr_phys_segs++;
> > > + return nr_phys_segs;
> > > + }
> > > + /* fall through */
> > > case REQ_OP_WRITE_ZEROES:
> > > - return 0;
> > > case REQ_OP_WRITE_SAME:
> > > return 1;
> >
> > WRITE_SAME uses same buffer, so the nr_segment is still one; WRITE_ZERO
> > doesn't need extra payload, so nr_segments is zero, see
> > blk_bio_write_zeroes_split(), blk_bio_write_same_split, attempt_merge()
> > and blk_rq_merge_ok().
> >
>
> I thought you mentioned virtio-blk because of how some drivers handle
> zeroing and discarding similarly and wanted to align the segment count with
> discard behavior for WRITE_ZEROES too. (Though that would also need an update

virtio-blk is just one example that supports both single-range and
multi-range discard; meanwhile, virtblk_setup_discard_write_zeroes()
simply maps write zeroes onto discard directly.
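
A rough sketch of that mapping (paraphrased from drivers/block/virtio_blk.c;
the field and flag names here are quoted from memory, so treat them as
approximate):

        unsigned short segments = blk_rq_nr_discard_segments(req);
        struct virtio_blk_discard_write_zeroes *range;
        unsigned short n = 0;
        struct bio *bio;

        range = kmalloc_array(segments, sizeof(*range), GFP_ATOMIC);
        if (!range)
                return -ENOMEM;

        /* one range entry per bio, shared by discard and write zeroes */
        __rq_for_each_bio(bio, req) {
                range[n].flags = cpu_to_le32(unmap ?
                                VIRTIO_BLK_WRITE_ZEROES_FLAG_UNMAP : 0);
                range[n].num_sectors = cpu_to_le32(bio->bi_iter.bi_size >>
                                SECTOR_SHIFT);
                range[n].sector = cpu_to_le64(bio->bi_iter.bi_sector);
                n++;
        }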

Just found blk_rq_nr_discard_segments() returns >=1 segments always, so
looks your patch is enough for avoiding the warning.

> to blk_bio_write_zeroes_split as you pointed out.) So you want me to leave
> WRITE_ZEROES behavior alone and let blk_rq_nr_discard_segments() keep doing
> the hiding of a 0 rq->nr_phys_segments as 1 segment in the WRITE_ZEROES treated
> as a discard case?
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..756473295f19 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -383,6 +383,14 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> switch (bio_op(rq->bio)) {
> case REQ_OP_DISCARD:
> case REQ_OP_SECURE_ERASE:
> + if (queue_max_discard_segments(rq->q) > 1) {
> + struct bio *bio = rq->bio;
> +
> + for_each_bio(bio)
> + nr_phys_segs++;
> + return nr_phys_segs;
> + }
> + return 1;
> case REQ_OP_WRITE_ZEROES:
> return 0;
> case REQ_OP_WRITE_SAME:

This patch returns 1 for a single-range discard explicitly. However, that
isn't necessary because of blk_rq_nr_discard_segments().

Maybe we can align with blk_bio_discard_split() in the future, but that can
be done as a cleanup.
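
For completeness, blk_bio_discard_split() already pins the per-bio segment
count at one; a trimmed, paraphrased sketch from block/blk-merge.c:

        static struct bio *blk_bio_discard_split(struct request_queue *q,
                        struct bio *bio, struct bio_set *bs, unsigned *nsegs)
        {
                unsigned int max_discard_sectors, granularity;

                /* each discard bio is counted as a single segment */
                *nsegs = 1;

                granularity = max(q->limits.discard_granularity >> 9, 1U);
                max_discard_sectors = min(q->limits.max_discard_sectors,
                                          bio_allowed_max_sectors(q));
                max_discard_sectors -= max_discard_sectors % granularity;

                if (!max_discard_sectors ||
                    bio_sectors(bio) <= max_discard_sectors)
                        return NULL;

                /* otherwise split on the granularity-aligned boundary */
                return bio_split(bio, max_discard_sectors, GFP_NOIO, bs);
        }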

Thanks,
Ming

2021-02-04 03:58:26

by Ming Lei

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> When a stacked block device inserts a request into another block device
> using blk_insert_cloned_request, the request's nr_phys_segments field gets
> recalculated by a call to blk_recalc_rq_segments in
> blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not know how to
> handle multi-segment discards. For disk types which can handle
> multi-segment discards like nvme, this results in discard requests which
> claim a single segment when it should report several, triggering a warning
> in nvme and causing nvme to fail the discard from the invalid state.
>
> WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700 nvme_setup_discard+0x170/0x1e0 [nvme_core]
> ...
> nvme_setup_cmd+0x217/0x270 [nvme_core]
> nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> __blk_mq_try_issue_directly+0xe7/0x1b0
> blk_mq_request_issue_directly+0x41/0x70
> ? blk_account_io_start+0x40/0x50
> dm_mq_queue_rq+0x200/0x3e0
> blk_mq_dispatch_rq_list+0x10a/0x7d0
> ? __sbitmap_queue_get+0x25/0x90
> ? elv_rb_del+0x1f/0x30
> ? deadline_remove_request+0x55/0xb0
> ? dd_dispatch_request+0x181/0x210
> __blk_mq_do_dispatch_sched+0x144/0x290
> ? bio_attempt_discard_merge+0x134/0x1f0
> __blk_mq_sched_dispatch_requests+0x129/0x180
> blk_mq_sched_dispatch_requests+0x30/0x60
> __blk_mq_run_hw_queue+0x47/0xe0
> __blk_mq_delay_run_hw_queue+0x15b/0x170
> blk_mq_sched_insert_requests+0x68/0xe0
> blk_mq_flush_plug_list+0xf0/0x170
> blk_finish_plug+0x36/0x50
> xlog_cil_committed+0x19f/0x290 [xfs]
> xlog_cil_process_committed+0x57/0x80 [xfs]
> xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> xlog_ioend_work+0x2f/0x80 [xfs]
> process_one_work+0x1b6/0x350
> worker_thread+0x53/0x3e0
> ? process_one_work+0x350/0x350
> kthread+0x11b/0x140
> ? __kthread_bind_mask+0x60/0x60
> ret_from_fork+0x22/0x30
>
> This patch fixes blk_recalc_rq_segments to be aware of devices which can
> have multi-segment discards. It calculates the correct discard segment
> count by counting the number of bio as each discard bio is considered its
> own segment.
>
> Signed-off-by: David Jeffery <[email protected]>
> Tested-by: Laurence Oberman <[email protected]>
> ---
> block/blk-merge.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 808768f6b174..fe7358bd5d09 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
>
> switch (bio_op(rq->bio)) {
> case REQ_OP_DISCARD:
> + if (queue_max_discard_segments(rq->q) > 1) {
> + struct bio *bio = rq->bio;
> + for_each_bio(bio)
> + nr_phys_segs++;
> + return nr_phys_segs;
> + }
> + /* fall through */
> case REQ_OP_SECURE_ERASE:
> case REQ_OP_WRITE_ZEROES:
> return 0;

blk_rq_nr_discard_segments() always returns >=1 segments, so there is no
similar issue in the case of a single-range discard.

Reviewed-by: Ming Lei <[email protected]>

And it can be thought of as:

Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard bios into a single request")


--
Ming

2021-02-04 16:52:51

by Laurence Oberman

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Thu, 2021-02-04 at 10:27 +0800, Ming Lei wrote:
> On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > When a stacked block device inserts a request into another block
> > device
> > using blk_insert_cloned_request, the request's nr_phys_segments
> > field gets
> > recalculated by a call to blk_recalc_rq_segments in
> > blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not
> > know how to
> > handle multi-segment discards. For disk types which can handle
> > multi-segment discards like nvme, this results in discard requests
> > which
> > claim a single segment when it should report several, triggering a
> > warning
> > in nvme and causing nvme to fail the discard from the invalid
> > state.
> >
> > WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700
> > nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > ...
> > nvme_setup_cmd+0x217/0x270 [nvme_core]
> > nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > __blk_mq_try_issue_directly+0xe7/0x1b0
> > blk_mq_request_issue_directly+0x41/0x70
> > ? blk_account_io_start+0x40/0x50
> > dm_mq_queue_rq+0x200/0x3e0
> > blk_mq_dispatch_rq_list+0x10a/0x7d0
> > ? __sbitmap_queue_get+0x25/0x90
> > ? elv_rb_del+0x1f/0x30
> > ? deadline_remove_request+0x55/0xb0
> > ? dd_dispatch_request+0x181/0x210
> > __blk_mq_do_dispatch_sched+0x144/0x290
> > ? bio_attempt_discard_merge+0x134/0x1f0
> > __blk_mq_sched_dispatch_requests+0x129/0x180
> > blk_mq_sched_dispatch_requests+0x30/0x60
> > __blk_mq_run_hw_queue+0x47/0xe0
> > __blk_mq_delay_run_hw_queue+0x15b/0x170
> > blk_mq_sched_insert_requests+0x68/0xe0
> > blk_mq_flush_plug_list+0xf0/0x170
> > blk_finish_plug+0x36/0x50
> > xlog_cil_committed+0x19f/0x290 [xfs]
> > xlog_cil_process_committed+0x57/0x80 [xfs]
> > xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > xlog_ioend_work+0x2f/0x80 [xfs]
> > process_one_work+0x1b6/0x350
> > worker_thread+0x53/0x3e0
> > ? process_one_work+0x350/0x350
> > kthread+0x11b/0x140
> > ? __kthread_bind_mask+0x60/0x60
> > ret_from_fork+0x22/0x30
> >
> > This patch fixes blk_recalc_rq_segments to be aware of devices
> > which can
> > have multi-segment discards. It calculates the correct discard
> > segment
> > count by counting the number of bio as each discard bio is
> > considered its
> > own segment.
> >
> > Signed-off-by: David Jeffery <[email protected]>
> > Tested-by: Laurence Oberman <[email protected]>
> > ---
> > block/blk-merge.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > index 808768f6b174..fe7358bd5d09 100644
> > --- a/block/blk-merge.c
> > +++ b/block/blk-merge.c
> > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct
> > request *rq)
> >
> > switch (bio_op(rq->bio)) {
> > case REQ_OP_DISCARD:
> > + if (queue_max_discard_segments(rq->q) > 1) {
> > + struct bio *bio = rq->bio;
> > + for_each_bio(bio)
> > + nr_phys_segs++;
> > + return nr_phys_segs;
> > + }
> > + /* fall through */
> > case REQ_OP_SECURE_ERASE:
> > case REQ_OP_WRITE_ZEROES:
> > return 0;
>
> blk_rq_nr_discard_segments() always returns >=1 segments, so no
> similar
> issue in case of single range discard.
>
> Reviewed-by: Ming Lei <[email protected]>
>
> And it can be thought as:
>
> Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard
> bios into a single request")
>
>

Great, can we get enough acks and push this through? It's urgent for me.
Reviewed-by: Laurence Oberman <[email protected]>

2021-02-08 20:21:03

by Laurence Oberman

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

On Thu, 2021-02-04 at 11:43 -0500, Laurence Oberman wrote:
> On Thu, 2021-02-04 at 10:27 +0800, Ming Lei wrote:
> > On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > > When a stacked block device inserts a request into another block
> > > device
> > > using blk_insert_cloned_request, the request's nr_phys_segments
> > > field gets
> > > recalculated by a call to blk_recalc_rq_segments in
> > > blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not
> > > know how to
> > > handle multi-segment discards. For disk types which can handle
> > > multi-segment discards like nvme, this results in discard
> > > requests
> > > which
> > > claim a single segment when it should report several, triggering
> > > a
> > > warning
> > > in nvme and causing nvme to fail the discard from the invalid
> > > state.
> > >
> > > WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700
> > > nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > > ...
> > > nvme_setup_cmd+0x217/0x270 [nvme_core]
> > > nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > > __blk_mq_try_issue_directly+0xe7/0x1b0
> > > blk_mq_request_issue_directly+0x41/0x70
> > > ? blk_account_io_start+0x40/0x50
> > > dm_mq_queue_rq+0x200/0x3e0
> > > blk_mq_dispatch_rq_list+0x10a/0x7d0
> > > ? __sbitmap_queue_get+0x25/0x90
> > > ? elv_rb_del+0x1f/0x30
> > > ? deadline_remove_request+0x55/0xb0
> > > ? dd_dispatch_request+0x181/0x210
> > > __blk_mq_do_dispatch_sched+0x144/0x290
> > > ? bio_attempt_discard_merge+0x134/0x1f0
> > > __blk_mq_sched_dispatch_requests+0x129/0x180
> > > blk_mq_sched_dispatch_requests+0x30/0x60
> > > __blk_mq_run_hw_queue+0x47/0xe0
> > > __blk_mq_delay_run_hw_queue+0x15b/0x170
> > > blk_mq_sched_insert_requests+0x68/0xe0
> > > blk_mq_flush_plug_list+0xf0/0x170
> > > blk_finish_plug+0x36/0x50
> > > xlog_cil_committed+0x19f/0x290 [xfs]
> > > xlog_cil_process_committed+0x57/0x80 [xfs]
> > > xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > > xlog_ioend_work+0x2f/0x80 [xfs]
> > > process_one_work+0x1b6/0x350
> > > worker_thread+0x53/0x3e0
> > > ? process_one_work+0x350/0x350
> > > kthread+0x11b/0x140
> > > ? __kthread_bind_mask+0x60/0x60
> > > ret_from_fork+0x22/0x30
> > >
> > > This patch fixes blk_recalc_rq_segments to be aware of devices
> > > which can
> > > have multi-segment discards. It calculates the correct discard
> > > segment
> > > count by counting the number of bio as each discard bio is
> > > considered its
> > > own segment.
> > >
> > > Signed-off-by: David Jeffery <[email protected]>
> > > Tested-by: Laurence Oberman <[email protected]>
> > > ---
> > > block/blk-merge.c | 7 +++++++
> > > 1 file changed, 7 insertions(+)
> > >
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..fe7358bd5d09 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct
> > > request *rq)
> > >
> > > switch (bio_op(rq->bio)) {
> > > case REQ_OP_DISCARD:
> > > + if (queue_max_discard_segments(rq->q) > 1) {
> > > + struct bio *bio = rq->bio;
> > > + for_each_bio(bio)
> > > + nr_phys_segs++;
> > > + return nr_phys_segs;
> > > + }
> > > + /* fall through */
> > > case REQ_OP_SECURE_ERASE:
> > > case REQ_OP_WRITE_ZEROES:
> > > return 0;
> >
> > blk_rq_nr_discard_segments() always returns >=1 segments, so no
> > similar
> > issue in case of single range discard.
> >
> > Reviewed-by: Ming Lei <[email protected]>
> >
> > And it can be thought as:
> >
> > Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard
> > bios into a single request")
> >
> >
>
> Great, can we get enough acks and push this through its urgent for me
> Reviewed-by: Laurence Oberman <[email protected]>

I hate to ping again, but we can't take this into RHEL unless it's
upstream. Can we get enough acks to get this in?

Many Thanks
Laurence

2021-02-08 20:24:58

by John Pittman

Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly

Hi Jens, when you get a moment, could you take a quick look at this one for an ack?

On Thu, Feb 4, 2021 at 11:49 AM Laurence Oberman <[email protected]> wrote:
>
> On Thu, 2021-02-04 at 10:27 +0800, Ming Lei wrote:
> > On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > > When a stacked block device inserts a request into another block
> > > device
> > > using blk_insert_cloned_request, the request's nr_phys_segments
> > > field gets
> > > recalculated by a call to blk_recalc_rq_segments in
> > > blk_cloned_rq_check_limits. But blk_recalc_rq_segments does not
> > > know how to
> > > handle multi-segment discards. For disk types which can handle
> > > multi-segment discards like nvme, this results in discard requests
> > > which
> > > claim a single segment when it should report several, triggering a
> > > warning
> > > in nvme and causing nvme to fail the discard from the invalid
> > > state.
> > >
> > > WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700
> > > nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > > ...
> > > nvme_setup_cmd+0x217/0x270 [nvme_core]
> > > nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > > __blk_mq_try_issue_directly+0xe7/0x1b0
> > > blk_mq_request_issue_directly+0x41/0x70
> > > ? blk_account_io_start+0x40/0x50
> > > dm_mq_queue_rq+0x200/0x3e0
> > > blk_mq_dispatch_rq_list+0x10a/0x7d0
> > > ? __sbitmap_queue_get+0x25/0x90
> > > ? elv_rb_del+0x1f/0x30
> > > ? deadline_remove_request+0x55/0xb0
> > > ? dd_dispatch_request+0x181/0x210
> > > __blk_mq_do_dispatch_sched+0x144/0x290
> > > ? bio_attempt_discard_merge+0x134/0x1f0
> > > __blk_mq_sched_dispatch_requests+0x129/0x180
> > > blk_mq_sched_dispatch_requests+0x30/0x60
> > > __blk_mq_run_hw_queue+0x47/0xe0
> > > __blk_mq_delay_run_hw_queue+0x15b/0x170
> > > blk_mq_sched_insert_requests+0x68/0xe0
> > > blk_mq_flush_plug_list+0xf0/0x170
> > > blk_finish_plug+0x36/0x50
> > > xlog_cil_committed+0x19f/0x290 [xfs]
> > > xlog_cil_process_committed+0x57/0x80 [xfs]
> > > xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > > xlog_ioend_work+0x2f/0x80 [xfs]
> > > process_one_work+0x1b6/0x350
> > > worker_thread+0x53/0x3e0
> > > ? process_one_work+0x350/0x350
> > > kthread+0x11b/0x140
> > > ? __kthread_bind_mask+0x60/0x60
> > > ret_from_fork+0x22/0x30
> > >
> > > This patch fixes blk_recalc_rq_segments to be aware of devices
> > > which can
> > > have multi-segment discards. It calculates the correct discard
> > > segment
> > > count by counting the number of bio as each discard bio is
> > > considered its
> > > own segment.
> > >
> > > Signed-off-by: David Jeffery <[email protected]>
> > > Tested-by: Laurence Oberman <[email protected]>
> > > ---
> > > block/blk-merge.c | 7 +++++++
> > > 1 file changed, 7 insertions(+)
> > >
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..fe7358bd5d09 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct
> > > request *rq)
> > >
> > > switch (bio_op(rq->bio)) {
> > > case REQ_OP_DISCARD:
> > > + if (queue_max_discard_segments(rq->q) > 1) {
> > > + struct bio *bio = rq->bio;
> > > + for_each_bio(bio)
> > > + nr_phys_segs++;
> > > + return nr_phys_segs;
> > > + }
> > > + /* fall through */
> > > case REQ_OP_SECURE_ERASE:
> > > case REQ_OP_WRITE_ZEROES:
> > > return 0;
> >
> > blk_rq_nr_discard_segments() always returns >=1 segments, so no
> > similar
> > issue in case of single range discard.
> >
> > Reviewed-by: Ming Lei <[email protected]>
> >
> > And it can be thought as:
> >
> > Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard
> > bios into a single request")
> >
> >
>
> Great, can we get enough acks and push this through its urgent for me
> Reviewed-by: Laurence Oberman <[email protected]>
>