From: Niklas Cassel <[email protected]>

Currently, __blk_mq_alloc_request() (via blk_mq_rq_ctx_init()) calls the
I/O scheduler callback e->type->ops.prepare_request(), which will set
RQF_ELVPRIV, even though passthrough (and flush) requests will later
bypass the I/O scheduler in blk_mq_submit_bio().
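
For reference, the allocation-side branch in blk_mq_rq_ctx_init()
currently looks roughly like this (a simplified sketch, not a verbatim
copy of the upstream code):

    if (!op_is_flush(data->cmd_flags)) {
        struct elevator_queue *e = data->q->elevator;

        rq->elv.icq = NULL;
        if (e && e->type->ops.prepare_request) {
            /* give the scheduler an io_context if it uses one */
            if (e->type->icq_cache)
                blk_mq_sched_assign_ioc(rq);

            /* scheduler-private setup for this request */
            e->type->ops.prepare_request(rq);
            rq->rq_flags |= RQF_ELVPRIV;
        }
    }
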
Later, blk_mq_free_request() checks if the RQF_ELVPRIV flag is set; if
it is, the e->type->ops.finish_request() I/O scheduler callback is
called.
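
The relevant part of blk_mq_free_request() is roughly (simplified,
remaining teardown omitted):

    void blk_mq_free_request(struct request *rq)
    {
        struct request_queue *q = rq->q;
        struct elevator_queue *e = q->elevator;

        if (rq->rq_flags & RQF_ELVPRIV) {
            /* only reached when RQF_ELVPRIV was set at alloc time */
            if (e && e->type->ops.finish_request)
                e->type->ops.finish_request(rq);
            if (rq->elv.icq) {
                put_io_context(rq->elv.icq->ioc);
                rq->elv.icq = NULL;
            }
        }
        /* remaining freeing/accounting omitted */
    }
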
In other words, the prepare_request and finish_request I/O scheduler
callbacks are called for requests which were never inserted into the
I/O scheduler.

Fix this by neither calling e->type->ops.prepare_request() nor setting
the RQF_ELVPRIV flag for passthrough requests.
Since the RQF_ELVPRIV flag is then never set for passthrough requests,
e->type->ops.finish_request() will no longer be called for passthrough
requests which were never inserted into the I/O scheduler.
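
For reference, blk_op_is_passthrough() just checks for the
driver-private opcodes, i.e. roughly:

    static inline bool blk_op_is_passthrough(unsigned int op)
    {
        op &= REQ_OP_MASK;
        return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
    }

so passthrough requests (e.g. NVMe passthrough/ioctl commands) are now
treated like flush requests at this point in the allocation path.
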
Signed-off-by: Niklas Cassel <[email protected]>
---
block/blk-mq.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 65d3a63aecc6..0816af125059 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -328,7 +328,12 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
data->ctx->rq_dispatched[op_is_sync(data->cmd_flags)]++;
refcount_set(&rq->ref, 1);

- if (!op_is_flush(data->cmd_flags)) {
+ /*
+ * Flush/passthrough requests are special and go directly to the
+ * dispatch list, bypassing the scheduler.
+ */
+ if (!op_is_flush(data->cmd_flags) &&
+ !blk_op_is_passthrough(data->cmd_flags)) {
struct elevator_queue *e = data->q->elevator;

rq->elv.icq = NULL;
--
2.31.1
On Tue, Sep 07, 2021 at 02:21:55PM +0000, Niklas Cassel wrote:
> [...]
> + /*
> + * Flush/passthrough requests are special and go directly to the
> + * dispatch list, bypassing the scheduler.
> + */
> + if (!op_is_flush(data->cmd_flags) &&
> + !blk_op_is_passthrough(data->cmd_flags)) {

Looks fine:
Reviewed-by: Ming Lei <[email protected]>
--
Ming