2018-10-24 07:23:51

by jianchao.wang

Subject: [PATCH] blk-mq: ensure hctx is run on a mapped cpu when issuing directly

When a request is issued directly and the task is migrated off the
original cpu where the request was allocated, the hctx could be run
on a cpu it is not mapped to. To fix this, insert the request if
BLK_MQ_F_BLOCKING is set, check whether the current cpu is mapped to
the hctx, and invoke __blk_mq_issue_directly with preemption
disabled.

Signed-off-by: Jianchao Wang <[email protected]>
---
block/blk-mq.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e3c39ea..0cdc306 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 {
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
+	blk_status_t ret;
+
+	if (hctx->flags & BLK_MQ_F_BLOCKING) {
+		bypass_insert = false;
+		goto insert;
+	}
 
 	/*
 	 * RCU or SRCU read lock is needed before checking quiesced flag.
@@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	if (q->elevator && !bypass_insert)
 		goto insert;
 
+	if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
+		bypass_insert = false;
+		goto insert;
+	}
+
 	if (!blk_mq_get_dispatch_budget(hctx))
 		goto insert;
 
@@ -1742,8 +1753,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 		goto insert;
 	}
 
-	return __blk_mq_issue_directly(hctx, rq, cookie);
+	ret = __blk_mq_issue_directly(hctx, rq, cookie);
+	put_cpu();
+	return ret;
+
 insert:
+	put_cpu();
 	if (bypass_insert)
 		return BLK_STS_RESOURCE;

--
2.7.4



2018-10-25 16:26:48

by Jens Axboe

Subject: Re: [PATCH] blk-mq: ensure hctx is run on a mapped cpu when issuing directly

On 10/24/18 9:20 AM, Jianchao Wang wrote:
> When a request is issued directly and the task is migrated off the
> original cpu where the request was allocated, the hctx could be run
> on a cpu it is not mapped to. To fix this, insert the request if
> BLK_MQ_F_BLOCKING is set, check whether the current cpu is mapped to
> the hctx, and invoke __blk_mq_issue_directly with preemption
> disabled.
>
> Signed-off-by: Jianchao Wang <[email protected]>
> ---
> block/blk-mq.c | 17 ++++++++++++++++-
> 1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e3c39ea..0cdc306 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
> {
> struct request_queue *q = rq->q;
> bool run_queue = true;
> + blk_status_t ret;
> +
> + if (hctx->flags & BLK_MQ_F_BLOCKING) {
> + bypass_insert = false;
> + goto insert;
> + }

I'd do a prep patch that moves the insert logic out of this function,
and just have the caller do it by return BLK_STS_RESOURCE, for instance.
It's silly that we have that in both the caller and inside this function.
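
A rough sketch of the suggested prep change could look like the following. This is an illustration only, not the actual prep patch: the bypass_insert handling and the exact insert arguments are simplified, and the point is just that the helper reports a status while the caller owns the fallback insert.

static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
				      struct request *rq, blk_qc_t *cookie)
{
	blk_status_t ret;
	int srcu_idx;

	hctx_lock(hctx, &srcu_idx);

	/* the helper no longer inserts on its own, it just reports why not */
	ret = __blk_mq_try_issue_directly(hctx, rq, cookie);
	if (ret == BLK_STS_RESOURCE)
		/* could not issue directly: fall back to inserting the request */
		blk_mq_sched_insert_request(rq, false, true, false);
	else if (ret != BLK_STS_OK)
		blk_mq_end_request(rq, ret);

	hctx_unlock(hctx, srcu_idx);
}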

> @@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
> if (q->elevator && !bypass_insert)
> goto insert;
>
> + if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
> + bypass_insert = false;
> + goto insert;
> + }

Should be fine to just do smp_processor_id() here, as we're inside
hctx_lock() here.

--
Jens Axboe


2018-10-26 01:39:09

by jianchao.wang

Subject: Re: [PATCH] blk-mq: ensure hctx is run on a mapped cpu when issuing directly

Hi Jens

On 10/26/18 12:25 AM, Jens Axboe wrote:
> On 10/24/18 9:20 AM, Jianchao Wang wrote:
>> When a request is issued directly and the task is migrated off the
>> original cpu where the request was allocated, the hctx could be run
>> on a cpu it is not mapped to. To fix this, insert the request if
>> BLK_MQ_F_BLOCKING is set, check whether the current cpu is mapped to
>> the hctx, and invoke __blk_mq_issue_directly with preemption
>> disabled.
>>
>> Signed-off-by: Jianchao Wang <[email protected]>
>> ---
>> block/blk-mq.c | 17 ++++++++++++++++-
>> 1 file changed, 16 insertions(+), 1 deletion(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index e3c39ea..0cdc306 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>> {
>> struct request_queue *q = rq->q;
>> bool run_queue = true;
>> + blk_status_t ret;
>> +
>> + if (hctx->flags & BLK_MQ_F_BLOCKING) {
>> + bypass_insert = false;
>> + goto insert;
>> + }
>
> I'd do a prep patch that moves the insert logic out of this function,
> and just have the caller do it by return BLK_STS_RESOURCE, for instance.
> It's silly that we have that in both the caller and inside this function.

Yes.

>
>> @@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>> if (q->elevator && !bypass_insert)
>> goto insert;
>>
>> + if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
>> + bypass_insert = false;
>> + goto insert;
>> + }
>
> Should be fine to just do smp_processor_id() here, as we're inside
> hctx_lock() here.
>

If RCU is preemptible, smp_processor_id() will not be enough here, since the task could still migrate afterwards.

Thanks
Jianchao
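
To make that concrete, here is an illustration (not code from the thread; issue_on_mapped_cpu() and hctx_cpumask are made-up names standing in for the real code paths). With CONFIG_PREEMPT_RCU, the RCU read-side section taken by hctx_lock() does not disable preemption, so a bare smp_processor_id() check could be stale by the time the request is issued, while the get_cpu()/put_cpu() pair in the patch pins the task to the checked cpu.

#include <linux/smp.h>
#include <linux/cpumask.h>

static bool issue_on_mapped_cpu(const struct cpumask *hctx_cpumask)
{
	/*
	 * get_cpu() disables preemption until the matching put_cpu(),
	 * so the cpu tested below is the cpu we keep running on.
	 * A plain smp_processor_id() would allow migration right
	 * after the test when the RCU read side is preemptible.
	 */
	int cpu = get_cpu();
	bool mapped = cpumask_test_cpu(cpu, hctx_cpumask);

	if (mapped) {
		/* ... __blk_mq_issue_directly() would run here ... */
	}

	put_cpu();
	return mapped;
}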

2018-10-26 01:42:24

by Jens Axboe

Subject: Re: [PATCH] blk-mq: ensure hctx is run on a mapped cpu when issuing directly

On 10/25/18 7:38 PM, jianchao.wang wrote:
> Hi Jens
>
> On 10/26/18 12:25 AM, Jens Axboe wrote:
>> On 10/24/18 9:20 AM, Jianchao Wang wrote:
>>> When a request is issued directly and the task is migrated off the
>>> original cpu where the request was allocated, the hctx could be run
>>> on a cpu it is not mapped to. To fix this, insert the request if
>>> BLK_MQ_F_BLOCKING is set, check whether the current cpu is mapped to
>>> the hctx, and invoke __blk_mq_issue_directly with preemption
>>> disabled.
>>>
>>> Signed-off-by: Jianchao Wang <[email protected]>
>>> ---
>>> block/blk-mq.c | 17 ++++++++++++++++-
>>> 1 file changed, 16 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>> index e3c39ea..0cdc306 100644
>>> --- a/block/blk-mq.c
>>> +++ b/block/blk-mq.c
>>> @@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>>> {
>>> struct request_queue *q = rq->q;
>>> bool run_queue = true;
>>> + blk_status_t ret;
>>> +
>>> + if (hctx->flags & BLK_MQ_F_BLOCKING) {
>>> + bypass_insert = false;
>>> + goto insert;
>>> + }
>>
>> I'd do a prep patch that moves the insert logic out of this function,
>> and just have the caller do it by return BLK_STS_RESOURCE, for instance.
>> It's silly that we have that in both the caller and inside this function.
>
> Yes.
>
>>
>>> @@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>>> if (q->elevator && !bypass_insert)
>>> goto insert;
>>>
>>> + if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
>>> + bypass_insert = false;
>>> + goto insert;
>>> + }
>>
>> Should be fine to just do smp_processor_id() here, as we're inside
>> hctx_lock() here.
>>
>
> If RCU is preemptible, smp_processor_id() will not be enough here, since the task could still migrate afterwards.

True, for some reason I keep forgetting that rcu_*_lock() doesn't
imply preempt_disable() anymore.

--
Jens Axboe