2017-07-27 05:10:16

by Michael Ellerman

Subject: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

Hi Jens,

I'm seeing the lockdep warning below on shutdown on a Power8 machine
using IPR.

If I'm reading it right it looks like the spin_lock() (non-irq) in
blk_mq_sched_insert_request() is the immediate cause.

Looking at blk_mq_requeue_work() (the caller), it is doing
spin_lock_irqsave(). So is switching blk_mq_sched_insert_request() to
spin_lock_irqsave() the right fix?
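
For context, here is a minimal made-up sketch of the pattern lockdep is
complaining about; demo_lock and the two functions below are purely
illustrative, not the blk-mq code:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

/*
 * Process context (e.g. a work item): plain spin_lock() leaves softirqs
 * enabled, so lockdep records this as the {SOFTIRQ-ON-W} acquisition.
 */
static void demo_process_context_path(void)
{
	spin_lock(&demo_lock);
	/* ... touch shared state ... */
	spin_unlock(&demo_lock);
}

/*
 * Softirq context (e.g. an old-style timer callback): the {IN-SOFTIRQ-W}
 * acquisition. If the timer fires on the CPU currently holding the lock
 * above, the softirq spins on the lock forever.
 */
static void demo_timer_fn(unsigned long data)
{
	spin_lock(&demo_lock);
	/* ... */
	spin_unlock(&demo_lock);
}

/*
 * Using spin_lock_irqsave() (or at least spin_lock_bh()) on the
 * process-context side removes the inconsistency, which is what the
 * question above is asking about.
 */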

cheers


ipr 0001:08:00.0: shutdown

================================
WARNING: inconsistent lock state
4.13.0-rc2-gcc6x-gf74c89b #1 Not tainted
--------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
swapper/28/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
(&(&hctx->lock)->rlock){+.?...}, at: [<c0000000005b60f4>] blk_mq_sched_dispatch_requests+0xa4/0x2a0
{SOFTIRQ-ON-W} state was registered at:
lock_acquire+0xec/0x2e0
_raw_spin_lock+0x44/0x70
blk_mq_sched_insert_request+0x88/0x1f0
blk_mq_requeue_work+0x108/0x180
process_one_work+0x310/0x800
worker_thread+0x88/0x520
kthread+0x164/0x1b0
ret_from_kernel_thread+0x5c/0x74
irq event stamp: 3572314
hardirqs last enabled at (3572314): [<c000000000b71998>] _raw_spin_unlock_irqrestore+0x58/0xb0
hardirqs last disabled at (3572313): [<c000000000b716ec>] _raw_spin_lock_irqsave+0x3c/0x90
softirqs last enabled at (3572302): [<c0000000000df17c>] irq_enter+0x9c/0xe0
softirqs last disabled at (3572303): [<c0000000000df2c8>] irq_exit+0x108/0x150

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&(&hctx->lock)->rlock);
<Interrupt>
lock(&(&hctx->lock)->rlock);

*** DEADLOCK ***

2 locks held by swapper/28/0:
#0: ((&ipr_cmd->timer)){+.-...}, at: [<c0000000001936f0>] call_timer_fn+0x10/0x4b0
#1: (rcu_read_lock){......}, at: [<c0000000005aca60>] __blk_mq_run_hw_queue+0xa0/0x2c0

stack backtrace:
CPU: 28 PID: 0 Comm: swapper/28 Not tainted 4.13.0-rc2-gcc6x-gf74c89b #1
Call Trace:
[c000001fffe97550] [c000000000b50818] dump_stack+0xe8/0x160 (unreliable)
[c000001fffe97590] [c0000000001586d0] print_usage_bug+0x2d0/0x390
[c000001fffe97640] [c000000000158f34] mark_lock+0x7a4/0x8e0
[c000001fffe976f0] [c00000000015a000] __lock_acquire+0x6a0/0x1a70
[c000001fffe97860] [c00000000015befc] lock_acquire+0xec/0x2e0
[c000001fffe97930] [c000000000b71514] _raw_spin_lock+0x44/0x70
[c000001fffe97960] [c0000000005b60f4] blk_mq_sched_dispatch_requests+0xa4/0x2a0
[c000001fffe979c0] [c0000000005acac0] __blk_mq_run_hw_queue+0x100/0x2c0
[c000001fffe97a00] [c0000000005ad478] __blk_mq_delay_run_hw_queue+0x118/0x130
[c000001fffe97a40] [c0000000005ad61c] blk_mq_start_hw_queues+0x6c/0xa0
[c000001fffe97a80] [c000000000797aac] scsi_kick_queue+0x2c/0x60
[c000001fffe97aa0] [c000000000797cf0] scsi_run_queue+0x210/0x360
[c000001fffe97b10] [c00000000079b888] scsi_run_host_queues+0x48/0x80
[c000001fffe97b40] [c0000000007b6090] ipr_ioa_bringdown_done+0x70/0x1e0
[c000001fffe97bc0] [c0000000007bc860] ipr_reset_ioa_job+0x80/0xf0
[c000001fffe97bf0] [c0000000007b4d50] ipr_reset_timer_done+0xd0/0x100
[c000001fffe97c30] [c0000000001937bc] call_timer_fn+0xdc/0x4b0
[c000001fffe97cf0] [c000000000193d08] expire_timers+0x178/0x330
[c000001fffe97d60] [c0000000001940c8] run_timer_softirq+0xb8/0x120
[c000001fffe97de0] [c000000000b726a8] __do_softirq+0x168/0x6d8
[c000001fffe97ef0] [c0000000000df2c8] irq_exit+0x108/0x150
[c000001fffe97f10] [c000000000017bf4] __do_irq+0x2a4/0x4a0
[c000001fffe97f90] [c00000000002da50] call_do_irq+0x14/0x24
[c0000007fad93aa0] [c000000000017e8c] do_IRQ+0x9c/0x140
[c0000007fad93af0] [c000000000008b98] hardware_interrupt_common+0x138/0x140
--- interrupt: 501 at .L1.42+0x0/0x4
LR = arch_local_irq_restore.part.4+0x84/0xb0
[c0000007fad93de0] [c0000007ffc1f7d8] 0xc0000007ffc1f7d8 (unreliable)
[c0000007fad93e00] [c000000000988d3c] cpuidle_enter_state+0x1bc/0x530
[c0000007fad93e60] [c0000000001457cc] call_cpuidle+0x4c/0x90
[c0000007fad93e80] [c000000000145b28] do_idle+0x208/0x2f0
[c0000007fad93ef0] [c000000000145f8c] cpu_startup_entry+0x3c/0x50
[c0000007fad93f20] [c000000000042bc0] start_secondary+0x3b0/0x4b0
[c0000007fad93f90] [c00000000000ac6c] start_secondary_prolog+0x10/0x14


2017-07-27 14:02:56

by Jens Axboe

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On 07/26/2017 11:10 PM, Michael Ellerman wrote:
> Hi Jens,
>
> I'm seeing the lockdep warning below on shutdown on a Power8 machine
> using IPR.
>
> If I'm reading it right it looks like the spin_lock() (non-irq) in
> blk_mq_sched_insert_request() is the immediate cause.

All the users of ctx->lock should be from process context.

> Looking at blk_mq_requeue_work() (the caller), it is doing
> spin_lock_irqsave(). So is switching blk_mq_sched_insert_request() to
> spin_lock_irqsave() the right fix?

That's because the requeue lock needs to be IRQ safe. However, the
context allows for just spin_lock_irq() for that lock there, so that
should be fixed up. Not your issue, of course, but we don't need to
save flags there.
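
Put differently, with an invented lock and function standing in for the
real ones, the difference is only whether the caller's IRQ state needs to
be preserved:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(requeue_lock);	/* stand-in name, not the real lock */

static void demo_requeue_work(void)
{
	unsigned long flags;

	/* Correct, but saving flags is redundant when IRQs are known to be on: */
	spin_lock_irqsave(&requeue_lock, flags);
	spin_unlock_irqrestore(&requeue_lock, flags);

	/* Enough in a context that always runs with IRQs enabled, e.g. a work item: */
	spin_lock_irq(&requeue_lock);
	spin_unlock_irq(&requeue_lock);
}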

> ipr 0001:08:00.0: shutdown
>
> ================================
> WARNING: inconsistent lock state
> 4.13.0-rc2-gcc6x-gf74c89b #1 Not tainted
> --------------------------------
> inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
> swapper/28/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
> (&(&hctx->lock)->rlock){+.?...}, at: [<c0000000005b60f4>] blk_mq_sched_dispatch_requests+0xa4/0x2a0
> {SOFTIRQ-ON-W} state was registered at:
> lock_acquire+0xec/0x2e0
> _raw_spin_lock+0x44/0x70
> blk_mq_sched_insert_request+0x88/0x1f0
> blk_mq_requeue_work+0x108/0x180
> process_one_work+0x310/0x800
> worker_thread+0x88/0x520
> kthread+0x164/0x1b0
> ret_from_kernel_thread+0x5c/0x74
> irq event stamp: 3572314
> hardirqs last enabled at (3572314): [<c000000000b71998>] _raw_spin_unlock_irqrestore+0x58/0xb0
> hardirqs last disabled at (3572313): [<c000000000b716ec>] _raw_spin_lock_irqsave+0x3c/0x90
> softirqs last enabled at (3572302): [<c0000000000df17c>] irq_enter+0x9c/0xe0
> softirqs last disabled at (3572303): [<c0000000000df2c8>] irq_exit+0x108/0x150
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
> CPU0
> ----
> lock(&(&hctx->lock)->rlock);
> <Interrupt>
> lock(&(&hctx->lock)->rlock);
>
> *** DEADLOCK ***
>
> 2 locks held by swapper/28/0:
> #0: ((&ipr_cmd->timer)){+.-...}, at: [<c0000000001936f0>] call_timer_fn+0x10/0x4b0
> #1: (rcu_read_lock){......}, at: [<c0000000005aca60>] __blk_mq_run_hw_queue+0xa0/0x2c0
>
> stack backtrace:
> CPU: 28 PID: 0 Comm: swapper/28 Not tainted 4.13.0-rc2-gcc6x-gf74c89b #1
> Call Trace:
> [c000001fffe97550] [c000000000b50818] dump_stack+0xe8/0x160 (unreliable)
> [c000001fffe97590] [c0000000001586d0] print_usage_bug+0x2d0/0x390
> [c000001fffe97640] [c000000000158f34] mark_lock+0x7a4/0x8e0
> [c000001fffe976f0] [c00000000015a000] __lock_acquire+0x6a0/0x1a70
> [c000001fffe97860] [c00000000015befc] lock_acquire+0xec/0x2e0
> [c000001fffe97930] [c000000000b71514] _raw_spin_lock+0x44/0x70
> [c000001fffe97960] [c0000000005b60f4] blk_mq_sched_dispatch_requests+0xa4/0x2a0
> [c000001fffe979c0] [c0000000005acac0] __blk_mq_run_hw_queue+0x100/0x2c0
> [c000001fffe97a00] [c0000000005ad478] __blk_mq_delay_run_hw_queue+0x118/0x130
> [c000001fffe97a40] [c0000000005ad61c] blk_mq_start_hw_queues+0x6c/0xa0
> [c000001fffe97a80] [c000000000797aac] scsi_kick_queue+0x2c/0x60
> [c000001fffe97aa0] [c000000000797cf0] scsi_run_queue+0x210/0x360
> [c000001fffe97b10] [c00000000079b888] scsi_run_host_queues+0x48/0x80
> [c000001fffe97b40] [c0000000007b6090] ipr_ioa_bringdown_done+0x70/0x1e0
> [c000001fffe97bc0] [c0000000007bc860] ipr_reset_ioa_job+0x80/0xf0
> [c000001fffe97bf0] [c0000000007b4d50] ipr_reset_timer_done+0xd0/0x100
> [c000001fffe97c30] [c0000000001937bc] call_timer_fn+0xdc/0x4b0
> [c000001fffe97cf0] [c000000000193d08] expire_timers+0x178/0x330
> [c000001fffe97d60] [c0000000001940c8] run_timer_softirq+0xb8/0x120
> [c000001fffe97de0] [c000000000b726a8] __do_softirq+0x168/0x6d8
> [c000001fffe97ef0] [c0000000000df2c8] irq_exit+0x108/0x150
> [c000001fffe97f10] [c000000000017bf4] __do_irq+0x2a4/0x4a0
> [c000001fffe97f90] [c00000000002da50] call_do_irq+0x14/0x24
> [c0000007fad93aa0] [c000000000017e8c] do_IRQ+0x9c/0x140
> [c0000007fad93af0] [c000000000008b98] hardware_interrupt_common+0x138/0x140
> --- interrupt: 501 at .L1.42+0x0/0x4
> LR = arch_local_irq_restore.part.4+0x84/0xb0
> [c0000007fad93de0] [c0000007ffc1f7d8] 0xc0000007ffc1f7d8 (unreliable)
> [c0000007fad93e00] [c000000000988d3c] cpuidle_enter_state+0x1bc/0x530
> [c0000007fad93e60] [c0000000001457cc] call_cpuidle+0x4c/0x90
> [c0000007fad93e80] [c000000000145b28] do_idle+0x208/0x2f0
> [c0000007fad93ef0] [c000000000145f8c] cpu_startup_entry+0x3c/0x50
> [c0000007fad93f20] [c000000000042bc0] start_secondary+0x3b0/0x4b0
> [c0000007fad93f90] [c00000000000ac6c] start_secondary_prolog+0x10/0x14

The bug looks like SCSI running the queue inline from IRQ
context, that's not a good idea. Can you confirm the below works for
you?


diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f6097b89d5d3..78740ebf966c 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -497,7 +497,7 @@ static void scsi_run_queue(struct request_queue *q)
scsi_starved_list_run(sdev->host);

if (q->mq_ops)
- blk_mq_run_hw_queues(q, false);
+ blk_mq_run_hw_queues(q, true);
else
blk_run_queue(q);
}

--
Jens Axboe


--
Jens Axboe

2017-07-27 14:48:19

by Bart Van Assche

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On Thu, 2017-07-27 at 08:02 -0600, Jens Axboe wrote:
> The bug looks like SCSI running the queue inline from IRQ
> context, that's not a good idea. Can you confirm the below works for
> you?
>
>
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index f6097b89d5d3..78740ebf966c 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -497,7 +497,7 @@ static void scsi_run_queue(struct request_queue *q)
> scsi_starved_list_run(sdev->host);
>
> if (q->mq_ops)
> - blk_mq_run_hw_queues(q, false);
> + blk_mq_run_hw_queues(q, true);
> else
> blk_run_queue(q);
> }

Hello Jens,

scsi_run_queue() works fine if no scheduler is configured. Additionally, that
code predates the introduction of blk-mq I/O schedulers. I think it is
nontrivial for block driver authors to figure out that a queue has to be run
from process context if a scheduler has been configured that does not support
being run from interrupt context. How about adding WARN_ON_ONCE(in_interrupt())
to blk_mq_start_hw_queue(), or replacing the above patch with the following:


Subject: [PATCH] blk-mq: Make it safe to call blk_mq_start_hw_queues() from interrupt context

blk_mq_start_hw_queues() triggers a queue run. Some functions that
get called to run a queue, e.g. dd_dispatch_request(), are not IRQ-safe.
Hence run the queue asynchronously if blk_mq_start_hw_queues() is called
from interrupt context.

Signed-off-by: Bart Van Assche <[email protected]>
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 041f7b7fa0d6..c5cb3b2aabcf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1251,7 +1251,7 @@ void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx)
{
clear_bit(BLK_MQ_S_STOPPED, &hctx->state);

- blk_mq_run_hw_queue(hctx, false);
+ blk_mq_run_hw_queue(hctx, in_interrupt());
}
EXPORT_SYMBOL(blk_mq_start_hw_queue);
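
Going back to the first alternative above, a rough sketch of where such a
check could sit, layered on the blk_mq_start_hw_queue() body shown in the
patch and assuming block/blk-mq.c context; the placement is only a
suggestion, nothing like this was applied here:

void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx)
{
	/* Proposed debug aid: complain if a caller runs this in IRQ context. */
	WARN_ON_ONCE(in_interrupt());

	clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
	blk_mq_run_hw_queue(hctx, false);
}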

Thanks,

Bart.

2017-07-27 16:26:35

by Jens Axboe

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On 07/27/2017 08:47 AM, Bart Van Assche wrote:
> On Thu, 2017-07-27 at 08:02 -0600, Jens Axboe wrote:
>> The bug looks like SCSI running the queue inline from IRQ
>> context, that's not a good idea. Can you confirm the below works for
>> you?
>>
>>
>> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
>> index f6097b89d5d3..78740ebf966c 100644
>> --- a/drivers/scsi/scsi_lib.c
>> +++ b/drivers/scsi/scsi_lib.c
>> @@ -497,7 +497,7 @@ static void scsi_run_queue(struct request_queue *q)
>> scsi_starved_list_run(sdev->host);
>>
>> if (q->mq_ops)
>> - blk_mq_run_hw_queues(q, false);
>> + blk_mq_run_hw_queues(q, true);
>> else
>> blk_run_queue(q);
>> }
>
> Hello Jens,
>
> scsi_run_queue() works fine if no scheduler is configured. Additionally, that
> code predates the introduction of blk-mq I/O schedulers. I think it is
> nontrivial for block driver authors to figure out that a queue has to be run
> from process context if a scheduler has been configured that does not support
> to be run from interrupt context.

No it doesn't, you could never run the queue from interrupt context with
async == false. So I don't think that's confusing at all, you should
always be aware of the context.

> How about adding WARN_ON_ONCE(in_interrupt()) to
> blk_mq_start_hw_queue() or replacing the above patch by the following:

No, I hate having dependencies like that, because they always just catch
one of them. Looks like the IPR path that hits this should just offload
to a workqueue or similar, you don't have to make any scsi_run_queue()
async.
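
As a generic illustration of that kind of offload (all names below are
invented; the actual ipr change appears later in this thread), the idea is
to push the unblock into a work item rather than doing it from the
timer/IRQ path:

#include <linux/workqueue.h>
#include <scsi/scsi_host.h>

/* Invented driver-private state for the sketch. */
struct demo_ioa {
	struct Scsi_Host *host;
	struct work_struct unblock_work;
};

static void demo_unblock_worker(struct work_struct *work)
{
	struct demo_ioa *ioa = container_of(work, struct demo_ioa, unblock_work);

	/* Process context now, so a synchronous queue run is fine. */
	scsi_unblock_requests(ioa->host);
}

/* Called from the timer/IRQ path: defer instead of unblocking inline. */
static void demo_reset_done(struct demo_ioa *ioa)
{
	schedule_work(&ioa->unblock_work);
}

/* During probe: INIT_WORK(&ioa->unblock_work, demo_unblock_worker); */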

--
Jens Axboe

2017-07-28 06:19:40

by Michael Ellerman

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

Jens Axboe <[email protected]> writes:
> On 07/27/2017 08:47 AM, Bart Van Assche wrote:
>> On Thu, 2017-07-27 at 08:02 -0600, Jens Axboe wrote:
>>> The bug looks like SCSI running the queue inline from IRQ
>>> context, that's not a good idea.
...
>>
>> scsi_run_queue() works fine if no scheduler is configured. Additionally, that
>> code predates the introduction of blk-mq I/O schedulers. I think it is
>> nontrivial for block driver authors to figure out that a queue has to be run
>> from process context if a scheduler has been configured that does not support
>> to be run from interrupt context.
>
> No it doesn't, you could never run the queue from interrupt context with
> async == false. So I don't think that's confusing at all, you should
> always be aware of the context.
>
>> How about adding WARN_ON_ONCE(in_interrupt()) to
>> blk_mq_start_hw_queue() or replacing the above patch by the following:
>
> No, I hate having dependencies like that, because they always just catch
> one of them. Looks like the IPR path that hits this should just offload
> to a workqueue or similar, you don't have to make any scsi_run_queue()
> async.

OK, so the resolution is "fix it in IPR" ?

cheers

2017-07-28 14:25:53

by Jens Axboe

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On 07/28/2017 12:19 AM, Michael Ellerman wrote:
> Jens Axboe <[email protected]> writes:
>> On 07/27/2017 08:47 AM, Bart Van Assche wrote:
>>> On Thu, 2017-07-27 at 08:02 -0600, Jens Axboe wrote:
>>>> The bug looks like SCSI running the queue inline from IRQ
>>>> context, that's not a good idea.
> ...
>>>
>>> scsi_run_queue() works fine if no scheduler is configured. Additionally, that
>>> code predates the introduction of blk-mq I/O schedulers. I think it is
>>> nontrivial for block driver authors to figure out that a queue has to be run
>>> from process context if a scheduler has been configured that does not support
>>> to be run from interrupt context.
>>
>> No it doesn't, you could never run the queue from interrupt context with
>> async == false. So I don't think that's confusing at all, you should
>> always be aware of the context.
>>
>>> How about adding WARN_ON_ONCE(in_interrupt()) to
>>> blk_mq_start_hw_queue() or replacing the above patch by the following:
>>
>> No, I hate having dependencies like that, because they always just catch
>> one of them. Looks like the IPR path that hits this should just offload
>> to a workqueue or similar, you don't have to make any scsi_run_queue()
>> async.
>
> OK, so the resolution is "fix it in IPR" ?

I'll leave that to the SCSI crew. But at least one bug is in IPR, if you
look at the call trace:

- timer function triggers, runs ipr_reset_timer_done(), which grabs the
host lock AND disables interrupts.
- further down in the call path, ipr_ioa_bringdown_done() unconditionally
enables interrupts:

spin_unlock_irq(ioa_cfg->host->host_lock);
scsi_unblock_requests(ioa_cfg->host);
spin_lock_irq(ioa_cfg->host->host_lock);

And the call to scsi_unblock_requests() is the one that ultimately runs
the queue. The IRQ issue aside here, scsi_unblock_requests() could run
the queue async, and we could retain the normal sync run otherwise.
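
To spell the IRQ issue out with a made-up fragment (host_lock below stands
in for ioa_cfg->host->host_lock, both functions assume the lock is held on
entry, and in the second the flags are assumed to come from the caller's
spin_lock_irqsave()):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(host_lock);

static void demo_bringdown_done(void)
{
	/*
	 * What the trace shows: spin_unlock_irq() unconditionally re-enables
	 * interrupts, even though the caller here is a timer callback that
	 * entered with them disabled.
	 */
	spin_unlock_irq(&host_lock);
	/* ... call out with the lock dropped ... */
	spin_lock_irq(&host_lock);
}

static void demo_bringdown_done_safe(unsigned long flags)
{
	/* Restores whatever IRQ state the caller saved, so it nests safely. */
	spin_unlock_irqrestore(&host_lock, flags);
	/* ... */
	spin_lock_irqsave(&host_lock, flags);
}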

Can you try the below fix? Should be more palatable than the previous
one. Brian, maybe you can take a look at the IRQ issue mentioned above?

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f6097b89d5d3..dfb89596af81 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -481,13 +481,14 @@ static void scsi_starved_list_run(struct Scsi_Host *shost)
* Purpose: Select a proper request queue to serve next
*
* Arguments: q - last request's queue
+ * async - run queues async, if we need to
*
* Returns: Nothing
*
* Notes: The previous command was completely finished, start
* a new one if possible.
*/
-static void scsi_run_queue(struct request_queue *q)
+static void scsi_run_queue(struct request_queue *q, bool async)
{
struct scsi_device *sdev = q->queuedata;

@@ -497,7 +498,7 @@ static void scsi_run_queue(struct request_queue *q)
scsi_starved_list_run(sdev->host);

if (q->mq_ops)
- blk_mq_run_hw_queues(q, false);
+ blk_mq_run_hw_queues(q, async);
else
blk_run_queue(q);
}
@@ -509,7 +510,7 @@ void scsi_requeue_run_queue(struct work_struct *work)

sdev = container_of(work, struct scsi_device, requeue_work);
q = sdev->request_queue;
- scsi_run_queue(q);
+ scsi_run_queue(q, false);
}

/*
@@ -543,17 +544,22 @@ static void scsi_requeue_command(struct request_queue *q, struct scsi_cmnd *cmd)
blk_requeue_request(q, req);
spin_unlock_irqrestore(q->queue_lock, flags);

- scsi_run_queue(q);
+ scsi_run_queue(q, true);

put_device(&sdev->sdev_gendev);
}

-void scsi_run_host_queues(struct Scsi_Host *shost)
+static void __scsi_run_host_queues(struct Scsi_Host *shost, bool async)
{
struct scsi_device *sdev;

shost_for_each_device(sdev, shost)
- scsi_run_queue(sdev->request_queue);
+ scsi_run_queue(sdev->request_queue, async);
+}
+
+void scsi_run_host_queues(struct Scsi_Host *shost)
+{
+ __scsi_run_host_queues(shost, false);
}

static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
@@ -671,7 +677,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
blk_finish_request(req, error);
spin_unlock_irqrestore(q->queue_lock, flags);

- scsi_run_queue(q);
+ scsi_run_queue(q, false);
}

put_device(&sdev->sdev_gendev);
@@ -2293,7 +2299,7 @@ EXPORT_SYMBOL(scsi_block_requests);
void scsi_unblock_requests(struct Scsi_Host *shost)
{
shost->host_self_blocked = 0;
- scsi_run_host_queues(shost);
+ __scsi_run_host_queues(shost, true);
}
EXPORT_SYMBOL(scsi_unblock_requests);

@@ -2897,10 +2903,10 @@ scsi_device_quiesce(struct scsi_device *sdev)
if (err)
return err;

- scsi_run_queue(sdev->request_queue);
+ scsi_run_queue(sdev->request_queue, false);
while (atomic_read(&sdev->device_busy)) {
msleep_interruptible(200);
- scsi_run_queue(sdev->request_queue);
+ scsi_run_queue(sdev->request_queue, false);
}
return 0;
}
@@ -2924,7 +2930,7 @@ void scsi_device_resume(struct scsi_device *sdev)
mutex_lock(&sdev->state_mutex);
if (sdev->sdev_state == SDEV_QUIESCE &&
scsi_device_set_state(sdev, SDEV_RUNNING) == 0)
- scsi_run_queue(sdev->request_queue);
+ scsi_run_queue(sdev->request_queue, false);
mutex_unlock(&sdev->state_mutex);
}
EXPORT_SYMBOL(scsi_device_resume);

--
Jens Axboe

2017-07-28 15:14:45

by Bart Van Assche

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On Fri, 2017-07-28 at 08:25 -0600, Jens Axboe wrote:
> On 07/28/2017 12:19 AM, Michael Ellerman wrote:
> > OK, so the resolution is "fix it in IPR" ?
>
> I'll leave that to the SCSI crew. But at least one bug is in IPR, if you
> look at the call trace:
>
> - timer function triggers, runs ipr_reset_timer_done(), which grabs the
> host lock AND disables interrupts.
> - further down in the call path, ipr_ioa_bringdown_done() unconditionally
> enables interrupts:
>
> spin_unlock_irq(ioa_cfg->host->host_lock);
> scsi_unblock_requests(ioa_cfg->host);
> spin_lock_irq(ioa_cfg->host->host_lock);
>
> And the call to scsi_unblock_requests() is the one that ultimately runs
> the queue. The IRQ issue aside here, scsi_unblock_requests() could run
> the queue async, and we could retain the normal sync run otherwise.
>
> Can you try the below fix? Should be more palatable than the previous
> one. Brian, maybe you can take a look at the IRQ issue mentioned above?
>
> [ ... ]

Hello Jens,

Are there other block drivers that can call blk_mq_start_hw_queues() from
interrupt context? I'm currently working on converting the skd driver
(drivers/block/skd_main.c) from a single-queue block driver into a blk-mq
driver. The skd driver calls blk_start_queue() from interrupt context. As we
know, it is not safe to call blk_mq_start_hw_queues() from interrupt context.
Can you recommend how I should proceed: should I implement a solution in
the skd driver, or should the blk-mq core perhaps be modified?

Thanks,

Bart.

2017-07-28 15:18:28

by Jens Axboe

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On 07/28/2017 09:13 AM, Bart Van Assche wrote:
> On Fri, 2017-07-28 at 08:25 -0600, Jens Axboe wrote:
>> On 07/28/2017 12:19 AM, Michael Ellerman wrote:
>>> OK, so the resolution is "fix it in IPR" ?
>>
>> I'll leave that to the SCSI crew. But at least one bug is in IPR, if you
>> look at the call trace:
>>
>> - timer function triggers, runs ipr_reset_timer_done(), which grabs the
>> host lock AND disables interrupts.
>> - further down in the call path, ipr_ioa_bringdown_done() unconditionally
>> enables interrupts:
>>
>> spin_unlock_irq(ioa_cfg->host->host_lock);
>> scsi_unblock_requests(ioa_cfg->host);
>> spin_lock_irq(ioa_cfg->host->host_lock);
>>
>> And the call to scsi_unblock_requests() is the one that ultimately runs
>> the queue. The IRQ issue aside here, scsi_unblock_requests() could run
>> the queue async, and we could retain the normal sync run otherwise.
>>
>> Can you try the below fix? Should be more palatable than the previous
>> one. Brian, maybe you can take a look at the IRQ issue mentioned above?
>>
>> [ ... ]
>
> Hello Jens,
>
> Are there other block drivers that can call blk_mq_start_hw_queues()
> from interrupt context? I'm currently working on converting the skd
> driver (drivers/block/skd_main.c) from a single queue block driver
> into a scsi-mq driver. The skd driver calls blk_start_queue() from
> interrupt context. As we know it is not safe to call
> blk_mq_start_hw_queues() from interrupt context. Can you recommend me
> how I should proceed: should I implement a solution in the skd driver
> or should perhaps the blk-mq core be modified?

Great that you are converting that driver! If there's a need for it, we
could always expose the sync/async need in blk_mq_start_hw_queues().
From a quick look at the driver, it's using start queue very liberally.
Would probably make sense to see which ones of those are actually
needed. For resource management, we've got better interfaces on the
blk-mq side, for instance.

Since this is a conversion, might make sense to not modify
blk_mq_start_hw_queues() and simply provide an alternative
blk_mq_start_hw_queues_async(). That will keep the conversion
straightforward. Then the next step could be to fix up skd, and then we could
drop the _async() variant again, hopefully.
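
A rough sketch of what such a temporary helper could look like; it does not
exist in the tree, the loop is assumed, and the per-queue body just mirrors
blk_mq_start_hw_queue() from the patch earlier in the thread, only with an
async run (block/blk-mq.c context assumed):

void blk_mq_start_hw_queues_async(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	queue_for_each_hw_ctx(q, hctx, i) {
		clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
		/*
		 * async == true punts the run to a workqueue, so this would
		 * be safe to call from IRQ context.
		 */
		blk_mq_run_hw_queue(hctx, true);
	}
}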

--
Jens Axboe

2017-07-28 20:42:03

by Brian King

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On 07/28/2017 10:17 AM, Brian J King wrote:
> Jens Axboe <[email protected]> wrote on 07/28/2017 09:25:48 AM:
>
>> Can you try the below fix? Should be more palatable than the previous
>> one. Brian, maybe you can take a look at the IRQ issue mentioned above?

Michael,

Does this address the issue you are seeing?

Thanks,

Brian

8<

Index: linux-2.6.git/drivers/scsi/ipr.c
===================================================================
--- linux-2.6.git.orig/drivers/scsi/ipr.c
+++ linux-2.6.git/drivers/scsi/ipr.c
@@ -3351,6 +3351,16 @@ static void ipr_worker_thread(struct wor
return;
}

+ if (ioa_cfg->scsi_unblock) {
+ ioa_cfg->scsi_unblock = 0;
+ ioa_cfg->scsi_blocked = 0;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->scsi_blocked)
+ scsi_block_requests(ioa_cfg->host);
+ }
+
if (!ioa_cfg->scan_enabled) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return;
@@ -7211,9 +7221,8 @@ static int ipr_ioa_bringdown_done(struct
ENTER;
if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) {
ipr_trace;
- spin_unlock_irq(ioa_cfg->host->host_lock);
- scsi_unblock_requests(ioa_cfg->host);
- spin_lock_irq(ioa_cfg->host->host_lock);
+ ioa_cfg->scsi_unblock = 1;
+ schedule_work(&ioa_cfg->work_q);
}

ioa_cfg->in_reset_reload = 0;
@@ -7287,13 +7296,7 @@ static int ipr_ioa_reset_done(struct ipr
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
wake_up_all(&ioa_cfg->reset_wait_q);

- spin_unlock(ioa_cfg->host->host_lock);
- scsi_unblock_requests(ioa_cfg->host);
- spin_lock(ioa_cfg->host->host_lock);
-
- if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds)
- scsi_block_requests(ioa_cfg->host);
-
+ ioa_cfg->scsi_unblock = 1;
schedule_work(&ioa_cfg->work_q);
LEAVE;
return IPR_RC_JOB_RETURN;
@@ -9249,8 +9252,11 @@ static void _ipr_initiate_ioa_reset(stru
spin_unlock(&ioa_cfg->hrrq[i]._lock);
}
wmb();
- if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa)
+ if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) {
+ ioa_cfg->scsi_unblock = 0;
+ ioa_cfg->scsi_blocked = 1;
scsi_block_requests(ioa_cfg->host);
+ }

ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
ioa_cfg->reset_cmd = ipr_cmd;
@@ -9306,9 +9312,8 @@ static void ipr_initiate_ioa_reset(struc
wake_up_all(&ioa_cfg->reset_wait_q);

if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) {
- spin_unlock_irq(ioa_cfg->host->host_lock);
- scsi_unblock_requests(ioa_cfg->host);
- spin_lock_irq(ioa_cfg->host->host_lock);
+ ioa_cfg->scsi_unblock = 1;
+ schedule_work(&ioa_cfg->work_q);
}
return;
} else {
Index: linux-2.6.git/drivers/scsi/ipr.h
===================================================================
--- linux-2.6.git.orig/drivers/scsi/ipr.h
+++ linux-2.6.git/drivers/scsi/ipr.h
@@ -1488,6 +1488,8 @@ struct ipr_ioa_cfg {
u8 cfg_locked:1;
u8 clear_isr:1;
u8 probe_done:1;
+ u8 scsi_unblock:1;
+ u8 scsi_blocked:1;

u8 revid;


2017-07-31 12:03:37

by Michael Ellerman

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

Brian King <[email protected]> writes:

> On 07/28/2017 10:17 AM, Brian J King wrote:
>> Jens Axboe <[email protected]> wrote on 07/28/2017 09:25:48 AM:
>>
>>> Can you try the below fix? Should be more palatable than the previous
>>> one. Brian, maybe you can take a look at the IRQ issue mentioned above?
>
> Michael,
>
> Does this address the issue you are seeing?

Yes it seems to, thanks.

I only see the trace on reboot, and not 100% of the time. But I've
survived a couple of reboots now without seeing anything, so I think
this is helping.

I'll put the patch in my Jenkins overnight and let you know how it
survives that, which should be ~= 25 boots.

cheers

2017-08-01 06:54:20

by Michael Ellerman

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

Michael Ellerman <[email protected]> writes:

> Brian King <[email protected]> writes:
>
>> On 07/28/2017 10:17 AM, Brian J King wrote:
>>> Jens Axboe <[email protected]> wrote on 07/28/2017 09:25:48 AM:
>>>
>>>> Can you try the below fix? Should be more palatable than the previous
>>>> one. Brian, maybe you can take a look at the IRQ issue mentioned above?
>>
>> Michael,
>>
>> Does this address the issue you are seeing?
>
> Yes it seems to, thanks.
>
> I only see the trace on reboot, and not 100% of the time. But I've
> survived a couple of reboots now without seeing anything, so I think
> this is helping.
>
> I'll put the patch in my Jenkins over night and let you know how it
> survives that, which should be ~= 25 boots.

No lockdep warnings or other oddness overnight, so that patch looks
good to me.

cheers

2017-08-01 06:55:07

by Michael Ellerman

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

Jens Axboe <[email protected]> writes:
...
>
> Can you try the below fix? Should be more palatable than the previous
> one. Brian, maybe you can take a look at the IRQ issue mentioned above?

Given the patch from Brian fixed the lockdep warning, do you still want
me to try and test this one?

cheers

> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index f6097b89d5d3..dfb89596af81 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -481,13 +481,14 @@ static void scsi_starved_list_run(struct Scsi_Host *shost)
> * Purpose: Select a proper request queue to serve next
> *
> * Arguments: q - last request's queue
> + * async - run queues async, if we need to
> *
> * Returns: Nothing
> *
> * Notes: The previous command was completely finished, start
> * a new one if possible.
> */
> -static void scsi_run_queue(struct request_queue *q)
> +static void scsi_run_queue(struct request_queue *q, bool async)
> {
> struct scsi_device *sdev = q->queuedata;
>
> @@ -497,7 +498,7 @@ static void scsi_run_queue(struct request_queue *q)
> scsi_starved_list_run(sdev->host);
>
> if (q->mq_ops)
> - blk_mq_run_hw_queues(q, false);
> + blk_mq_run_hw_queues(q, async);
> else
> blk_run_queue(q);
> }
> @@ -509,7 +510,7 @@ void scsi_requeue_run_queue(struct work_struct *work)
>
> sdev = container_of(work, struct scsi_device, requeue_work);
> q = sdev->request_queue;
> - scsi_run_queue(q);
> + scsi_run_queue(q, false);
> }
>
> /*
> @@ -543,17 +544,22 @@ static void scsi_requeue_command(struct request_queue *q, struct scsi_cmnd *cmd)
> blk_requeue_request(q, req);
> spin_unlock_irqrestore(q->queue_lock, flags);
>
> - scsi_run_queue(q);
> + scsi_run_queue(q, true);
>
> put_device(&sdev->sdev_gendev);
> }
>
> -void scsi_run_host_queues(struct Scsi_Host *shost)
> +static void __scsi_run_host_queues(struct Scsi_Host *shost, bool async)
> {
> struct scsi_device *sdev;
>
> shost_for_each_device(sdev, shost)
> - scsi_run_queue(sdev->request_queue);
> + scsi_run_queue(sdev->request_queue, async);
> +}
> +
> +void scsi_run_host_queues(struct Scsi_Host *shost)
> +{
> + __scsi_run_host_queues(shost, false);
> }
>
> static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
> @@ -671,7 +677,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
> blk_finish_request(req, error);
> spin_unlock_irqrestore(q->queue_lock, flags);
>
> - scsi_run_queue(q);
> + scsi_run_queue(q, false);
> }
>
> put_device(&sdev->sdev_gendev);
> @@ -2293,7 +2299,7 @@ EXPORT_SYMBOL(scsi_block_requests);
> void scsi_unblock_requests(struct Scsi_Host *shost)
> {
> shost->host_self_blocked = 0;
> - scsi_run_host_queues(shost);
> + __scsi_run_host_queues(shost, true);
> }
> EXPORT_SYMBOL(scsi_unblock_requests);
>
> @@ -2897,10 +2903,10 @@ scsi_device_quiesce(struct scsi_device *sdev)
> if (err)
> return err;
>
> - scsi_run_queue(sdev->request_queue);
> + scsi_run_queue(sdev->request_queue, false);
> while (atomic_read(&sdev->device_busy)) {
> msleep_interruptible(200);
> - scsi_run_queue(sdev->request_queue);
> + scsi_run_queue(sdev->request_queue, false);
> }
> return 0;
> }
> @@ -2924,7 +2930,7 @@ void scsi_device_resume(struct scsi_device *sdev)
> mutex_lock(&sdev->state_mutex);
> if (sdev->sdev_state == SDEV_QUIESCE &&
> scsi_device_set_state(sdev, SDEV_RUNNING) == 0)
> - scsi_run_queue(sdev->request_queue);
> + scsi_run_queue(sdev->request_queue, false);
> mutex_unlock(&sdev->state_mutex);
> }
> EXPORT_SYMBOL(scsi_device_resume);
>
> --
> Jens Axboe

2017-08-01 15:21:39

by Brian King

Subject: [PATCH] ipr: Fix scsi-mq lockdep issue

Fixes the following lockdep warning that can occur when scsi-mq is enabled
with ipr due to ipr calling scsi_unblock_requests from irq context. The fix
is to move the call to scsi_unblock_requests to ipr's existing workqueue.

stack backtrace:
CPU: 28 PID: 0 Comm: swapper/28 Not tainted 4.13.0-rc2-gcc6x-gf74c89b #1
Call Trace:
[c000001fffe97550] [c000000000b50818] dump_stack+0xe8/0x160 (unreliable)
[c000001fffe97590] [c0000000001586d0] print_usage_bug+0x2d0/0x390
[c000001fffe97640] [c000000000158f34] mark_lock+0x7a4/0x8e0
[c000001fffe976f0] [c00000000015a000] __lock_acquire+0x6a0/0x1a70
[c000001fffe97860] [c00000000015befc] lock_acquire+0xec/0x2e0
[c000001fffe97930] [c000000000b71514] _raw_spin_lock+0x44/0x70
[c000001fffe97960] [c0000000005b60f4] blk_mq_sched_dispatch_requests+0xa4/0x2a0
[c000001fffe979c0] [c0000000005acac0] __blk_mq_run_hw_queue+0x100/0x2c0
[c000001fffe97a00] [c0000000005ad478] __blk_mq_delay_run_hw_queue+0x118/0x130
[c000001fffe97a40] [c0000000005ad61c] blk_mq_start_hw_queues+0x6c/0xa0
[c000001fffe97a80] [c000000000797aac] scsi_kick_queue+0x2c/0x60
[c000001fffe97aa0] [c000000000797cf0] scsi_run_queue+0x210/0x360
[c000001fffe97b10] [c00000000079b888] scsi_run_host_queues+0x48/0x80
[c000001fffe97b40] [c0000000007b6090] ipr_ioa_bringdown_done+0x70/0x1e0
[c000001fffe97bc0] [c0000000007bc860] ipr_reset_ioa_job+0x80/0xf0
[c000001fffe97bf0] [c0000000007b4d50] ipr_reset_timer_done+0xd0/0x100
[c000001fffe97c30] [c0000000001937bc] call_timer_fn+0xdc/0x4b0
[c000001fffe97cf0] [c000000000193d08] expire_timers+0x178/0x330
[c000001fffe97d60] [c0000000001940c8] run_timer_softirq+0xb8/0x120
[c000001fffe97de0] [c000000000b726a8] __do_softirq+0x168/0x6d8
[c000001fffe97ef0] [c0000000000df2c8] irq_exit+0x108/0x150
[c000001fffe97f10] [c000000000017bf4] __do_irq+0x2a4/0x4a0
[c000001fffe97f90] [c00000000002da50] call_do_irq+0x14/0x24
[c0000007fad93aa0] [c000000000017e8c] do_IRQ+0x9c/0x140
[c0000007fad93af0] [c000000000008b98] hardware_interrupt_common+0x138/0x140

Reported-by: Michael Ellerman <[email protected]>
Signed-off-by: Brian King <[email protected]>
---

Index: linux-2.6.git/drivers/scsi/ipr.c
===================================================================
--- linux-2.6.git.orig/drivers/scsi/ipr.c
+++ linux-2.6.git/drivers/scsi/ipr.c
@@ -3351,6 +3351,16 @@ static void ipr_worker_thread(struct wor
return;
}

+ if (ioa_cfg->scsi_unblock) {
+ ioa_cfg->scsi_unblock = 0;
+ ioa_cfg->scsi_blocked = 0;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->scsi_blocked)
+ scsi_block_requests(ioa_cfg->host);
+ }
+
if (!ioa_cfg->scan_enabled) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return;
@@ -7211,9 +7221,8 @@ static int ipr_ioa_bringdown_done(struct
ENTER;
if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) {
ipr_trace;
- spin_unlock_irq(ioa_cfg->host->host_lock);
- scsi_unblock_requests(ioa_cfg->host);
- spin_lock_irq(ioa_cfg->host->host_lock);
+ ioa_cfg->scsi_unblock = 1;
+ schedule_work(&ioa_cfg->work_q);
}

ioa_cfg->in_reset_reload = 0;
@@ -7287,13 +7296,7 @@ static int ipr_ioa_reset_done(struct ipr
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
wake_up_all(&ioa_cfg->reset_wait_q);

- spin_unlock(ioa_cfg->host->host_lock);
- scsi_unblock_requests(ioa_cfg->host);
- spin_lock(ioa_cfg->host->host_lock);
-
- if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds)
- scsi_block_requests(ioa_cfg->host);
-
+ ioa_cfg->scsi_unblock = 1;
schedule_work(&ioa_cfg->work_q);
LEAVE;
return IPR_RC_JOB_RETURN;
@@ -9249,8 +9252,11 @@ static void _ipr_initiate_ioa_reset(stru
spin_unlock(&ioa_cfg->hrrq[i]._lock);
}
wmb();
- if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa)
+ if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) {
+ ioa_cfg->scsi_unblock = 0;
+ ioa_cfg->scsi_blocked = 1;
scsi_block_requests(ioa_cfg->host);
+ }

ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
ioa_cfg->reset_cmd = ipr_cmd;
@@ -9306,9 +9312,8 @@ static void ipr_initiate_ioa_reset(struc
wake_up_all(&ioa_cfg->reset_wait_q);

if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) {
- spin_unlock_irq(ioa_cfg->host->host_lock);
- scsi_unblock_requests(ioa_cfg->host);
- spin_lock_irq(ioa_cfg->host->host_lock);
+ ioa_cfg->scsi_unblock = 1;
+ schedule_work(&ioa_cfg->work_q);
}
return;
} else {
Index: linux-2.6.git/drivers/scsi/ipr.h
===================================================================
--- linux-2.6.git.orig/drivers/scsi/ipr.h
+++ linux-2.6.git/drivers/scsi/ipr.h
@@ -1488,6 +1488,8 @@ struct ipr_ioa_cfg {
u8 cfg_locked:1;
u8 clear_isr:1;
u8 probe_done:1;
+ u8 scsi_unblock:1;
+ u8 scsi_blocked:1;

u8 revid;


2017-08-01 15:26:13

by Jens Axboe

Subject: Re: blk_mq_sched_insert_request: inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage

On 08/01/2017 12:55 AM, Michael Ellerman wrote:
> Jens Axboe <[email protected]> writes:
> ...
>>
>> Can you try the below fix? Should be more palatable than the previous
>> one. Brian, maybe you can take a look at the IRQ issue mentioned above?
>
> Given the patch from Brian fixed the lockdep warning, do you still want
> me to try and test this one?

Nope, we don't have to do that. I'd much rather just add a WARN_ON()
or similar to make sure we catch buggy users earlier. scsi_run_queue()
needs a

WARN_ON(in_interrupt());

but it might be better to put that in __blk_mq_run_hw_queue().
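
Roughly, and only as an illustration of the placement (nothing like this
was applied as part of this thread):

static void scsi_run_queue(struct request_queue *q)
{
	struct scsi_device *sdev = q->queuedata;

	/* Proposed debug check to catch IRQ-context callers early. */
	WARN_ON(in_interrupt());

	/* ... rest of scsi_run_queue() unchanged ... */
}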

--
Jens Axboe

2017-08-07 17:42:12

by Martin K. Petersen

Subject: Re: [PATCH] ipr: Fix scsi-mq lockdep issue


Brian,

> Fixes the following lockdep warning that can occur when scsi-mq is
> enabled with ipr due to ipr calling scsi_unblock_requests from irq
> context. The fix is to move the call to scsi_unblock_requests to ipr's
> existing workqueue.

Applied to 4.13/scsi-fixes. Thank you!

--
Martin K. Petersen Oracle Linux Engineering