On Sat, 31 Jan 2015 19:43:15 -0800 Fengguang Wu <[email protected]>
wrote:
> Hi all,
>
> I see 2 __might_sleep() warnings when running LKP tests on
> v3.19-rc6, one related to raid5 and another related to btrfs.
>
> They might be exposed by this patch.
>
> commit 8eb23b9f35aae413140d3fda766a98092c21e9b0
> Author: Peter Zijlstra <[email protected]>
>
> sched: Debug nested sleeps
>
> Validate we call might_sleep() with TASK_RUNNING, which catches places
> where we nest blocking primitives, e.g. mutex usage in a wait loop.
>
> Since all blocking is arranged through task_struct::state, nesting
> this will cause the inner primitive to set TASK_RUNNING and the outer
> will thus not block.
>
> Another observed problem is calling a blocking function from
> schedule()->sched_submit_work()->blk_schedule_flush_plug() which will
> then destroy the task state for the actual __schedule() call that
> comes after it.
>
>
> dmesg-ivb44:20150129001242:x86_64-rhel:3.19.0-rc6-g26bc420b:1
>
>
> FSUse% Count Size Files/sec App Overhead
> [ 60.691525] ------------[ cut here ]------------
> [ 60.697499] WARNING: CPU: 0 PID: 1065 at kernel/sched/core.c:7300 __might_sleep+0xbd/0xd0()
> [ 60.709010] do not call blocking ops when !TASK_RUNNING; state=2 set at [<ffffffff810b63ff>] prepare_to_wait+0x2f/0x90
> [ 60.721646] Modules linked in: f2fs raid456 async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq ipmi_watchdog netconsole sg sd_mod mgag200 syscopyarea sysfillrect isci sysimgblt libsas ttm snd_pcm ahci snd_timer drm_kms_helper scsi_transport_sas libahci snd sb_edac soundcore drm libata edac_core i2c_i801 pcspkr wmi ipmi_si ipmi_msghandler
> [ 60.759585] CPU: 0 PID: 1065 Comm: kworker/u481:6 Not tainted 3.19.0-rc6-g26bc420b #1
> [ 60.769025] Hardware name: Intel Corporation S2600WP/S2600WP, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
> [ 60.781193] Workqueue: writeback bdi_writeback_workfn (flush-9:0)
> [ 60.788725] ffffffff81b75d50 ffff88080979b3e8 ffffffff818a38f0 ffff88081ee100f8
> [ 60.797820] ffff88080979b438 ffff88080979b428 ffffffff8107260a ffff88080979b428
> [ 60.806879] ffffffff81b8c759 00000000000004d9 0000000000000000 0000000063fbe018
> [ 60.815935] Call Trace:
> [ 60.819368] [<ffffffff818a38f0>] dump_stack+0x4c/0x65
> [ 60.825817] [<ffffffff8107260a>] warn_slowpath_common+0x8a/0xc0
> [ 60.833269] [<ffffffff81072686>] warn_slowpath_fmt+0x46/0x50
> [ 60.840379] [<ffffffff810afe95>] ? pick_next_task_fair+0x1b5/0x8d0
> [ 60.848104] [<ffffffff810b63ff>] ? prepare_to_wait+0x2f/0x90
> [ 60.855215] [<ffffffff810b63ff>] ? prepare_to_wait+0x2f/0x90
> [ 60.862337] [<ffffffff8109874d>] __might_sleep+0xbd/0xd0
> [ 60.869044] [<ffffffff811c7cd7>] kmem_cache_alloc_trace+0x1d7/0x250
> [ 60.876830] [<ffffffff817175d7>] ? bitmap_get_counter+0x117/0x280
> [ 60.884429] [<ffffffff817175d7>] bitmap_get_counter+0x117/0x280
> [ 60.891807] [<ffffffff810f6d02>] ? __module_text_address+0x12/0x70
> [ 60.899452] [<ffffffff81717f54>] bitmap_startwrite+0x74/0x300
> [ 60.906601] [<ffffffffa017659a>] add_stripe_bio+0x2aa/0x350 [raid456]
> [ 60.914518] [<ffffffffa017d20d>] make_request+0x1dd/0xf30 [raid456]
This one is a false positive - I think.
It is certainly true that if the inner primitive needs to block, then the
outer loop will not wait. However that case is the exception. Most of the
time the inner blocking primitive isn't called and the outer loop will wait as
expected. Certainly the inner blocking primitive (a kmalloc) wouldn't be
called more than once without the outer loop making real progress.
If the outer loop sometimes goes around an extra time, that is no great cost.
However I see the value in having these warnings, even if they don't work for
me.
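For anyone not following closely, the shape of the code in question is
roughly this (a sketch, not the exact md code):

	DEFINE_WAIT(wait);
	void *ptr = NULL;

	for (;;) {
		prepare_to_wait(&wq, &wait, TASK_UNINTERRUPTIBLE);
		if (condition)
			break;
		/* This allocation can block.  Blocking resets the task
		 * to TASK_RUNNING, so the schedule() below may not wait
		 * at all - that is the nesting the new check warns
		 * about.  Here the allocation happens at most once
		 * before the condition becomes true, so the loop cannot
		 * spin indefinitely.
		 */
		if (!ptr)
			ptr = kmalloc(size, GFP_NOIO);
		schedule();
	}
	finish_wait(&wq, &wait);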
I guess I could
__set_current_state(TASK_RUNNING);
somewhere to defeat the warning, and add a comment explaining why.
Would that be a good thing?
Thanks,
NeilBrown
(hijacking the thread a bit... I no longer have the patch that I want to reply
to in my mailbox: the subject still matches...)
I just got a might-sleep warning in my own testing.
This was introduced by
commit e22b886a8a43b147e1994a9f970f678fc0df2033
Author: Peter Zijlstra <[email protected]>
Date: Wed Sep 24 10:18:48 2014 +0200
sched/wait: Add might_sleep() checks
In particular:
@@ -318,6 +320,7 @@ do {									\
  */
 #define wait_event_cmd(wq, condition, cmd1, cmd2)			\
 do {									\
+	might_sleep();							\
 	if (condition)							\
 		break;							\
 	__wait_event_cmd(wq, condition, cmd1, cmd2);			\
Where I call this in raid5_quiesce(), 'cmd1' releases a lock and enables
interrupts, and 'cmd2' takes the lock and disables interrupts.
So it is perfectly OK to sleep at the point where schedule() is called, but
not at the point where wait_event_cmd() is called.
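The call in question looks roughly like this (trimmed; the two helpers drop
and re-take every device hash lock):

	conf->quiesce = 2;
	wait_event_cmd(conf->wait_for_stripe,
		       atomic_read(&conf->active_stripes) == 0 &&
		       atomic_read(&conf->active_aligned_reads) == 0,
		       unlock_all_device_hash_locks_irq(conf),
		       lock_all_device_hash_locks_irq(conf));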
I can't use wait_event_lock_irq_cmd() as there are actually several spinlocks
I need to manipulate.
So I'm hoping that this part of the patch (at least) can be reverted.
Otherwise I guess I'll need to use __wait_event_cmd().
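For reference, __wait_event_cmd() expands to (roughly):

	#define __wait_event_cmd(wq, condition, cmd1, cmd2)		\
		(void)___wait_event(wq, condition, TASK_UNINTERRUPTIBLE,\
				    0, 0, cmd1; schedule(); cmd2)

so cmd1 runs immediately before schedule() and cmd2 immediately after it; the
only problematic check is the unconditional might_sleep() at the top of
wait_event_cmd().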
Thanks,
NeilBrown
On Sun, Feb 1, 2015 at 3:03 PM, NeilBrown <[email protected]> wrote:
>
> I guess I could
> __set_current_state(TASK_RUNNING);
> somewhere to defeat the warning, and add a comment explaining why.
>
> Would that be a good thing?
Use "sched_annotate_sleep()" instead, but yes, add a comment about why it's ok.
Linus
On Sun, 1 Feb 2015 21:08:12 -0800 Linus Torvalds
<[email protected]> wrote:
> On Sun, Feb 1, 2015 at 3:03 PM, NeilBrown <[email protected]> wrote:
> >
> > I guess I could
> > __set_current_state(TASK_RUNNING);
> > somewhere to defeat the warning, and add a comment explaining why.
> >
> > Would that be a good thing?
>
> Use "sched_annotate_sleep()" instead, but yes, add a comment about why it's ok.
>
> Linus
OK - the following patch is queued to appear in a pull request tomorrow (I hope).
Thanks,
NeilBrown
From: NeilBrown <[email protected]>
Date: Mon, 2 Feb 2015 17:08:03 +1100
Subject: [PATCH] md/bitmap: fix a might_sleep() warning.
commit 8eb23b9f35aae413140d3fda766a98092c21e9b0
    sched: Debug nested sleeps

causes false-positive warnings in RAID5 code.
This annotation removes them and adds a comment
explaining why there is no real problem.
Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
index da3604e73e8a..1695ee5f3ffc 100644
--- a/drivers/md/bitmap.c
+++ b/drivers/md/bitmap.c
@@ -72,6 +72,19 @@ __acquires(bitmap->lock)
 		/* this page has not been allocated yet */
 
 		spin_unlock_irq(&bitmap->lock);
+		/* It is possible that this is being called inside a
+		 * prepare_to_wait/finish_wait loop from raid5.c:make_request().
+		 * In general it is not permitted to sleep in that context as it
+		 * can cause the loop to spin freely.
+		 * That doesn't apply here as we can only reach this point
+		 * once with any loop.
+		 * When this function completes, either bp[page].map or
+		 * bp[page].hijacked will be set.  In either case, this
+		 * function will abort before getting to this point again.
+		 * So there is no risk of a free-spin, and so it is safe to
+		 * assert that sleeping here is allowed.
+		 */
+		sched_annotate_sleep();
 		mappage = kzalloc(PAGE_SIZE, GFP_NOIO);
 		spin_lock_irq(&bitmap->lock);