While the kernel was resyncing a raid5 array on 4 SATA disks, this happened:
------------[ cut here ]------------
kernel BUG at drivers/md/raid5.c:2398!
invalid opcode: 0000 [1] SMP
CPU 0
Modules linked in: acquirewdt advantechwdt hpwdt iTCO_wdt ib700wdt pcwd_pci pcwd_usb wafer5823wdt wdt_pci ipmi_watchdog ipmi_msghandler ipv6 nf_conntrack_ipv4 xt_state nf_conntrack xt_tcpudp ipt_REJECT iptable_filter
xt_CLASSIFY iptable_mangle ip_tables x_tables sch_sfq sch_htb forcedeth ext3 jbd mbcache raid456 async_xor async_memcpy async_tx xor raid1 usbkbd usb_storage libusual ohci_hcd ssb pcmcia pcmcia_core firmware_class
ehci_hcd usbcore xfs scsi_wait_scan sd_mod sata_nv libata dock scsi_mod raid10 md_mod
Pid: 1619, comm: md3_resync Not tainted 2.6.25.6-vanilla-1 #1
RIP: 0010:[<ffffffff8816d740>] [<ffffffff8816d740>] :raid456:cleanup_module+0x4620/0x8380
RSP: 0018:ffff81007c511b40 EFLAGS: 00010202
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff81007c448b30
RDX: 0000000000000004 RSI: 0000000000000000 RDI: 00000000ffffffff
RBP: ffff81007c448ac0 R08: ffff81007c448b38 R09: 0000000000000000
R10: ffff81007c448b38 R11: 0000000000000003 R12: 0000000000000003
R13: ffff81007cd26000 R14: 0000000000000003 R15: 0000000000000000
FS: 0000000041a2d960(0000) GS:ffffffff805b2000(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00007f2a52084000 CR3: 000000002c906000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process md3_resync (pid: 1619, threadinfo ffff81007c510000, task ffff81007e95f5a0)
Stack: 0000000000000000 00000004010060f0 0000000000000001 0000000000000000
0000000000000003 0000000000000000 0000000000000000 0000000000000000
0000000000000000 0000000000000000 0000000000000001 ffff81007c448ac0
Call Trace:
[<ffffffff8816ee59>] ? :raid456:cleanup_module+0x5d39/0x8380
[<ffffffff88169726>] ? :raid456:cleanup_module+0x606/0x8380
[<ffffffff8816a246>] ? :raid456:cleanup_module+0x1126/0x8380
[<ffffffff8817037e>] ? :raid456:cleanup_module+0x725e/0x8380
[<ffffffff80332868>] ? generic_unplug_device+0x18/0x30
[<ffffffff80331395>] ? blk_unplug+0x35/0x70
[<ffffffff88169220>] ? :raid456:cleanup_module+0x100/0x8380
[<ffffffff8800657e>] ? :md_mod:md_do_sync+0x32e/0x1370
[<ffffffff8024cb60>] ? autoremove_wake_function+0x0/0x30
[<ffffffff88009480>] ? :md_mod:md_check_recovery+0x1ec0/0x2220
[<ffffffff880094b0>] ? :md_mod:md_check_recovery+0x1ef0/0x2220
[<ffffffff88009480>] ? :md_mod:md_check_recovery+0x1ec0/0x2220
[<ffffffff8024c75b>] ? kthread+0x4b/0x80
[<ffffffff8020cfe8>] ? child_rip+0xa/0x12
[<ffffffff8024c710>] ? kthread+0x0/0x80
[<ffffffff8020cfde>] ? child_rip+0x0/0x12
Code: 00 00 00 01 83 44 24 1c 01 e9 5d f6 ff ff f0 0f ba 6d 38 06 f0 0f ba 6d 38 02 e9 bf fe ff ff 0f 0b eb fe 0f 0b eb fe 0f 0b eb fe <0f> 0b eb fe 66 66 66 2e 0f 1f 84 00 00 00 00 00 41 57 49 89 f8
RIP [<ffffffff8816d740>] :raid456:cleanup_module+0x4620/0x8380
RSP <ffff81007c511b40>
---[ end trace f55178690f3a1b5d ]---
--
Arkadiusz Miśkiewicz PLD/Linux Team
arekm / maven.pl http://ftp.pld-linux.org/
On Wednesday July 9, [email protected] wrote:
>
> While the kernel was resyncing a raid5 array on 4 SATA disks, this happened:
Thanks for the report.
>
> ------------[ cut here ]------------
> kernel BUG at drivers/md/raid5.c:2398!
So in handle_parity_checks5, s->uptodate is not == disks.
Not good (obviously).
We only get into handle_parity_checks5 if:
STRIPE_OP_CHECK or STRIPE_OP_MODE_REPAIR_PD
are set in sh->ops.pending
or
s.syncing and s.locked == 0 (and some other stuff).
The first two bits only get set inside handle_parity_checks5,
so the first time handle_parity_checks5 was called on this
stripe_head, s.syncing was true and s.locked == 0.
If s.syncing, and s.uptodate < disks, then we will have already
called handle_issuing_new_read_requests5 which will have tried to read
all disks that aren't uptodate, so
s.uptodate + s.locked == disks
which makes the BUG impossible .... except.....
If we already have uptodate == disks-1, then it doesn't read
the missing block and falls straight down to the BUG.
Dan: I think this is your code. In
__handle_issuing_new_read_requests5
the
} else if ((s->uptodate < disks - 1) &&
test_bit(R5_Insync, &dev->flags)) {
looks wrong. We at least want a test on s->syncing in there, maybe:
} else if (((s->uptodate < disks - 1) || s->syncing) &&
test_bit(R5_Insync, &dev->flags)) {
and given that we only compute blocks when a device is failed (see 15
lines earlier), I think we probably just want
} else if (test_bit(R5_Insync, &dev->flags)) {
I notice that the buggy test is still there in linux-next (though the
function is renamed - it is fetch_block5 there).
I wonder if there is still time for 2.6.26 .. probably not. It'll be
released immediately after lwn.net release their weekly edition :-)
Arkadiusz: a reboot (which you have probably done already) is all you
can do here. Your array will resync, and almost certainly won't hit
the bug again. There should be no data loss.
NeilBrown
On Wed, 2008-07-09 at 18:07 -0700, Neil Brown wrote:
> Dan: I think this is your code. In
> __handle_issuing_new_read_requests5
> the
> } else if ((s->uptodate < disks - 1) &&
> test_bit(R5_Insync, &dev->flags)) {
>
> looks wrong. We at least want a test on s->syncing in there, maybe:
> } else if (((s->uptodate < disks - 1) || s->syncing)
> &&
> test_bit(R5_Insync, &dev->flags)) {
>
> and given that we only compute blocks when a device is failed (see 15
> lines earlier), I think we probably just want
> } else if (test_bit(R5_Insync, &dev->flags)) {
>
> I notice that the buggy test is still there in linux-next (though the
> function is renamed - it is fetch_block5 there).
Yes, I had realized it was obsolete... missed that it was buggy.
>
> I wonder if there is still time for 2.6.26 .. probably not. It'll be
> released immediately after lwn.net release their weekly edition :-)
Here is a patch against latest mainline.
---snip--->
md: ensure all blocks are uptodate or locked when syncing
From: Dan Williams <[email protected]>
Remove the dubious attempt to prefer 'compute' over 'read'. Not only is it
wrong given commit c337869d (md: do not compute parity unless it is on a failed
drive), but it can trigger a BUG_ON in handle_parity_checks5().
Cc: <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/md/raid5.c | 7 +------
1 files changed, 1 insertions(+), 6 deletions(-)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 54c8ee2..3b27df5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2017,12 +2017,7 @@ static int __handle_issuing_new_read_requests5(struct stripe_head *sh,
*/
s->uptodate++;
return 0; /* uptodate + compute == disks */
- } else if ((s->uptodate < disks - 1) &&
- test_bit(R5_Insync, &dev->flags)) {
- /* Note: we hold off compute operations while checks are
- * in flight, but we still prefer 'compute' over 'read'
- * hence we only read if (uptodate < * disks-1)
- */
+ } else if (test_bit(R5_Insync, &dev->flags)) {
set_bit(R5_LOCKED, &dev->flags);
set_bit(R5_Wantread, &dev->flags);
if (!test_and_set_bit(STRIPE_OP_IO, &sh->ops.pending))