I have 2.6.27 and am trying these commands:
# create raid6 with 4 block devices
mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/loop{1,2,3,4}
# wait to sync
sleep 3
# create and mount FS
mkfs.ext2 /dev/md2
mount /dev/md2 -t ext2 /mnt
# add block device and grow, wait for resync
mdadm --add /dev/md2 /dev/loop5
mdadm --grow /dev/md2 --raid-devices=5
sleep 3
# dmesg prints: VFS: busy inodes on changed media.
# umount hangs ...
umount /dev/md2
# also: "mdadm --query /dev/md2" hangs in D state.
attached: qemu script (start.sh), init script (mini), .config (config)
I'm not on the list, please CC me.
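For reference, the loop devices behind the test are set up along these lines (a sketch; the backing-file names and the 16 MB size are my assumptions, the real setup is in the attached scripts):

```shell
# create five small backing files and attach them to /dev/loop1..5
# (file names and size are assumptions; see the attached start.sh/mini)
for i in 1 2 3 4 5; do
    dd if=/dev/zero of=/tmp/disk$i bs=1M count=16
    losetup /dev/loop$i /tmp/disk$i
done
```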
also sprach roma1390 <[email protected]> [2009.03.21.2129 +0100]:
> # add block device and grow, wait for resync
> mdadm --add /dev/md2 /dev/loop5
> mdadm --grow /dev/md2 --raid-devices=5
> sleep 3
>
> # dmesg prints: VFS: busy inodes on changed media.
> # umount hangs ...
> umount /dev/md2
I suggest unmounting before growing?
--
martin | http://madduck.net/ | http://two.sentenc.es/
infinite loop: see 'loop, infinite'.
loop, infinite: see 'infinite loop'.
spamtraps: [email protected]
On Sun, Mar 22, 2009 at 11:20, martin f krafft <[email protected]> wrote:
> also sprach roma1390 <[email protected]> [2009.03.21.2129 +0100]:
>> # add block device and grow, wait for resync
>> mdadm --add /dev/md2 /dev/loop5
>> mdadm --grow /dev/md2 --raid-devices=5
>> sleep 3
>>
>> # dmesg prints: VFS: busy inodes on changed media.
>> # umount hangs ...
>> umount /dev/md2
>
> I suggest unmounting before growing?
So this hang is by design? I think it should return EBUSY if it can't
continue the operation.
The documentation claims that Linux MD can grow RAID5 and RAID6 arrays
online, just as you can resize an ext3 filesystem online.
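For reference, the documented online-grow sequence would look roughly like this (a sketch; the reshape-wait loop and the resize2fs step are my assumptions about the intended procedure, not part of the failing test above):

```shell
# grow the array by one device, wait for the reshape to finish,
# then grow the filesystem to use the new space
mdadm --add /dev/md2 /dev/loop5
mdadm --grow /dev/md2 --raid-devices=5
while grep -q reshape /proc/mdstat; do sleep 1; done
# ext3 can be resized while mounted; plain ext2 must be resized unmounted
resize2fs /dev/md2
```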
On Sun, Mar 22, 2009 at 12:41, roma1390 <[email protected]> wrote:
> On Sun, Mar 22, 2009 at 11:20, martin f krafft <[email protected]> wrote:
>> also sprach roma1390 <[email protected]> [2009.03.21.2129 +0100]:
>>> # add block device and grow, wait for resync
>>> mdadm --add /dev/md2 /dev/loop5
>>> mdadm --grow /dev/md2 --raid-devices=5
>>> sleep 3
>>>
>>> # dmesg prints: VFS: busy inodes on changed media.
>>> # umount hangs ...
>>> umount /dev/md2
>>
>> I suggest unmounting before growing?
>
> So this hang is by design? I think it should return EBUSY if it can't
> continue the operation.
> The documentation claims that Linux MD can grow RAID5 and RAID6 arrays
> online, just as you can resize an ext3 filesystem online.
Got sysrq output: pending timers, blocked-task state, task traces, and full dmesg.
On Saturday March 21, [email protected] wrote:
> i have 2.6.27
> trying thease command
>
> # create raid6 with 4 block devices
> mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/loop{1,2,3,4}
> # wait to sync
> sleep 3
> # create and mount FS
> mkfs.ext2 /dev/md2
> mount /dev/md2 -t ext2 /mnt
>
> # add block device and grow, wait for resync
> mdadm --add /dev/md2 /dev/loop5
> mdadm --grow /dev/md2 --raid-devices=5
> sleep 3
>
> # dmesg prints: VFS: busy inodes on changed media.
> # umount hangs ...
> umount /dev/md2
> # also: "mdadm --query /dev/md2" hangs in D state.
Thanks for the detailed report.
I can almost reproduce this.
However when I run it, the "mdadm --grow" prints
mdadm: Need to backup 384K of critical section..
(as expected) and then hangs.
If I interrupt it and proceed, then the umount hangs.
Is this the case for you?
The umount hangs because there is something important that mdadm needs
to do which it didn't do because it was interrupted. During the
'critical section', mdadm causes all writes to the start of the device
to be blocked. The umount tries to write the filesystem superblock
and hangs.
You can test if this is the problem by running the command
cat /sys/block/md2/md/suspend_hi > /sys/block/md2/md/suspend_lo
That should allow the 'umount' to complete.
This will only happen on very small arrays that take less than a
couple of seconds for the reshape to complete. I have a patch for
mdadm which makes it more robust in this situation. It will be in
future releases.
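Until a fixed mdadm is released, one way to avoid the guessed "sleep 3" is to wait for the reshape to actually finish (a sketch; assumes the kernel reports "reshape" in /proc/mdstat while one is running):

```shell
# poll until /proc/mdstat no longer shows a reshape in progress
while grep -q reshape /proc/mdstat; do
    sleep 1
done
```

Note this only replaces the fixed sleep; it does not help if mdadm itself is stuck in the critical section, as in this report.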
Does this explain what is happening to you?
Thanks,
NeilBrown
Neil Brown wrote:
> On Saturday March 21, [email protected] wrote:
> I can almost reproduce this.
> However when I run it, the "mdadm --grow" prints
>   mdadm: Need to backup 384K of critical section..
> (as expected) and then hangs.
>
> If I interrupt it and proceed, then the umount hangs.
>
> Is this the case for you?
Yes, mdadm hangs by itself. If I start mdadm in the background and wait
some time for reconstruction, umount still hangs. I didn't touch mdadm,
and it just stays stuck...
> The umount hangs because there is something important that mdadm needs
> to do which it didn't do because it was interrupted. During the
> 'critical section', mdadm causes all writes to the start of the device
> to be blocked. The umount tries to write the filesystem superblock
> and hangs.
> You can test if this is the problem by running the command
>
> cat /sys/block/md2/md/suspend_hi > /sys/block/md2/md/suspend_lo
First try was:
mdadm --grow /dev/md2 --raid-devices=5 &
sleep 3
cat /sys/block/md2/md/suspend_hi
# output: 768
cat /sys/block/md2/md/suspend_lo
# output: 0
cat /sys/block/md2/md/suspend_hi > /sys/block/md2/md/suspend_lo
sleep 20
umount /dev/md2
And this works! mdadm unhangs, and umount doesn't block any more.
> That should allow the 'umount' to complete.
>
> This will only happen on very small arrays that take less than a
> couple of seconds for the reshape to complete. I have a patch for
> mdadm which makes it more robust in this situation. It will be in
> future releases.
>
> Does this explain what is happening to you?
Yes, thanks. Maybe if I retest this situation with the same kernel but
limited recovery bandwidth, I won't hit the same problem again?
> Thanks,
> NeilBrown
Thanks.
roma1390 wrote:
> Neil Brown wrote:
> > This will only happen on very small arrays that take less than a
> > couple of seconds for the reshape to complete. I have a patch for
> > mdadm which makes it more robust in this situation. It will be in
> > future releases.
> >
> > Does this explain what is happening to you?
>
> Yes, thanks. Maybe if I retest this situation with the same kernel but
> limited recovery bandwidth, I won't hit the same problem again?
Yes, limiting the sync speed is also a possible workaround for this problem:
...
echo 512 > /sys/block/md2/md/sync_speed_min
echo 512 > /sys/block/md2/md/sync_speed_max
mkfs.ext2 /dev/md2
mount /dev/md2 -t ext2 /mnt
mdadm --add /dev/md2 /dev/loop5
mdadm --grow /dev/md2 --raid-devices=5
...
Bigger values (like 10000) don't help.
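To confirm the throttle actually took effect, the current speed can be read back (a sketch; sync_speed reports KiB/s):

```shell
# read back the current resync/reshape speed and overall progress
cat /sys/block/md2/md/sync_speed
grep -A 2 '^md2' /proc/mdstat
```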
Thanks.