The first of these fixes issues with the new bmap-based bitmap file
access code and should possibly be an -mm hotfix; without it,
'internal' bitmaps don't work any more :-(
The others are minor and unrelated.
Thanks,
NeilBrown
[PATCH 001 of 3] md: Change md/bitmap file handling to use bmap to file blocks-fix
[PATCH 002 of 3] md: Fix inverted test for 'repair' directive.
[PATCH 003 of 3] md: Calculate correct array size for raid10 in new offset mode.
On 16/05/2006 1:12 p.m., NeilBrown wrote:
> The first of these fixes issues with the new bmap based bitmap file
> access code, and possibly should be an -mm hotfix, and without it,
> 'internal' bitmaps don't work any more :-(
>
> Others are minor and unrelated.
>
> Thanks,
> NeilBrown
>
>
> [PATCH 001 of 3] md: Change md/bitmap file handling to use bmap to file blocks-fix
> [PATCH 002 of 3] md: Fix inverted test for 'repair' directive.
> [PATCH 003 of 3] md: Calculate correct array size for raid10 in new offset mode.
Patch 1 fixes the problems I was having with RAID-1 arrays not being able to
start up on 2.6.17-rc4-mm1. Thanks for that.
However, things still appear not quite right on boot, as each mount works but
displays as though it didn't, i.e.:
md: considering sdc2 ...
md: adding sdc2 ...
md: adding sda2 ...
md: created md0
md: bind<sda2>
md: bind<sdc2>
md: running: <sdc2><sda2>
raid1: raid set md0 active with 0 out of 2 mirrors
0 out of 2 ?
cat /proc/mdstat shows that everything does in fact seem to be working:
md0 : active raid1 sdc2[1] sda2[0]
24410688 blocks [2/2] [UU]
bitmap: 21/187 pages [84KB], 64KB chunk
The array otherwise seems to be fine. I guess it's just a visual glitch.
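(Aside, a sketch not from the thread: the "[2/2] [UU]" fields in the mdstat
output above carry the real array health, so a quick shell check of those
counts is enough to confirm the boot message is only cosmetic. The sample
line is copied from above; on a live system you would read /proc/mdstat.)

```shell
# Sketch: extract the active/total member counts from an mdstat status line.
line='24410688 blocks [2/2] [UU]'
status=$(echo "$line" | grep -o '\[[0-9]*/[0-9]*\]')   # -> "[2/2]"
total=${status#[}; total=${total%%/*}                  # text before the slash
active=${status##*/}; active=${active%]}               # text after the slash
echo "$active of $total members active"
# prints: 2 of 2 members active
```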
reuben
On Thursday May 18, [email protected] wrote:
>
> However things appear still not quite right on boot, as each mount works but
> displays as though it didn't work, ie:
>
> md: considering sdc2 ...
> md: adding sdc2 ...
> md: adding sda2 ...
> md: created md0
> md: bind<sda2>
> md: bind<sdc2>
> md: running: <sdc2><sda2>
> raid1: raid set md0 active with 0 out of 2 mirrors
>
> 0 out of 2 ?
That is fixed by this patch, which I thought I had submitted...
Time to get the latest -mm and see which of my patches are still pending,
I guess.
Thanks,
NeilBrown
------------------------------
Fix the recently broken calculation of 'degraded' for raid1
A recent patch broke this code: rdev doesn't have a meaningful
value at this point - disk->rdev is what should be used.
Signed-off-by: Neil Brown <[email protected]>
### Diffstat output
./drivers/md/raid1.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff ./drivers/md/raid1.c~current~ ./drivers/md/raid1.c
--- ./drivers/md/raid1.c~current~ 2006-05-02 14:15:28.000000000 +1000
+++ ./drivers/md/raid1.c 2006-05-02 14:15:44.000000000 +1000
@@ -1889,7 +1889,7 @@ static int run(mddev_t *mddev)
disk = conf->mirrors + i;
if (!disk->rdev ||
- !test_bit(In_sync, &rdev->flags)) {
+ !test_bit(In_sync, &disk->rdev->flags)) {
disk->head_position = 0;
mddev->degraded++;
}