2004-09-13 12:27:57

by J.A. Magallon

Subject: Problems with raid in 2.6.9-rc1-mm4

Hi...

I am trying to set up a RAID under Linux 2.6. I succeeded on another box, building
a raid5 array on top of SATA drives.

In this box, I have a 240GB disk plus two 120GB ones:

annwn:~# fdisk -l /dev/hd[acd]

Disk /dev/hda: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1       30515   245111706   fd  Linux raid autodetect

Disk /dev/hdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1               1       14593   117218241   fd  Linux raid autodetect

Disk /dev/hdd: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1               1       14593   117218241   fd  Linux raid autodetect

As the disks are of different sizes, I thought about making a volume out of the
small ones, and then a raid0 of that with the big one.
As striping across two IDE disks on the same IDE bus is silly, I wanted to
do a linear array. So this is what I would like:

/dev/md1 (raid0) -- /dev/hda1 (240GB)
                 \- /dev/md0 (raid linear) -- /dev/hdc1
                                           \- /dev/hdd1

I built both arrays with:

#!/bin/bash

# Stop and (re)create the linear array of the two small disks
mdadm -S /dev/md0
mdadm --create --verbose --run \
      /dev/md0 --level=linear --chunk=64 \
      --raid-devices=2 \
      /dev/hdc1 /dev/hdd1

# Stop and (re)create the stripe of the big disk with the linear array
mdadm -S /dev/md1
mdadm --create --verbose --run \
      /dev/md1 --level=0 --chunk=64 \
      --raid-devices=2 \
      /dev/hda1 /dev/md0
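
For completeness, the state of the arrays after creation can be inspected with
the usual commands:

  cat /proc/mdstat
  mdadm --detail /dev/md0
  mdadm --detail /dev/md1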

But on reboot md0 is autodetected, while md1 is not. Why?
Log:

Sep 13 13:12:49 annwn kernel: md: linear personality registered as nr 1
Sep 13 13:12:49 annwn kernel: md: raid0 personality registered as nr 2
Sep 13 13:12:49 annwn kernel: md: raid1 personality registered as nr 3
Sep 13 13:12:49 annwn kernel: md: raid10 personality registered as nr 9
Sep 13 13:12:49 annwn kernel: md: raid5 personality registered as nr 4
Sep 13 13:12:49 annwn kernel: raid5: automatically using best checksumming function: pIII_sse
Sep 13 13:12:49 annwn kernel: pIII_sse : 2248.000 MB/sec
Sep 13 13:12:49 annwn kernel: raid5: using function: pIII_sse (2248.000 MB/sec)
Sep 13 13:12:49 annwn kernel: raid6: int32x1 468 MB/s
Sep 13 13:12:49 annwn kernel: raid6: int32x2 585 MB/s
Sep 13 13:12:49 annwn rpc.statd[3113]: statd running as root. chown /var/lib/nfs/sm to choose different user
Sep 13 13:12:49 annwn kernel: raid6: int32x4 476 MB/s
Sep 13 13:12:49 annwn kernel: raid6: int32x8 355 MB/s
Sep 13 13:12:49 annwn kernel: raid6: mmxx1 1460 MB/s
Sep 13 13:12:49 annwn kernel: raid6: mmxx2 1832 MB/s
Sep 13 13:12:49 annwn kernel: raid6: sse1x1 882 MB/s
Sep 13 13:12:49 annwn kernel: raid6: sse1x2 1609 MB/s
Sep 13 13:12:49 annwn kernel: raid6: sse2x1 1964 MB/s
Sep 13 13:12:49 annwn kernel: raid6: sse2x2 2000 MB/s
Sep 13 13:12:49 annwn kernel: raid6: using algorithm sse2x2 (2000 MB/s)
Sep 13 13:12:49 annwn kernel: md: raid6 personality registered as nr 8
Sep 13 13:12:49 annwn kernel: md: multipath personality registered as nr 7
Sep 13 13:12:49 annwn kernel: md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
Sep 13 13:12:49 annwn kernel: md: Autodetecting RAID arrays.
Sep 13 13:12:49 annwn kernel: md: autorun ...
Sep 13 13:12:49 annwn kernel: md: considering hdd1 ...
Sep 13 13:12:49 annwn kernel: md: adding hdd1 ...
Sep 13 13:12:49 annwn kernel: md: adding hdc1 ...
Sep 13 13:12:49 annwn kernel: md: hda1 has different UUID to hdd1
Sep 13 13:12:49 annwn kernel: md: created md0
Sep 13 13:12:49 annwn kernel: md: bind<hdc1>
Sep 13 13:12:49 annwn kernel: md: bind<hdd1>
Sep 13 13:12:49 annwn kernel: md: running: <hdd1><hdc1>
Sep 13 13:12:49 annwn kernel: md: considering hda1 ...
Sep 13 13:12:49 annwn kernel: md: adding hda1 ...
Sep 13 13:12:49 annwn kernel: md: created md1
Sep 13 13:12:49 annwn kernel: md: bind<hda1>
Sep 13 13:12:49 annwn kernel: md: running: <hda1>
Sep 13 13:12:49 annwn kernel: md1: setting max_sectors to 128, segment boundary to 32767
Sep 13 13:12:49 annwn kernel: raid0: looking at hda1
Sep 13 13:12:49 annwn kernel: raid0: comparing hda1(245111616) with hda1(245111616)
Sep 13 13:12:49 annwn kernel: raid0: END
Sep 13 13:12:49 annwn kernel: raid0: ==> UNIQUE
Sep 13 13:12:49 annwn kernel: raid0: 1 zones
Sep 13 13:12:49 annwn kernel: raid0: FINAL 1 zones
Sep 13 13:12:49 annwn kernel: raid0: too few disks (1 of 2) - aborting!
Sep 13 13:12:49 annwn kernel: md: pers->run() failed ...
Sep 13 13:12:49 annwn kernel: md :do_md_run() returned -22
Sep 13 13:12:49 annwn kernel: md: md1 stopped.
Sep 13 13:12:49 annwn kernel: md: unbind<hda1>
Sep 13 13:12:49 annwn kernel: md: export_rdev(hda1)
Sep 13 13:12:49 annwn kernel: md: ... autorun DONE.

What is the problem? Can't the kernel detect that md0 is part of an array?
Is this setup possible at all?
Do I have to partition md0 and create an 'fd' partition on it?

As an aside, I have thought of this as a possible solution: split hda in two, and
build a raid0 with hda1,hdc1,hda2,hdd1 (hda is faster...). Is this as stupid as it
looks to me (think of replacing hda2 when it fails...)?


TIA



--
J.A. Magallon <jamagallon()able!es> \ Software is like sex:
werewolf!able!es \ It's better when it's free
Mandrakelinux release 10.1 (RC 1) for i586
Linux 2.6.9-rc1-mm4 (gcc 3.4.1 (Mandrakelinux (Alpha 3.4.1-3mdk)) #2



2004-09-14 01:13:57

by NeilBrown

Subject: Re: Problems with raid in 2.6.9-rc1-mm4

On Monday September 13, [email protected] wrote:
...
> As the disks are of different sizes, I thought about making a volume out of the
> small ones, and then a raid0 of that with the big one.
> As striping across two IDE disks on the same IDE bus is silly, I wanted to
> do a linear array. So this is what I would like:
>
> /dev/md1 (raid0) -- /dev/hda1 (240GB)
>                  \- /dev/md0 (raid linear) -- /dev/hdc1
>                                            \- /dev/hdd1
>
....
>
> What is the problem? Can't the kernel detect that md0 is part of an
> array?

No, it cannot.

> Is this setup possible at all?

Yes. But don't use "autodetect" to start the array.
Use "mdadm". It is more flexible.

Or even, just use raid0 across all 3 drives. raid0 copes quite
happily with drives of different sizes.
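
For example, a sketch with the same partitions (chunk size as in your
script):

  mdadm --create --verbose --run /dev/md0 --level=0 --chunk=64 \
        --raid-devices=3 /dev/hda1 /dev/hdc1 /dev/hdd1

raid0 stripes across all three drives while each still has space left,
then carries on over the remainder of the largest one.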

> Do I have to partition md0 and create an 'fd' partition on it?

It might work, but I wouldn't suggest it.

>
> As an aside, I have thought of this as a possible solution: split hda in two, and
> build a raid0 with hda1,hdc1,hda2,hdd1 (hda is faster...). Is this as stupid as it
> looks to me (think of replacing hda2 when it fails...)?

This would not be a clever idea for performance reasons.

If any drive fails you have lost all your data and will need to
restore from backups, so the possibility that hda2 dies is not
particularly interesting.

NeilBrown