2007-05-15 23:23:41

by Jeff Zheng

Subject: Software raid0 will crash the file-system, when each disk is 5TB

Hi everyone:

We are experiencing problems with software raid0 on very large disk arrays.
We are using two 3ware disk array controllers, each with 8 750GB hard
drives connected, and we built a software raid0 on top of that. The
total capacity is 5.5TB+5.5TB=11TB.

We use jfs as the file-system, and we have a test application that writes
data continuously to the disks. After writing 52 10GB files, jfs
crashed, and we are not able to recover it; fsck doesn't recognise it
anymore.
We then tried xfs with the same application; it lasted a little longer,
but gave a kernel crash later.

We then reconfigured the hardware arrays: this time we configured two
disk arrays from each controller, so we have 4 disk arrays, each with
4 750GB hard drives, and built a new software raid0 on top of that.
The total capacity is still the same, but now 2.75T+2.75T+2.75T+2.75T=11T.

This time we managed to fill the whole 11T without problems; we are
still validating all 11TB of data written to the disks.

It happened on both 2.6.20 and 2.6.13.

So I think the problem is in the way software raid handles very large
disks, maybe an integer overflow or something. I've searched the web
and only found one other person complaining about the same thing, on the
xfs mailing list.

Anybody have a clue?


Jeff


2007-05-15 23:30:21

by Michal Piotrowski

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB

[Ingo, Neil, linux-raid added to CC]

On 16/05/07, Jeff Zheng <[email protected]> wrote:
> Hi everyone:
>
> We are experiencing problems with software raid0, with very
> large disk arrays.
> We are using two 3ware disk array controllers, each of them is connected
> 8 750GB harddrives. And we build a software raid0 on top of that. The
> total capacity is 5.5TB+5.5TB=11TB
>
> We use jfs as the file-system, we have a test application that write
> data continuously to the disks. After writing 52 10GB files, jfs
> crashed. And we are not able to recover it, fsck doesn't recognise it
> anymore.
> We then tried xfs, same application, lasted a little longer, but gives
> kernel crash later.
>
> We then reconfigured the hardware array, this time we configured two
> disk array from each controller, than we have 4 disk arrays, each of
> them have 4 750GB harddrives. Than build a new software raid0 on top of
> that. Total capacity is still the same, but 2.75T+2.75T+2.75T+2.75T=11T.
>
> This time we managed to fill the whole 11T data without problem, we are
> still doing validation on all 11TB of data written to the disks.
>
> It happened on 2.6.20 and 2.6.13.
>
> So I think the problem is in the way on software raid handling very
> large disk, maybe a integer overflow or something. I've searched on the
> web, only find another guy complaining the same thing on the xfs mailing
> list.
>
> Anybody have a clue?
>
>
> Jeff

Regards,
Michal

--
Michal K. K. Piotrowski
Kernel Monkeys
(http://kernel.wikidot.com/start)

2007-05-16 00:04:20

by NeilBrown

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB

On Wednesday May 16, [email protected] wrote:
> >
> > Anybody have a clue?
> >

No...
When a raid0 array is assembled, quite a lot of messages get printed
about the number of zones, hash_spacing, etc. Can you collect and post
those, both for the failing case (2*5.5T) and the working case
(4*2.75T), if possible?

NeilBrown

2007-05-16 01:56:46

by Jeff Zheng

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

Here is the information on the created raid0. I hope it is enough.

Jeff

The crashing one:
md: bind<sdd>
md: bind<sde>
md: raid0 personality registered for level 0
md0: setting max_sectors to 4096, segment boundary to 1048575
raid0: looking at sde
raid0: comparing sde(5859284992) with sde(5859284992)
raid0: END
raid0: ==> UNIQUE
raid0: 1 zones
raid0: looking at sdd
raid0: comparing sdd(5859284992) with sde(5859284992)
raid0: EQUAL
raid0: FINAL 1 zones
raid0: done.
raid0 : md_size is 11718569984 blocks.
raid0 : conf->hash_spacing is 11718569984 blocks.
raid0 : nb_zone is 2.
raid0 : Allocating 8 bytes for hash.
JFS: nTxBlock = 8192, nTxLock = 65536

The working one:
md: bind<sde>
md: bind<sdf>
md: bind<sdg>
md: bind<sdd>
md0: setting max_sectors to 4096, segment boundary to 1048575
raid0: looking at sdd
raid0: comparing sdd(2929641472) with sdd(2929641472)
raid0: END
raid0: ==> UNIQUE
raid0: 1 zones
raid0: looking at sdg
raid0: comparing sdg(2929641472) with sdd(2929641472)
raid0: EQUAL
raid0: looking at sdf
raid0: comparing sdf(2929641472) with sdd(2929641472)
raid0: EQUAL
raid0: looking at sde
raid0: comparing sde(2929641472) with sdd(2929641472)
raid0: EQUAL
raid0: FINAL 1 zones
raid0: done.
raid0 : md_size is 11718565888 blocks.
raid0 : conf->hash_spacing is 11718565888 blocks.
raid0 : nb_zone is 2.
raid0 : Allocating 8 bytes for hash.
JFS: nTxBlock = 8192, nTxLock = 65536

-----Original Message-----
From: Neil Brown [mailto:[email protected]]
Sent: Wednesday, 16 May 2007 12:04 p.m.
To: Michal Piotrowski
Cc: Jeff Zheng; Ingo Molnar; [email protected];
[email protected]; [email protected]
Subject: Re: Software raid0 will crash the file-system, when each disk
is 5TB

On Wednesday May 16, [email protected] wrote:
> >
> > Anybody have a clue?
> >

No...
When a raid0 array is assemble, quite a lot of message get printed
about number of zones and hash_spacing etc. Can you collect and post
those. Both for the failing case (2*5.5T) and the working case
(4*2.55T) is possible.

NeilBrown

2007-05-16 14:05:08

by Andreas Dilger

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB

On May 16, 2007 11:09 +1200, Jeff Zheng wrote:
> We are using two 3ware disk array controllers, each of them is connected
> 8 750GB harddrives. And we build a software raid0 on top of that. The
> total capacity is 5.5TB+5.5TB=11TB
>
> We use jfs as the file-system, we have a test application that write
> data continuously to the disks. After writing 52 10GB files, jfs
> crashed. And we are not able to recover it, fsck doesn't recognise it
> anymore.
> We then tried xfs, same application, lasted a little longer, but gives
> kernel crash later.

Check if your kernel has CONFIG_LBD enabled.

The kernel doesn't check if the block layer can actually write to
a block device > 2TB.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

2007-05-16 17:27:53

by Bill Davidsen

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB

Jeff Zheng wrote:
> Here is the information of the created raid0. Hope it is enough.
>
>
If I read this correctly, the problem is with JFS rather than RAID? Have
you tried not mounting the JFS filesystem but just starting the array
that crashes, so you can read bits of it, etc., and verify that the
array itself is working?

And can you run an fsck on the filesystem, if that makes sense? I assume
you did get to actually write a filesystem at one time; I've never used
JFS under Linux. I spent five+ years using it on AIX, though; complex but
robust.
> The crashing one:
> md: bind<sdd>
> md: bind<sde>
> md: raid0 personality registered for level 0
> md0: setting max_sectors to 4096, segment boundary to 1048575
> raid0: looking at sde
> raid0: comparing sde(5859284992) with sde(5859284992)
> raid0: END
> raid0: ==> UNIQUE
> raid0: 1 zones
> raid0: looking at sdd
> raid0: comparing sdd(5859284992) with sde(5859284992)
> raid0: EQUAL
> raid0: FINAL 1 zones
> raid0: done.
> raid0 : md_size is 11718569984 blocks.
> raid0 : conf->hash_spacing is 11718569984 blocks.
> raid0 : nb_zone is 2.
> raid0 : Allocating 8 bytes for hash.
> JFS: nTxBlock = 8192, nTxLock = 65536
>
> The working one:
> md: bind<sde>
> md: bind<sdf>
> md: bind<sdg>
> md: bind<sdd>
> md0: setting max_sectors to 4096, segment boundary to 1048575
> raid0: looking at sdd
> raid0: comparing sdd(2929641472) with sdd(2929641472)
> raid0: END
> raid0: ==> UNIQUE
> raid0: 1 zones
> raid0: looking at sdg
> raid0: comparing sdg(2929641472) with sdd(2929641472)
> raid0: EQUAL
> raid0: looking at sdf
> raid0: comparing sdf(2929641472) with sdd(2929641472)
> raid0: EQUAL
> raid0: looking at sde
> raid0: comparing sde(2929641472) with sdd(2929641472)
> raid0: EQUAL
> raid0: FINAL 1 zones
> raid0: done.
> raid0 : md_size is 11718565888 blocks.
> raid0 : conf->hash_spacing is 11718565888 blocks.
> raid0 : nb_zone is 2.
> raid0 : Allocating 8 bytes for hash.
> JFS: nTxBlock = 8192, nTxLock = 65536
>
> -----Original Message-----
> From: Neil Brown [mailto:[email protected]]
> Sent: Wednesday, 16 May 2007 12:04 p.m.
> To: Michal Piotrowski
> Cc: Jeff Zheng; Ingo Molnar; [email protected];
> [email protected]; [email protected]
> Subject: Re: Software raid0 will crash the file-system, when each disk
> is 5TB
>
> On Wednesday May 16, [email protected] wrote:
>
>>> Anybody have a clue?
>>>
>>>
>
> No...
> When a raid0 array is assemble, quite a lot of message get printed
> about number of zones and hash_spacing etc. Can you collect and post
> those. Both for the failing case (2*5.5T) and the working case
> (4*2.55T) is possible.
>


--
bill davidsen <[email protected]>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979

2007-05-16 18:03:40

by David Lang

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB

On Wed, 16 May 2007, Bill Davidsen wrote:

> Jeff Zheng wrote:
>> Here is the information of the created raid0. Hope it is enough.
>>
>>
> If I read this correctly, the problem is with JFS rather than RAID?

He had the same problem with xfs.

David Lang

2007-05-16 18:09:11

by David Lang

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB

On Wed, 16 May 2007, Andreas Dilger wrote:

> On May 16, 2007 11:09 +1200, Jeff Zheng wrote:
>> We are using two 3ware disk array controllers, each of them is connected
>> 8 750GB harddrives. And we build a software raid0 on top of that. The
>> total capacity is 5.5TB+5.5TB=11TB
>>
>> We use jfs as the file-system, we have a test application that write
>> data continuously to the disks. After writing 52 10GB files, jfs
>> crashed. And we are not able to recover it, fsck doesn't recognise it
>> anymore.
>> We then tried xfs, same application, lasted a little longer, but gives
>> kernel crash later.
>
> Check if your kernel has CONFIG_LBD enabled.
>
> The kernel doesn't check if the block layer can actually write to
> a block device > 2TB.

My experience is that if you don't have CONFIG_LBD enabled, the kernel
will report the larger disk as 2T and everything will work; you just won't
get all the space.

Plus, he seems to be crashing after around 500G of data.

And finally (if I am reading the post correctly), if he configures the
drives as 4x2.75TB=11TB instead of 2x5.5TB=11TB he doesn't have the same
problem.

I'm getting ready to set up a similar machine that will have 3x10TB
(3 15-disk arrays with 750G drives), but won't be ready to try this for
a few more days.

David Lang

2007-05-16 18:19:45

by Jan Engelhardt

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB


On May 16 2007 11:04, [email protected] wrote:
>
> I'm getting ready to setup a similar machine that will have 3x10TB (3 15 disk
> arrays with 750G drives), but won't be ready to try this for a few more days.

You could emulate it with VMware. Big disks are quite "cheap" when
they are not allocated.


Jan
--

2007-05-16 21:42:44

by Jeff Zheng

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

Problem is that it only happens when you actually write data to the
raid. You need the actual space to reproduce the problem.

Jeff

-----Original Message-----
From: Jan Engelhardt [mailto:[email protected]]
Sent: Thursday, 17 May 2007 6:17 a.m.
To: [email protected]
Cc: Andreas Dilger; Jeff Zheng; [email protected];
[email protected]
Subject: Re: Software raid0 will crash the file-system, when each disk
is 5TB


On May 16 2007 11:04, [email protected] wrote:
>
> I'm getting ready to setup a similar machine that will have 3x10TB (3
> 15 disk arrays with 750G drives), but won't be ready to try this for a
few more days.

You could emulate it with VMware. Big disks are quite "cheap" when they
are not allocated.


Jan
--

2007-05-16 21:44:24

by Jeff Zheng

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB


You will definitely hit the same problem. As very large hardware disks
become more and more popular, this will become a big issue for software
raid.


Jeff

-----Original Message-----
From: [email protected] [mailto:[email protected]]
Sent: Thursday, 17 May 2007 6:04 a.m.
To: Andreas Dilger
Cc: Jeff Zheng; [email protected];
[email protected]
Subject: Re: Software raid0 will crash the file-system, when each disk
is 5TB


my experiance is taht if you don't have CONFIG_LBD enabled then the
kernel will report the larger disk as 2G and everything will work, you
just won't get all the space.

plus he seems to be crashing around 500G of data

and finally (if I am reading the post correctly) if he configures the
drives as 4x2.2TB=11TB instead of 2x5.5TB=11TB he doesn't have the same
problem.

I'm getting ready to setup a similar machine that will have 3x10TB (3 15
disk arrays with 750G drives), but won't be ready to try this for a few
more days.

David Lang

2007-05-17 00:48:39

by NeilBrown

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Wednesday May 16, [email protected] wrote:
> Here is the information of the created raid0. Hope it is enough.

Thanks.
Everything looks fine here.

The only difference of any significance between the working and
non-working configurations is that in the non-working, the component
devices are larger than 2Gig, and hence have sector offsets greater
than 32 bits.

This does cause a slightly different code path in one place, but I
cannot see it making a difference. But maybe it does.

What architecture is this running on?
What C compiler are you using?

Can you try with this patch? It is the only thing that I can find
that could conceivably go wrong.

Thanks,
NeilBrown

Signed-off-by: Neil Brown <[email protected]>

### Diffstat output
./drivers/md/raid0.c | 1 +
1 file changed, 1 insertion(+)

diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
--- .prev/drivers/md/raid0.c 2007-05-17 10:33:30.000000000 +1000
+++ ./drivers/md/raid0.c 2007-05-17 10:34:02.000000000 +1000
@@ -461,6 +461,7 @@ static int raid0_make_request (request_q

while (block >= (zone->zone_offset + zone->size))
zone++;
+ BUG_ON(block < zone->zone_offset);

sect_in_chunk = bio->bi_sector & ((chunk_size<<1) -1);

2007-05-17 02:10:04

by Jeff Zheng

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB


> The only difference of any significance between the working
> and non-working configurations is that in the non-working,
> the component devices are larger than 2Gig, and hence have
> sector offsets greater than 32 bits.

Do you mean 2T here? But in both configurations, the component devices
are larger than 2T (2.25T & 5.5T).

> This does cause a slightly different code path in one place,
> but I cannot see it making a difference. But maybe it does.
>
> What architecture is this running on?
> What C compiler are you using?

i386 (i686)
gcc 4.0.2 20051125
The distro is Fedora Core; we've tried FC4 and FC6.

> Can you try with this patch? It is the only thing that I can
> find that could conceivably go wrong.
>

OK, I will try the patch and post the result.

Best Regards
Jeff Zheng

2007-05-17 02:46:07

by NeilBrown

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Thursday May 17, [email protected] wrote:
>
> > The only difference of any significance between the working
> > and non-working configurations is that in the non-working,
> > the component devices are larger than 2Gig, and hence have
> > sector offsets greater than 32 bits.
>
> Do u mean 2T here?, but in both configuartion, the component devices are
> larger than 2T (2.25T&5.5T).

Yes, I meant 2T, and yes, the components are always over 2T. So I'm
at a complete loss. The raid0 code follows the same paths and does
the same things and uses 64bit arithmetic where needed.

So I have no idea how there could be a difference between these two
cases.

I'm at a loss...

NeilBrown

2007-05-17 03:11:49

by Jeff Zheng

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

I tried the patch; the same problem shows up, but no BUG_ON report.

Is there anything else I can do?


Jeff


> Yes, I meant 2T, and yes, the components are always over 2T.
> So I'm at a complete loss. The raid0 code follows the same
> paths and does the same things and uses 64bit arithmetic where needed.
>
> So I have no idea how there could be a difference between
> these two cases.
>
> I'm at a loss...
>
> NeilBrown
>

2007-05-17 04:32:43

by NeilBrown

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Thursday May 17, [email protected] wrote:
> I tried the patch, same problem show up, but no bug_on report
>
> Is there any other things I can do?
>

What is the nature of the corruption? Is it data in a file that is
wrong when you read it back, or does the filesystem metadata get
corrupted?

Can you try the configuration that works, and sha1sum the files after
you have written them to make sure that they really are correct?
My thought here is "maybe there is a bad block on one device, and the
block is used for data in the 'working' config, and for metadata in
the 'broken' config."

Can you try a degraded raid10 configuration? e.g.

mdadm -C /dev/md1 --level=10 --raid-disks=4 /dev/first missing \
/dev/second missing

That will lay out the data in exactly the same place as with raid0,
but will use totally different code paths to access it. If you still
get a problem, then it isn't in the raid0 code.

Maybe try version 1 metadata (mdadm --metadata=1). I doubt that would
make a difference, but as I am grasping at straws already, it may be a
straw worth trying.

NeilBrown

2007-05-17 04:53:21

by David Lang

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Thu, 17 May 2007, Neil Brown wrote:

> On Thursday May 17, [email protected] wrote:
>>
>>> The only difference of any significance between the working
>>> and non-working configurations is that in the non-working,
>>> the component devices are larger than 2Gig, and hence have
>>> sector offsets greater than 32 bits.
>>
>> Do u mean 2T here?, but in both configuartion, the component devices are
>> larger than 2T (2.25T&5.5T).
>
> Yes, I meant 2T, and yes, the components are always over 2T.

2T decimal or 2T binary?

> So I'm
> at a complete loss. The raid0 code follows the same paths and does
> the same things and uses 64bit arithmetic where needed.
>
> So I have no idea how there could be a difference between these two
> cases.
>
> I'm at a loss...
>
> NeilBrown

2007-05-17 05:03:58

by NeilBrown

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Wednesday May 16, [email protected] wrote:
> On Thu, 17 May 2007, Neil Brown wrote:
>
> > On Thursday May 17, [email protected] wrote:
> >>
> >>> The only difference of any significance between the working
> >>> and non-working configurations is that in the non-working,
> >>> the component devices are larger than 2Gig, and hence have
> >>> sector offsets greater than 32 bits.
> >>
> >> Do u mean 2T here?, but in both configuartion, the component devices are
> >> larger than 2T (2.25T&5.5T).
> >
> > Yes, I meant 2T, and yes, the components are always over 2T.
>
> 2T decimal or 2T binary?
>

Either. The smallest is actually 2.75T (typo above).
Precisely it was
2929641472 kilobytes
or
5859282944 sectors
or
0x15D3D9000 sectors.

So it is over 32bits already...

Uhm, I just noticed something.
'chunk' is unsigned long, and when it gets shifted up, we might lose
bits. That could still happen with the 4*2.75T arrangement, but is
much more likely in the 2*5.5T arrangement.

Jeff, can you try this patch?

Thanks.
NeilBrown


Signed-off-by: Neil Brown <[email protected]>

### Diffstat output
./drivers/md/raid0.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
--- .prev/drivers/md/raid0.c 2007-05-17 10:33:30.000000000 +1000
+++ ./drivers/md/raid0.c 2007-05-17 15:02:15.000000000 +1000
@@ -475,7 +475,7 @@ static int raid0_make_request (request_q
x = block >> chunksize_bits;
tmp_dev = zone->dev[sector_div(x, zone->nb_dev)];
}
- rsect = (((chunk << chunksize_bits) + zone->dev_offset)<<1)
+ rsect = ((((sector_t)chunk << chunksize_bits) + zone->dev_offset)<<1)
+ sect_in_chunk;

bio->bi_bdev = tmp_dev->bdev;

2007-05-17 05:08:19

by Jeff Zheng

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB


> What is the nature of the corruption? Is it data in a file
> that is wrong when you read it back, or does the filesystem
> metadata get corrupted?
The corruption is in fs metadata: jfs is completely destroyed; after
umount, fsck does not recognize it as jfs anymore. Xfs gives a kernel
crash, but seems still recoverable.
>
> Can you try the configuration that works, and sha1sum the
> files after you have written them to make sure that they
> really are correct?
We have verified the data on the working configuration: we wrote around
900 identical 10G files and verified that the md5sums are actually the
same. The verification took two days though :)

> My thought here is "maybe there is a bad block on one device,
> and the block is used for data in the 'working' config, and
> for metadata in the 'broken' config.
>
> Can you try a degraded raid10 configuration. e.g.
>
> mdadm -C /dev/md1 --level=10 --raid-disks=4 /dev/first missing \
> /dev/second missing
>
> That will lay out the data in exactly the same place as with
> raid0, but will use totally different code paths to access
> it. If you still get a problem, then it isn't in the raid0 code.

I will try this later today, as I'm now trying different sizes for the
components. 3.4T seems to work; I'm testing 4.1T right now.

> Maybe try version 1 metadata (mdadm --metadata=1). I doubt
> that would make a difference, but as I am grasping at straws
> already, it may be a straw woth trying.

Well, the problem may also be in the 3ware disk array, or the disk array
driver. The guy complaining about the same problem is also using a 3ware
disk array controller. But there is no way to verify that, and a single
disk array has been working fine for us.

Jeff

2007-05-17 05:31:29

by NeilBrown

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Thursday May 17, [email protected] wrote:
>
> Uhm, I just noticed something.
> 'chunk' is unsigned long, and when it gets shifted up, we might lose
> bits. That could still happen with the 4*2.75T arrangement, but is
> much more likely in the 2*5.5T arrangement.

Actually, it cannot be a problem with the 4*2.75T arrangement.
   chunk << chunksize_bits
will not exceed the size of the underlying device *in kilobytes*.
In that case that is 0xAE9EC800, which will fit in a 32bit long.
We don't double it to make sectors until after we add
zone->dev_offset, which is "sector_t", and so 64bit arithmetic is used.

So I'm quite certain this bug will cause exactly the problems
experienced!!
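
To make the arithmetic concrete, here is a small userspace sketch (not
the kernel code itself): it models the old 32-bit 'unsigned long chunk'
with uint32_t, takes the 5.5T component size from the boot log earlier
in the thread, and assumes mdadm's default 64KB chunk size:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t dev_size_kb    = 5859284992ULL; /* one 5.5T component, in KB (from the boot log) */
        unsigned chunksize_bits = 6;             /* 64KB chunks -- assumed default */
        uint64_t dev_offset     = 0;             /* single-zone array, so the zone starts at 0 */

        /* chunk index for a write near the end of the component device */
        uint32_t chunk = (uint32_t)((dev_size_kb - 64) >> chunksize_bits);

        /* old code: the shift is done in 32-bit arithmetic, losing the high bits */
        uint64_t rsect_old = ((uint64_t)(chunk << chunksize_bits) + dev_offset) << 1;
        /* patched code: widen to 64 bits (sector_t) before shifting */
        uint64_t rsect_new = (((uint64_t)chunk << chunksize_bits) + dev_offset) << 1;

        printf("32-bit shift: sector %llu\n", (unsigned long long)rsect_old);
        printf("64-bit shift: sector %llu\n", (unsigned long long)rsect_new);
        return 0;
}

The 32-bit shift comes out at sector 3128635264 instead of 11718569856,
so a write aimed near the end of a 5.5T component silently lands 4TiB
too low, which matches the corruption seen.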

>
> Jeff, can you try this patch?

Don't bother about the other tests I mentioned, just try this one.
Thanks.

NeilBrown

> Signed-off-by: Neil Brown <[email protected]>
>
> ### Diffstat output
> ./drivers/md/raid0.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
> --- .prev/drivers/md/raid0.c 2007-05-17 10:33:30.000000000 +1000
> +++ ./drivers/md/raid0.c 2007-05-17 15:02:15.000000000 +1000
> @@ -475,7 +475,7 @@ static int raid0_make_request (request_q
> x = block >> chunksize_bits;
> tmp_dev = zone->dev[sector_div(x, zone->nb_dev)];
> }
> - rsect = (((chunk << chunksize_bits) + zone->dev_offset)<<1)
> + rsect = ((((sector_t)chunk << chunksize_bits) + zone->dev_offset)<<1)
> + sect_in_chunk;
>
> bio->bi_bdev = tmp_dev->bdev;

2007-05-17 05:39:16

by Jeff Zheng

[permalink] [raw]
Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB


Yeah, it seems you've tracked it down :D. I've written 600GB of data now,
and everything is still fine.
I will let it run overnight and fill the whole 11T. I'll post the result
tomorrow.

Thanks a lot though.

Jeff

> -----Original Message-----
> From: Neil Brown [mailto:[email protected]]
> Sent: Thursday, 17 May 2007 5:31 p.m.
> To: [email protected]; Jeff Zheng; Michal Piotrowski; Ingo
> Molnar; [email protected];
> [email protected]; [email protected]
> Subject: RE: Software raid0 will crash the file-system, when
> each disk is 5TB
>
> On Thursday May 17, [email protected] wrote:
> >
> > Uhm, I just noticed something.
> > 'chunk' is unsigned long, and when it gets shifted up, we
> might lose
> > bits. That could still happen with the 4*2.75T arrangement, but is
> > much more likely in the 2*5.5T arrangement.
>
> Actually, it cannot be a problem with the 4*2.75T arrangement.
> chuck << chunksize_bits
>
> will not exceed the size of the underlying device *in*kilobytes*.
> In that case that is 0xAE9EC800 which will git in a 32bit long.
> We don't double it to make sectors until after we add
> zone->dev_offset, which is "sector_t" and so 64bit arithmetic is used.
>
> So I'm quite certain this bug will cause exactly the problems
> experienced!!
>
> >
> > Jeff, can you try this patch?
>
> Don't bother about the other tests I mentioned, just try this one.
> Thanks.
>
> NeilBrown
>
> > Signed-off-by: Neil Brown <[email protected]>
> >
> > ### Diffstat output
> > ./drivers/md/raid0.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
> > --- .prev/drivers/md/raid0.c 2007-05-17
> 10:33:30.000000000 +1000
> > +++ ./drivers/md/raid0.c 2007-05-17 15:02:15.000000000 +1000
> > @@ -475,7 +475,7 @@ static int raid0_make_request (request_q
> > x = block >> chunksize_bits;
> > tmp_dev = zone->dev[sector_div(x, zone->nb_dev)];
> > }
> > - rsect = (((chunk << chunksize_bits) + zone->dev_offset)<<1)
> > + rsect = ((((sector_t)chunk << chunksize_bits) +
> > +zone->dev_offset)<<1)
> > + sect_in_chunk;
> >
> > bio->bi_bdev = tmp_dev->bdev;
>

2007-05-17 07:24:44

by Jan Engelhardt

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB


On May 17 2007 09:42, Jeff Zheng wrote:
>
>Problem is that is only happens when you actually write data to the
>raid. You need the actual space to reproduce the problem.

That should not be a big problem. Create something like 4x950G virtual
sparse drives (they take roughly 4x100 MB on the host after mkfs), then
set up a raid inside the VM, which you can play with and even write
to. I'm not sure how RAID5 will affect the host size of the virtual
drives, since during the first resync, when all disks are blank, it will
XOR them (0^0=1), and hence fill up the host disk.


Jan
--

2007-05-17 11:11:47

by NeilBrown

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Thursday May 17, [email protected] wrote:
> XOR it (0^0=1), and hence fills up the host disk.

Uhmm... you need to check your maths.

$ perl -e 'printf "%d\n", 0^0;'
0

:-)
NeilBrown

2007-05-17 15:34:37

by Jan Engelhardt

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB


On May 17 2007 21:11, Neil Brown wrote:
>On Thursday May 17, [email protected] wrote:
>> XOR it (0^0=1), and hence fills up the host disk.
>
>Uhmm... you need to check your maths.
>
>$ perl -e 'printf "%d\n", 0^0;'
>0
>
>:-)

(ouch)
You know just as well as I do that ^ is the power operator!
I just... wrongly named it XOR :p

$ echo '0^0' | bc -l
1


Well, right, setting up a blank raid5 array inside vmware will not make
the host file significantly larger, making it easy to build megatera
arrays with gigabyte range host disks.


Jan
--

2007-05-17 22:56:06

by Jeff Zheng

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

Fix confirmed: I filled the whole 11T hard disk without crashing.
I presume this will go into 2.6.22?

Thanks again.

Jeff

> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Jeff Zheng
> Sent: Thursday, 17 May 2007 5:39 p.m.
> To: Neil Brown; [email protected]; Michal Piotrowski; Ingo
> Molnar; [email protected];
> [email protected]; [email protected]
> Subject: RE: Software raid0 will crash the file-system, when
> each disk is 5TB
>
>
> Yeah, seems you've locked it down, :D. I've written 600GB of
> data now, and anything is still fine.
> Will let it run overnight, and fill the whole 11T. I'll post
> the result tomorrow
>
> Thanks a lot though.
>
> Jeff
>
> > -----Original Message-----
> > From: Neil Brown [mailto:[email protected]]
> > Sent: Thursday, 17 May 2007 5:31 p.m.
> > To: [email protected]; Jeff Zheng; Michal Piotrowski; Ingo Molnar;
> > [email protected]; [email protected];
> > [email protected]
> > Subject: RE: Software raid0 will crash the file-system,
> when each disk
> > is 5TB
> >
> > On Thursday May 17, [email protected] wrote:
> > >
> > > Uhm, I just noticed something.
> > > 'chunk' is unsigned long, and when it gets shifted up, we
> > might lose
> > > bits. That could still happen with the 4*2.75T
> arrangement, but is
> > > much more likely in the 2*5.5T arrangement.
> >
> > Actually, it cannot be a problem with the 4*2.75T arrangement.
> > chuck << chunksize_bits
> >
> > will not exceed the size of the underlying device *in*kilobytes*.
> > In that case that is 0xAE9EC800 which will git in a 32bit long.
> > We don't double it to make sectors until after we add
> > zone->dev_offset, which is "sector_t" and so 64bit
> arithmetic is used.
> >
> > So I'm quite certain this bug will cause exactly the problems
> > experienced!!
> >
> > >
> > > Jeff, can you try this patch?
> >
> > Don't bother about the other tests I mentioned, just try this one.
> > Thanks.
> >
> > NeilBrown
> >
> > > Signed-off-by: Neil Brown <[email protected]>
> > >
> > > ### Diffstat output
> > > ./drivers/md/raid0.c | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
> > > --- .prev/drivers/md/raid0.c 2007-05-17
> > 10:33:30.000000000 +1000
> > > +++ ./drivers/md/raid0.c 2007-05-17 15:02:15.000000000 +1000
> > > @@ -475,7 +475,7 @@ static int raid0_make_request (request_q
> > > x = block >> chunksize_bits;
> > > tmp_dev = zone->dev[sector_div(x, zone->nb_dev)];
> > > }
> > > - rsect = (((chunk << chunksize_bits) + zone->dev_offset)<<1)
> > > + rsect = ((((sector_t)chunk << chunksize_bits) +
> > > +zone->dev_offset)<<1)
> > > + sect_in_chunk;
> > >
> > > bio->bi_bdev = tmp_dev->bdev;
> >

2007-05-18 00:22:22

by NeilBrown

Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB

On Friday May 18, [email protected] wrote:
> Fix confirmed, filled the whole 11T hard disk, without crashing.
> I presume this would go into 2.6.22

Yes, and probably 2.6.21.y, though the patch will be slightly
different, see below.
>
> Thanks again.

And thank you for pursuing this with me.

NeilBrown


---------------------------
Avoid overflow in raid0 calculation with large components.

If a raid0 has a component device larger than 4TB, and is accessed on
a 32bit machine, then as 'chunk' is an unsigned long,
   chunk << chunksize_bits
can overflow (it can be as high as the size of the device in KB).
chunk itself will not overflow (without triggering a BUG).

So change 'chunk' to be a sector_t, and get rid of the BUG_ON as it
becomes impossible to hit.

Cc: "Jeff Zheng" <[email protected]>
Signed-off-by: Neil Brown <[email protected]>

### Diffstat output
./drivers/md/raid0.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
--- .prev/drivers/md/raid0.c 2007-05-17 10:33:30.000000000 +1000
+++ ./drivers/md/raid0.c 2007-05-17 16:14:12.000000000 +1000
@@ -415,7 +415,7 @@ static int raid0_make_request (request_q
raid0_conf_t *conf = mddev_to_conf(mddev);
struct strip_zone *zone;
mdk_rdev_t *tmp_dev;
- unsigned long chunk;
+ sector_t chunk;
sector_t block, rsect;
const int rw = bio_data_dir(bio);

@@ -470,7 +470,6 @@ static int raid0_make_request (request_q

sector_div(x, zone->nb_dev);
chunk = x;
- BUG_ON(x != (sector_t)chunk);

x = block >> chunksize_bits;
tmp_dev = zone->dev[sector_div(x, zone->nb_dev)];

2007-05-22 21:30:19

by Bill Davidsen

Subject: Re: Software raid0 will crash the file-system, when each disk is 5TB

Jeff Zheng wrote:
> Fix confirmed, filled the whole 11T hard disk, without crashing.
> I presume this would go into 2.6.22
>
Since it results in a full loss of data, I would hope it goes into
2.6.21.x-stable.

> Thanks again.
>
> Jeff
>
>> -----Original Message-----
>> From: [email protected]
>> [mailto:[email protected]] On Behalf Of Jeff Zheng
>> Sent: Thursday, 17 May 2007 5:39 p.m.
>> To: Neil Brown; [email protected]; Michal Piotrowski; Ingo
>> Molnar; [email protected];
>> [email protected]; [email protected]
>> Subject: RE: Software raid0 will crash the file-system, when
>> each disk is 5TB
>>
>>
>> Yeah, seems you've locked it down, :D. I've written 600GB of
>> data now, and anything is still fine.
>> Will let it run overnight, and fill the whole 11T. I'll post
>> the result tomorrow
>>
>> Thanks a lot though.
>>
>> Jeff
>>
>>> -----Original Message-----
>>> From: Neil Brown [mailto:[email protected]]
>>> Sent: Thursday, 17 May 2007 5:31 p.m.
>>> To: [email protected]; Jeff Zheng; Michal Piotrowski; Ingo Molnar;
>>> [email protected]; [email protected];
>>> [email protected]
>>> Subject: RE: Software raid0 will crash the file-system,
>> when each disk
>>> is 5TB
>>>
>>> On Thursday May 17, [email protected] wrote:
>>>> Uhm, I just noticed something.
>>>> 'chunk' is unsigned long, and when it gets shifted up, we
>>> might lose
>>>> bits. That could still happen with the 4*2.75T
>> arrangement, but is
>>>> much more likely in the 2*5.5T arrangement.
>>> Actually, it cannot be a problem with the 4*2.75T arrangement.
>>> chuck << chunksize_bits
>>>
>>> will not exceed the size of the underlying device *in*kilobytes*.
>>> In that case that is 0xAE9EC800 which will git in a 32bit long.
>>> We don't double it to make sectors until after we add
>>> zone->dev_offset, which is "sector_t" and so 64bit
>> arithmetic is used.
>>> So I'm quite certain this bug will cause exactly the problems
>>> experienced!!
>>>
>>>> Jeff, can you try this patch?
>>> Don't bother about the other tests I mentioned, just try this one.
>>> Thanks.
>>>
>>> NeilBrown
>>>
>>>> Signed-off-by: Neil Brown <[email protected]>
>>>>
>>>> ### Diffstat output
>>>> ./drivers/md/raid0.c | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff .prev/drivers/md/raid0.c ./drivers/md/raid0.c
>>>> --- .prev/drivers/md/raid0.c 2007-05-17
>>> 10:33:30.000000000 +1000
>>>> +++ ./drivers/md/raid0.c 2007-05-17 15:02:15.000000000 +1000
>>>> @@ -475,7 +475,7 @@ static int raid0_make_request (request_q
>>>> x = block >> chunksize_bits;
>>>> tmp_dev = zone->dev[sector_div(x, zone->nb_dev)];
>>>> }
>>>> - rsect = (((chunk << chunksize_bits) + zone->dev_offset)<<1)
>>>> + rsect = ((((sector_t)chunk << chunksize_bits) +
>>>> +zone->dev_offset)<<1)
>>>> + sect_in_chunk;
>>>>
>>>> bio->bi_bdev = tmp_dev->bdev;


--
Bill Davidsen <[email protected]>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot