2004-01-02 20:12:46

by Elliott Bennett

Subject: Re: JFS resize=0 problem in 2.6.0

On Sun, Dec 28, 2003 at 04:05:03PM -0800, Mike Fedyk wrote:
> On Sun, Dec 28, 2003 at 10:30:28AM -0500, [email protected] wrote:
> > It seems to me that line 264 is attempting to test for the mount
> > parameter "resize=0" and, when it finds it, to resize to the full
> > size of the volume. However, this doesn't work. I believe it should
> > test for the char '0' (*resize == '0'), not against literal zero.
> >
> > Let me know if I'm way off base here. But the below patch does allow a
> > $ mount -o remount,resize=0 /mnt/test
> > to resize the jfs filesystem to the full size of the volume.
>
> And it won't without the patch?
>
> What errors do you get if it fails without the patch?

It won't without the patch.

No errors...it just doesn't resize. :)

Here's a shell snippet. I have just resized /dev/vg/lv from 700M to
900M, and I'm running the ORIGINAL jfs module:

root@tesla:~# df
Filesystem         1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg-lv     713500   220    713280   1% /mnt
root@tesla:~# mount
/dev/mapper/vg-lv on /mnt type jfs (rw)
root@tesla:~# mount -o remount,resize=0 /mnt
root@tesla:~# df
Filesystem         1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg-lv     713500   220    713280   1% /mnt

...the resizing failed. But switching to the patched module:

root@tesla:~# umount /mnt
root@tesla:~# rmmod jfs
root@tesla:~# cp ~/jfs_patch.ko /lib/modules/2.6.0/kernel/fs/jfs/jfs.ko
root@tesla:~# mount -t jfs /dev/vg/lv /mnt
root@tesla:~# df
Filesystem         1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg-lv     713500   220    713280   1% /mnt
root@tesla:~# mount -o remount,resize=0 /mnt
root@tesla:~# df
Filesystem         1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg-lv     917272   244    917028   1% /mnt

resizing works.


Oddly, now, an additional lvextend and remount/resize fails:

root@tesla:~# lvextend /dev/vg/lv -L +200M
/dev/hdd: open failed: Read-only file system
Extending logical volume lv to 1.07 GB
Logical volume lv successfully resized
root@tesla:~# mount -o remount,resize=0 /mnt
root@tesla:~# df
Filesystem         1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg-lv     917272   244    917028   1% /mnt
root@tesla:~# mount
/dev/mapper/vg-lv on /mnt type jfs (rw,resize=0,resize=0)

note "resize=0,resize=0" above.
...but unmounting it, remounting it, and then resizing it works:

root@tesla:~# umount /mnt
root@tesla:~# mount -t jfs /dev/vg/lv /mnt
root@tesla:~# df
Filesystem         1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg-lv     917272   244    917028   1% /mnt
root@tesla:~# mount -o remount,resize=0 /mnt
root@tesla:~# df
Filesystem         1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg-lv    1121040   272   1120768   1% /mnt
root@tesla:~# mount
/dev/mapper/vg-lv on /mnt type jfs (rw,resize=0)

It turns out it's not so much that this is a "second" extension during
the same mount as that the volume was extended while it was mounted. No
matter when I lvextend (mounted or unmounted), I must unmount (if
mounted), then mount, then remount/resize for the resize to take
effect. If I don't unmount after lvextending and try to resize, I get:
jfs_extendfs: volume hasn't grown, returning
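
My guess -- and this is only a sketch with made-up names, not the actual
fs/jfs/resize.c code -- is that resize=0 makes jfs take the new size
from the block device's cached size, which apparently isn't refreshed
until the device is reopened, so the extend path bails out roughly like
this:

#include <linux/fs.h>

/* Illustrative sketch only, not the real jfs_extendfs().  Assumes
 * resize=0 means "grow to whatever the block device reports", and that
 * the reported size comes from the cached bdev inode, which is only
 * refreshed when the device is (re)opened. */
static int extendfs_sketch(struct super_block *sb, s64 cur_blocks)
{
	s64 new_blocks;

	/* cached device size in filesystem blocks; an lvextend done
	 * while the fs is mounted would not be visible here yet */
	new_blocks = i_size_read(sb->s_bdev->bd_inode) >> sb->s_blocksize_bits;

	if (new_blocks <= cur_blocks) {
		printk(KERN_WARNING
		       "jfs_extendfs: volume hasn't grown, returning\n");
		return 0;
	}

	/* ...otherwise grow the block map and update the superblock... */
	return 0;
}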


So...I would say that something is awry with the resizing of JFS
filesystems. My patch obviously doesn't fix everything, but at least it
makes resizing to fill the available space *possible*. :)

The code surrounding my patch treats the same variable (resize, which is
a pointer to args[0].from) as a string, so it seems pretty obvious to me
that it should be comparing against the character '0'.
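
As a tiny user-space demonstration of the difference -- this is just the
comparison, not the kernel parser; resize points at the text after
"resize=", exactly like args[0].from does:

#include <stdio.h>

int main(void)
{
	const char *option = "resize=0";
	const char *resize = option + sizeof("resize=") - 1;	/* -> "0" */

	/* literal zero is the NUL terminator: true only for a bare "resize=" */
	printf("*resize == 0   : %d\n", *resize == 0);	/* prints 0 */

	/* the character '0' is what "resize=0" actually puts there */
	printf("*resize == '0' : %d\n", *resize == '0');	/* prints 1 */

	return 0;
}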


-Elliott Bennett


2004-01-02 21:24:24

by Mike Fedyk

Subject: Re: JFS resize=0 problem in 2.6.0

On Fri, Jan 02, 2004 at 03:12:21PM -0500, Elliott Bennett wrote:
> So...I would say that something is awry with the resizing of JFS
> filesystems. My patch obviously doesn't fix everything, but at least it
> makes resizing to fill the available space *possible*. :)
>
> The code surrounding my patch treats the same variable (resize, which is
> a pointer to args[0].from) as a string, so it seems pretty obvious to me
> that it should be comparing against the character '0'.

I would be careful with DM and MD RAID in 2.6.0. There are some bugs flying
around mentioning XFS->DM->MD RAID, but also reproducible with Ext3->DM.

So if you're using DM, you might want to do some extra consistency checks in
your tests, and don't use it with important data.

2004-01-02 22:28:41

by Christophe Saout

Subject: Re: JFS resize=0 problem in 2.6.0

On Fri, 2004-01-02 at 22:24, Mike Fedyk wrote:

> I would be careful with DM and MD RAID in 2.6.0. There are some bugs flying
> around mentioning XFS->DM->MD RAID, but also reproducible with Ext3->DM.
>
> So if you're using DM, you might want to do some extra consistency checks in
> your tests, and don't use it with important data.

Only DM, or DM->RAID? There is one bug in bugzilla mentioning XFS
problems on a >2TB LVM volume dating back to 2.5.73. DM uses sector_t
everywhere, so it might be a different problem.

DM has had a workaround included since one of the later test kernels, so
it should not break on top of the raid code. Also, someone posted
bugfixes not too long ago that fixed some hard-to-trigger bugs with
bi_idx not always starting at zero (which could have affected raid and
dm code under rare circumstances). These bugfixes are in 2.6.1-pre1, I
think.
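
The pattern those fixes address is roughly this (a sketch only, not the
actual raid or dm code):

#include <linux/bio.h>

/* A bio handed to a stacking driver may arrive with bi_idx > 0 (it has
 * already been partially advanced), so segment iteration has to start
 * at bi_idx, not at 0. */
static unsigned int remaining_bytes(struct bio *bio)
{
	struct bio_vec *bvec;
	unsigned int bytes = 0;
	int i;

	/* buggy pattern: for (i = 0; i < bio->bi_vcnt; i++) ... */

	/* bio_for_each_segment() starts at bio->bi_idx */
	bio_for_each_segment(bvec, bio, i)
		bytes += bvec->bv_len;

	return bytes;
}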

There is also one bug that isn't fixed in 2.6.0 (but has been in the -mm
kernels for some time and is also in 2.6.1-pre1); it concerns online
resizing of mounted filesystems.

At least I think the core DM code should be safe under normal usage. I
bombed it with all kinds of BIOs, not only the page-sized ones that most
filesystems and the swap code create, using an IDE disk. I've been using
it on all my systems for a long time now.

Elliott's problem is a different one, though: the jfs option parser
doesn't do what he wants. ;)