2007-07-10 13:06:51

by Kalpak Shah

Subject: Random corruption test for e2fsck

Hi,

This is a random corruption test which can be included in the e2fsprogs
regression tests. It does the following:
1) Create a test fs and format it with ext2/3/4 and a random selection of
features.
2) Mount it and copy data into it.
3) Move blocks of the filesystem around randomly, causing corruption.
Also overwrite some random blocks with garbage from /dev/urandom. Create
a copy of this corrupted filesystem.
4) Unmount and run e2fsck. If the first run of e2fsck produces any
errors (uncorrected errors, library errors, segfaults, usage errors,
etc.), it is deemed a bug. In any case, a second run of e2fsck is
done to check whether it renders the filesystem clean.
5) If the test completes without any errors, the test image is deleted;
in case of any errors, the user is notified that the log of this test
run should be mailed to linux-ext4@ and that the image should be preserved.
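
For illustration, steps 1 and 3 boil down to something like the
following sketch (the sizes and feature list here are made up; the
actual script differs in its details):

# step 1: format a small test image with a random subset of features
FEATURES="dir_index filetype sparse_super resize_inode"
CHOSEN=""
for f in $FEATURES; do
	[ $((RANDOM % 2)) -eq 0 ] && CHOSEN="$CHOSEN,$f"
done
dd if=/dev/zero of=test.img bs=1k count=16384 2> /dev/null
mke2fs -F -q ${CHOSEN:+-O "${CHOSEN#,}"} test.img

# step 3: overwrite one random 1k block with garbage
to=$((RANDOM % 16384))
dd if=/dev/urandom of=test.img bs=1k count=1 conv=notrunc seek=$to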

Any comments are welcome.

---
Signed-off-by: Andreas Dilger <[email protected]>
Signed-off-by: Kalpak Shah <[email protected]>


Thanks,
Kalpak.


Attachments:
e2fsprogs-tests-f_random_corruption.patch (9.51 kB)

2007-07-10 14:58:58

by Theodore Ts'o

Subject: Re: Random corruption test for e2fsck

On Tue, Jul 10, 2007 at 06:37:40PM +0530, Kalpak Shah wrote:
> Hi,
>
> This is a random corruption test which can be included in the e2fsprogs
> regression tests.
> 1) Create a test fs and format it with ext2/3/4 and a random selection of
> features.
> 2) Mount it and copy data into it.

This requires root privileges in order to mount the loop filesystem.
Any chance you could change it to use debugfs to populate the
filesystem, so we don't need root privs in order to mount it?

This will increase the number of people who will actually run the
test and, more importantly, will avoid encouraging people to run "make
check" as root.
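
Populating the image via debugfs could look roughly like this (a
sketch; debugfs's "mkdir" and "write" requests exist, the paths are
made up):

debugfs -w -R "mkdir /data" test.img
for f in *.c; do
	debugfs -w -R "write $f /data/$f" test.img
done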

> 3) Move blocks of the filesystem around randomly, causing corruption.
> Also overwrite some random blocks with garbage from /dev/urandom. Create
> a copy of this corrupted filesystem.
>
> 4) Unmount and run e2fsck. If the first run of e2fsck produces any
> errors (uncorrected errors, library errors, segfaults, usage errors,
> etc.), it is deemed a bug. In any case, a second run of e2fsck is
> done to check whether it renders the filesystem clean.

Err, you do unmount the filesystem first before you start corrupting
it, right? (Checking script; sure looks like it.)

> 5) If the test completes without any errors, the test image is deleted;
> in case of any errors, the user is notified that the log of this test
> run should be mailed to linux-ext4@ and that the image should be preserved.

I certainly like the general concept!!

I wonder if the code to create a random filesystem and corrupt it
should be kept as a separate shell script, since it could be reused in
a number of interesting ways. One thought would be to write a test
script that mounts corrupted filesystems using UML and then does some
exercises on them (tar cvf on the filesystem, random renames on the
filesystem, rm -rf of all of the contents of the filesystem), to see
whether we can provoke a kernel oops.

Regards,

- Ted

2007-07-10 15:46:09

by Eric Sandeen

Subject: Re: Random corruption test for e2fsck

Theodore Tso wrote:
>> 5) If the test completes without any errors, the test image is deleted;
>> in case of any errors, the user is notified that the log of this test
>> run should be mailed to linux-ext4@ and that the image should be preserved.
>
> I certainly like the general concept!!
>
> I wonder if the code to create a random filesystem and corrupt it
> should be kept as a separate shell script, since it could be reused in
> a number of interesting ways. One thought would be to write a test
> script that mounts corrupted filesystems using UML and then does some
> exercises on them (tar cvf on the filesystem, random renames on the
> filesystem, rm -rf of all of the contents of the filesystem), to see
> whether we can provoke a kernel oops.

FWIW, that's what fsfuzzer does, in an fs-agnostic way.

-Eric

2007-07-10 15:50:58

by Eric Sandeen

Subject: Re: Random corruption test for e2fsck

Kalpak Shah wrote:
> Hi,
>
> This is a random corruption test which can be included in the e2fsprogs
> regression tests. It does the following:
> 1) Create a test fs and format it with ext2/3/4 and a random selection of
> features.
> 2) Mount it and copy data into it.
> 3) Move blocks of the filesystem around randomly, causing corruption.
> Also overwrite some random blocks with garbage from /dev/urandom. Create
> a copy of this corrupted filesystem.
> 4) Unmount and run e2fsck. If the first run of e2fsck produces any
> errors (uncorrected errors, library errors, segfaults, usage errors,
> etc.), it is deemed a bug. In any case, a second run of e2fsck is
> done to check whether it renders the filesystem clean.
> 5) If the test completes without any errors, the test image is deleted;
> in case of any errors, the user is notified that the log of this test
> run should be mailed to linux-ext4@ and that the image should be preserved.
>
> Any comments are welcome.

Seems like a pretty good idea. I had played with such a thing using
fsfuzzer... fsfuzzer always seemed at least as useful as an fsck tester
as it was as a kernel-code tester anyway. (OOC, did you look at
fsfuzzer when you did this?)

My only concern is that since it's introducing random corruption, new
things will probably pop up from time to time; when we do an rpm build
for Fedora/RHEL, it automatically runs make check:

%check
make check

which seems like a reasonably good idea to me. However, I'd rather not
have last-minute build failures introduced by a new random collection of
bits that have never been seen before. Maybe "make RANDOM=0 check" as
an option would be a good idea for automated builds...?

Thanks,
-Eric

2007-07-11 07:02:11

by Kalpak Shah

Subject: Re: Random corruption test for e2fsck

On Tue, 2007-07-10 at 10:58 -0400, Theodore Tso wrote:
> On Tue, Jul 10, 2007 at 06:37:40PM +0530, Kalpak Shah wrote:
> > Hi,
> >
> > This is a random corruption test which can be included in the e2fsprogs
> > regression tests.
> > 1) Create a test fs and format it with ext2/3/4 and a random selection of
> > features.
> > 2) Mount it and copy data into it.
>
> This requires root privileges in order to mount the loop filesystem.
> Any chance you could change it to use debugfs to populate the
> filesystem, so we don't need root privs in order to mount it?
>
> This will increase the number of people who will actually run the
> test and, more importantly, will avoid encouraging people to run "make
> check" as root.

That is a good idea. With this script, the mount would just fail without
root privileges and the test would be done on an empty filesystem. I
will make this change and post it.
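
Roughly along these lines (a sketch; populate_with_debugfs is a
hypothetical helper):

if ! mount -o loop $IMAGE $MNT 2> /dev/null; then
	# no root privileges; fall back to debugfs as Ted suggests
	populate_with_debugfs
else
	cp -r $SRC_DIR/* $MNT
	umount $MNT
fi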


> > 3) Move blocks of the filesystem around randomly, causing corruption.
> > Also overwrite some random blocks with garbage from /dev/urandom. Create
> > a copy of this corrupted filesystem.
> >
> > 4) Unmount and run e2fsck. If the first run of e2fsck produces any
> > errors (uncorrected errors, library errors, segfaults, usage errors,
> > etc.), it is deemed a bug. In any case, a second run of e2fsck is
> > done to check whether it renders the filesystem clean.
>
> Err, you do unmount the filesystem first before you start corrupting
> it, right? (Checking script; sure looks like it.)
>

Yes, the filesystem is unmounted before the corruption begins.

> > 5) If the test completes without any errors, the test image is deleted;
> > in case of any errors, the user is notified that the log of this test
> > run should be mailed to linux-ext4@ and that the image should be preserved.
>
> I certainly like the general concept!!
>
> I wonder if the code to create a random filesystem and corrupting it
> should be kept as separate shell script, since it can be reused in
> another of interesting ways. One thought would be to write a test
> script that mounts corrupted filesystems using UML and then does some
> exercises on it (tar cvf on the filesyste, random renames on the
> filesystem, rm -rf of all of the contents of the filesystems), to see
> whether we can provoke a kernel oops.

Well, there is a MOUNT_AFTER_CORRUPTION option in the script which can
be enhanced to do this.
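
Something like this, perhaps (only the option name comes from the
script; the exercises are illustrative):

if [ "$MOUNT_AFTER_CORRUPTION" = "yes" ]; then
	mount -o loop $IMAGE $MNT || exit 0
	tar cf - $MNT > /dev/null 2>&1    # walk and read every file
	rm -rf $MNT/* > /dev/null 2>&1    # then delete everything
	umount $MNT
fi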

Thanks,
Kalpak.

> Regards,
>
> - Ted

2007-07-11 14:25:39

by Andi Kleen

Subject: Re: Random corruption test for e2fsck

Kalpak Shah <[email protected]> writes:

> regression tests. It does the following:
> 1) Create a test fs and format it with ext2/3/4 and a random selection of
> features.
> 2) Mount it and copy data into it.
> 3) Move blocks of the filesystem around randomly, causing corruption.
> Also overwrite some random blocks with garbage from /dev/urandom. Create
> a copy of this corrupted filesystem.

If you use a normal pseudo-random number generator and print the seed
(e.g. created from the time) at the start, the image can easily be
recreated later without shipping it around. /dev/urandom is not really
needed for this, since you don't need cryptographic-strength
randomness. Besides, urandom data is precious and it's a pity to use
it up needlessly.

bash has $RANDOM built in for this purpose.
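
For example (a sketch of the idea, not code from the test script):

SEED=${SEED:-$(date +%s)}
echo "random seed: $SEED"   # print the seed so a failure can be replayed
RANDOM=$SEED                # assigning to RANDOM seeds bash's generator
NUM_BLKS=16384              # illustrative
from=$((RANDOM % NUM_BLKS))
to=$((RANDOM % NUM_BLKS))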

-Andi

2007-07-11 16:03:46

by Andreas Dilger

Subject: Re: Random corruption test for e2fsck

On Jul 10, 2007 10:47 -0500, Eric Sandeen wrote:
> Seems like a pretty good idea. I had played with such a thing using
> fsfuzzer... fsfuzzer always seemed at least as useful as an fsck tester
> as it was as a kernel-code tester anyway. (OOC, did you look at fsfuzzer
> when you did this?)

The person who originally started it had looked at fsfuzzer, but I haven't
myself.

> My only concern is that since it's introducing random corruption, new
> things will probably pop up from time to time; when we do an rpm build
> for Fedora/RHEL, it automatically runs make check:
>
> %check
> make check

Yes, we added this to our .spec file also, though I didn't realize rpm
had a %check stanza in it. We just added it into the %build stanza,
but this is something that should be pushed upstream, since it really
makes sense to ensure e2fsprogs is built & running correctly.

> which seems like a reasonably good idea to me. However, I'd rather not
> have last-minute build failures introduced by a new random collection of
> bits that have never been seen before. Maybe "make RANDOM=0 check" as
> an option would be a good idea for automated builds...?

I've added this to the updated version:
$ f_random_corruption=skip ./test_script f_random_corruption
f_random_corruption: skipped

I wonder if it makes sense to add this as generic functionality to
test_script, something like:

[ "$(eval echo \$$test_name)" = "skip" ] && echo "skipped"

Latest version of the script is attached.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.


Attachments:
(No filename) (1.53 kB)
e2fsprogs-tests-f_random_corruption.patch (8.82 kB)

2007-07-11 17:43:51

by Theodore Ts'o

Subject: Re: Random corruption test for e2fsck

On Wed, Jul 11, 2007 at 03:44:11AM -0600, Andreas Dilger wrote:
> I've already found some kind of memory corruption in e2fsck as a result
> of running this as a regular user. It segfaults in qsort() when freeing
> memory. The image that causes this problem is attached, and it happens
> with the unpatched 1.39-WIP Mercurial tree of 2007-05-22. Unfortunately,
> I don't have any decent memory debugging tools handy, so it isn't easy to
> see what is happening. This is on an FC3 i686 system, in case it matters.

Thanks for sending me the test case! Here's the patch, which will
probably cause me to do a 1.40.2 release sooner rather than later...

- Ted

commit 5e9ba85c2694926eb784531d81ba107200cf1a51
Author: Theodore Ts'o <[email protected]>
Date: Wed Jul 11 13:42:43 2007 -0400

Fix e2fsck segfault on very badly damaged filesystems

A recent change to e2fsck_add_dir_info() to use tdb files to check
filesystems with a very large number of directories had a typo which
caused us to resize the wrong data structure. This would cause an
array overrun, leading to malloc pointer corruption. Since we can
normally predict very accurately how big the dirinfo array needs to
be, this bug only got triggered on very badly corrupted filesystems.

Thanks to Andreas Dilger for submitting the test case which discovered
this problem, and to Kalpak Shah for writing a random testing script
which created the test case.

Signed-off-by: "Theodore Ts'o" <[email protected]>

diff --git a/e2fsck/dirinfo.c b/e2fsck/dirinfo.c
index aaa4d09..f583c62 100644
--- a/e2fsck/dirinfo.c
+++ b/e2fsck/dirinfo.c
@@ -126,7 +126,7 @@ void e2fsck_add_dir_info(e2fsck_t ctx, ext2_ino_t ino, ext2_ino_t parent)
 	ctx->dir_info->size += 10;
 	retval = ext2fs_resize_mem(old_size, ctx->dir_info->size *
 				   sizeof(struct dir_info),
-				   &ctx->dir_info);
+				   &ctx->dir_info->array);
 	if (retval) {
 		ctx->dir_info->size -= 10;
 		return;

2007-07-12 05:16:03

by Andreas Dilger

Subject: Re: Random corruption test for e2fsck

On Jul 11, 2007 13:43 -0400, Theodore Tso wrote:
> Fix e2fsck segfault on very badly damaged filesystems
>
> --- a/e2fsck/dirinfo.c
> +++ b/e2fsck/dirinfo.c
> @@ -126,7 +126,7 @@ void e2fsck_add_dir_info(e2fsck_t ctx, ext2_ino_t ino, ext2_ino_t parent)
>  	ctx->dir_info->size += 10;
>  	retval = ext2fs_resize_mem(old_size, ctx->dir_info->size *
>  				   sizeof(struct dir_info),
> -				   &ctx->dir_info);
> +				   &ctx->dir_info->array);
>  	if (retval) {
>  		ctx->dir_info->size -= 10;
>  		return;

This appears to fix the problem. I was previously able to crash e2fsck
within a couple of runs; now it is running in a loop without problems.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

2007-07-12 05:19:40

by Andreas Dilger

Subject: Re: Random corruption test for e2fsck

On Jul 11, 2007 17:20 +0200, Andi Kleen wrote:
> If you use a normal pseudo-random number generator and print the seed
> (e.g. created from the time) at the start, the image can easily be
> recreated later without shipping it around. /dev/urandom is not really
> needed for this, since you don't need cryptographic-strength
> randomness. Besides, urandom data is precious and it's a pity to use
> it up needlessly.
>
> bash has $RANDOM built in for this purpose.

Except it is a lot more efficient and easier to do
"dd if=/dev/urandom bs=1k ..." than to spin in a loop getting 16-bit
random numbers from bash. We would also be at the mercy of the shell
being identical on the user and debugger's systems.
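
i.e. one dd call per corrupted region, along the lines of this sketch
(variable names as used elsewhere in the thread):

dd if=/dev/urandom of=$IMAGE bs=1k count=$CORRUPTION_SIZE \
	conv=notrunc seek=$to >> $OUT 2>&1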

I don't think that running this test once in a blue moon on some
system is going to be a source of problems.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

2007-07-12 05:52:38

by Andreas Dilger

Subject: Re: Random corruption test for e2fsck

I've got another one, though I don't think it's a show stopper.

If you format a filesystem with both resize_inode and meta_bg you get an
unfixable filesystem. The bad news is that running e2fsck on the
filesystem appears to actually be _causing_ the corruption in this case
(while trying to rebuild the resize inode). For now, the answer is
"don't do that". The resize inode was never intended to be in use when
meta_bg is enabled, so we should just prevent this combination from the
start.


mke2fs -j -b 4096 -I 512 -O sparse_super,filetype,resize_inode,dir_index,lazy_bg,meta_bg -F /tmp/test.img 57852

I can run e2fsck on this repeatedly and it always complains the same way:

$ e2fsck -fy /tmp/test.img
e2fsck 1.39.cfs9 (7-Apr-2007)
Resize inode not valid. Recreate? yes

Pass 1: Checking inodes, blocks, and sizes
Inode 8, i_blocks is 0, should be 32816. Fix? yes

Reserved inode 9 (<Reserved inode 9>) has invalid mode. Clear? yes
Deleted inode 17 has zero dtime. Fix? yes
Deleted inode 25 has zero dtime. Fix? yes
Deleted inode 33 has zero dtime. Fix? yes
Deleted inode 41 has zero dtime. Fix? yes
Deleted inode 49 has zero dtime. Fix? yes
Deleted inode 57 has zero dtime. Fix? yes
Deleted inode 65 has zero dtime. Fix? yes
Deleted inode 73 has zero dtime. Fix? yes
Deleted inode 81 has zero dtime. Fix? yes
Deleted inode 89 has zero dtime. Fix? yes

Pass 2: Checking directory structure
Inode 2 (???) has invalid mode (00).
Clear? yes

Entry '..' in ??? (2) has deleted/unused inode 2. Clear? yes

Inode 11 (???) has invalid mode (00).
Clear? yes

Pass 3: Checking directory connectivity
Root inode not allocated. Allocate? yes

/lost+found not found. Create? yes

Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: +0 +(2--14) +(16--3619) +(3621--3622) +(3625--7727)
Fix? yes

Free blocks count wrong for group #0 (25039, counted=25041).
Fix? yes

Free blocks count wrong (46503, counted=46505).
Fix? yes

Inode bitmap differences: +(3--10) -16
Fix? yes

Free inodes count wrong for group #0 (28918, counted=28917).
Fix? yes

Directories count wrong for group #0 (3, counted=2).
Fix? yes

Free inodes count wrong (57846, counted=57845).
Fix? yes


/tmp/test.img: ***** FILE SYSTEM WAS MODIFIED *****
/tmp/test.img: 11/57856 files (9.1% non-contiguous), 11347/57852 blocks


Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

2007-07-12 11:09:24

by Andi Kleen

Subject: Re: Random corruption test for e2fsck

On Wed, Jul 11, 2007 at 11:19:38PM -0600, Andreas Dilger wrote:
> On Jul 11, 2007 17:20 +0200, Andi Kleen wrote:
> > If you use a normal pseudo-random number generator and print the seed
> > (e.g. created from the time) at the start, the image can easily be
> > recreated later without shipping it around. /dev/urandom is not really
> > needed for this, since you don't need cryptographic-strength
> > randomness. Besides, urandom data is precious and it's a pity to use
> > it up needlessly.
> >
> > bash has $RANDOM built in for this purpose.
>
> Except it is a lot more efficient and easier to do

Ah, you chose to address only one sentence in my reply. I thought
only Linus liked to do that.

If you're worried about efficiency, it's trivial to write a C program
that generates bulk pseudo-random numbers using random(3).
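
Even a seeded bash loop gives replayable garbage bytes, though far
more slowly (a sketch of the principle only, not proposed for the
actual test):

RANDOM=$SEED                 # same seed, same byte stream
i=0
while [ $i -lt 1024 ]; do    # emit 1024 pseudo-random bytes
	printf "\\$(printf '%03o' $((RANDOM % 256)))"
	i=$((i + 1))
done > garbage.bin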

> "dd if=/dev/urandom bs=1k ..." than to spin in a loop getting 16-bit
> random numbers from bash. We would also be at the mercy of the shell
> being identical on the user and debugger's systems.

With /dev/urandom you have the guarantee you'll never ever reproduce
it again.

Andrea A. used to rant about people who use srand(time(NULL)) in
benchmarks, and it's sad to see these mistakes repeated again and again.

-Andi

2007-07-12 22:16:27

by Andreas Dilger

Subject: Re: Random corruption test for e2fsck

On Jul 12, 2007 13:09 +0200, Andi Kleen wrote:
> > "dd if=/dev/urandom bs=1k ..." than to spin in a loop getting 16-bit
> > random numbers from bash. We would also be at the mercy of the shell
> > being identical on the user and debugger's systems.
>
> With /dev/urandom you have the guarantee you'll never ever reproduce
> it again.

That is kind of the point of this testing - getting new test images for
each user that runs "make check" or "make rpm". We also save the
generated image before e2fsck touches it so that it can be used for
debugging if needed.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

2007-07-12 22:24:27

by Andi Kleen

Subject: Re: Random corruption test for e2fsck

On Thu, Jul 12, 2007 at 04:16:24PM -0600, Andreas Dilger wrote:
> On Jul 12, 2007 13:09 +0200, Andi Kleen wrote:
> > > "dd if=/dev/urandom bs=1k ..." than to spin in a loop getting 16-bit
> > > random numbers from bash. We would also be at the mercy of the shell
> > > being identical on the user and debugger's systems.
> >
> > With /dev/urandom you have the guarantee you'll never ever reproduce
> > it again.
>
> That is kind of the point of this testing - getting new test images for
> each user that runs "make check" or "make rpm". We also save the
> generated image before e2fsck touches it so that it can be used for
> debugging if needed.

If you seed a good pseudo-RNG with the time (or even a few bytes from
/dev/urandom, though the time tends to work just as well), you'll
effectively get a new image every time.

But the advantage is that if you print out the seed, the image
can easily be recreated just by re-running the fuzzer
with the same seed. No need to ship potentially huge images
around.

You can essentially compress your whole image into a single
number this way.

-Andi

2007-07-13 07:11:41

by Kalpak Shah

Subject: Re: Random corruption test for e2fsck

On Fri, 2007-07-13 at 00:24 +0200, Andi Kleen wrote:
> On Thu, Jul 12, 2007 at 04:16:24PM -0600, Andreas Dilger wrote:
> > On Jul 12, 2007 13:09 +0200, Andi Kleen wrote:
> > > > "dd if=/dev/urandom bs=1k ..." than to spin in a loop getting 16-bit
> > > > random numbers from bash. We would also be at the mercy of the shell
> > > > being identical on the user and debugger's systems.
> > >
> > > With /dev/urandom you have the guarantee you'll never ever reproduce
> > > it again.
> >
> > That is kind of the point of this testing - getting new test images for
> > each user that runs "make check" or "make rpm". We also save the
> > generated image before e2fsck touches it so that it can be used for
> > debugging if needed.
>
> If you seed a good pseudo-RNG with the time (or even a few bytes from
> /dev/urandom, though the time tends to work just as well), you'll
> effectively get a new image every time.
>
> But the advantage is that if you print out the seed, the image
> can easily be recreated just by re-running the fuzzer
> with the same seed. No need to ship potentially huge images
> around.
>
> You can essentially compress your whole image into a single
> number this way.

First, the filesystem is populated with files from the e2fsprogs source
directory. The filesystem is also corrupted by copying blocks of the
filesystem to arbitrary locations within it:

from=`get_random_location $NUM_BLKS`
to=`get_random_location $NUM_BLKS`
dd if=$IMAGE of=$IMAGE bs=1k count=$CORRUPTION_SIZE conv=notrunc \
	skip=$from seek=$to >> $OUT 2>&1

Then the filesystem also undergoes corruption with /dev/urandom. To be
able to recreate the exact same filesystem from a seed, the filesystem
would need to allocate the _same_ blocks and metadata on both the
user's machine and the tester's machine, which is obviously not possible.

Thanks,
Kalpak.

>
> -Andi