The Ubuntu kernel team has been putting some automated testing
infrastructure in place. We are very interested in engaging with
the appropriate upstream developers. We have been running the
xfstests that come as part of the autotest testing framework.
Some of these tests fail or never complete when run against an
Ext4 file-system. Our initial questions are:
1. Is this an appropriate set of tests to be run as regression
tests?
2. Is there a list of the xfstests that are appropriate for
Ext4?
3. Are there additional regression tests that would be beneficial
to the Linux community for us to be running?
Test results can be found at:
http://kernel.ubuntu.com/testing/index.html
Thanks
Brad
--
Brad Figg [email protected] http://www.canonical.com
Hi Brad -
(cc: xfs list too)
On 9/12/12 5:52 PM, Brad Figg wrote:
>
> The Ubuntu kernel team has been putting some automated testing
> infrastructure in place. We are very interested in engaging with
> the appropriate upstream developers. We have been running the
> xfstests that come as part of the autotest testing framework.
> Some of these tests fail or never complete when run against an
> Ext4 file-system.
Which ones? Feel free to file bugs or send mail. Tests should pass.
Sometimes it's a test bug, though, of course ;)
> Our initial questions are:
>
> 1. Is this an appropriate set of tests to be run as regression
> tests?
Yes, that's what it's for!
> 2. Is there a list of the xfstests that are appropriate for
> Ext4?
Any test which says
_supported_fs generic
or
_supported_fs ext4
can run on ext4, and should in theory pass. I think there are
about 100 of them by now.
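If it helps, something like this should list the candidates (a rough sketch,
assuming a checkout with the old flat layout where each test is a numbered
file like 001, 002, ...):

  # list tests that declare themselves generic or ext4
  # (layout and paths are assumptions; adjust for your xfstests version)
  cd xfstests
  grep -lE '_supported_fs.*(generic|ext4)' [0-9][0-9][0-9] | sort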
There is a "dangerous" group (see the groups file) which contains test numbers
that might stop a test run via a hang or panic. But don't skip those by
default; newer kernels with those bugs fixed _should_ pass them too.
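If you want to see which tests are in that group, something along these lines
works (a sketch; the group file name and format may differ between versions):

  # list the test numbers tagged "dangerous" in the group file
  grep -w dangerous group
  # check can exclude a group if you ever need to, but don't make it the default:
  ./check -g auto -x dangerous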
> 3. Are there additional regression tests that would be beneficial
> to the Linux community for us to be running?
We've been encouraging new tests to be written for xfstests lately,
and it's gotten a good amount of traction. There certainly may be
other things out there, though.
-Eric
> Test results can be found at:
> http://kernel.ubuntu.com/testing/index.html
>
>
> Thanks
> Brad
>
On 09/12/2012 04:01 PM, Eric Sandeen wrote:
> Hi Brad -
>
> (cc: xfs list too)
>
> On 9/12/12 5:52 PM, Brad Figg wrote:
>>
>> The Ubuntu kernel team has been putting some automated testing
>> infrastructure in place. We are very interested in engaging with
>> the appropriate upstream developers. We have been running the
>> xfstests that come as part of the autotest testing framework.
>> Some of these tests fail or never complete when run against an
>> Ext4 file-system.
>
> Which ones? Feel free to file bugs or send mail. Tests should pass.
> Sometimes it's a test bug, though, of course ;)
>
>> Our initial questions are:
>>
>> 1. Is this an appropriate set of tests to be run as regression
>> tests?
>
> Yes, that's what it's for!
>
>> 2. Is there a list of the xfstests that are appropriate for
>> Ext4?
>
> Any test which says
>
> _supported_fs generic
> or
> _supported_fs ext4
>
> can run on ext4, and should in theory pass. I think there are
> about 100 of them by now.
>
> There is a "dangerous" group (see the groups file) which contains test numbers
> that might stop a test run via a hang or panic. But don't skip those by
> default; newer kernels with those bugs fixed _should_ pass them too.
>
>> 3. Are there additional regression tests that would be beneficial
>> to the Linux community for us to be running?
>
> We've been encouraging new tests to be written for xfstests lately,
> and it's gotten a good amount of traction. There certainly may be
> other things out there, though.
>
> -Eric
>
>
>> Test results can be found at:
>> http://kernel.ubuntu.com/testing/index.html
>>
>>
>> Thanks
>> Brad
>>
>
I'm going to be doing some new runs so anything I find will be reported.
Thanks,
Brad
--
Brad Figg [email protected] http://www.canonical.com
On 9/12/12 6:15 PM, Brad Figg wrote:
> I'm going to be doing some new runs so anything I find will be reported.
Dave Chinner also pointed out that, for example,
http://kernel.ubuntu.com/beta/testing/test-results/statler.2012-09-11_22-42-47/xfstests/default/control
seems to redefine, re-group, and exclude various tests, and is taking the "intelligence" out of the test suite itself.
I'd be wary of that; xfstests is dynamic - things get fixed, tests get added, groups changed, etc.
If you hard code for example "this test is for xfs" somewhere else, you might miss updates which add coverage.
Another example :
#'197' : ['xfs'],# This test is only valid on 32 bit machines
but the test handles that gracefully:
bitsperlong=`src/feature -w`
if [ "$bitsperlong" -ne 32 ]; then
    _notrun "This test is only valid on 32 bit machines"
fi
In general any test should be runnable; it may then issue 'not run' for some reason or other, but there's no harm in it - not as much harm as skipping regression tests because some config file got out of date...
and:
#'275' : ['generic'] # ext4 fails
but I just fixed that one up, and it should pass now. Who will update the 3rd party config?
Failing tests absolutely should be run as well. That information is as valuable as passing tests. The goal is getting a complete picture, not just a series of "pass" results. :)
-Eric
On 09/12/2012 05:20 PM, Eric Sandeen wrote:
> On 9/12/12 6:15 PM, Brad Figg wrote:
>> I'm going to be doing some new runs so anything I find will be reported.
>
> Dave Chinner also pointed out that, for example,
>
> http://kernel.ubuntu.com/beta/testing/test-results/statler.2012-09-11_22-42-47/xfstests/default/control
>
> seems to redefine, re-group, and exclude various tests, and is taking the "intelligence" out of the test suite itself.
>
> I'd be wary of that; xfstests is dynamic - things get fixed, tests get added, groups changed, etc.
>
> If you hard code for example "this test is for xfs" somewhere else, you might miss updates which add coverage.
>
> Another example :
>
> #'197' : ['xfs'],# This test is only valid on 32 bit machines
>
> but the test handles that gracefully:
>
> bitsperlong=`src/feature -w`
> if [ "$bitsperlong" -ne 32 ]; then
>     _notrun "This test is only valid on 32 bit machines"
> fi
>
> In general any test should be runnable; it may then issue 'not run' for some reason or other, but there's no harm in it - not as much harm as skipping regression tests because some config file got out of date...
>
> and:
>
> #'275' : ['generic'] # ext4 fails
>
> but I just fixed that one up, and it should pass now. Who will update the 3rd party config?
>
> Failing tests absolutely should be run as well. That information is as valuable as passing tests. The goal is getting a complete picture, not just a series of "pass" results. :)
>
> -Eric
>
Eric,
Thanks for taking the time to point this out. We will adjust our testing accordingly.
We initially tried to run xfstests against ext2, ext3, ext4, xfs and btrfs. We are also
trying to get these tests to run on several different kernel versions, as you can
see from our test results. We were running into issues on different kernels and various
file-systems while getting our act together, so we did this as a band-aid.
I accept that we have some things to learn w.r.t. running this test suite. We will work
to run the xfstests "as is" without any outside "intelligence". We do recognise that it
is a dynamic set of tests that people are adding to regularly.
I am not attempting to get just a series of "pass" results. If that were my goal
I could accomplish it much more easily and would not have engaged with the community
on the mailing list. We want to help where we can and will accept constructive
criticism.
Brad
--
Brad Figg [email protected] http://www.canonical.com
On 9/12/12 7:41 PM, Brad Figg wrote:
> Eric,
>
> Thanks for taking the time to point this out. We will adjust our testing accordingly.
> We initially tried to run xfstests against ext2, ext3, ext4, xfs and btrfs. We are also
> trying to get these tests to run on several different kernel versions, as you can
> see from our test results. We were running into issues on different kernels and various
> file-systems while getting our act together, so we did this as a band-aid.
I see.
> I accept that we have some things to learn w.r.t. running this test suite. We will work
> to run the xfstests "as is" without any outside "intelligence". We do recognise that it
> is a dynamic set of tests that people are adding to regularly.
>
> I am not attempting to get just a series of "pass" results. If that were my goal
> I could accomplish it much more easily and would not have engaged with the community
> on the mailing list. We want to help where we can and will accept constructive
> criticism.
Sorry, it sounds like I came across too strong there - it was just a little worrying to see failing or problematic tests disabled or otherwise artificially restricted.
I'm actually very excited to see you setting up ongoing, public testing using xfstests, I think it'll be a great benefit, especially if there's a way to see a particular test's results across several kernel versions and/or filesystems and/or architectures, so that patterns of failure can emerge.
If you find that xfstests is missing some feature or behavior which would facilitate testing in the automated environment, please do let us know what you need - or send patches. :)
Thanks,
-Eric
> Brad
>
On 09/12/2012 06:51 PM, Eric Sandeen wrote:
> On 9/12/12 7:41 PM, Brad Figg wrote:
>
>> Eric,
>>
>> Thanks for taking the time to point this out. We will adjust our testing accordingly.
>> We initially tried to run xfstests against ext2, ext3, ext4, xfs and btrfs. We are also
>> trying to get these tests to run on several different kernel versions, as you can
>> see from our test results. We were running into issues on different kernels and various
>> file-systems while getting our act together, so we did this as a band-aid.
>
> I see.
>
>> I accept that we have some things to learn w.r.t. running this test suite. We will work
>> to run the xfstests "as is" without any outside "intelligence". We do recognise that it
>> is a dynamic set of tests that people are adding to regularly.
>>
>> I am not attempting to get just a series of "pass" results. If that were my goal
>> I could accomplish it much more easily and would not have engaged with the community
>> on the mailing list. We want to help where we can and will accept constructive
>> criticism.
>
> Sorry, it sounds like I came across too strong there - it was just a little worrying to see failing or problematic tests disabled or otherwise artificially restricted.
>
> I'm actually very excited to see you setting up ongoing, public testing using xfstests, I think it'll be a great benefit, especially if there's a way to see a particular test's results across several kernel versions and/or filesystems and/or architectures, so that patterns of failure can emerge.
>
> If you find that xfstests is missing some feature or behavior which would facilitate testing in the automated environment, please do let us know what you need - or send patches. :)
>
> Thanks,
> -Eric
>
>> Brad
>>
>
No harm, no foul. We really don't mind constructive criticism. We are also
eager to get this set up and running. We will try to contribute more than
just running tests.
I do want to point out that we are using the xfstests snapshot that is bundled
in autotest. We also look at the latest xfstests in the official xfstests
repo and pull changes in when we see a delta. We will also work with the autotest
maintainers to stay more up-to-date with xfstests.
Thanks,
Brad
--
Brad Figg [email protected] http://www.canonical.com
On 9/12/12 9:04 PM, Brad Figg wrote:
> On 09/12/2012 06:51 PM, Eric Sandeen wrote:
>> On 9/12/12 7:41 PM, Brad Figg wrote:
>>
>>> Eric,
>>>
>>> Thanks for taking the time to point this out. We will adjust our testing accordingly.
>>> We initially tried to run xfstests against ext2, ext3, ext4, xfs and btrfs. We are also
>>> trying to get these tests to run on several different kernel versions, as you can
>>> see from our test results. We were running into issues on different kernels and various
>>> file-systems while getting our act together, so we did this as a band-aid.
>>
>> I see.
>>
>>> I accept that we have some things to learn w.r.t. running this test suite. We will work
>>> to run the xfstests "as is" without any outside "intelligence". We do recognise that it
>>> is a dynamic set of tests that people are adding to regularly.
>>>
>>> I am not attempting to get just a series of "pass" results. If that were my goal
>>> I could accomplish it much more easily and would not have engaged with the community
>>> on the mailing list. We want to help where we can and will accept constructive
>>> criticism.
>>
>> Sorry, it sounds like I came across too strong there - it was just a little worrying to see failing or problematic tests disabled or otherwise artificially restricted.
>>
>> I'm actually very excited to see you setting up ongoing, public testing using xfstests, I think it'll be a great benefit, especially if there's a way to see a particular test's results across several kernel versions and/or filesystems and/or architectures, so that patterns of failure can emerge.
>>
>> If you find that xfstests is missing some feature or behavior which would facilitate testing in the automated environment, please do let us know what you need - or send patches. :)
>>
>> Thanks,
>> -Eric
>>
>>> Brad
>>>
>>
>
> No harm, no foul. We really don't mind constructive criticism. We are also
> eager to get this set up and running. We will try to contribute more than
> just running tests.
Just running them and publishing results is definitely useful.
> I do want to point out that we are using the xfstests snapshot that is bundled
> in autotest. We also look at the latest xfstests in the official xfstests
> repo and pull changes in when we see a delta. We will also work with the autotest
> maintainers to stay more up-to-date with xfstests.
Ah, I didn't know that autotest had a snapshot. I wonder if there's a way to
tease that back out, and pull down xfstests from git daily.
FWIW, it's a little confusing - we have 2 repos:
git://oss.sgi.com/xfs/cmds/xfstests.git
git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
the one on kernel.org is where most rapid development seems to happen, and changes get pulled over to sgi. Sometimes, it goes the other way.
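A daily refresh could be as simple as something like this run out of cron
(just a sketch; pick whichever repo you prefer, and the clone path is only an
example):

  # refresh a local xfstests clone and rebuild it
  [ -d /opt/xfstests ] || git clone git://oss.sgi.com/xfs/cmds/xfstests.git /opt/xfstests
  cd /opt/xfstests && git pull && make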
Did autotest make any changes to what's upstream?
-Eric
> Thanks,
> Brad
>
On 09/12/2012 07:09 PM, Eric Sandeen wrote:
> On 9/12/12 9:04 PM, Brad Figg wrote:
>> On 09/12/2012 06:51 PM, Eric Sandeen wrote:
>>> On 9/12/12 7:41 PM, Brad Figg wrote:
>>>
>>>> Eric,
>>>>
>>>> Thanks for taking the time to point this out. We will adjust our testing accordingly.
>>>> We initially tried to run xfstests against ext2, ext3, ext4, xfs and btrfs. We are also
>>>> trying to get these tests to run on several different kernel versions, as you can
>>>> see from our test results. We were running into issues on different kernels and various
>>>> file-systems while getting our act together, so we did this as a band-aid.
>>>
>>> I see.
>>>
>>>> I accept that we have some things to learn w.r.t. running this test suite. We will work
>>>> to run the xfstests "as is" without any outside "intelligence". We do recognise that it
>>>> is a dynamic set of tests that people are adding to regularly.
>>>>
>>>> I am not attempting to get just a series of "pass" results. If that were my goal
>>>> I could accomplish it much more easily and would not have engaged with the community
>>>> on the mailing list. We want to help where we can and will accept constructive
>>>> criticism.
>>>
>>> Sorry, it sounds like I came across too strong there - it was just a little worrying to see failing or problematic tests disabled or otherwise artificially restricted.
>>>
>>> I'm actually very excited to see you setting up ongoing, public testing using xfstests, I think it'll be a great benefit, especially if there's a way to see a particular test's results across several kernel versions and/or filesystems and/or architectures, so that patterns of failure can emerge.
>>>
>>> If you find that xfstests is missing some feature or behavior which would facilitate testing in the automated environment, please do let us know what you need - or send patches. :)
>>>
>>> Thanks,
>>> -Eric
>>>
>>>> Brad
>>>>
>>>
>>
>> No harm, no foul. We really don't mind constructive criticism. We are also
>> eager to get this set up and running. We will try to contribute more than
>> just running tests.
>
> Just running them and publishing results is definitely useful.
>
>> I do want to point out that we are using the xfstests snapshot that is bundled
>> in autotest. We also look at the latest xfstests in the official xfstests
>> repo and pull changes in when we see a delta. We will also work with the autotest
>> maintainers to stay more up-to-date with xfstests.
>
> Ah, I didn't know that autotest had a snapshot. I wonder if there's a way to
> tease that back out, and pull down xfstests from git daily.
>
> FWIW, it's a little confusing - we have 2 repos:
>
> git://oss.sgi.com/xfs/cmds/xfstests.git
> git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
>
> the one on kernel.org is where most rapid development seems to happen, and changes get pulled over to sgi. Sometimes, it goes the other way.
>
> Did autotest make any changes to what's upstream?
>
> -Eric
>
>> Thanks,
>> Brad
>>
>
No, it seems to be an unmodified snapshot. As long as it's kept up-to-date,
which is something we have a vested interest in seeing done, I'm fine
with this. We then take this (autotest) and also roll it out to our QA team
for even more testing. They will also be making their testing results public.
I grabbed from the oss.sgi.com repository. If one is more "authoritative" or
is the one you'd rather see us use, just let me/us know.
Brad
--
Brad Figg [email protected] http://www.canonical.com
On Wed, Sep 12, 2012 at 03:52:49PM -0700, Brad Figg wrote:
>
> The Ubuntu kernel team has been putting some automated testing
> infrastructure in place. We are very interested in engaging with
> the appropriate upstream developers. We have been running the
> xfstests that come as part of the autotest testing framework.
> Some of these tests fail or never complete when run against an
> Ext4 file-system.
Looking at your web page, it looks like xfstests is passing w/o any
problems for the set of tests that you are running with the 3.5 and
3.2 kernels.
The failures that you are seeing with the 3.0 kernel look funny; it looks
like you are running the tests one at a time, and after the file
system got corrupted with the fsstress run in test #13, your test
framework isn't repairing the file system with e2fsck -fy, so all of
the tests afterwards failed because the file system was corrupted. As
to why the test failed with 3.0, it's probably some fix that we didn't
get backported to 3.0 for some reason. Quite frankly that's not
something I really worry about --- as far as I'm concerned, if it
can't be trivially backported as part of the [email protected]
process, or after the stable coverage is finished, it's the distro's
responsibility to backport patches to their
old/antique/stable/enterprise/LTS kernels --- it's why the
distributions get paid the big bucks. :-)
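If the harness really wants to drive tests one at a time, the minimal fix is
to repair the file system between tests so one corruption doesn't cascade
into everything that follows; roughly (an untested sketch, using the usual
xfstests device variables):

  for t in 013 014 015; do
      ./check $t
      umount $TEST_DIR 2>/dev/null
      e2fsck -fy $TEST_DEV
      mount $TEST_DEV $TEST_DIR
  done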
Also, I noted that some of your failures on the older distributions
were due to missing userspace programs that were not installed on the
System Under Test (for example, "chacl").
As far as how I do my testing, I generally use "./check -g quick" or
"./check -g auto" --- where "-g auto" takes a lot longer, and there
are some known failures depending on the specific set of mount options and
how the file system is created.
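For reference, the environment ./check expects looks roughly like this
(device names and options below are only examples; TEST_DEV should already
contain a file system):

  export FSTYP=ext4
  export TEST_DEV=/dev/vdc       # pre-made file system, mounted at TEST_DIR
  export TEST_DIR=/vdc
  export SCRATCH_DEV=/dev/vdd    # many tests mkfs this one
  export SCRATCH_MNT=/vdd
  export MKFS_OPTIONS="-q"
  export MOUNT_OPTIONS="-o acl,user_xattr -o block_validity"
  ./check -g quick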
Here is my baseline that I had at the start of the 3.6 development
cycle. (There are some test failures that are on my todo list to
investigate more closely, but I keep a baseline to make sure things
don't regress.)
I have a script which handles doing all of this automatically using
KVM, with a single command I can run which takes a built kernel, runs
it using KVM, and passes the configuration options via the boot
command-line to a set of shell scripts run out of /etc/rc.local which
then runs xfstests in the various file system configurations.
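The mechanism is roughly the following (an illustrative sketch only --- the
"fstestcfg" option name is made up here; the real scripts use their own names
and plumbing):

  # host side: boot the freshly built kernel under KVM, passing the test
  # configuration on the kernel command line
  kvm -drive file=root_fs.img,if=virtio \
      -kernel arch/x86/boot/bzImage \
      -append "root=/dev/vda console=ttyS0 fstestcfg=4k,1k,nojournal"

  # guest side: /etc/rc.local pulls the configuration back out of /proc/cmdline
  cfg=$(sed -n 's/.*fstestcfg=\([^ ]*\).*/\1/p' /proc/cmdline)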
- Ted
BEGIN TEST: Ext4 4k block Sat Jul 28 16:04:48 EDT 2012
MKFS_OPTIONS -- -q /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity /dev/vdc /vdc
Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
Passed all 67 tests
END TEST: Ext4 4k block Sat Jul 28 16:33:53 EDT 2012
# This is a good way to test using ext4 for ext3 file systems
BEGIN TEST: Ext4 4k block w/nodelalloc and no extents Sat Jul 28 16:34:03 EDT 2012
MKFS_OPTIONS -- -q -O ^extents,^flex_bg,^uninit_bg /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,nodelalloc /dev/vdc /vdc
Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 215 219 221 225 230 235 236 237 240 245 246 247 248 249 257 258 263 271 277
Passed all 60 tests
END TEST: Ext4 4k block w/nodelalloc and no extents Sat Jul 28 16:55:23 EDT 2012
# We care about this in Google, which is why I run it
BEGIN TEST: Ext4 4k block w/ no journal Sat Jul 28 16:55:26 EDT 2012
MKFS_OPTIONS -- -q -O ^has_journal /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,noload /dev/vdc /vdc
Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
Passed all 67 tests
END TEST: Ext4 4k block w/ no journal Sat Jul 28 17:15:07 EDT 2012
# This is useful for testing page size != block size --- a big deal with
# architectures that have 16k pages, such as Power PC or Itanium with a
# 4k block --- we test for it using x86 and 1k blocks / 4k pages.
BEGIN TEST: Ext4 1k block Sat Jul 28 17:15:16 EDT 2012
MKFS_OPTIONS -- -q -b 1024 /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity /dev/vdc /vdc
Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
Failures: 020
END TEST: Ext4 1k block Sat Jul 28 18:18:25 EDT 2012
# Useful for PCIe attached flash devices
BEGIN TEST: Ext4 4k block w/dioread_nolock Sat Jul 28 18:38:55 EDT 2012
MKFS_OPTIONS -- -q /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,dioread_nolock /dev/vdc /vdc
Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
Failures: 091 263
END TEST: Ext4 4k block w/dioread_nolock Sat Jul 28 19:07:38 EDT 2012
BEGIN TEST: Ext4 4k block w/data=journal Sat Jul 28 19:07:45 EDT 2012
MKFS_OPTIONS -- -q /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,data=journal /dev/vdc /vdc
Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
Failures: 223
END TEST: Ext4 4k block w/data=journal Sat Jul 28 19:33:26 EDT 2012
BEGIN TEST: Ext4 4k block w/bigalloc Sat Jul 28 19:33:35 EDT 2012
MKFS_OPTIONS -- -q -O bigalloc /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity /dev/vdc /vdc
Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 257 258 263 271 277
Failures: 015 219 235
END TEST: Ext4 4k block w/bigalloc Sat Jul 28 19:52:41 EDT 2012
On Wed, Sep 12, 2012 at 09:09:37PM -0500, Eric Sandeen wrote:
>
> Ah, I didn't know that autotest had a snapshot. I wonder if there's
> a way to tease that back out, and pull down xfstests from git daily.
>
> FWIW, it's a little confusing - we have 2 repos:
>
> git://oss.sgi.com/xfs/cmds/xfstests.git
> git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
This may be helpful; I have an xfstests build environment here:
git://git.kernel.org/pub/scm/fs/ext2/xfstests-bld.git
It will automatically pull down xfstests and xfsprogs from
git.kernel.org, and build xfstests in a hermetic environment (i.e.,
it doesn't depend on the versions of xfsprogs, libacl, libaio,
etc. installed on the build system). I set this up back when I was
doing most of my work using Ubuntu LTS 10.04, and the positively
ancient versions of libacl, libaio, xfsprogs-dev, etc., weren't
compatible with the bleeding edge of xfstests.
So I used this build environment so I wasn't dependent on the
vagaries of whatever happened to be in Ubuntu LTS 10.04.
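Getting started with it is roughly (a sketch; I'm assuming the top-level
makefile drives the whole fetch and build, so check the repository itself for
the exact steps):

  git clone git://git.kernel.org/pub/scm/fs/ext2/xfstests-bld.git
  cd xfstests-bld
  make     # fetches and builds xfstests, xfsprogs, and the support libraries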
The makefiles in xfstests-bld will also generate a tar file containing
the necessary xfstests and xfsprogs binaries which I could then drop
into a debootstrap environment (I'm currently using an x86-32 Debian
unstable chroot) which I then use in my KVM image. So it allows me to
get and build the very latest version of xfstests from git.kernel.org
in a highly automated fashion.
Regards,
- Ted
On 09/12/2012 07:18 PM, Theodore Ts'o wrote:
> On Wed, Sep 12, 2012 at 03:52:49PM -0700, Brad Figg wrote:
>>
>> The Ubuntu kernel team has been putting some automated testing
>> infrastructure in place. We are very interested in engaging with
>> the appropriate upstream developers. We have been running the
>> xfstests that come as part of the autotest testing framework.
>> Some of these tests fail or never complete when run against an
>> Ext4 file-system.
>
> Looking at your web page, it looks like xfstests is passing w/o any
> problems for the set of tests that you are running with the 3.5 and
> 3.2 kernels.
>
> The failures that you are seeing with the 3.0 kernel look funny; it looks
> like you are running the tests one at a time, and after the file
> system got corrupted with the fsstress run in test #13, your test
> framework isn't repairing the file system with e2fsck -fy, so all of
> the tests afterwards failed because the file system was corrupted. As
> to why the test failed with 3.0, it's probably some fix that we didn't
> get backported to 3.0 for some reason. Quite frankly that's not
> something I really worry about --- as far as I'm concerned, if it
> can't be trivially backported as part of the [email protected]
> process, or after the stable coverage is finished, it's the distro's
> responsibility to backport patches to their
> old/antique/stable/enterprise/LTS kernels --- it's why the
> distributions get paid the big bucks. :-)
>
> Also, I noted that some of your failures on the older distributions
> were due to missing userspace programs that were not installed on the
> System Under Test (for example, "chacl").
>
>
> As far as how I do my testing, I generally use "./check -g quick" or
> "./check -g auto" --- where "-g auto" takes a lot longer, and there
> are some known failures depending on the specific set of mount options and
> how the file system is created.
>
> Here is my baseline that I had at the start of the 3.6 development
> cycle. (There are some test failures that are on my todo list to
> investigate more closely, but I keep a baseline to make sure things
> don't regress.)
>
> I have a script which handles doing all of this automatically using
> KVM, with a single command I can run which takes a built kernel, runs
> it using KVM, and passes the configuration options via the boot
> command-line to a set of shell scripts run out of /etc/rc.local which
> then runs xfstests in the various file system configurations.
>
> - Ted
>
>
> BEGIN TEST: Ext4 4k block Sat Jul 28 16:04:48 EDT 2012
> MKFS_OPTIONS -- -q /dev/vdc
> MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity /dev/vdc /vdc
> Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
> Passed all 67 tests
> END TEST: Ext4 4k block Sat Jul 28 16:33:53 EDT 2012
>
> # This is a good way to test using ext4 for ext3 file systems
> BEGIN TEST: Ext4 4k block w/nodelalloc and no extents Sat Jul 28 16:34:03 EDT 2012
> MKFS_OPTIONS -- -q -O ^extents,^flex_bg,^uninit_bg /dev/vdc
> MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,nodelalloc /dev/vdc /vdc
> Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 215 219 221 225 230 235 236 237 240 245 246 247 248 249 257 258 263 271 277
> Passed all 60 tests
> END TEST: Ext4 4k block w/nodelalloc and no extents Sat Jul 28 16:55:23 EDT 2012
>
> # We care about this in Google, which is why I run it
> BEGIN TEST: Ext4 4k block w/ no journal Sat Jul 28 16:55:26 EDT 2012
> MKFS_OPTIONS -- -q -O ^has_journal /dev/vdc
> MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,noload /dev/vdc /vdc
> Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
> Passed all 67 tests
> END TEST: Ext4 4k block w/ no journal Sat Jul 28 17:15:07 EDT 2012
>
> # This is useful for testing page size != block size --- a big deal with
> # architectures that have 16k pages, such as Power PC or Itanium with a
> # 4k block --- we test for it using x86 and 1k blocks / 4k pages.
> BEGIN TEST: Ext4 1k block Sat Jul 28 17:15:16 EDT 2012
> MKFS_OPTIONS -- -q -b 1024 /dev/vdc
> MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity /dev/vdc /vdc
> Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
> Failures: 020
> END TEST: Ext4 1k block Sat Jul 28 18:18:25 EDT 2012
>
> # Useful for PCIe attached flash devices
> BEGIN TEST: Ext4 4k block w/dioread_nolock Sat Jul 28 18:38:55 EDT 2012
> MKFS_OPTIONS -- -q /dev/vdc
> MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,dioread_nolock /dev/vdc /vdc
> Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
> Failures: 091 263
> END TEST: Ext4 4k block w/dioread_nolock Sat Jul 28 19:07:38 EDT 2012
>
> BEGIN TEST: Ext4 4k block w/data=journal Sat Jul 28 19:07:45 EDT 2012
> MKFS_OPTIONS -- -q /dev/vdc
> MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity,data=journal /dev/vdc /vdc
> Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 255 256 257 258 263 271 277
> Failures: 223
> END TEST: Ext4 4k block w/data=journal Sat Jul 28 19:33:26 EDT 2012
>
> BEGIN TEST: Ext4 4k block w/bigalloc Sat Jul 28 19:33:35 EDT 2012
> MKFS_OPTIONS -- -q -O bigalloc /dev/vdc
> MOUNT_OPTIONS -- -o acl,user_xattr -o block_validity /dev/vdc /vdc
> Ran: 001 002 005 006 007 011 013 014 015 020 053 062 069 070 075 076 079 088 091 105 112 113 117 120 123 124 126 128 129 130 131 135 141 169 184 193 198 207 210 211 212 213 214 215 219 221 223 225 228 230 235 236 237 240 243 245 246 247 248 249 257 258 263 271 277
> Failures: 015 219 235
> END TEST: Ext4 4k block w/bigalloc Sat Jul 28 19:52:41 EDT 2012
>
Thanks, this is very helpful. I wouldn't mind seeing your script if you
feel like sharing it.
Brad
--
Brad Figg [email protected] http://www.canonical.com
On Wed, Sep 12, 2012 at 07:50:48PM -0700, Brad Figg wrote:
>
> Thanks, this is very helpful. I wouldn't mind seeing your script if you
> feel like sharing it.
I've checked in the scripts into my xfstests-bld repository. You can
get the repository here:
git://git.kernel.org/pub/scm/fs/ext2/xfstests-bld.git
This is how I build xfstests and its dependencies hermetically, so
that even if you are forced (as I was for a while) to build on an
ancient enterprise or LTS distribution, it will build cleanly.
(Xfstests requires xfsprogs libraries newer than what was
shipped with LTS 10.04, and I'd guess RHEL 6, and definitely RHEL 5.)
The scripts that run in the VM can be found in the directory
kvm-autorun; they are installed into a file system built using
debootstrap. The scripts I use to kick off kvm in the host OS can be
found in the directory kvm-xfstests.
Sorry for the delay; it took me a while to get things packaged up
cleanly. Unfortunately I can't just ship you the root_fs.img due to
GPL licensing issues (figuring out all of the necessary source files
that I'd have to ship to correspond to the application image is a
huge pain in the ass). One of these days I'll create a shell script
that runs debootstrap and automatically sets up the root_fs.img for the
VM for people building the VM while running Debian or Ubuntu ---
unless someone beats me to it first (hint, hint :-).
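For anyone who wants to take a crack at it, the rough shape would be
something like this (an untested sketch; the size, Debian suite, mirror, and
paths are all just examples):

  # create and populate a root_fs.img via debootstrap
  dd if=/dev/zero of=root_fs.img bs=1M count=2048
  mkfs.ext4 -F root_fs.img
  mkdir -p /mnt/rootfs
  mount -o loop root_fs.img /mnt/rootfs
  debootstrap --arch=i386 sid /mnt/rootfs http://ftp.debian.org/debian
  # unpack the xfstests/xfsprogs tar file and the kvm-autorun scripts into
  # the image here, then:
  umount /mnt/rootfs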
- Ted