It seems that the current e2fsprogs "maint" branch has broken tests?
At least two different systems I tried this on had the same problem:
r_64bit_big_expand: very large fs growth using ext4 w/64bit: failed
r_bigalloc_big_expand: ext4 with bigalloc: failed
r_ext4_big_expand: very large fs growth using ext4: failed
The test logs show:
/tmp/e2fsprogs-tmp.VitAZy: 13/32768 files (7.7% non-contiguous),
6870/131072 blocks
../resize/resize2fs -d 31 /tmp/e2fsprogs-tmp.VitAZy 2T
resize2fs 1.42.8 (20-Jun-2013)
The containing partition (or device) is only 131072 (4k) blocks.
You requested a new size of 536870912 blocks.
I tried to add in a "truncate -s $SIZE_2 $TMPFILE", but it complains that it
isn't able to truncate the file in /tmp to 2TB:
truncating `/tmp/e2fsprogs-tmp.OGxb09' at 2199023255552 bytes: File too large
Testing manually, it seems I'm not allowed to create a file in tmpfs larger
than 256GB. How large does this file need to be for this test to be valid?
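For anyone who wants to find the actual ceiling empirically, here is a rough
sketch (probe_max_size is just a throwaway helper I'm making up here, not part
of the test suite; it assumes GNU coreutils truncate and a writable directory)
that bisects the largest sparse-file size truncate will accept:

```shell
# Bisect the largest sparse-file size truncate will accept on the
# filesystem backing a directory; prints the result in bytes.
probe_max_size() {
    dir=$1
    f=$(mktemp "$dir/maxsize-probe.XXXXXX") || return 1
    lo=0
    hi=$((1 << 62))        # probe ceiling (~4 EiB); result is capped at hi-1
    while [ $((hi - lo)) -gt 1 ]; do
        mid=$(((lo + hi) / 2))
        if truncate -s "$mid" "$f" 2>/dev/null; then
            lo=$mid        # accepted: the limit is at least mid
        else
            hi=$mid        # rejected: the limit is below mid
        fi
    done
    rm -f "$f"
    echo "$lo"
}
probe_max_size /tmp
```

On the RHEL6 systems above this should print something just under 256GB; on a
recent kernel's tmpfs it prints a far larger number.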
I'm also seeing a consistent test failure in f_extent_oobounds on ONE of the
two systems, though I can't see why the results are inconsistent since they
have the same GCC, glibc, and almost the same kernel (RHEL 2.6.32-358.11.1.el6
and 2.6.32-279.5.1.el6, not that it should make any difference).
more f_extent*.failed
--- f_extent_oobounds/expect.1 2013-10-31 20:01:06.299616314 +0000
+++ f_extent_oobounds.1.log 2013-10-31 21:16:21.008616804 +0000
@@ -1,24 +1,20 @@
Pass 1: Checking inodes, blocks, and sizes
-Inode 12, end of extent exceeds allowed value
- (logical block 15, physical block 200, len 30)
-Clear? yes
-
-Inode 12, i_blocks is 154, should be 94. Fix? yes
+Inode 12, i_blocks is 154, should be 0. Fix? yes
This is still true after "make distclean" and rebuilding the whole tree.
It seems that e2fsck isn't detecting the new PR_1_EXTENT_END_OUT_OF_BOUNDS
problem on this system for some reason? Usually this kind of inconsistency
is due to some uninitialized stack variable being used that is different
on the two systems.
Anyone else seen these problems, or do I need to dig in further?
Cheers, Andreas
--
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division
Hi Andreas,
On Thu, Oct 31, 2013 at 09:35:25PM +0000, Dilger, Andreas wrote:
> It seems that the current e2fsprogs "maint" branch has broken tests?
> At least on two different systems I tried this on had the same problem:
>
> r_64bit_big_expand: very large fs growth using ext4 w/64bit: failed
> r_bigalloc_big_expand: ext4 with bigalloc: failed
> r_ext4_big_expand: very large fs growth using ext4: failed
>
> The test logs show:
>
> /tmp/e2fsprogs-tmp.VitAZy: 13/32768 files (7.7% non-contiguous),
> 6870/131072 blocks
> ../resize/resize2fs -d 31 /tmp/e2fsprogs-tmp.VitAZy 2T
> resize2fs 1.42.8 (20-Jun-2013)
> The containing partition (or device) is only 131072 (4k) blocks.
> You requested a new size of 536870912 blocks.
>
>
> I tried to add in a "truncate -s $SIZE_2 $TMPFILE", but it complains that
> it
> isn't able to truncate the file in /tmp to 2TB:
>
> truncating `/tmp/e2fsprogs-tmp.OGxb09' at 2199023255552 bytes: File too
> large
>
> Testing manually, it seems I'm not allowed to create a file in tmpfs larger
> than 256GB. How large does this file need to be for this test to be valid?
>
>
> I'm also seeing a consistent test failure in f_extent_oobounds on ONE of
> the
> two systems, though I can't see why the results are inconsistent since they
> have the same GCC, glibc and almost the same kernel (RHEL
> 2.6.32-358.11.1.el6
> and 2.6.32-279.5.1.el6, not that it should make any difference).
>
> more f_extent*.failed
> --- f_extent_oobounds/expect.1 2013-10-31 20:01:06.299616314 +0000
> +++ f_extent_oobounds.1.log 2013-10-31 21:16:21.008616804 +0000
> @@ -1,24 +1,20 @@
> Pass 1: Checking inodes, blocks, and sizes
> -Inode 12, end of extent exceeds allowed value
> - (logical block 15, physical block 200, len 30)
> -Clear? yes
> -
> -Inode 12, i_blocks is 154, should be 94. Fix? yes
> +Inode 12, i_blocks is 154, should be 0. Fix? yes
>
> This is still true after "make distclean" and rebuilding the whole tree.
> It seems that e2fsck isn't detecting the new PR_1_EXTENT_END_OUT_OF_BOUNDS
> problem on this system for some reason? Usually this kind of inconsistency
> is due to some uninitialized stack variable being used that is different
> on the two systems.
>
>
> Anyone else seen these problems, or do I need to dig in further?
Yes, I also can see these problems.
Thanks,
- Zheng
On Fri, Nov 01, 2013 at 10:35:56AM +0800, Zheng Liu wrote:
> Hi Andreas,
>
> On Thu, Oct 31, 2013 at 09:35:25PM +0000, Dilger, Andreas wrote:
> > I tried to add in a "truncate -s $SIZE_2 $TMPFILE", but it complains that
> > it
> > isn't able to truncate the file in /tmp to 2TB:
> >
> > truncating `/tmp/e2fsprogs-tmp.OGxb09' at 2199023255552 bytes: File too
> > large
> >
> > Testing manually, it seems I'm not allowed to create a file in tmpfs larger
> > than 256GB. How large does this file need to be for this test to be valid?
> >
> > Anyone else seen these problems, or do I need to dig in further?
>
> Yes, I also can see these problems.
Hmm.... it works for me. Run while r_64bit_big_expand is running:
% ls -l tmp
...
24896 -rw-r--r--. 1 tytso tytso 2199023255552 Oct 31 23:17 e2fsprogs-tmp.pkOcCc
...
% df /tmp
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3216420 26008 3190412 1% /tmp
What version of the kernel are you running? I am using 3.12-rc5 plus
the ext4 dev tree, so I'm using a pretty recent kernel.
Maybe this is a relatively new feature of tmpfs? If so, I should
probably change the test so that it's a bit more portable on people
using older kernels.
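Such a guard might look like the following sketch (can_hold is a hypothetical
helper, not anything the test suite currently has; it assumes mktemp and GNU
truncate): probe whether the backing filesystem accepts a sparse file of the
needed size, and skip rather than fail when it can't.

```shell
# Hypothetical guard: succeed only if DIR's filesystem accepts a
# sparse file of SIZE (any size suffix truncate understands, e.g. "2T").
can_hold() {
    size=$1 dir=$2
    f=$(mktemp "$dir/e2fsprogs-probe.XXXXXX") || return 1
    if truncate -s "$size" "$f" 2>/dev/null; then
        rm -f "$f"
        return 0
    fi
    rm -f "$f"
    return 1
}
if can_hold 2T /tmp; then
    echo "/tmp can back a 2T test image"
else
    echo "skipped: /tmp cannot hold a 2T sparse file"
fi
```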
- Ted
On Thu, Oct 31, 2013 at 11:21:15PM -0400, Theodore Ts'o wrote:
> On Fri, Nov 01, 2013 at 10:35:56AM +0800, Zheng Liu wrote:
> > Hi Andreas,
> >
> > On Thu, Oct 31, 2013 at 09:35:25PM +0000, Dilger, Andreas wrote:
> > > I tried to add in a "truncate -s $SIZE_2 $TMPFILE", but it complains that
> > > it
> > > isn't able to truncate the file in /tmp to 2TB:
> > >
> > > truncating `/tmp/e2fsprogs-tmp.OGxb09' at 2199023255552 bytes: File too
> > > large
> > >
> > > Testing manually, it seems I'm not allowed to create a file in tmpfs larger
> > > than 256GB. How large does this file need to be for this test to be valid?
> > >
> > > Anyone else seen these problems, or do I need to dig in further?
> >
> > Yes, I also can see these problems.
>
> Hmm.... it works for me. Run while r_64bit_big_expand is running:
>
> % ls -l tmp
> ...
> 24896 -rw-r--r--. 1 tytso tytso 2199023255552 Oct 31 23:17 e2fsprogs-tmp.pkOcCc
> ...
$ ls -l /tmp
-rw-rw-r-- 1 wenqing wenqing 536870912 Nov 1 21:03 e2fsprogs-tmp.x8yzKP
I am not sure that I did the right thing to get this result because the
temporary files are removed after the test is done. So the only thing I can
do is run something like this in a terminal while the test is running:
while true
do
ls -l /tmp
usleep 500
done
Please let me know if I am wrong.
>
> % df /tmp
> Filesystem 1K-blocks Used Available Use% Mounted on
> tmpfs 3216420 26008 3190412 1% /tmp
>
> What version of the kernel are you running? I am using 3.12-rc5 plus
> the ext4 dev tree, so I'm using a pretty recent kernel.
I'm using 3.12-rc5 plus the ext4 dev tree too. So I guess that the
difference between us is that I do these tests on a hard disk rather
than on tmpfs.
- Zheng
On Fri, Nov 01, 2013 at 09:12:37PM +0800, Zheng Liu wrote:
> > Hmm.... it works for me. Run while r_64bit_big_expand is running:
> >
> > % ls -l tmp
> > ...
> > 24896 -rw-r--r--. 1 tytso tytso 2199023255552 Oct 31 23:17 e2fsprogs-tmp.pkOcCc
> > ...
>
> $ ls -l /tmp
> -rw-rw-r-- 1 wenqing wenqing 536870912 Nov 1 21:03 e2fsprogs-tmp.x8yzKP
Well, I got this by running "./test_script r_64bit_big_expand" and
then typing ^Z to stop the test mid-stream, and then looking in /tmp.
But a simpler thing to do is to simply run the following commands:
truncate -s 2T /tmp/foo.img
mke2fs -t ext4 -F /tmp/foo.img
... and see if it works correctly. I'm wondering if the problem is
that a file limit was set, although that would result in a core dump:
% bash
% ulimit -f 131072
% truncate -s 2T /tmp/foo.img
File size limit exceeded (core dumped)
% exit
.... so that doesn't seem to be it. Anyway, the problem seems to be
that trying to create a sparse 2T file during the test is what's
causing the problem that you and Andreas are seeing. If this theory
is right, the next question is what's causing the failure to write
files whose i_size is greater than 2T.
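The two failure modes are distinguishable, for what it's worth. This sketch
(my own, not from the test suite) relies only on standard behavior: a write or
truncate past RLIMIT_FSIZE kills the process with SIGXFSZ, so the shell sees
exit status 128+signal, whereas a filesystem s_maxbytes cap just makes
ftruncate fail with EFBIG and truncate exits normally with an error:

```shell
# Lower the file-size rlimit in a subshell, then try to extend a file
# past it; inspect the exit status to see which failure mode we hit.
f=$(mktemp)
( ulimit -f 4; truncate -s 1M "$f" ) 2>/dev/null
status=$?
rm -f "$f"
if [ "$status" -gt 128 ]; then
    echo "killed by signal $((status - 128)): a ulimit is set"
else
    echo "exit status $status: likely EFBIG from the filesystem"
fi
```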
- Ted
On 2013/10/31 9:21 PM, "Theodore Ts'o" <[email protected]> wrote:
>On Fri, Nov 01, 2013 at 10:35:56AM +0800, Zheng Liu wrote:
>> Hi Andreas,
>>
>> On Thu, Oct 31, 2013 at 09:35:25PM +0000, Dilger, Andreas wrote:
>> > I tried to add in a "truncate -s $SIZE_2 $TMPFILE", but it complains
>> > that it isn't able to truncate the file in /tmp to 2TB:
>> >
>> > truncating `/tmp/e2fsprogs-tmp.OGxb09' at 2199023255552 bytes: File
>> > too large
>> >
>> > Testing manually, it seems I'm not allowed to create a file in tmpfs
>> > larger than 256GB. How large does this file need to be for this test
>> > to be valid?
>> >
>> > Anyone else seen these problems, or do I need to dig in further?
>>
>> Yes, I also can see these problems.
>
>Hmm.... it works for me. Run while r_64bit_big_expand is running:
>
>% ls -l tmp
>...
>24896 -rw-r--r--. 1 tytso tytso 2199023255552 Oct 31 23:17
>e2fsprogs-tmp.pkOcCc
>...
>
>% df /tmp
>Filesystem 1K-blocks Used Available Use% Mounted on
>tmpfs 3216420 26008 3190412 1% /tmp
>
>What version of the kernel are you running? I am using 3.12-rc5 plus
>the ext4 dev tree, so I'm using a pretty recent kernel.
It was in the original email - the failing systems are both RHEL6,
2.6.32-358.11.1.el6 (w/4GB RAM) and 2.6.32-279.5.1.el6 (w/ 2GB RAM).
Both fail to create files in tmpfs larger than 256GB.
>Maybe this is a relatively new feature of tmpfs? If so, I should
>probably change the test so that it's a bit more portable on people
>using older kernels.
Looking at the s_maxbytes value in current kernels shows that it was
changed in v3.0-7280-g285b2c4 and has not been backported to the RHEL6
kernels I'm using, at least.
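The ~256GB number also falls straight out of the pre-v3.0 swap-vector
geometry (the constants below are reconstructed from memory of the old
mm/shmem.c, so treat them as an assumption; 64-bit, 4K pages):

```shell
# Old tmpfs ceiling, pre-v3.0 (removed by commit 285b2c4): pages were
# indexed through a small direct area plus a doubly-indirect swap vector.
PAGE_SIZE=4096
ENTRIES_PER_PAGE=$((PAGE_SIZE / 8))   # swp_entry_t is 8 bytes on 64-bit
SHMEM_NR_DIRECT=16
SHMEM_MAX_INDEX=$((SHMEM_NR_DIRECT + ENTRIES_PER_PAGE * ENTRIES_PER_PAGE / 2 * (ENTRIES_PER_PAGE + 1)))
SHMEM_MAX_BYTES=$((SHMEM_MAX_INDEX * PAGE_SIZE))
echo "$SHMEM_MAX_BYTES bytes (~$((SHMEM_MAX_BYTES >> 30)) GiB)"
```

That works out to roughly 256 GiB, which matches the ceiling I'm hitting.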
Cheers, Andreas
--
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division
On Fri, Nov 01, 2013 at 12:48:34PM -0400, Theodore Ts'o wrote:
> On Fri, Nov 01, 2013 at 09:12:37PM +0800, Zheng Liu wrote:
> > > Hmm.... it works for me. Run while r_64bit_big_expand is running:
> > >
> > > % ls -l tmp
> > > ...
> > > 24896 -rw-r--r--. 1 tytso tytso 2199023255552 Oct 31 23:17 e2fsprogs-tmp.pkOcCc
> > > ...
> >
> > $ ls -l /tmp
> > -rw-rw-r-- 1 wenqing wenqing 536870912 Nov 1 21:03 e2fsprogs-tmp.x8yzKP
>
> Well, I got this by running "./test_script r_64bit_big_expand" and
> then typing ^Z to stop the test mid-stream, and then looking in /tmp.
Thanks for letting me know.
>
> But a simpler thing to do is to simply run the following commands:
>
> truncate -s 2T /tmp/foo.img
> mke2fs -t ext4 -F /tmp/foo.img
>
> ... and see if it works correctly. I'm wondering if the problem is
> that a file limit was set, although that would result in a core dump:
>
> % bash
> % ulimit -f 131072
> % truncate -s 2T /tmp/foo.img
> File size limit exceeded (core dumped)
> % exit
>
> .... so that doesn't seem to be it. Anyway, the problem seems to be
> that trying to create a sparse 2T file during the test is what's
> causing the problem that you and Andreas are seeing. If this theory
> is question, the next question is what's causing the failure to write
> files whose i_size is greater than 2T.
I think I know the reason why the tests failed: my /tmp directory is an
ext3 file system, so I couldn't create a big sparse file with
'truncate -s 2T /tmp/foo.img'. So I ran the following test in my sandbox:
% sudo mke2fs -t ext4 ${DEV} # I create a new ext4 file system
% sudo mount -t ext4 ${DEV} /tmp # mount this file system on /tmp
% sudo chmod 777 -R /tmp
% cd $E2FSPROGS
% make check
Then r_64bit_big_expand, r_bigalloc_big_expand, and r_ext4_big_expand
survive, so I guess this is the root cause.
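A quick way to check for this situation (a sketch assuming GNU coreutils
stat; ext3 with 4K blocks caps files at roughly 2TiB, which is exactly the
boundary 'truncate -s 2T' trips over):

```shell
# Report the filesystem type backing /tmp; GNU stat prints e.g.
# "tmpfs" or "ext2/ext3" (ext2 and ext3 share a superblock magic).
fstype=$(stat -f -c %T /tmp)
echo "/tmp is backed by: $fstype"
```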
Andreas, could you please confirm my guess?
BTW, after that I still get one failure, f_extent_oobounds, so we still
need to take a closer look at that problem.
- Zheng