2001-11-14 22:05:55

by Alex Adriaanse

Subject: LFS stopped working

Hey,

I've been running 2.4.14 for a few days now. I needed LFS support, so I
recompiled glibc 2.1.3 with the new 2.4 headers, and after that I could
create large files (e.g. using dd if=/dev/zero of=test bs=1M count=0
seek=3000) just fine.

However, as of yesterday, I couldn't create files bigger than 2GB anymore.
I did not change kernels, nor did I mess with libc or anything else (I did
some Debian package upgrades/installations/recompiles, but I don't think
they should affect this) - I'm not quite sure what happened. Now commands
such as the dd command I mentioned above will die with the message "File
size limit exceeded", leaving a 2GB file behind. Rebooting didn't solve
anything. My ulimits seem to be fine (file size = unlimited).

The last few lines of the strace on the dd command above show the
following:
open("/dev/zero", O_RDONLY|0x8000) = 0
close(1) = 0
open("test", O_RDWR|O_CREAT|0x8000, 0666) = 1
ftruncate64(0x1, 0xbb800000, 0, 0, 0x1) = 0
--- SIGXFSZ (File size limit exceeded) ---
+++ killed by SIGXFSZ +++

Also, cat'ing two 2GB files together into one big 4GB file (cat file1 file2
> file3) just dies after creating a 2GB file, whereas it used to work fine
(if I remember correctly). Doing an strace on it ends with the following
lines:
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4095
write(1, "\0", 1) = -1 EFBIG (File too large)
--- SIGXFSZ (File size limit exceeded) ---
+++ killed by SIGXFSZ +++

I'm doing this on a ReiserFS filesystem, but trying it on an ext2 partition
yields the same results.

Any suggestions?

Thanks,

Alex


2001-11-15 02:59:01

by Ben Collins

Subject: Re: LFS stopped working

On Wed, Nov 14, 2001 at 02:05:21PM -0800, Alex Adriaanse wrote:
> Hey,
>
> I've been running 2.4.14 for a few days now. I needed LFS support, so I
> recompiled glibc 2.1.3 with the new 2.4 headers, and after that I could
> create large files (e.g. using dd if=/dev/zero of=test bs=1M count=0
> seek=3000) just fine.
>
> However, as of yesterday, I couldn't create files bigger than 2GB anymore.
> I did not change kernels, nor did I mess with libc or anything else (I did
> some Debian package upgrades/installations/recompiles, but I don't think
> they should affect this) - I'm not quite sure what happened. Now commands
> such as the dd command I mentioned above will die with the message "File
> size limit exceeded", leaving a 2GB file behind. Rebooting didn't solve
> anything. My ulimits seem to be fine (file size = unlimited).

Actually, it does affect it. Recompiling glibc isn't the be-all and end-all
of LFS support. In fact, 2.1.3 has less than adequate support for LFS,
IIRC, so use 2.2.x. For Debian, that just means upgrading to woody (testing).

Your problem stems from programs also needing to be recompiled with
LFS support. This involves some special LFS CFLAGS, and most common
programs detect whether to enable them using autoconf (fileutils, gzip,
and tar are perfect examples of programs that use this feature).


Ben

--
.----------=======-=-======-=========-----------=====------------=-=-----.
/ Ben Collins -- Debian GNU/Linux \
` [email protected] -- [email protected] -- [email protected] '
`---=========------=======-------------=-=-----=-===-======-------=--=---'

2001-11-15 06:30:11

by Alex Adriaanse

Subject: RE: LFS stopped working

I actually recompiled fileutils 4.1-7 from woody, and it still didn't
change anything. By the way, this is what ./configure said during the
recompile:

  checking for special C compiler options needed for large files... no
  checking for _FILE_OFFSET_BITS value needed for large files... 64
  checking for _LARGE_FILES value needed for large files... no

I'm assuming that this is the way it's supposed to be.

What I don't get, though, is that dd and other programs USED to work fine -
and I didn't update the kernel, glibc, or these programs (which I suppose
are the only things that could break LFS for utilities such as dd) after
they worked. I even reinstalled my new LFS-supporting glibc 2.1.3 to be on
the safe side, in case apt-get had "upgraded" it (which it hadn't, as I had
put it on hold).

Looking at the strace, it appears these problems are coming from the
kernel, since the system call itself seems to be failing... but of course
I could be wrong.

Thanks,

Alex


2001-11-15 09:38:28

by Andreas Jaeger

Subject: Re: LFS stopped working

"Alex Adriaanse" <[email protected]> writes:

> Hey,
>
> I've been running 2.4.14 for a few days now. I needed LFS support, so I
> recompiled glibc 2.1.3 with the new 2.4 headers, and after that I could
> create large files (e.g. using dd if=/dev/zero of=test bs=1M count=0
> seek=3000) just fine.
>
> However, as of yesterday, I couldn't create files bigger than 2GB anymore.
> I did not change kernels, nor did I mess with libc or anything else (I did
> some Debian package upgrades/installations/recompiles, but I don't think
> they should affect this) - I'm not quite sure what happened. Now commands
> such as the dd command I mentioned above will die with the message "File
> size limit exceeded", leaving a 2GB file behind. Rebooting didn't solve
> anything. My ulimits seem to be fine (file size = unlimited).
>
> The last few lines of the strace on the dd command above shows the
> following:
> open("/dev/zero", O_RDONLY|0x8000) = 0
> close(1) = 0
> open("test", O_RDWR|O_CREAT|0x8000, 0666) = 1
> ftruncate64(0x1, 0xbb800000, 0, 0, 0x1) = 0
> --- SIGXFSZ (File size limit exceeded) ---
> +++ killed by SIGXFSZ +++

Your ulimit is being hit. I strongly advise upgrading to glibc 2.2 when
using kernel 2.4.

Andreas
--
Andreas Jaeger
SuSE Labs [email protected]
private [email protected]
http://www.suse.de/~aj

2001-11-15 10:04:18

by Alex Adriaanse

Subject: RE: LFS stopped working

But ulimit shows that the file size is unlimited... would this be a bug? If
that's the case, then how/why would it work before?

Thanks,

Alex


2001-11-15 10:43:06

by Andreas Jaeger

Subject: Re: LFS stopped working

"Alex Adriaanse" <[email protected]> writes:

> But ulimit shows that the file size is unlimited... would this be a bug? If
> that's the case, then how/why would it work before?

If you use an older distro, bash will not handle the changed getrlimit
syscall in 2.4; for details, check the Red Hat entry under:
http://www.suse.de/~aj/linux_lfs.html

Andreas
--
Andreas Jaeger
SuSE Labs [email protected]
private [email protected]
http://www.suse.de/~aj

2001-11-15 11:57:51

by Alex Adriaanse

Subject: RE: LFS stopped working

Well, I'm running Debian 2.2 (with a few recompiled newer packages, as well
as the recompiled glibc 2.1.3, obviously), which comes with bash 2.03.
Unfortunately, compiling and installing bash 2.05, and commenting out the
line in /etc/pam.d/ssh mentioned on the web page you provided, didn't
change anything - the ulimit for file size was still unlimited, and I
still couldn't write >2GB files.

Alex


2001-11-21 18:32:58

by Andi Kleen

Subject: Re: LFS stopped working

"Alex Adriaanse" <[email protected]> writes:
> = 4095
> write(1, "\0", 1) = -1 EFBIG (File too large)
> --- SIGXFSZ (File size limit exceeded) ---
> +++ killed by SIGXFSZ +++
>
> I'm doing this on a ReiserFS filesystem, but trying it on an ext2 partition
> yields the same results.
>
> Any suggestions?

ulimit -f unlimited.

SIGXFSZ means you exceeded your file size limit. Somehow you managed to set
your file size limit to 2GB. Set it to unlimited instead. It could be caused
by some PAM module, e.g. pam_limits; check /etc/security/*
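For reference, a pam_limits entry that would impose exactly this cap would look something like the following (a hypothetical example, not taken from Alex's system):

```
# /etc/security/limits.conf -- hypothetical entry capping file size at 2GB
# (pam_limits measures fsize in KB: 2097152 KB = 2 GB)
*       hard    fsize       2097152
```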



-Andi

2001-11-21 19:08:12

by Andreas Dilger

Subject: Re: LFS stopped working

On Nov 15, 2001 07:08 +0100, Andi Kleen wrote:
> "Alex Adriaanse" <[email protected]> writes:
> > = 4095
> > write(1, "\0", 1) = -1 EFBIG (File too large)
> > --- SIGXFSZ (File size limit exceeded) ---
> > +++ killed by SIGXFSZ +++
> >
> > I'm doing this on a ReiserFS filesystem, but trying it on an ext2 partition
> > yields the same results.
> >
> > Any suggestions?
>
> ulimit -f unlimited.
>
> SIGXFSZ means you exceeded your quota. Somehow you managed to set your
> file size quotas to 2GB. Set them to unlimited instead. It could be caused
> by same PAM module; e.g. pam_limits, check /etc/security/*

The problem is that the old getrlimit() syscall returns a maximum of
0x7fffffff for the limit, while the kernel uses 0xffffffff for unlimited,
so if you do "setrlimit(getrlimit())" you may actually be going from a
truly unlimited rlimit to a "bogus" unlimited limit that the kernel will
then enforce against you.

I think the fix is to simply ignore file limits when writing to block
devices.

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/

2001-11-21 20:06:54

by Alex Adriaanse

Subject: RE: LFS stopped working

Upgrading to Debian woody (with glibc 2.2) fixed the problem. :)

Thanks,

Alex
