2003-06-10 13:42:26

by Richard B. Johnson

Subject: Large files


With 32 bit return values, ix86 Linux has a file-size limitation
which is currently about 0x7fffffff. Unfortunately, instead of
returning from a write() with a -1 and errno being set, so that
a program can do something about it, write() executes a signal(25)
which kills the task even if trapped. Is this one of those <expletive
deleted> POSIX requirements or is somebody going to fix it?

Cheers,
Dick Johnson
Penguin : Linux version 2.4.20 on an i686 machine (797.90 BogoMips).
Why is the government concerned about the lunatic fringe? Think about it.


Subject: Re: Large files

Dear All,

I'm allocating a large buffer at boot-time, from the kernel, using
alloc_bootmem_low_pages, which I wish to use for DMA from a device driver.

For example, the bootmem returns an address of 0xc0006000. This all works
fine, but...

What is the mechanism for communicating this address to user-space
processes, and mapping it to a virtual address, so that they can use my
buffer?

I want user-space processes to be able to read from and write to this
block of memory, without having to be suid root (if possible).
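
My guess is that the answer is a character device whose mmap() file
operation maps the buffer, along the lines of the rough sketch below,
but I'd like to know if that's the right approach. (remap_page_range()
is an assumption on my part; my_buf and MY_BUF_SIZE stand in for the
bootmem buffer and its size.)

#include <linux/fs.h>
#include <linux/mm.h>
#include <asm/io.h>

/* Rough sketch only: let an ordinary process mmap() the bootmem
 * buffer through the driver's character device, so access is
 * controlled by the device node's permissions rather than suid root.
 * Hooked up via the .mmap member of the driver's file_operations. */
static int mybuf_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > MY_BUF_SIZE)
		return -EINVAL;

	/* 2.4-style call: map the physical pages behind the buffer
	 * into the calling process's address space. */
	if (remap_page_range(vma->vm_start, virt_to_phys(my_buf),
			     size, vma->vm_page_prot))
		return -EAGAIN;
	return 0;
}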

Cheers,

Ed

2003-06-10 14:05:05

by Matti Aarnio

Subject: Re: Large files

On Tue, Jun 10, 2003 at 09:57:57AM -0400, Richard B. Johnson wrote:
> With 32 bit return values, ix86 Linux has a file-size limitation
> which is currently about 0x7fffffff. Unfortunately, instead of
> returning from a write() with a -1 and errno being set, so that
> a program can do something about it, write() executes a signal(25)
> which kills the task even if trapped. Is this one of those <expletive
> deleted> POSIX requirements or is somebody going to fix it?

http://www.sas.com/standards/large.file/

#define SIGXFSZ 25 /* File size limit exceeded (4.2 BSD). */

from fs/buffer.c:

	err = -EFBIG;
	limit = current->rlim[RLIMIT_FSIZE].rlim_cur;
	if (limit != RLIM_INFINITY && size > (loff_t)limit) {
		send_sig(SIGXFSZ, current, 0);
		goto out;
	}
	if (size > inode->i_sb->s_maxbytes)
		goto out;
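
If the application catches or ignores SIGXFSZ, that write() comes back
as -1 with errno set to EFBIG instead of the default action killing the
process. A minimal user-space sketch (nothing version-specific assumed):

#include <signal.h>
#include <string.h>

/* Ignore SIGXFSZ so that an oversized write() fails with EFBIG
 * instead of the default action terminating the process. */
static void ignore_sigxfsz(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = SIG_IGN;	/* or install a handler that returns */
	sigemptyset(&sa.sa_mask);
	sigaction(SIGXFSZ, &sa, NULL);
}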


> Cheers,
> Dick Johnson
> Penguin : Linux version 2.4.20 on an i686 machine (797.90 BogoMips).

/Matti Aarnio

2003-06-10 14:57:09

by Richard B. Johnson

Subject: Re: Large files

On Tue, 10 Jun 2003, Matti Aarnio wrote:

> On Tue, Jun 10, 2003 at 09:57:57AM -0400, Richard B. Johnson wrote:
> > With 32 bit return values, ix86 Linux has a file-size limitation
> > which is currently about 0x7fffffff. Unfortunately, instead of
> > returning from a write() with a -1 and errno being set, so that
> > a program can do something about it, write() executes a signal(25)
> > which kills the task even if trapped. Is this one of those <expletive
> > deleted> POSIX requirements or is somebody going to fix it?
>
> http://www.sas.com/standards/large.file/
>
> #define SIGXFSZ 25 /* File size limit exceeded (4.2 BSD). */
>
> from fs/buffer.c:
>
> 	err = -EFBIG;
> 	limit = current->rlim[RLIMIT_FSIZE].rlim_cur;
> 	if (limit != RLIM_INFINITY && size > (loff_t)limit) {
> 		send_sig(SIGXFSZ, current, 0);
> 		goto out;
> 	}
> 	if (size > inode->i_sb->s_maxbytes)
> 		goto out;
>
>

On the system that fails, there are no ulimits and it's the root
account, therefore I don't know how to set the above limit to
RLIM_INFINITY (~0LU). It's also version 2.4.20. I don't think
it has anything to do with 'rlim' shown above. In any event
sending a signal when the file-size exceeds some level is preposterous.
The write should return -1 and errno should have been set to EFBIG
(in user space). That allows the user's database to create another
file and keep on trucking instead of blowing up and destroying the
user's inventory or whatever else was in process.

FYI, this caused the failure of a samba server for M$ stuff. It
gives the impression of Linux being defective. This is not good.

Cheers,
Dick Johnson
Penguin : Linux version 2.4.20 on an i686 machine (797.90 BogoMips).
Why is the government concerned about the lunatic fringe? Think about it.

2003-06-10 17:12:01

by Martin Mares

Subject: Re: Large files

Hello!

> On the system that fails, there are no ulimits and it's the root
> account, therefore I don't know how to set the above limit to
> RLIM_INFINITY (~0LU). It's also version 2.4.20. I don't think
> it has anything to do with 'rlim' shown above.

I think it has -- login (or PAM) in most distributions sets the
file size limit to 2GB instead of RLIM_INFINITY. If you are root,
try `ulimit -f unlimited' to see if it helps.

> sending a signal when the file-size exceeds some level is preposterous.

No, it's just the definition of the rlimits. Not leaving them at
RLIM_INFINITY by default is preposterous.
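
For a program that wants to do this itself rather than rely on the
shell, a minimal sketch with setrlimit() (assuming the hard limit was
left at RLIM_INFINITY, so no special privilege is needed):

#include <sys/time.h>
#include <sys/resource.h>

/* Raise the soft file-size limit up to the hard limit -- the
 * in-process equivalent of `ulimit -f unlimited' when the hard
 * limit is RLIM_INFINITY. */
static int raise_fsize_limit(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_FSIZE, &rl) < 0)
		return -1;
	rl.rlim_cur = rl.rlim_max;
	return setrlimit(RLIMIT_FSIZE, &rl);
}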

Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
Faculty of Math and Physics, Charles University, Prague, Czech Rep., Earth
COBOL -- Compiles Only Because Of Luck

2003-06-10 18:02:49

by Andreas Dilger

Subject: Re: Large files

On Jun 10, 2003 11:12 -0400, Richard B. Johnson wrote:
> > On Tue, Jun 10, 2003 at 09:57:57AM -0400, Richard B. Johnson wrote:
> > > With 32 bit return values, ix86 Linux has a file-size limitation
> > > which is currently about 0x7fffffff. Unfortunately, instead of
> > > returning from a write() with a -1 and errno being set, so that
> > > a program can do something about it, write() executes a signal(25)
> > > which kills the task even if trapped. Is this one of those <expletive
> > > deleted> POSIX requirements or is somebody going to fix it?
> >
> > http://www.sas.com/standards/large.file/
> >
> > #define SIGXFSZ 25 /* File size limit exceeded (4.2 BSD). */
> >
> > from fs/buffer.c:
> >
> > 	err = -EFBIG;
> > 	limit = current->rlim[RLIMIT_FSIZE].rlim_cur;
> > 	if (limit != RLIM_INFINITY && size > (loff_t)limit) {
> > 		send_sig(SIGXFSZ, current, 0);
> > 		goto out;
> > 	}
> > 	if (size > inode->i_sb->s_maxbytes)
> > 		goto out;
> >
> >
>
> On the system that fails, there are no ulimits and it's the root
> account, therefore I don't know how to set the above limit to
> RLIM_INFINITY (~0LU). It's also version 2.4.20. I don't think
> it has anything to do with 'rlim' shown above. In any event
> sending a signal when the file-size exceeds some level is preposterous.
> The write should return -1 and errno should have been set to EFBIG
> (in user space). That allows the user's database to create another
> file and keep on trucking instead of blowing up and destroying the
> user's inventory or whatever else was in process.
>
> FYI, this caused the failure of a samba server for M$ stuff. It
> gives the impression of Linux being defective. This is not good.

If your application does not open the file with O_LARGEFILE, you will
also get SIGXFSZ if you try to write past the 2GB limit. This is to
avoid your application corrupting data by trying to store a 64-bit file
size in an (apparently) 32-bit data value (32-bit because you didn't
specify O_LARGEFILE).

I don't see anything in signal(7) which says that SIGXFSZ(25) can't be
caught and handled by the application, but at that point you may as
well fix the app to open the file with O_LARGEFILE and handle
64-bit file offsets properly.
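
A minimal sketch of the latter (this assumes glibc: either build with
-D_FILE_OFFSET_BITS=64 so the 64-bit interfaces are used transparently,
or pass O_LARGEFILE explicitly as below):

#define _GNU_SOURCE		/* exposes O_LARGEFILE on glibc */
#include <fcntl.h>

/* Open a file so that writes beyond 2GB are permitted.  With
 * -D_FILE_OFFSET_BITS=64 a plain open() behaves the same way
 * without the explicit flag. */
int open_largefile(const char *path)
{
	return open(path, O_WRONLY | O_CREAT | O_APPEND | O_LARGEFILE, 0644);
}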

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/

2003-06-10 20:04:20

by David Schwartz

Subject: RE: Large files


> With 32 bit return values, ix86 Linux has a file-size limitation
> which is currently about 0x7fffffff. Unfortunately, instead of
> returning from a write() with a -1 and errno being set, so that
> a program can do something about it, write() executes a signal(25)
> which kills the task even if trapped. Is this one of those <expletive
> deleted> POSIX requirements or is somebody going to fix it?

If the program were smart enough to do something sane about it, it should
be smart enough to handle the signal correctly. What do you think should
happen if a program compiled today calls 'time' in 2039? You want to shut
down the program as quickly as possible before it does something insane.

DS


2003-06-10 20:17:15

by Richard B. Johnson

Subject: RE: Large files

On Tue, 10 Jun 2003, David Schwartz wrote:

>
> > With 32 bit return values, ix86 Linux has a file-size limitation
> > which is currently about 0x7fffffff. Unfortunately, instead of
> > returning from a write() with a -1 and errno being set, so that
> > a program can do something about it, write() executes a signal(25)
> > which kills the task even if trapped. Is this one of those <expletive
> > deleted> POSIX requirements or is somebody going to fix it?
>
> If the program were smart enough to do something sane about it, it should
> be smart enough to handle the signal correctly. What do you think should
> happen if a program compiled today calls 'time' in 2039? You want to shut
> down the program as quickly as possible before it does something insane.
>
> DS
>

A trap on that signal doesn't even allow a longjmp() to recover!
The signal can be trapped, but the kernel kills the task anyway.
All you can do is make the program print something other than
the "File too large" default. It's sick, very sick. The file-too-
big problem should have been handled properly by the kernel. The
kernel has no business making a policy decision. If the file is
getting too big, the kernel should refuse to write any more than
the maximum allowable and return the correct information through
the defined API. It must not make a policy decision and kill the
task.

This has far-reaching consequences.

Even opening the file with large file attributes can result in
the file getting too large eventually. The kernel must not blow
away a task because it "thinks" something. It is not allowed to
"think". It is not allowed to generate policy.

Cheers,
Dick Johnson
Penguin : Linux version 2.4.20 on an i686 machine (797.90 BogoMips).
Why is the government concerned about the lunatic fringe? Think about it.

2003-06-10 21:57:09

by Rob Landley

Subject: Re: Large files

On Tuesday 10 June 2003 11:12, Richard B. Johnson wrote:
> On Tue, 10 Jun 2003, Matti Aarnio wrote:
> > On Tue, Jun 10, 2003 at 09:57:57AM -0400, Richard B. Johnson wrote:
> > > With 32 bit return values, ix86 Linux has a file-size limitation
> > > which is currently about 0x7fffffff. Unfortunately, instead of
> > > returning from a write() with a -1 and errno being set, so that
> > > a program can do something about it, write() executes a signal(25)
> > > which kills the task even if trapped. Is this one of those <expletive
> > > deleted> POSIX requirements or is somebody going to fix it?
> >
> > http://www.sas.com/standards/large.file/

Is anybody indexing these suckers? I've got a directory full of downloaded
PDFs of things like the El Torito spec and bits of POSIX and SUS, and I was
just wondering if there's some kind of master list of all these things that
Linux actually implements.

I suspect the answer is "probably not", but I thought I'd ask...

Rob

2003-06-10 22:24:35

by Ray Lee

Subject: Re: Large files

> With 32 bit return values, ix86 Linux has a file-size limitation
> which is currently about 0x7fffffff. Unfortunately, instead of
> returning from a write() with a -1 and errno being set, so that
> a program can do something about it, write() executes a signal(25)
> which kills the task even if trapped

Works For Me(tm)

ray:~/work/test/signals$ ls
sig.c
ray:~/work/test/signals$ tcc -run sig.c
write errored
seem to have gone overboard, switching to next log file...
write errored
seem to have gone overboard, switching to next log file...
write errored
seem to have gone overboard, switching to next log file...
ray:~/work/test/signals$ ls -l
total 4
-rw------- 1 ray ray 2147483647 Jun 10 15:35 log.0
-rw------- 1 ray ray 2147483647 Jun 10 15:35 log.1
-rw------- 1 ray ray 2147483647 Jun 10 15:35 log.2
-rw------- 1 ray ray 259 Jun 10 15:35 log.3
-rw-r--r-- 1 ray ray 2119 Jun 10 15:33 sig.c
ray:~/work/test/signals$

Test code attached. Please excuse the somewhat haphazard structure; it
was tossed together from code I'd written for other projects.
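
The gist of it is a SIGXFSZ handler plus a check for EFBIG that rolls
over to the next log file; a simplified sketch of the same idea (not
the attached sig.c itself):

#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Trap SIGXFSZ with a no-op handler so that a write() past the limit
 * returns -1 with errno == EFBIG instead of killing the process. */
static void xfsz_handler(int sig)
{
	(void)sig;
}

int main(void)
{
	static char buf[64 * 1024];
	struct sigaction sa;
	char name[32];
	int lognum = 0;
	int fd;

	memset(buf, 'x', sizeof(buf));
	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = xfsz_handler;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGXFSZ, &sa, NULL);

	snprintf(name, sizeof(name), "log.%d", lognum);
	fd = open(name, O_WRONLY | O_CREAT | O_APPEND, 0600);
	if (fd < 0)
		return 1;

	while (lognum < 3) {
		if (write(fd, buf, sizeof(buf)) >= 0)
			continue;
		if (errno != EFBIG)
			break;
		fprintf(stderr, "write errored\n");
		fprintf(stderr,
			"seem to have gone overboard, switching to next log file...\n");
		close(fd);
		snprintf(name, sizeof(name), "log.%d", ++lognum);
		fd = open(name, O_WRONLY | O_CREAT | O_APPEND, 0600);
		if (fd < 0)
			return 1;
	}
	close(fd);
	return 0;
}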

Ray


Attachments:
sig.c (2.07 kB)