This problem has happened on 2.4.4 and 2.4.20, on an ext2 filesystem as
well as a jfs filesystem. Through normal file operations a file's size
becomes far too large, like this:
-rw-r--r-- 1 root root 1965107636224 Jan 26 14:59 output1.iso
The file should be 4.5 GB or so.
It is opened with this:
fd=open(savename,O_RDWR|O_CREAT|O_TRUNC|O_LARGEFILE,0644);
Operations done on the file descriptor are read, write, lseek64, and close.
All reads/writes to the file are in units of 2048 bytes. First something
like 4+ gigs is written to the file. Then without closing the file it
is all read out again 2048 bytes at a time. Before every read is an lseek64,
almost always right to where the file position would have been anyway.
Finally some fraction of the sectors are rewritten, on the order of 1/150,
spread pretty much evenly throughout the file. Before every write there is
an lseek64. Then the file is closed.
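Roughly, the access pattern looks like this (a sketch, not the actual code;
the buffer contents, exact sizes, and rewrite stride are illustrative, and
error checking is omitted for brevity):

#define _LARGEFILE64_SOURCE   /* for O_LARGEFILE and lseek64 on glibc */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define SECTOR 2048

int main(void)
{
    char buf[SECTOR];
    int fd;
    int64_t i, nsectors;

    memset(buf, 0xAA, sizeof buf);

    /* Same flags as in the report above. */
    fd = open("output1.iso", O_RDWR | O_CREAT | O_TRUNC | O_LARGEFILE, 0644);
    if (fd < 0)
        return 1;

    nsectors = (int64_t)4500 * 1024 * 1024 / SECTOR;  /* ~4.5 GB */

    /* Phase 1: sequential writes, 2048 bytes at a time. */
    for (i = 0; i < nsectors; i++)
        write(fd, buf, SECTOR);

    /* Phase 2: read everything back, with an lseek64 before every read. */
    for (i = 0; i < nsectors; i++) {
        lseek64(fd, i * SECTOR, SEEK_SET);
        read(fd, buf, SECTOR);
    }

    /* Phase 3: rewrite about 1 sector in 150, with an lseek64 first. */
    for (i = 0; i < nsectors; i += 150) {
        lseek64(fd, i * SECTOR, SEEK_SET);
        write(fd, buf, SECTOR);
    }

    close(fd);
    return 0;
}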
I'm not certain, but it may be that the problem only occurs when I do
this operation as root. I'm not certain whether it has ever
occurred when not running as root.
The resultant file can be read out well beyond the amount of data actually written.
What can it be reading? I'm assuming the contents of the hard drive in
other areas not part of the original file, as in other users' files.
As such it is a very real security risk.
The hard drives are IDE in both cases. 2.4.4 was ext2, and 2.4.20 was jfs.
I figure it must relate to O_LARGEFILE, since that code path probably
hasn't been exercised as much.
-Dave
On Sun, Jan 26, 2003 at 03:18:49PM -0800, David Ashley wrote:
> -rw-r--r-- 1 root root 1965107636224 Jan 26 14:59 output1.iso
You know about the Unix concept of files with holes?
--On Sunday, January 26, 2003 15:18:49 -0800 David Ashley <[email protected]>
wrote:
> This problem has happened on 2.4.4 and 2.4.20, on an ext2 filesystem as
> well as a jfs filesystem. Through normal file operations a file's size
> becomes far too large, like this:
> -rw-r--r-- 1 root root 1965107636224 Jan 26 14:59 output1.iso
>
> The file should be 4.5 GB or so.
> It is opened with this:
> fd=open(savename,O_RDWR|O_CREAT|O_TRUNC|O_LARGEFILE,0644);
>
> Operations done on the file descriptor are read, write, lseek64, and close.
> All reads/writes to the file are in units of 2048 bytes. First something
> like 4+ gigs is written to the file. Then without closing the file it
> is all read out again 2048 bytes at a time. Before every read is an
> lseek64, almost always right to where the file position would have been
> anyway. Finally some fraction of the sectors are rewritten, on the order
> of 1/150, spread pretty much evenly throughout the file. Before every
> write there is an lseek64. Then the file is closed.
Are you sure you are not lseek64'ing to a position corresponding to the new
size? If you do that and then write, the write will succeed and you get a
sparse file. To test for that, what does ls -lsk say for that file? The
first column will be the number of kilobytes actually stored in the file.
If that is much smaller than the length, it's a sparse file.
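The same check can be done from C by comparing the logical length against
the blocks actually allocated; a minimal sketch (argument handling kept
minimal):

#define _FILE_OFFSET_BITS 64   /* so stat handles files over 2 GB */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    long long length, stored;

    if (argc < 2 || stat(argv[1], &st) != 0)
        return 1;

    length = (long long)st.st_size;
    stored = (long long)st.st_blocks * 512;  /* st_blocks is in 512-byte units */
    printf("length %lld bytes, stored %lld bytes\n", length, stored);
    if (stored < length)
        printf("sparse: the missing bytes are holes\n");
    return 0;
}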
> I'm not certain, but it may be that the problem only occurs when I do
> this operation as root. I'm not certain whether it has ever
> occurred when not running as root.
>
> The resultant file can be read out well beyond the amount of data actually written.
> What can it be reading? I'm assuming the contents of the hard drive in
> other areas not part of the original file, as in other users' files.
> As such it is a very real security risk.
Or it could just be sparse, and reading synthetic zeros that don't really
exist anywhere, which is no problem.
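A minimal sketch of that behavior (the file name and offsets are
illustrative): seeking past end-of-file and then writing extends the length
without allocating blocks for the gap, and reads inside the hole return
zeros:

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[16];
    int fd;

    fd = open("hole.bin", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    lseek(fd, 1000000, SEEK_SET);   /* seek past EOF: the gap becomes a hole */
    write(fd, "x", 1);              /* length is now 1000001 bytes */

    lseek(fd, 500000, SEEK_SET);    /* read from inside the hole */
    read(fd, buf, sizeof buf);
    printf("byte inside the hole: %d\n", buf[0]);  /* prints 0 */

    close(fd);
    return 0;
}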
> The hard drives are IDE in both cases. 2.4.4 was ext2, and 2.4.20 was jfs.
> I figure it must relate to O_LARGEFILE, since that code path probably
> hasn't been exercised as much.
>
> -Dave
Andrew