Hi,
I should NOT get a "file too large" error when copying from a device
to a device, right?
I should NOT get a "file too large" if the files are opened using
the "O_LARGEFILE" option, right?
Well:
execve("/bin/dd", ["dd", "if=/dev/hda", "of=/dev/hdc", "bs=1024k", "count=10"], [/* 46 vars */]) = 0
[... libs and stuff ... ]
open("/dev/hda", O_RDONLY|O_LARGEFILE) = 4
open("/dev/hdc", O_RDWR|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 5
[....signals and stuff. ]
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
munmap(0x400fb000, 1052672) = 0
write(2, "10+0 records in\n", 16) = 16
write(2, "10+0 records out\n", 17) = 17
close(4) = 0
close(5) = 0
_exit(0) = ?
But without the "count=10" I get:
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048576
read(4, ""..., 1048576) = 1048576
write(5, ""..., 1048576) = 1048575
write(5, ".", 1) = -1 EFBIG (File too large)
This is on 2.2.14. I could swear we made a working copy of a disk 30
minutes earlier....
Roger.
--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
* There are old pilots, and there are bold pilots.
* There are also old, bald pilots.
On Nov 18, 2001 14:26 +0100, Rogier Wolff wrote:
> I should NOT get a "file too large" error when copying from a device
> to a device, right?
>
> I should NOT get a "file too large" if the files are opened using
> the "O_LARGEFILE" option, right?
>
> read(4, ""..., 1048576) = 1048576
> write(5, ""..., 1048576) = 1048576
> read(4, ""..., 1048576) = 1048576
> write(5, ""..., 1048576) = 1048575
> write(5, ".", 1) = -1 EFBIG (File too large)
>
>
>
> This is on 2.2.14. I could swear we made a working copy of a disk 30
> minutes earlier....
Hmm, you mean 2.4.14 I take it? There is another report about 2.4.14
as well ("Creating partitions under 2.4.14"), and I have read several more
recently, but am unsure of the exact kernel version. What fs are you
using, just in case it matters?
I know for sure that 2.4.13+ext3 is working mostly OK, as I have been
playing with multi-TB file sizes (sparse of course) although there is
a minor bug in the case where you hit the fs size maximum. I'm glad
my patch isn't in yet, or I would be getting flak over this I'm sure.
The only problem is that I can't see anything in the 2.4.14 patch which
would cause this problem. All the previous reports had to do with
ulimit, caused by su'ing to root instead of logging in as root, but I'm
not sure exactly where the problem lies.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
Andreas Dilger wrote:
> On Nov 18, 2001 14:26 +0100, Rogier Wolff wrote:
> > I should NOT get a "file too large" error when copying from a device
> > to a device, right?
> >
> > I should NOT get a "file too large" if the files are opened using
> > the "O_LARGEFILE" option, right?
> >
> > read(4, ""..., 1048576) = 1048576
> > write(5, ""..., 1048576) = 1048576
> > read(4, ""..., 1048576) = 1048576
> > write(5, ""..., 1048576) = 1048575
> > write(5, ".", 1) = -1 EFBIG (File too large)
> >
> >
> >
> > This is on 2.2.14. I could swear we made a working copy of a disk 30
> > minutes earlier....
>
> Hmm, you mean 2.4.14 I take it?
Typo. Yes.
> There is another report about 2.4.14
> as well ("Creating partitions under 2.4.14"), and I have read several more
> recently, but am unsure of the exact kernel version. What fs are you
> using, just in case it matters?
ext2.
It first worked on one machine; then we moved the hard disk to another
machine and it suddenly stopped working, as described above.
We have since moved back to the first machine, and it worked again.
Then we moved to the second machine, which now works great too.
In short: cannot reproduce anymore....
> I know for sure that 2.4.13+ext3 is working mostly OK, as I have been
> playing with multi-TB file sizes (sparse of course) although there is
> a minor bug in the case where you hit the fs size maximum. I'm glad
> my patch isn't in yet, or I would be getting flak over this I'm sure.
> The only problem is that I can't see anything in the 2.4.14 patch which
> would cause this problem. All the previous reports had to do with
> ulimit, caused by su'ing to root instead of logging in as root, but I'm
> not sure exactly where the problem lies.
Gotcha!!!!
The "wouldn't work" case was tested by me, logged in as wolff, su-ing
to root, and the "works just fine" cases were tested by a guy who logs
in to the machine on the console (as root).
Now, can someone tell me why "unlimited" is interpreted somehow as 2G
or something thereabouts? :
/home/wolff> limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize unlimited
coredumpsize unlimited
memoryuse unlimited
descriptors 1024
memorylocked unlimited
maxproc 4095
openfiles 1024
/home/wolff> su
Password:
/home/wolff# limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize unlimited
coredumpsize unlimited
memoryuse unlimited
descriptors 1024
memorylocked unlimited
maxproc 4095
openfiles 1024
/home/wolff# cat /proc/version
Linux version 2.4.9 (wolff@machine) (gcc version 2.95.2 19991024 (release)) #3 SMP
Mon Sep 10 09:17:17 BST 2001
/home/wolff#
(The machine was downgraded due to other problems.)
Roger.
--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
* There are old pilots, and there are bold pilots.
* There are also old, bald pilots.
On Mon, Nov 19, 2001 at 06:28:55PM +0100, Rogier Wolff wrote:
> Now, can someone tell me why "unlimited" is interpreted somehow as 2G
> or something thereabouts? :
>
> /home/wolff> limit
> cputime unlimited
> filesize unlimited
I seem to recall seeing something about this on lkml: the kernel
considers 0xffffffff to be unlimited internally, but the setrlimit
syscall converts user-supplied 0xffffffff into 0x7fffffff.
Here's the relevant thread:
http://www.uwsg.indiana.edu/hypermail/linux/kernel/0111.1/0086.html
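A quick way to see what limit the kernel has actually recorded, as opposed
to what the shell's "limit" builtin prints, is to ask getrlimit directly.
A minimal sketch:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}
	printf("RLIMIT_FSIZE rlim_cur = %#lx%s\n", (unsigned long)rl.rlim_cur,
	       rl.rlim_cur == RLIM_INFINITY ? " (RLIM_INFINITY)" : "");
	return 0;
}

If the su'd shell really inherited a 0x7fffffff limit, this prints that value
without the "(RLIM_INFINITY)" tag, even though "limit" claims unlimited.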
Marius Gedminas
--
"If it ain't broke, don't fix it."
- Bert Lantz
On Nov 19, 2001 18:28 +0100, Rogier Wolff wrote:
> > There is another report about 2.4.14
> > as well ("Creating partitions under 2.4.14"), and I have read several more
> > recently, but am unsure of the exact kernel version. What fs are you
> > using, just in case it matters?
>
> ext2.
Well, I just tried this on ext2 instead of ext3 (on my 2.4.13 system)
and it worked fine as a logged-in non-root user (creates a 16GB sparse file):
dd if=/dev/zero of=tt bs=1k count=1 seek=16M
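(seek=16M skips over 16M one-kilobyte blocks, so the single record is
written at the 16GB mark, well past any 2GB limit.)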
> > I know for sure that 2.4.13+ext3 is working mostly OK, as I have been
> > playing with multi-TB file sizes (sparse of course) although there is
> > a minor bug in the case where you hit the fs size maximum. I'm glad
> > my patch isn't in yet, or I would be getting flak over this I'm sure.
>
> > The only problem is that I can't see anything in the 2.4.14 patch which
> > would cause this problem. All the previous reports had to do with
> > ulimit, caused by su'ing to root instead of logging in as root, but I'm
> > not sure exactly where the problem lies.
>
> Gotcha!!!!
>
> The "wouldn't work" case was tested by me, logged in as wolff, su-ing
> to root, and the "works just fine" cases were tested by a guy who logs
> in to the machine on the console (as root).
>
>
> Now, can someone tell me why "unlimited" is interpreted somehow as 2G
> or something thereabouts? :
>
> /home/wolff> limit
> cputime unlimited
> filesize unlimited
> datasize unlimited
> stacksize unlimited
> coredumpsize unlimited
> memoryuse unlimited
> descriptors 1024
> memorylocked unlimited
> maxproc 4095
> openfiles 1024
> /home/wolff> su
> Password:
> /home/wolff# limit
> cputime unlimited
> filesize unlimited
> datasize unlimited
> stacksize unlimited
> coredumpsize unlimited
> memoryuse unlimited
> descriptors 1024
> memorylocked unlimited
> maxproc 4095
> openfiles 1024
Well, because of 32-bit API issues "unlimited" actually IS the same as 2G
on 32-bit systems, but the code internally checks whether the limit is equal
to RLIM_INFINITY (mm/filemap.c:generic_file_write()) and (should) ignores
it if so. Thus it is impossible to set a ulimit of exactly 2GB, but that
isn't really a problem.
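The check in question looks roughly like this (my paraphrase from memory of
the 2.4 code, not a verbatim quote):

limit = current->rlim[RLIMIT_FSIZE].rlim_cur;
if (limit != RLIM_INFINITY) {
	if (pos >= limit) {		/* entirely past the limit */
		send_sig(SIGXFSZ, current, 0);
		return -EFBIG;		/* the EFBIG dd reported */
	}
	if (count > limit - pos)	/* straddling the limit: short */
		count = limit - pos;	/* write now, EFBIG next time */
}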
Hmm, looking at the user-space headers: <customs/customs.h> has
RLIM_INFINITY as 0x7fffffff, while <bits/resource.h> has:
#ifndef __USE_FILE_OFFSET64
# define RLIM_INFINITY ((long int) (~0UL >> 1))
#else
# define RLIM_INFINITY 0x7fffffffffffffffLL
#endif
but the kernel code has RLIM_INFINITY as ~0UL for most arches. So a tool
built against a header where RLIM_INFINITY is 0x7fffffff will presumably
pass that finite value to setrlimit() when asked for "unlimited", and the
kernel will then enforce it as a real 2GB limit.
> /home/wolff# cat /proc/version
> Linux version 2.4.9 (wolff@machine) (gcc version 2.95.2 19991024 (release)) #3 SMP
> Mon Sep 10 09:17:17 BST 2001
> /home/wolff#
>
> (The machine was downgraded due to other problems.)
Can you test the "dd" above to ensure it works with your tools and the old
kernel? For your next 2.4.14 kernel build, it may be instructive to put
a printk() inside the 3 checks in generic_file_write() before it sends
SIGXFSZ, which tells us what limit and RLIM_INFINITY, pos and count, and pos
and s_maxbytes are, respectively. This will also tell us which limit is
being hit (although it is most likely a ulimit issue).
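Something along these lines at the ulimit check, say (hypothetical placement,
and the variable names are as I remember them from generic_file_write()):

if (limit != RLIM_INFINITY && pos >= limit) {
	printk(KERN_DEBUG "generic_file_write: limit=%lu RLIM_INFINITY=%lu "
	       "pos=%Ld count=%lu s_maxbytes=%Ld\n",
	       (unsigned long)limit, (unsigned long)RLIM_INFINITY,
	       (long long)pos, (unsigned long)count,
	       (long long)inode->i_sb->s_maxbytes);
	send_sig(SIGXFSZ, current, 0);	/* existing failure path */
}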
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
Andreas Dilger wrote:
> On Nov 19, 2001 18:28 +0100, Rogier Wolff wrote:
> > > There is another report about 2.4.14
> > > as well ("Creating partitions under 2.4.14"), and I have read several more
> > > recently, but am unsure of the exact kernel version. What fs are you
> > > using, just in case it matters?
> >
> > ext2.
>
> Well, I just tried this on ext2 instead of ext3 (on my 2.4.13 system)
> and it worked fine as a logged-in non-root user (creates a 16GB sparse file):
>
> dd if=/dev/zero of=tt bs=1k count=1 seek=16M
/tmp> dd if=/dev/zero of=tt bs=1k count=1 seek=16M
dd: tt: Invalid argument
1+0 records in
1+0 records out
/tmp> dd if=/dev/zero of=tt bs=1k seek=2047k
19913+0 records in
19912+0 records out
^C
/tmp> ls -al tt
ls: tt: Value too large for defined data type
/tmp> su
Password:
/tmp# rm tt
rm: cannot remove `tt': Value too large for defined data type
/tmp# mv tt xx
mv: tt: Value too large for defined data type
/tmp# rm -f tt
rm: cannot remove `tt': Value too large for defined data type
/tmp# dd if=/dev/zero of=uu bs=1k count=2050 seek=2047k
2050+0 records in
2050+0 records out
/tmp# l uu
ls: uu: Value too large for defined data type
/tmp#
> Can you test the "dd" above to ensure it works with your tools and the old
> kernel? For your next 2.4.14 kernel build, it may be instructive to put
> a printk() inside the 3 checks in generic_file_write() before it sends
> SIGXFSZ, which tells us what limit and RLIM_INFINITY, pos and count, and pos
> and s_maxbytes are, respectively. This will also tell us which limit is
> being hit (although it is most likely a ulimit issue).
Grmbl... I'll see what I can do.
Rogier.
--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
* There are old pilots, and there are bold pilots.
* There are also old, bald pilots.
On Nov 19, 2001 21:13 +0100, Rogier Wolff wrote:
> Andreas Dilger wrote:
> > dd if=/dev/zero of=tt bs=1k count=1 seek=16M
>
>
> /tmp> dd if=/dev/zero of=tt bs=1k count=1 seek=16M
> dd: tt: Invalid argument
> 1+0 records in
> 1+0 records out
Invalid argument is probably from ftruncate.
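Easy enough to confirm, assuming your strace knows about the 64-bit variant:

strace -e trace=ftruncate,ftruncate64 dd if=/dev/zero of=tt bs=1k count=1 seek=16M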
> /tmp> dd if=/dev/zero of=tt bs=1k seek=2047k
> 19913+0 records in
> 19912+0 records out
> ^C
> /tmp> ls -al tt
> ls: tt: Value too large for defined data type
> /tmp> su
> Password:
> /tmp# rm tt
> rm: cannot remove `tt': Value too large for defined data type
> /tmp# mv tt xx
> mv: tt: Value too large for defined data type
> /tmp# rm -f tt
> rm: cannot remove `tt': Value too large for defined data type
> /tmp# dd if=/dev/zero of=uu bs=1k count=2050 seek=2047k
> 2050+0 records in
> 2050+0 records out
> /tmp# l uu
> ls: uu: Value too large for defined data type
> /tmp#
Looks like your fileutils and/or shell and/or glibc are conspiring against
you.
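"Value too large for defined data type" is EOVERFLOW, which the old 32-bit
stat() returns when st_size does not fit in a 32-bit off_t. A largefile-aware
build gets around it; a minimal sketch using glibc's stat64:

#define _LARGEFILE64_SOURCE
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct stat64 st;

	if (argc < 2) {
		fprintf(stderr, "usage: %s FILE\n", argv[0]);
		return 1;
	}
	if (stat64(argv[1], &st) != 0) {
		perror("stat64");	/* plain stat() would fail with EOVERFLOW */
		return 1;
	}
	printf("%s: %lld bytes\n", argv[1], (long long)st.st_size);
	return 0;
}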
> > Can you test the "dd" above to ensure it works with your tools and the old
> > kernel? For your next 2.4.14 kernel build, it may be instructive to put
> > a printk() inside the 3 checks in generic_file_write() before it sends
> > SIGXFSZ, which tells us what limit and RLIM_INFINITY, pos and count, and pos
> > and s_maxbytes are, respectively. This will also tell us which limit is
> > being hit (although it is most likely a ulimit issue).
>
> Grmbl... I'll see what I can do.
Start by upgrading your tools to largefile-aware ones.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/