2002-11-12 15:34:03

by Adam Voigt

Subject: File Limit in Kernel?

I have a directory with 39,000 files in it, and I'm trying to use the cp
command to copy them into another directory, and neither the cp nor the
mv command will work; they both say "argument list too long" when I
use:

cp -f * /usr/local/www/images

or

mv -f * /usr/local/www/images

Is this a kernel limitation? If yes, how can I get around it?
If no, anyone know a workaround? I appreciate it.

--
Adam Voigt ([email protected])
The Cryptocomm Group
My GPG Key: http://64.238.252.49:8080/adam_at_cryptocomm.asc



2002-11-12 15:50:32

by Andreas Gruenbacher

Subject: Re: File Limit in Kernel?

On Tuesday 12 November 2002 16:38, Adam Voigt wrote:
> I have a directory with 39,000 files in it, and I'm trying to use the cp
> command to copy them into another directory, and neither the cp nor the
> mv command will work; they both say "argument list too long" when I
> use:
>
> cp -f * /usr/local/www/images
>
> or
>
> mv -f * /usr/local/www/images

Note that this is not a kernel related question. The * in the command line is
expanded into a list of all entries in the current directory, which results
in a command line longer than allowed. Try this instead:

find . -maxdepth 1 -type f -print0 | \
xargs -0 --replace=% cp -f % /usr/local/www/images
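A runnable sketch of this approach, using hypothetical scratch directories in place of the real ones (GNU find and xargs assumed; -type f keeps the directory entry itself out of the list):

```shell
# Runnable sketch of the find | xargs approach (GNU find/xargs assumed).
# The temp directories are hypothetical stand-ins for the real source/target.
src=$(mktemp -d); dst=$(mktemp -d)
seq -f "img%g.png" 1 500 | (cd "$src" && xargs touch)   # 500 dummy files
# -print0 / -0 keep names with spaces or newlines intact; -type f skips
# the directory entry itself, which a bare "find -maxdepth 1" also prints
find "$src" -maxdepth 1 -type f -print0 | xargs -0 -I % cp -f % "$dst"
```

Since the names travel over a pipe rather than a command line, the number of files no longer matters.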

--Andreas.

2002-11-12 15:46:39

by Andi Kleen

Subject: Re: File Limit in Kernel?

Adam Voigt <[email protected]> writes:

> I have a directory with 39,000 files in it, and I'm trying to use the cp
> command to copy them into another directory, and neither the cp nor the
> mv command will work; they both say "argument list too long" when I
> use:
>
> cp -f * /usr/local/www/images

Kind of. The * is expanded by the shell. The kernel limits the max
length of program arguments, which is biting you here. In theory you
could increase the MAX_ARG_PAGES #define in linux/binfmts.h and
recompile. No guarantee that it won't have any bad side effects
though. The default is rather low; it should probably be increased
(I also regularly run into this.)

The actual limit on files per directory is usually around 65000 in
most filesystems.

For your immediate problem you can use

find . -type f | xargs -iX cp -f X /usr/local/www/images


-Andi

2002-11-12 15:55:00

by Kent Borg

Subject: Re: File Limit in Kernel?

On Tue, Nov 12, 2002 at 10:38:55AM -0500, Adam Voigt wrote:
> I have a directory with 39,000 files in it, and I'm trying to use the cp
> command to copy them into another directory,
> [...]
> "argument list too long"

No, it is not a kernel limit; it is a limit of your shell (bash, for
example). Look at xargs to get around it.

A related limit is that the popular ext2 and 3 file systems get
inefficient when directories have so many files. The work-around for
that is to have your files either hashed or organized across a
collection of directories.
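For example, a minimal sketch of one such hashing scheme; the file name and the images/ layout here are hypothetical:

```shell
# Minimal sketch of one hashing scheme: bucket each file under a subdirectory
# named after the first two hex digits of an md5 of its name.
# "picture12345.png" and the images/ layout are hypothetical.
name="picture12345.png"
bucket=$(printf '%s' "$name" | md5sum | cut -c1-2)
echo "images/$bucket/$name"   # each directory ends up with ~1/256 of the files
```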


-kb

2002-11-12 16:05:13

by Matt Reppert

Subject: Re: File Limit in Kernel?

On 12 Nov 2002 10:38:55 -0500
Adam Voigt <[email protected]> wrote:

> I have a directory with 39,000 files in it, and I'm trying to use the cp
> command to copy them into another directory, and neither the cp nor the
> mv command will work; they both say "argument list too long" when I
> use: cp -f * /usr/local/www/images
>
> Is this a kernel limitation?

Yes, but you can get around it in userspace. See
http://www.linuxjournal.com/article.php?sid=6060

(The short answer is this, from include/linux/binfmts.h:
/*
* MAX_ARG_PAGES defines the number of pages allocated for arguments
* and envelope for the new program. 32 should suffice, this gives
* a maximum env+arg of 128kB w/4KB pages!
*/
#define MAX_ARG_PAGES 32

I'm assuming your arch has 4 kB pages, so the 'problem' is that you're
passing more than 128 kB of command-line arguments.)
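The arithmetic above can be checked directly with getconf; the exact numbers depend on the architecture's page size and the kernel version (later kernels no longer tie ARG_MAX to MAX_ARG_PAGES):

```shell
# Checking the 128 kB figure: MAX_ARG_PAGES (32) times the page size.
# Exact values vary by architecture and kernel version.
getconf PAGE_SIZE                         # 4096 on most x86 machines
echo $(( 32 * $(getconf PAGE_SIZE) ))     # 131072 bytes with 4 kB pages
getconf ARG_MAX                           # the limit the kernel enforces
```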

> If yes, how can I get around it?

The article has a bunch of ways. You don't really need to change
the kernel though ... if you don't mind generating 39000 new processes
one after the other, I'd do something like 'for FILE in * ; do mv "$FILE"
/usr/local/www/images ; done'. Probably slower than calling mv once, due
to the process overhead repeated 39000 times, but it *works*, and it's simple.
(If you use a csh instead of something bash-like, change that to fit.)

Matt

2002-11-12 16:00:05

by Chris Friesen

Subject: Re: File Limit in Kernel?

Adam Voigt wrote:
> I have a directory with 39,000 files in it, and I'm trying to use the cp
> command to copy them into another directory, and neither the cp nor the
> mv command will work; they both say "argument list too long" when I
> use:
>
> cp -f * /usr/local/www/images
>
> or
>
> mv -f * /usr/local/www/images
>
> Is this a kernel limitation?

It's not a kernel limitation, it's a shell limitation. "*" expands to
the list of 39,000 names, which is too large.


You could try something like:

ls .|xargs cp -f --target-directory=/usr/local/www/images


Chris

--
Chris Friesen | MailStop: 043/33/F10
Nortel Networks | work: (613) 765-0557
3500 Carling Avenue | fax: (613) 765-2986
Nepean, ON K2H 8E9 Canada | email: [email protected]

2002-11-12 15:59:24

by Richard B. Johnson

Subject: Re: File Limit in Kernel?

On 12 Nov 2002, Adam Voigt wrote:

> I have a directory with 39,000 files in it, and I'm trying to use the cp
> command to copy them into another directory, and neither the cp nor the
> mv command will work; they both say "argument list too long" when I
> use:
>
> cp -f * /usr/local/www/images
>
> or
>
> mv -f * /usr/local/www/images
>
> Is this a kernel limitation? If yes, how can I get around it?
> If no, anyone know a workaround? I appreciate it.
>

The '*' is expanded by your shell to be a command-line that has
39,000 file-names in it! It is probably way too long for a command-
line (argument list).

The easiest way is to do:

mv -f a* /usr/local/www/images
mv -f b* /usr/local/www/images
mv -f c* /usr/local/www/images

... until the remaining argument list is short enough for the '*' only.

You can also do a loop in a shell if the shell's internal buffer is
big enough for the expanded '*' ...

for x in * ; do mv -f $x /usr/local/www/images ; done


Cheers,
Dick Johnson
Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).
Bush : The Fourth Reich of America


2002-11-12 16:09:15

by Bernd Eckenfels

Subject: Re: File Limit in Kernel?

In article <[email protected]> you wrote:
> Is this a kernel limitation? If yes, how can I get around it?
> If no, anyone know a workaround? I appreciate it.

the * is expanded into an argument list which is too long to be handed to a
single command. I think the limit is in the kernel, yes. You can solve this by
using find|xargs or tar|tar, or by specifying a directory.
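The tar | tar variant sidesteps the limit entirely, since the file names travel through the pipe and never appear on a command line; a minimal sketch with hypothetical temp directories (GNU tar assumed):

```shell
# Sketch of the tar | tar idea (GNU tar assumed): the names travel through
# the pipe, never on a command line, so the argument limit cannot be hit.
# src/dst are hypothetical temp stand-ins for the real directories.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/a.png" "$src/b.png"
(cd "$src" && tar -cf - .) | (cd "$dst" && tar -xf -)
```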

Greetings
Bernd

2002-11-12 16:14:13

by Mark Mielke

Subject: Re: File Limit in Kernel?

On Tue, Nov 12, 2002 at 11:01:49AM -0500, Kent Borg wrote:
> On Tue, Nov 12, 2002 at 10:38:55AM -0500, Adam Voigt wrote:
> > I have a directory with 39,000 files in it, and I'm trying to use the cp
> > command to copy them into another directory,
> > [...]
> > "argument list too long"
> No, it is not a kernel limit; it is a limit of your shell (bash, for
> example). Look at xargs to get around it.

From "man execve":

E2BIG The argument list is too big.

From "man sysconf":

_SC_ARG_MAX
The maximum length of the arguments to the exec()
family of functions; the corresponding macro is
ARG_MAX.

On my RedHat 8.0 box:

$ getconf ARG_MAX
131072

It is definitely a kernel limitation, although as other people have
pointed out, there are common userspace solutions to the problem.
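The kernel's refusal can be reproduced directly by handing execve more argument data than it accepts; a sketch (the exact threshold varies by kernel version):

```shell
# Reproducing the error: a single oversized argument makes execve fail with
# E2BIG, which the shell reports as "Argument list too long". The threshold
# varies by kernel; 1 MB is over the limit both on 2.4-era kernels and on
# the per-string limit of later ones.
big=$(head -c 1000000 /dev/zero | tr '\0' x)
/bin/echo "$big" > /dev/null 2>&1 || echo "execve failed as expected"
```

Note that a shell builtin echo would print the string fine; the failure only appears when an external program such as /bin/echo has to be exec'd.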

mark

--
[email protected]/[email protected]/[email protected] __________________________
. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder
|\/| |_| |_| |/ |_ |\/| | |_ | |/ |_ |
| | | | | \ | \ |__ . | | .|. |__ |__ | \ |__ | Ottawa, Ontario, Canada

One ring to rule them all, one ring to find them, one ring to bring them all
and in the darkness bind them...

http://mark.mielke.cc/

2002-11-12 21:49:49

by Jamie Lokier

Subject: Re: File Limit in Kernel?

Andi Kleen wrote:
> > cp -f * /usr/local/www/images
>
> Kind of. The * is expanded by the shell. The kernel limits the max
> length of program arguments, which is biting you here. In theory you
> could increase the MAX_ARG_PAGES #define in linux/binfmts.h and
> recompile. No guarantee that it won't have any bad side effects
> though. The default is rather low; it should probably be increased
> (I also regularly run into this.)

Yes, you can do this. I used to do it with 2.0 kernels, because our
"make" command lines were very long, and you couldn't use files to
hold the list of make-generated names because even "echo
$(LIST_OF_FILES) > list" hit this limit.

Ages ago somebody promised to fix this limitation :)

-- Jamie

2002-11-12 23:18:00

by Bill Davidsen

Subject: Re: File Limit in Kernel?

On 12 Nov 2002, Adam Voigt wrote:

> I have a directory with 39,000 files in it, and I'm trying to use the cp
> command to copy them into another directory, and neither the cp nor the
> mv command will work; they both say "argument list too long" when I
> use:
>
> cp -f * /usr/local/www/images
>
> or
>
> mv -f * /usr/local/www/images
>
> Is this a kernel limitation? If yes, how can I get around it?
> If no, anyone know a workaround? I appreciate it.

Sort of; there is a limit on how many kilobytes of arguments you can have on a
command line. Having grown up with much smaller limits in early UNIX, I
got into the habit of using xargs when I wasn't sure. You can avoid
one-exec-per-file behaviour by batching, with something like:
ls | xargs -n 50 mv --target-directory=destdir

You can also do useful things by using find and the -p option of cpio.
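A runnable sketch of the batching idea, with up to 50 names handed to each mv invocation (GNU mv's --target-directory option assumed; the directories are hypothetical temp stand-ins):

```shell
# Batched variant: xargs -n 50 hands mv up to 50 names per invocation,
# so 120 files need only 3 execs instead of 120.
# (GNU mv's --target-directory assumed; temp dirs are hypothetical.)
src=$(mktemp -d); dst=$(mktemp -d)
seq -f "f%g" 1 120 | (cd "$src" && xargs touch)
(cd "$src" && ls | xargs -n 50 mv --target-directory="$dst")
```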
--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.



2002-11-13 09:44:51

by Jan Hudec

Subject: Re: File Limit in Kernel?

On Tue, Nov 12, 2002 at 04:57:20PM +0100, Andreas Gruenbacher wrote:
> On Tuesday 12 November 2002 16:38, Adam Voigt wrote:
> > I have a directory with 39,000 files in it, and I'm trying to use the cp
> > command to copy them into another directory, and neither the cp nor the
> > mv command will work; they both say "argument list too long" when I
> > use:
> >
> > cp -f * /usr/local/www/images
> >
> > or
> >
> > mv -f * /usr/local/www/images
>
> Note that this is not a kernel related question. The * in the command line is
> expanded into a list of all entries in the current directory, which results
> in a command line longer than allowed. Try this instead:
>
> find . -maxdepth 1 -type f -print0 | \
> xargs -0 --replace=% cp -f % /usr/local/www/images

Find has an -exec operator in the first place, so this is a little simpler:

find . -maxdepth 1 -type f -exec cp -f '{}' /usr/local/www/images ';'

-------------------------------------------------------------------------------
Jan 'Bulb' Hudec <[email protected]>

2002-11-13 10:42:25

by Andreas Schwab

Subject: Re: File Limit in Kernel?

Jan Hudec <[email protected]> writes:

|> On Tue, Nov 12, 2002 at 04:57:20PM +0100, Andreas Gruenbacher wrote:
|> > On Tuesday 12 November 2002 16:38, Adam Voigt wrote:
|> > > I have a directory with 39,000 files in it, and I'm trying to use the cp
|> > > command to copy them into another directory, and neither the cp nor the
|> > > mv command will work; they both say "argument list too long" when I
|> > > use:
|> > >
|> > > cp -f * /usr/local/www/images
|> > >
|> > > or
|> > >
|> > > mv -f * /usr/local/www/images
|> >
|> > Note that this is not a kernel related question.

Actually it is, because it's a kernel limit. Userspace does not have
this problem in general.

|> > expanded into a list of all entries in the current directory, which results
|> > in a command line longer than allowed. Try this instead:
|> >
|> > find . -maxdepth 1 -type f -print0 | \
|> > xargs -0 --replace=% cp -f % /usr/local/www/images
|>
|> Find has an -exec operator in the first place, so this is a little simpler:
|>
|> find . -maxdepth 1 -type f -exec cp -f '{}' /usr/local/www/images ';'

Or even using the shell:

for f in *; do cp -f "$f" /usr/local/www/images; done

Andreas.

--
Andreas Schwab, SuSE Labs, [email protected]
SuSE Linux AG, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."

2002-11-13 20:24:07

by Pavel Machek

Subject: Re: File Limit in Kernel?

Hi!

> > I have a directory with 39,000 files in it, and I'm trying to use the cp
> > command to copy them into another directory, and neither the cp nor the
> > mv command will work; they both say "argument list too long" when I
> > use:
> >
> > cp -f * /usr/local/www/images
>
> Kind of. The * is expanded by the shell. The kernel limits the max
> length of program arguments, which is biting you here. In theory you
> could increase the MAX_ARG_PAGES #define in linux/binfmts.h and
> recompile. No guarantee that it won't have any bad side effects
> though. The default is rather low; it should probably be increased
> (I also regularly run into this.)

I made that limit higher 5 years ago. Perhaps it's time to
up it for everyone?
Pavel
--
When do you have heart between your knees?

2002-11-19 15:07:51

by Shalon Wood

Subject: Re: File Limit in Kernel?

Pavel Machek <[email protected]> writes:

> Hi!
>
> > > I have a directory with 39,000 files in it, and I'm trying to use the cp
> > > command to copy them into another directory, and neither the cp nor the
> > > mv command will work; they both say "argument list too long" when I
> > > use:
> > >
> > > cp -f * /usr/local/www/images
> >
> > Kind of. The * is expanded by the shell. The kernel limits the max
> > length of program arguments, which is biting you here. In theory you
> > could increase the MAX_ARG_PAGES #define in linux/binfmts.h and
> > recompile. No guarantee that it won't have any bad side effects
> > though. The default is rather low; it should probably be increased
> > (I also regularly run into this.)
>
> I made that limit higher 5 years ago. Perhaps it's time to
> up it for everyone?

Is this something that _must_ be set at compile time, or could it be
made tuneable via /proc?

Shalon Wood

--

2002-11-19 19:37:18

by Pavel Machek

Subject: Re: File Limit in Kernel?

Hi!

> >
> > > > I have a directory with 39,000 files in it, and I'm trying to use the cp
> > > > command to copy them into another directory, and neither the cp nor the
> > > > mv command will work; they both say "argument list too long" when I
> > > > use:
> > > >
> > > > cp -f * /usr/local/www/images
> > >
> > > Kind of. The * is expanded by the shell. The kernel limits the max
> > > length of program arguments, which is biting you here. In theory you
> > > could increase the MAX_ARG_PAGES #define in linux/binfmts.h and
> > > recompile. No guarantee that it won't have any bad side effects
> > > though. The default is rather low; it should probably be increased
> > > (I also regularly run into this.)
> >
> > I made that limit higher 5 years ago. Perhaps it's time to
> > up it for everyone?
>
> Is this something that _must_ be set at compile time, or could it be
> made tuneable via /proc?

Probably it can be made tuneable, but I've not seen a patch...

Pavel

--
Casualities in World Trade Center: ~3k dead inside the building,
cryptography in U.S.A. and free speech in Czech Republic.