2005-03-21 03:07:06

by William Beebe

[permalink] [raw]
Subject: forkbombing Linux distributions

The following quote is from the article "Linux Kernel Security, Again"
(http://www.securityfocus.com/columnists/308):

"Don't get me wrong. Linux doesn't suck. But I do believe that the
Linux kernel team (and some of the Linux distributions that are still
vulnerable to fork bombing) need to take proactive security a little
more seriously. I'm griping for a reason here -- things need to
change."

Sure enough, I created the following script:

#!/bin/bash
$0 & $0 &

and ran it as a non-root user on Fedora Core 3 with kernel 2.6.11.5 (the box is an Athlon
XP 2500+ Barton with 512M on an nForce2 board). The system locked up
tighter than a drum. However, after about two minutes the system
"unlocked" and responsiveness returned to normal. I can see where this
would be an issue on a production system, especially if you could kick
off a new fork bomb to continuously lock the system.

Is this really a kernel issue? Or is there a better way in userland to
stop this kind of crap?

Regards


2005-03-21 03:22:33

by Dave Jones

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Sun, Mar 20, 2005 at 10:06:57PM -0500, William Beebe wrote:

> Is this really a kernel issue? Or is there a better way in userland to
> stop this kind of crap?

man ulimit

Dave

2005-03-21 03:26:27

by William Beebe

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

Thanks. That's what I thought. Sorry for the annoyance.


On Sun, 20 Mar 2005 22:22:21 -0500, Dave Jones <[email protected]> wrote:
> On Sun, Mar 20, 2005 at 10:06:57PM -0500, William Beebe wrote:
>
> > Is this really a kernel issue? Or is there a better way in userland to
> > stop this kind of crap?
>
> man ulimit
>
> Dave
>
>

2005-03-21 03:27:13

by Peter Chubb

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

>>>>> "William" == William Beebe <[email protected]> writes:

William> Sure enough, I created the following script and ran it as a
William> non-root user:

William> #!/bin/bash $0 & $0 &

There are two approaches to fixing this.
1. Rate limit fork(). Unfortunately some legitimate usages do a lot
of forking, and you don't really want to slow them down.
2. Limit (per user) the number of processes allowed. This is what's
currently done; and if you as administrator want to, you can set
RLIMIT_NPROC in /etc/security/limits.conf (see the example below).

On an almost-single-user system such as most desktops, there isn't much
point in setting this. On shared systems, it can be useful.
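
As an illustration (the numbers here are arbitrary, pick your own), a
single line like

* hard nproc 200

in /etc/security/limits.conf caps every user at 200 processes.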

--
Dr Peter Chubb http://www.gelato.unsw.edu.au peterc AT gelato.unsw.edu.au
The technical we do immediately, the political takes *forever*

2005-03-21 05:15:33

by Grant Coady

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Mon, 21 Mar 2005 14:27:55 +1100, Peter Chubb <[email protected]> wrote:

>>>>>> "William" == William Beebe <[email protected]> writes:
>
>William> Sure enough, I created the following script and ran it as a
>William> non-root user:
>
>William> #!/bin/bash $0 & $0 &
>
>There are two approaches to fixing this.
> 1. Rate limit fork(). Unfortunately some legitimate usages do a lot
> of forking, and you don't really want to slow them down.
> 2. Limit (per user) the number of processes allowed. This is what's
> currently done; and if you as administrator want to you can set
> RLIMIT_NPROC in /etc/security/limits.conf
>
>On an almost-single-user system such as most desktops, there isn't much
>point in setting this. On shared systems, it can be useful.

Had to try it out of curiosity, five ssh logins at the time,
but I hit Ctrl-S on the terminal running the forkbomb; the other
terminals stayed responsive and I could recover with 'killall forkbomb'.

Even 'top' segfaulted. Machine didn't die though.

slackware-current running 2.4.29-hf5

Just checked logs, messages: --> kernel: VFS: file-max limit 52427 reached
nothing in syslog or debug

Cheers,
Grant.

2005-03-21 07:41:31

by Jan Engelhardt

[permalink] [raw]
Subject: Re: forkbombing Linux distributions


>>William> Sure enough, I created the following script and ran it as a
>>William> non-root user:
>>
>>William> #!/bin/bash $0 & $0 &
>>
>>There are two approaches to fixing this.
>> 1. Rate limit fork()
>> 2. Limit (per user) the number of processes allowed
>
>Had to try it out of curiosity, five ssh logins at the time,
>but I hit Ctrl-S on the terminal running forkbomb, then other
>terminals responsive and I could recover, do 'killall forkbomb'.

By the time you killed a handful of procs, the other half spawned new ones.

You can try stopping forkbombs by "killall -STOP nameofprog" and then
"killall -9 nameofprog".

But you probably won't get to run killall in case of a thrasher running within
the limits of `ulimit -m` and `ulimit -u`:
perl -e 'fork,$_="x"x 10E6 while 1'


Jan Engelhardt
--

2005-03-22 11:29:41

by Hikaru1

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

Alright, I noticed this article scared a few of my friends, so I decided to
figure out on my own a way to prevent fork bombing from completely disabling
my machine.

This is only one way to do this, and it's not particularly elegant, but it
gets the job done. If you want something more elegant, try using ulimit or
/etc/limits instead. Me? This is good enough.

Create, or edit the file /etc/sysctl.conf

In the file, find a line or otherwise create one labelled:
kernel.threads-max = 250

Now make sure at startup something runs

sysctl -p

- on my slackware 10.1 system I had to edit /etc/rc.d/rc.local and add a
line specifically to do this.

Mind that this isn't the best solution. This limits all users, everything, to
250 procs; you cannot run more. If your running system or server uses more,
adjust the number accordingly.
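
(You can check that the value took effect with

sysctl kernel.threads-max

after running sysctl -p.)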

An example of an attack this stops in its tracks:
:(){ :|:&};:

(as a command to bash)

Attacks this *limits* and enables the user to do something about:
Create a file, and put in it:

#!/bin/bash
$0 & $0 &

chmod +x it, then run it.

This will prevent it from exceeding the procs limits, but it will *not*
completely stop it. The only way to kill it off successfully is to killall
-9 the script name repeatedly. Note that you'll occasionally be unable to
run killall since the forkbomb will be hitting the limit very often.

Like I said, this is not an elegant solution, however it does increase the
ability of the person owning the machine to do something about it.

Of course, you should always use a bat on the user if nothing else works. ;)

Hikaru

2005-03-22 11:50:07

by Jan Engelhardt

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

>
>This will prevent it from exceeding the procs limits, but it will *not*
>completely stop it.

What if the few procs that he may spawn also grab so much memory that your
machine disappears in swap-t(h)rashing?

>The only way to kill it off successfully is to killall
>-9 the script name repeatedly.

As said earlier, killall -STOP first
=> keeps the number of processes constant (so he can't spawn any new ones)

>Of course, you should always use a bat on the user if nothing else works. ;)

Use a keylogger if you distrust your users, and after a bombing,
look who set us up the bomb.



Jan Engelhardt
--

2005-03-22 12:50:35

by Hikaru1

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Tue, Mar 22, 2005 at 12:49:58PM +0100, Jan Engelhardt wrote:
> >
> >This will prevent it from exceeding the procs limits, but it will *not*
> >completely stop it.
>
> What if the few procs that he may spawn also grab so much memory so your
> machine disappears in swap-t(h)rashing?
While I have figured out how it'd be possible in theory to prevent things
from grabbing so much memory that your computer enters swap death, I haven't
been able to figure out what reasonable defaults would be for myself or
others. Soooo, I suggest everyone who is worried about this check the
manpage for 'limits' which tells you how to do this. My machine runs various
ridiculously large and small programs - I'm not sure a forkbomb could be
stopped without hindering the usage of some of the games on my desktop
machine.

On a server or something with multiple users however, I'm sure you could
configure each user independently with resource limits. Most servers
don't have users that play games which take up 90% of the ram. :)

In any case, I was forced by various smarter-than-I people to come up with a
better solution to our problem as they were able to make forkbombs that did
a much better job of driving me crazy. :)

If you edit or create /etc/limits and set as the only line

* U250

It'll do the same thing as the sysctl hack, except root will still be able
to run programs, like ps and kill/killall.

If you've actually implemented the sysctl.conf hack I spoke of previously, I
suggest setting it back to whatever it used to be before, or deleting the
line from /etc/sysctl.conf altogether.

/etc/limits does a better job at stopping forkbombs.

This is an example of a program in C my friends gave me that forkbombs.
My previous sysctl.conf hack can't stop this, but the /etc/limits solution
enables the owner of the computer to do something about it as root.

int main() { while(1) { fork(); } }

Hikaru

2005-03-22 17:09:57

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

Hi list!

(I'm new to this list, so I'm sorry this mail does not have the correct thread id)

I have been following this forkbombing discussion and I would like to
point out a few things:

* When setting limits in /etc/limits (or /etc/security/limits.conf) you
will prevent logged-in users from forking too many processes. However, this
setting will not prevent a misbehaving daemon that is started from a
bootscript from forking too many processes, even if it runs as non-root.

* Linux is, by default, very generous in the maximum number of processes
it allows non-root users, in comparison with other *nixes.

The kernel default is calculated from the amount of RAM in
kernel/fork.c, in these lines:

max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);

/*
* we need to allow at least 20 threads to boot a system
*/
if(max_threads < 20)
max_threads = 20;

init_task.signal->rlim[RLIMIT_NPROC].rlim_cur = max_threads/2;
init_task.signal->rlim[RLIMIT_NPROC].rlim_max = max_threads/2;
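
Worked through for a concrete case (assuming i386 defaults, PAGE_SIZE =
4KiB and THREAD_SIZE = 8KiB, so THREAD_SIZE / PAGE_SIZE = 2): a 512MiB
box has mempages = 131072, giving max_threads = 131072 / 16 = 8192 and a
default RLIMIT_NPROC of 4096 processes per user.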

The forkbomb was already mentioned on 2001-06-18 by Rik van Riel, who
suggested mempages / (16 * THREAD_SIZE / PAGE_SIZE):

http://marc.theaimsgroup.com/?l=linux-kernel&m=99283072806620&w=2
http://marc.theaimsgroup.com/?l=linux-kernel&m=99617386529767&w=2

But I cannot find out why it was set back again to 8 * ... I think this
is the main reason that almost all distros are vulnerable to the stupid
fork bomb attack.

Would it be an idea to set it back to:

mempages / (16 * THREAD_SIZE / PAGE_SIZE)

and let the sysadmins raise the limit with /proc/sys/kernel/threads-max
if they need more?

--
Natanael Copa


2005-03-23 10:56:59

by Nguyen Anh Quynh

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Tue, 22 Mar 2005 07:50:25 -0500, [email protected]
<[email protected]> wrote:
> On Tue, Mar 22, 2005 at 12:49:58PM +0100, Jan Engelhardt wrote:
> > >
> > >This will prevent it from exceeding the procs limits, but it will *not*
> > >completely stop it.
> >
> > What if the few procs that he may spawn also grab so much memory so your
> > machine disappears in swap-t(h)rashing?
> While I have figured out how it'd be possible in theory to prevent things
> from grabbing so much memory that your computer enters swap death, I haven't
> been able to figure out what reasonable defaults would be for myself or
> others. Soooo, I suggest everyone who is worried about this check the
> manpage for 'limits' which tells you how to do this. My machine runs various
> rediculously large and small programs - I'm not sure a forkbomb could be
> stopped without hindering the usage of some of the games on my desktop
> machine.
>
> On a server or something with multiple users however, I'm sure you could
> configure each user independently with resource limits. Most servers
> don't have users that play games which take up 90% of the ram. :)
>
> In any case, I was forced by various smarter-than-I people to come up with a
> better solution to our problem as they were able to make forkbombs that did
> a much better job of driving me crazy. :)
>
> If you edit or create /etc/limits and set as the only line
>
> * U250
>
> It'll do the same thing as the sysctl hack, except root will still be able
> to run programs. Programs like ps and kill/killall.
>
> If you've actually implemented the sysctl.conf hack I spoke of previously, I
> suggest setting it back to whatever it used to be before, or deleting the
> line from /etc/sysctl.conf altogether.
>
> /etc/limits does a better job at stopping forkbombs.
>
> This is an example of a program in C my friends gave me that forkbombs.
> My previous sysctl.conf hack can't stop this, but the /etc/limits solution
> enables the owner of the computer to do something about it as root.
>
> int main() { while(1) { fork(); } }
>

I find that this forkbomb doesn't always kill the machine. Trying a
small forkbomb, I saw that either the forkbomb process or the parent
process (of the forkbomb) will be killed after a while (by the kernel)
because of an "out of memory" error. The problem is: which process
will be chosen to be killed? (I have no idea how the kernel chooses
the victim process.)

If the kernel chooses to kill the parent process or the forkbomb
itself, the damage is affordable. Otherwise, if more important
processes are killed (like kernel threads or other daemons), things
would be much more serious.

Any idea?

Thank you,
aq

2005-03-23 12:38:06

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 19:56 +0900, aq wrote:
> On Tue, 22 Mar 2005 07:50:25 -0500, [email protected]
> <[email protected]> wrote:

> > While I have figured out how it'd be possible in theory to prevent things
> > from grabbing so much memory that your computer enters swap death, I haven't
> > been able to figure out what reasonable defaults would be for myself or
> > others. Soooo, I suggest everyone who is worried about this check the
> > manpage for 'limits' which tells you how to do this. My machine runs various
> > rediculously large and small programs - I'm not sure a forkbomb could be
> > stopped without hindering the usage of some of the games on my desktop
> > machine.

See patch below.

> > /etc/limits does a better job at stopping forkbombs.

but does not limit processes that are started from the boot scripts. So
if a buggy non-root service is exploited, an attacker would be able to
easily shut down the system.

> > This is an example of a program in C my friends gave me that forkbombs.
> > My previous sysctl.conf hack can't stop this, but the /etc/limits solution
> > enables the owner of the computer to do something about it as root.
> >
> > int main() { while(1) { fork(); } }

I guess that "fork twice and exit" is worse than this?

> I find that this forkbomb doesnt always kill the machine. Trying a
> small forkbomb, I saw that either the forkbomb process, or the parent
> process (of forkbomb) will be killed after a while (by the kernel)
> because of "out of memory" error. The problem is that which process
> would be chosen to kill? (I have no idea on how kernel choose the
> would-be-kill process).

It kills the process that reaches the limit (max proc's / out of mem)?

> If the kernel choose to kill the parent process, or the forkbomb
> itself, damage can be afford. Otherwise, if the more important
> processes are killed (like kernel threads or other daemons), things
> would be much more serious.
>
> Any idea?

Limit the default maximum of user processes. If someone needs more, let
the sysadmin raise it (with ulimit -u, /etc/limits, sysctl.conf
whatever)

This should do the trick:

--- kernel/fork.c.orig 2005-03-02 08:37:48.000000000 +0100
+++ kernel/fork.c 2005-03-21 15:22:50.000000000 +0100
@@ -119,7 +119,7 @@
* value: the thread structures can take up at most half
* of memory.
*/
- max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
+ max_threads = mempages / (16 * THREAD_SIZE / PAGE_SIZE);

/*
* we need to allow at least 20 threads to boot a system


--
Natanael Copa


2005-03-23 13:05:27

by Nguyen Anh Quynh

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 23 Mar 2005 13:37:38 +0100, Natanael Copa <[email protected]> wrote:
> > > This is an example of a program in C my friends gave me that forkbombs.
> > > My previous sysctl.conf hack can't stop this, but the /etc/limits solution
> > > enables the owner of the computer to do something about it as root.
> > >
> > > int main() { while(1) { fork(); } }
>
> I guess that "fork twice and exit" is worse than this?

you mean code like this:

int main() { while(1) { fork(); fork(); exit(); } }

is more harmful? I don't see why.

> > I find that this forkbomb doesnt always kill the machine. Trying a
> > small forkbomb, I saw that either the forkbomb process, or the parent
> > process (of forkbomb) will be killed after a while (by the kernel)
> > because of "out of memory" error. The problem is that which process
> > would be chosen to kill? (I have no idea on how kernel choose the
> > would-be-kill process).
>
> It kills the process that reaches the limit (max proc's / out of mem)?

If so, a forkbomb doesn't cause as much of a problem as they said, since
eventually it would be killed once it reaches the memory limit. The
system will recover automatically after a while.

> Limit the default maximum of user processes. If someone needs more, let
> the sysadmin raise it (with ulimit -u, /etc/limits, sysctl.conf
> whatever)
>
> This should do the trick:
>
> --- kernel/fork.c.orig 2005-03-02 08:37:48.000000000 +0100
> +++ kernel/fork.c 2005-03-21 15:22:50.000000000 +0100
> @@ -119,7 +119,7 @@
> * value: the thread structures can take up at most half
> * of memory.
> */
> - max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> + max_threads = mempages / (16 * THREAD_SIZE / PAGE_SIZE);

I don't see any advantage in halving max_threads like this at all.
That doesn't solve the problem. You should focus elsewhere.

thank you,
aq

2005-03-23 13:38:24

by Jan Engelhardt

[permalink] [raw]
Subject: Re: forkbombing Linux distributions


>If so, forkbomb doesnt cause much problem like they said, since
>eventually it would be killed once it reach the limit of memory. the
>system will recover automatically after awhile.

I doubt that! Maybe OOM strikes one process out, but as already said, when
that happens, it makes room for further spawns.



Jan Engelhardt
--

2005-03-23 13:48:46

by Erik Mouw

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, Mar 23, 2005 at 01:37:38PM +0100, Natanael Copa wrote:
> On Wed, 2005-03-23 at 19:56 +0900, aq wrote:
> > > /etc/limits does a better job at stopping forkbombs.
>
> but does not limit processes that are started from the boot scripts. So
> if a buggy non-root service is exploited, an attacker would be able to
> easily shut down the system.

That's easy to fix: set limits from initrd or initramfs.


Erik

--
+-- Erik Mouw -- http://www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands

2005-03-23 13:58:02

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 22:04 +0900, aq wrote:
> On Wed, 23 Mar 2005 13:37:38 +0100, Natanael Copa <[email protected]> wrote:
> > > > This is an example of a program in C my friends gave me that forkbombs.
> > > > My previous sysctl.conf hack can't stop this, but the /etc/limits solution
> > > > enables the owner of the computer to do something about it as root.
> > > >
> > > > int main() { while(1) { fork(); } }
> >
> > I guess that "fork twice and exit" is worse than this?
>
> you meant code like this
>
> int main() { while(1) { fork(); fork(); exit(); } }
>
> is more harmful ? I dont see why (?)

Because the parent disappears. When something like killall tries to kill
the process, it's already gone, but there are two new ones with new pids.

> > > I find that this forkbomb doesnt always kill the machine. Trying a
> > > small forkbomb, I saw that either the forkbomb process, or the parent
> > > process (of forkbomb) will be killed after a while (by the kernel)
> > > because of "out of memory" error. The problem is that which process
> > > would be chosen to kill? (I have no idea on how kernel choose the
> > > would-be-kill process).
> >
> > It kills the process that reaches the limit (max proc's / out of mem)?
>
> If so, forkbomb doesnt cause much problem like they said, since
> eventually it would be killed once it reach the limit of memory. the
> system will recover automatically after awhile.

well, the problem here is that stupid fork bombs like:

:() { :|:& };:

bring down almost all Linux distros while other *nixes survive.

So in theory you are right, but in the real world the system dies. (When I
tried it here the system was completely dead for at least 30 minutes,
without any possibility to even move the mouse.) It would probably recover
after hours or days, but who waits that long if the production server does
not respond?

> > Limit the default maximum of user processes. If someone needs more, let
> > the sysadmin raise it (with ulimit -u, /etc/limits, sysctl.conf
> > whatever)
> >
> > This should do the trick:
> >
> > --- kernel/fork.c.orig 2005-03-02 08:37:48.000000000 +0100
> > +++ kernel/fork.c 2005-03-21 15:22:50.000000000 +0100
> > @@ -119,7 +119,7 @@
> > * value: the thread structures can take up at most half
> > * of memory.
> > */
> > - max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> > + max_threads = mempages / (16 * THREAD_SIZE / PAGE_SIZE);
>
> I dont see any advantages of halving the max_threads like this at all.

If you take a look at the lines below this patch you will find:

max_threads = mempages / (16 * THREAD_SIZE / PAGE_SIZE);

/*
* we need to allow at least 20 threads to boot a system
*/
if(max_threads < 20)
max_threads = 20;

init_task.signal->rlim[RLIMIT_NPROC].rlim_cur = max_threads/2;
init_task.signal->rlim[RLIMIT_NPROC].rlim_max = max_threads/2;

Default RLIMIT_NPROC (ulimit -u) is calculated from max_threads, so the
maximum allowed number of procs is halved too.

> That doesnt solve the problem.

It actually does. Try it yourself.

This was actually the default in 2.4.7-ac1
http://marc.theaimsgroup.com/?l=linux-kernel&m=99617386529767&w=2

I don't know why it was doubled again afterwards.

IMHO it's better to set a lower default and let the sysadmin raise the
limit if needed, instead of the opposite.

> You should focus elsewhere.

Any suggestions for what I should focus on, if not on the default
maximum of user processes?

BTW... it's not the first time this has been discussed:
http://marc.theaimsgroup.com/?t=105769009100003&r=1&w=2

However, the limits should apply to daemons started from boot
scripts too, not only to logged-in users. That is why I'd like to see
the default max_threads limit halved.

--
Natanael Copa


2005-03-23 13:58:21

by Max Kellermann

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On 2005/03/22 12:49, Jan Engelhardt <[email protected]> wrote:
> What if the few procs that he may spawn also grab so much memory so
> your machine disappears in swap-t(h)rashing?

The number of processes is counted per user, but CPU time and memory
consumption are counted per process.

Getting around RLIMIT_CPU by simply forkbombing is too easy. This
renders RLIMIT_CPU unusable.

The memory limits aren't good enough either: if you set RLIMIT_DATA low
enough that RLIMIT_NPROC*RLIMIT_DATA makes memory forkbombs harmless,
it's probably too low for serious applications.

Now what about per-user (or per-session) CPU and memory limits?

Another idea: RLIMIT_FORK (number of allowed fork() calls in that
session). While that may not be useful for interactive login sessions,
I can imagine several situations where it could help (like qmail child
processes).

Max

2005-03-23 14:04:15

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 14:45 +0100, Erik Mouw wrote:
> On Wed, Mar 23, 2005 at 01:37:38PM +0100, Natanael Copa wrote:
> > On Wed, 2005-03-23 at 19:56 +0900, aq wrote:
> > > > /etc/limits does a better job at stopping forkbombs.
> >
> > but does not limit processes that are started from the boot scripts. So
> > if a buggy non-root service is exploited, an attacker would be able to
> > easily shut down the system.
>
> That's easy to fix: set limits from initrd or initramfs.

..or run "ulimit -u" early in the boot scripts.
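
Something like this near the top of the init scripts would do it (just a
sketch - the file name and the numbers are distro-specific guesses):

# e.g. in rc.sysinit, before any services are started
ulimit -S -u 256   # soft limit, inherited by every daemon forked from here
ulimit -H -u 1024  # hard ceiling; only root can raise it again

Every service started afterwards inherits those limits.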

What I suggest is doing the reverse: let the kernel be restrictive by
default and let distros or sysadmins open it up if they need more
processes.

--
Natanael Copa


2005-03-23 14:21:54

by Måns Rullgård

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

Natanael Copa <[email protected]> writes:

> well, the problem here has that stupid fork bombs like:
>
> :() { :|:& };:
>
> brings down almost all linux distro's while other *nixes survives.

I have seen a SunFire machine with 4GB RAM running Solaris grind to a
complete halt from a fork bomb.

--
Måns Rullgård
[email protected]

2005-03-23 14:23:56

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 14:53 +0100, Max Kellermann wrote:
> On 2005/03/22 12:49, Jan Engelhardt <[email protected]> wrote:
> > What if the few procs that he may spawn also grab so much memory so
> > your machine disappears in swap-t(h)rashing?
>
> The number of processes is counted per user, but CPU time and memory
> consumption is counted per process.

So limiting maximum number of processes will automatically limit CPU
time and memory consumption per user?

--
Natanael Copa


2005-03-23 14:28:47

by Max Kellermann

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On 2005/03/23 15:23, Natanael Copa <[email protected]> wrote:
> On Wed, 2005-03-23 at 14:53 +0100, Max Kellermann wrote:
> > The number of processes is counted per user, but CPU time and memory
> > consumption is counted per process.
>
> So limiting maximum number of processes will automatically limit CPU
> time and memory consumption per user?

No. I was talking about RLIMIT_CPU and RLIMIT_DATA, compared to
RLIMIT_NPROC. RLIMIT_NPROC limits the number of processes for that
user, nothing else (slightly simplified explanation).

Max

2005-03-23 14:43:47

by Jan Engelhardt

[permalink] [raw]
Subject: Re: forkbombing Linux distributions


>brings down almost all linux distro's while other *nixes survives.

Let's see if this can be confirmed.



Jan Engelhardt
--

2005-03-23 14:45:25

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 15:27 +0100, Max Kellermann wrote:
> On 2005/03/23 15:23, Natanael Copa <[email protected]> wrote:
> > On Wed, 2005-03-23 at 14:53 +0100, Max Kellermann wrote:
> > > The number of processes is counted per user, but CPU time and memory
> > > consumption is counted per process.
> >
> > So limiting maximum number of processes will automatically limit CPU
> > time and memory consumption per user?
>
> No. I was talking about RLIMIT_CPU and RLIMIT_DATA, compared to
> RLIMIT_NPROC. RLIMIT_NPROC limits the number of processes for that
> user, nothing else (slightly simplified explanation).

Yes, but if
RLIMIT_NPROC is per user and RLIMIT_CPU is per proc

the theoretical CPU limit per user is RLIMIT_NPROC * RLIMIT_CPU. So if
you halve RLIMIT_NPROC you will halve the theoretical maximum CPU
limit per user.

Same with memory.

I don't know if that really solves anything, but a misbehaving process
(fork bomb) would need to consume double the RAM or CPU to do the same
"damage" if RLIMIT_NPROC is halved.

--
Natanael Copa


2005-03-23 14:57:49

by Max Kellermann

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On 2005/03/23 15:44, Natanael Copa <[email protected]> wrote:
> Yes, but if
> RLIMIT_NPROC is per user and RLIMIT_CPU is per proc
>
> the theoretical CPU limit per user is RLIMIT_NPROC * RLIMIT_CPU. So if
> you half the RLIMIT_NPROC you will half the theoretical maximum CPU
> limit per user.
>
> Same with memory.

It's even worse with RLIMIT_CPU. Imagine a process forks
RLIMIT_NPROC-1 child processes. These consume all their CPU time, get
killed with SIGXCPU, and the parent process spawns new child processes
again with fresh RLIMIT_CPU counters (the parent process idled
meanwhile, consuming none of its assigned CPU cycles). Again and
again.

You see, RLIMIT_CPU is worthless in its current implementation.

Max

2005-03-23 15:04:58

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 15:43 +0100, Jan Engelhardt wrote:
> >brings down almost all linux distro's while other *nixes survives.
>
> Let's see if this can be confirmed.

open/free/netbsd survive. I guess OSX does too.

Gentoo (non-hardened), Red Hat, Mandrake, FC2 are vulnerable.

Debian stable survives, but they set the default proc limit to 256. Looks
like Suse is also not vulnerable. (I wonder if the daemons started from
bootscripts are vulnerable though.)

Solaris 10 seems to be vulnerable.

http://www.securityfocus.com/columnists/308

I think it would be nice if Linux could be mentioned together with the
*bsd's instead of the commercial *nixes next time :)

--
Natanael Copa


2005-03-23 15:18:52

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 15:52 +0100, Max Kellermann wrote:

> You see, RLIMIT_CPU is worthless in its current implementation.

You are right. Limiting CPU is probably not a good solution anyway.

http://marc.theaimsgroup.com/?l=linux-kernel&m=105808941823955&w=2

--
Natanael Copa


2005-03-23 17:06:32

by Nguyen Anh Quynh

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 23 Mar 2005 14:54:18 +0100, Natanael Copa <[email protected]> wrote:
> On Wed, 2005-03-23 at 22:04 +0900, aq wrote:
> > On Wed, 23 Mar 2005 13:37:38 +0100, Natanael Copa <[email protected]> wrote:
> > > > > This is an example of a program in C my friends gave me that forkbombs.
> > > > > My previous sysctl.conf hack can't stop this, but the /etc/limits solution
> > > > > enables the owner of the computer to do something about it as root.
> > > > >
> > > > > int main() { while(1) { fork(); } }
> > >
> > > I guess that "fork twice and exit" is worse than this?
> >
> > you meant code like this
> >
> > int main() { while(1) { fork(); fork(); exit(); } }
> >
> > is more harmful ? I dont see why (?)
>
> Because the parent disappears. When things like killall tries to kill
> the process its already gone but there are 2 new with new pids.
>

Are you sure? The above forkbomb will stop quickly after just several
spawns because of exit().

I agree that making the kernel more restrictive by default is a good approach.

thank you,
aq

2005-03-23 18:07:29

by Paul Jackson

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

> int main() { while(1) { fork(); fork(); exit(); } }
> ...
> the above forkbomb will stop quickly

Yep.

Try this forkbomb:

int main() { while(1) { if (!fork()) continue; if (!fork()) continue; exit(); } }

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <[email protected]> 1.650.933.1373, 1.925.600.0401

2005-03-23 18:45:13

by Nguyen Anh Quynh

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 23 Mar 2005 10:05:43 -0800, Paul Jackson <[email protected]> wrote:
> > int main() { while(1) { fork(); fork(); exit(); } }
> > ...
> > the above forkbomb will stop quickly
>
> Yep.
>
> Try this forkbomb:
>
> int main() { while(1) { if (!fork()) continue; if (!fork()) continue; exit(); } }
>

Yep, that is better, but the system can still be recovered by killall.

A little "sleep" will render the system completely useless, like this:

int main() { while(1) { if (!fork()) continue; if (!fork()) continue;
sleep(5); exit(0); } }

thank you,
aq

2005-03-23 19:43:11

by Kyle Moffett

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Mar 23, 2005, at 09:43, Jan Engelhardt wrote:
>> brings down almost all linux distro's while other *nixes survives.
>
> Let's see if this can be confirmed.

Here at my school we have the workstations running Debian testing. We
have edited /etc/security/limits.conf to have a much more restrictive
startup environment for user processes, limiting to 100 processes per
user and clamping maximum CPU time to 4 hours per process. It's not
failsafe, but we also have all of the kernel threads set at realtime
levels, with the IRQ threads specifically set at SCHED_RR 99, and we
have a sulogin-type process on tty12 at SCHED_RR 99.

Even in the event of the worst kind of forkbomb, the terminal is as
responsive as if nothing else were running and allows us to kill the
offending processes easily, because when the scheduler refuses to
interrupt the killall process to run anything else, no other forkbomb
processes get started.

I suppose a similar situation could be set up with a user-accessible
server and a rate-limited SSH daemon if necessary, although a ttyS0
console via a console server might work better. In any case, I think
that while there could perhaps be a better interface for user-limits
in the kernel, the existing one works fine for most purposes, when
combined with appropriate administrative tools.

Cheers,
Kyle Moffett

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCM/CS/IT/U d- s++: a18 C++++>$ UB/L/X/*++++(+)>$ P+++(++++)>$
L++++(+++) E W++(+) N+++(++) o? K? w--- O? M++ V? PS+() PE+(-) Y+
PGP+++ t+(+++) 5 X R? tv-(--) b++++(++) DI+ D+ G e->++++$ h!*()>++$ r
!y?(-)
------END GEEK CODE BLOCK------


2005-03-23 20:19:21

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Thu, 2005-03-24 at 03:44 +0900, aq wrote:
> On Wed, 23 Mar 2005 10:05:43 -0800, Paul Jackson <[email protected]> wrote:
> > > int main() { while(1) { fork(); fork(); exit(); } }
> > > ...
> > > the above forkbomb will stop quickly
> >
> > Yep.
> >
> > Try this forkbomb:
> >
> > int main() { while(1) { if (!fork()) continue; if (!fork()) continue; exit(); } }
> >
>
> yep, that is better. but system can still be recovered by killall.
>
> a little "sleep" will render the system completely useless, like this:
>
> int main() { while(1) { if (!fork()) continue; if (!fork()) continue;
> sleep(5); exit(0); } }

Interesting.

With the patch I suggested earlier, reducing the default max_threads to
half in kernel/fork.c, my system survived (without
touching /etc/security/limits.conf). Mail notification died because it
couldn't start any new threads, but that was the only thing that
happened.

--
Natanael Copa


2005-03-23 20:28:25

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-23 at 14:38 -0500, Kyle Moffett wrote:
> On Mar 23, 2005, at 09:43, Jan Engelhardt wrote:
> >> brings down almost all linux distro's while other *nixes survives.
> >
> > Let's see if this can be confirmed.
>
> Here at my school we have the workstations running Debian testing. We
> have edited /etc/security/limits.conf to have a much more restrictive
> startup environment for user processes, limiting to 100 processes per
> user and clamping maximum CPU time to 4 hours per process.

That's great. I was thinking of the default settings. (It's even
possible to lock down a Windows machine to be "secure".)

Also, daemons started from bootscripts are not aware of PAM and are
not affected by those settings. So an exploited security flaw in a
service would allow an attacker to bring the system down even if the
service is running as non-root.

Try running this from a boot script and you'll see that even though this
process setuid()s to a non-root user, it will be able to fork more than
100 processes:

/* this program should be started as root, but it changes uid itself */

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>

#define TTL 300
#define MAX 65536
#define UID 65534

int pids[MAX];
int main(int argc, char *argv[]) {
    int count = 0; pid_t pid;
    if (setuid(UID) < 0) {
        perror("setuid");
        exit(1);
    }
    while ((pid = fork()) >= 0 && count < MAX) {
        if (pid == 0) {
            /* child: stay around for TTL seconds, then quit */
            sleep(TTL);
            _exit(0);
        }
        pids[count++] = pid;    /* parent: remember the child */
    }
    printf("Forked %i new processes\n", count);
    while (count--) kill(pids[count], SIGTERM);
    return 0;
}


> In any case, I think
> that while there could perhaps be a better interface for user-limits
> in the kernel, the existing one works fine for most purposes, when
> combined with appropriate administrative tools.

My point is, the default maximum number of processes allowed per user is
too high. It's better to open up a restrictive default than to lock down
a generous one.

--
Natanael Copa


2005-03-23 20:55:18

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Thu, 2005-03-24 at 02:05 +0900, aq wrote:

> I agree that make kernel more restrictive by default is a good approach.

Thank you! For a moment I thought I was the only human on this planet
who thought that.

The next question is where and how, and what is an appropriate limit? I
have not heard any better suggestion than this:

--- kernel/fork.c.orig 2005-03-02 08:37:48.000000000 +0100
+++ kernel/fork.c 2005-03-21 15:22:50.000000000 +0100
@@ -119,7 +119,7 @@
* value: the thread structures can take up at most half
* of memory.
*/
- max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
+ max_threads = mempages / (16 * THREAD_SIZE / PAGE_SIZE);

/*
* we need to allow at least 20 threads to boot a system


(FYI: A few lines below the default RLIMIT_NPROC is calculated from
max_threads/2)

This would give the following default maximum number of processes, based
on the amount of low memory:

RAM      RLIMIT_NPROC
64MiB    256
128MiB   512
256MiB   1024
512MiB   2048
1GiB     4096

That would be sufficient for users to play their games, compile their
stuff etc., while protecting everyone from that classic shell fork
bomb by default.

Actually, Alan Cox tried this in the 2.4.7-ac1 kernel
http://marc.theaimsgroup.com/?l=linux-kernel&m=99617009115570&w=2

but I have no idea why it was doubled again afterwards.

--
Natanael Copa


2005-03-24 07:15:07

by Jan Engelhardt

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

>> >brings down almost all linux distro's while other *nixes survives.
>>
>> Let's see if this can be confirmed.
>
>open/free/netbsd survives. I guess OSX does too.

Confirmed. My OpenBSD install copes very well with forkbombs.
However, I _was able_ to spawn a lot of shells by default.
The essence is that the number of processes/threads within
a _session group_ (correct word?) is limited. That way, you can
start a ton of "/bin/sh"s from one another, i.e.:

\__ login jengelh
\__ /bin/sh
\__ /bin/sh
\__ /bin/sh
...

So I think that if you add a setsid() to your forkbomb,
you could once again bring a system - BSD this time - down.
Just a guess at this point; I would need to write a prog first :p



Jan Engelhardt
--

2005-03-24 10:05:51

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Thu, 2005-03-24 at 08:07 +0100, Jan Engelhardt wrote:
> >> >brings down almost all linux distro's while other *nixes survives.
> >>
> >> Let's see if this can be confirmed.
> >
> >open/free/netbsd survives. I guess OSX does too.
>
> Confirmed. My OpenBSD install copes very well with forkbombs.
> However, I _was able_ to spawn a lot of shells by default.
> The essence is that the number of processes/threads within
> a _session group_ (correct word?) is limited. That way, you can
> start a ton of "/bin/sh"s from one another, i.e.:
>
> \__ login jengelh
> \__ /bin/sh
> \__ /bin/sh
> \__ /bin/sh
> ...
>
> So I think that if you add a setsid() to your forkbomb,
> you could once again be able to bring a system - BSD this time - down.

I seriously doubt that. Try raising your maxproc setting (sysctl
kern.maxproc?) to something insane and try bombing again.

I tried to bring the box down by raising the limit to something similar
to the Linux default and running the classic ":() { :|:& };:". However,
the bomb was stopped by the maximum number of pipes and BSD survived.

If you don't hit the maximum number of processes you will hit another
limit.

--
Natanael Copa


2005-03-26 10:37:39

by Tux

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

I'm confused, are hard limits to RLIMIT_NPROC imposed on services
spawned by init before a user logs in?

2005-03-28 08:03:37

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Sat, 2005-03-26 at 10:37 +0000, Tux wrote:
> I'm confused, are hard limits to RLIMIT_NPROC imposed on services
> spawned by init before a user logs in?

There are no "hard" limits to RLIMIT_NPROC. However, on fork, children
inherit the parent's limits. Non-root users cannot raise the limit,
just lower it. So unless limits are set in the bootscripts, the defaults
set in kernel/fork.c will be used for services.
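
You can see the one-way behaviour from any shell (values arbitrary; the
exact error text may differ):

$ ulimit -H -u 100   # lower the hard limit for this shell
$ ulimit -H -u 200   # try to raise it again as non-root
bash: ulimit: max user processes: cannot modify limit: Operation not permitted

Everything forked from that shell is stuck with the lower limit.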

--
Natanael Copa


2005-03-28 17:30:56

by Matthieu Castet

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

> The memory limits aren't good enough either: if you set them low
> enough that memory-forkbombs are unperilous for
> RLIMIT_NPROC*RLIMIT_DATA, it's probably too low for serious
> applications.

Yes, if you want to run an application like openoffice.org you need at
least 200MB. If you want your system to be usable, you need at least 40
processes per user. So 40*200MB = 8GB, and I don't think you have all
this memory...

I think a per-user limit could be a solution.

Attached is a small fork-memory bomb.

Matthieu


Attachments:
(No filename) (514.00 B)
kha.c (64.00 B)

2005-03-28 18:02:50

by folkert

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

> > The memory limits aren't good enough either: if you set them low
> > enough that memory-forkbombs are unperilous for
> > RLIMIT_NPROC*RLIMIT_DATA, it's probably too low for serious
> > applications.
> yes, if you want to run application like openoffice.org you need at
> least 200Mo. If you want that your system is usable, you need at least 40 process per user. So 40*200 = 8Go, and it don't think you have all this memory...
> I think per user limit could be a solution.
> attached a small fork-memory bombing.
> Matthieu
> int main()
> {
> while(1){
> while(fork()){
> malloc(1);
> }
> }
> }

Improved version:

int main()
{
while(1) {
while(fork()) {
char *dummy = (char *)malloc(1);
*dummy = 1;
}
}
}


Folkert van Heusden

Looking for an IT or Finance job? Mail me about the possibilities!
+------------------------------------------------------------------+
|UNIX admin? Then give MultiTail (http://vanheusden.com/multitail/)|
|a try, it brings monitoring logfiles to a different level! See |
|http://vanheusden.com/multitail/features.html for a feature list. |
+------------------------------------------= http://www.unixsoftware.nl =-+
Phone: +31-6-41278122, PGP-key: 1F28D8AE
Get your PGP/GPG key signed at http://www.biglumber.com!

2005-03-28 19:33:52

by Jan Engelhardt

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

>> I think per user limit could be a solution.
>> attached a small fork-memory bombing.

I already posted one, posts ago.

>>[snip]

>Imporved version:
>[snip]
>char *dummy = (char *)malloc(1);

That cast is not supposed to be there, is it? (To anticipate it: it's bad.)


Jan Engelhardt
--
No TOFU for me, please.

2005-03-28 19:40:00

by folkert

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

> I already posted one, posts ago.
> >>[snip]
> >Imporved version:
> >[snip]
> >char *dummy = (char *)malloc(1);
> That cast is not supposed to be there, is it? (To pretake it: it's bad.)

What is so bad about it?


Folkert van Heusden

Looking for an IT or Finance job? Mail me about the possibilities!
+------------------------------------------------------------------+
|UNIX admin? Then give MultiTail (http://vanheusden.com/multitail/)|
|a try, it brings monitoring logfiles to a different level! See |
|http://vanheusden.com/multitail/features.html for a feature list. |
+------------------------------------------= http://www.unixsoftware.nl =-+
Phone: +31-6-41278122, PGP-key: 1F28D8AE
Get your PGP/GPG key signed at http://www.biglumber.com!

2005-03-28 20:29:33

by Renate Meijer

[permalink] [raw]
Subject: Re: forkbombing Linux distributions


On Mar 28, 2005, at 9:39 PM, [email protected] wrote:

>> I already posted one, posts ago.
>>>> [snip]
>>> Imporved version:
>>> [snip]
>>> char *dummy = (char *)malloc(1);
>> That cast is not supposed to be there, is it? (To pretake it: it's
>> bad.)
>
> What is so bad about it?

Read the FAQ at http://www.eskimo.com/~scs/C-faq/q7.7.html

Malloc() returns a void*, so casts are superfluous if stdlib.h is
included (as it should be). Hence if one typecasts the result of malloc
in order to suit any particular type, the real bug is probably a
missing "#include <stdlib.h>", which the cast is (effectively) hiding.



2005-03-28 20:44:19

by Willy Tarreau

[permalink] [raw]
Subject: [BORED] Re: forkbombing Linux distributions

Please,

would you be so kind as to stop debugging your fork-bombing tools with the
whole list in CC? I think that most of us are not interested in knowing
whether the cast before the malloc() is necessary or not. This is LKML,
not FBTML. There are lots of ways to locally DoS Linux; you don't need
to fine-tune your tools here in public.

Thanks in advance,
Willy

On Mon, Mar 28, 2005 at 10:35:00PM +0200, Renate Meijer wrote:
>
> On Mar 28, 2005, at 9:39 PM, [email protected] wrote:
>
> >>I already posted one, posts ago.
> >>>>[snip]
> >>>Imporved version:
> >>>[snip]
> >>>char *dummy = (char *)malloc(1);
> >>That cast is not supposed to be there, is it? (To pretake it: it's
> >>bad.)
> >
> >What is so bad about it?
>
> Read the FAQ at http://www.eskimo.com/~scs/C-faq/q7.7.html
>
> Malloc() returns a void*, so casts are superfluous if stdlib.h is
> included (as it should be). Hence if one typecasts the result of malloc
> in order to suit any particular type, the real bug is probably a
> lacking "#iinclude <stdlib.h>", which the cast (effectively) is hiding.
>
>
>

2005-03-29 12:31:42

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Mon, 2005-03-28 at 19:28 +0200, Matthieu Castet wrote:
> > The memory limits aren't good enough either: if you set them low
> > enough that memory-forkbombs are unperilous for
> > RLIMIT_NPROC*RLIMIT_DATA, it's probably too low for serious
> > applications.
>
> yes, if you want to run application like openoffice.org you need at
> least 200Mo. If you want that your system is usable, you need at least 40 process per user. So 40*200 = 8Go, and it don't think you have all this memory...
>
> I think per user limit could be a solution.

You have /etc/limits and /etc/security/limits.conf.

I think it would solve many problems by simply lowering the default
max_threads in kernel/fork.c. RLIMIT_NPROC is calculated from this value.

--- kernel/fork.c.orig 2005-03-02 08:37:48.000000000 +0100
+++ kernel/fork.c 2005-03-21 15:22:50.000000000 +0100
@@ -119,7 +119,7 @@
* value: the thread structures can take up at most half
* of memory.
*/
- max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
+ max_threads = mempages / (16 * THREAD_SIZE / PAGE_SIZE);

/*
* we need to allow at least 20 threads to boot a system

I don't think this will cause many problems for most users. (Compare the
default maximum process limit in the BSDs and OSX.)

This will also limit daemons/services started from boot scripts by
default. /etc/limits and /etc/security/limits.conf do not.

If it does cause problems for extreme users, they can easily raise the
limits either in initrd and/or using /proc/sys/kernel/threads-max (or
sysctl).
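
For example (the value is arbitrary), either of

sysctl -w kernel.threads-max=16384
echo 16384 > /proc/sys/kernel/threads-max

run as root raises the ceiling again at runtime.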

BTW... does anyone know *why* the default max number of processes is so
high in Linux?

--
Natanael Copa


2005-03-30 18:40:13

by Jacek Luczak

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

--- linux-2.6.12-rc1/kernel/fork.c 2005-03-29 00:53:37.000000000 +0200
+++ linux/kernel/fork.c 2005-03-29 00:54:19.000000000 +0200
@@ -57,6 +57,8 @@

int max_threads; /* tunable limit on nr_threads */

+int max_user_threads; /* tunable limit on nr_threads per user */
+
DEFINE_PER_CPU(unsigned long, process_counts) = 0;

__cacheline_aligned DEFINE_RWLOCK(tasklist_lock); /* outer */
@@ -146,6 +148,21 @@
if(max_threads < 20)
max_threads = 20;

+ /*
+ * The default maximum number of threads per user.
+ *
+ * FIXME: this value is based on my experiments and is
+ * rather good on desktop system; it should be fixed to
+ * the more universal value.
+ */
+ max_user_threads = 300;
+
+ /*
+ * default value is too high - set to max_threads
+ */
+ if (max_threads < max_user_threads)
+ max_user_threads = max_threads;
+
init_task.signal->rlim[RLIMIT_NPROC].rlim_cur = max_threads/2;
init_task.signal->rlim[RLIMIT_NPROC].rlim_max = max_threads/2;
init_task.signal->rlim[RLIMIT_SIGPENDING] =
@@ -179,6 +196,16 @@
return tsk;
}

+/*
+ * This is used to get number of user processes
+ * from current running task.
+ */
+static inline int get_user_processes(void)
+{
+ return atomic_read(&current->user->processes);
+}
+#define user_nr_processes get_user_processes()
+
#ifdef CONFIG_MMU
static inline int dup_mmap(struct mm_struct * mm, struct mm_struct * oldmm)
{
@@ -869,6 +896,13 @@
goto fork_out;

retval = -ENOMEM;
+
+ /*
+ * Stop creation of new user process if limit is reached.
+ */
+ if ( (current->user != &root_user) && (user_nr_processes >= max_user_threads) )
+ goto max_user_fork;
+
p = dup_task_struct(current);
if (!p)
goto fork_out;
@@ -1109,6 +1143,9 @@
return ERR_PTR(retval);
return p;

+max_user_fork:
+ retval = -EAGAIN;
+ return ERR_PTR(retval);
bad_fork_cleanup_namespace:
exit_namespace(p);
bad_fork_cleanup_keys:
--- linux-2.6.12-rc1/kernel/sysctl.c 2005-03-29 00:53:38.000000000 +0200
+++ linux/kernel/sysctl.c 2005-03-29 00:54:19.000000000 +0200
@@ -56,6 +56,7 @@
extern int sysctl_overcommit_memory;
extern int sysctl_overcommit_ratio;
extern int max_threads;
+extern int max_user_threads;
extern int sysrq_enabled;
extern int core_uses_pid;
extern char core_pattern[];
@@ -642,6 +643,14 @@
.mode = 0644,
.proc_handler = &proc_dointvec,
},
+ {
+ .ctl_name = KERN_MAX_USER_THREADS,
+ .procname = "user_threads_max",
+ .data = &max_user_threads,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },

{ .ctl_name = 0 }
};
--- linux-2.6.12-rc1/include/linux/sysctl.h 2005-03-29 00:54:06.000000000 +0200
+++ linux/include/linux/sysctl.h 2005-03-29 00:54:36.000000000 +0200
@@ -136,6 +136,7 @@
KERN_UNKNOWN_NMI_PANIC=66, /* int: unknown nmi panic flag */
KERN_BOOTLOADER_TYPE=67, /* int: boot loader type */
KERN_RANDOMIZE=68, /* int: randomize virtual address space */
+ KERN_MAX_USER_THREADS=69, /* int: Maximum nr of threads per user in the system */
};



Attachments:
user_threads_limit.patch (2.94 kB)
difrost.vcf (304.00 B)

2005-03-30 23:47:19

by Felipe Alfaro Solana

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Mon, 28 Mar 2005 19:28:20 +0200, Matthieu Castet
<[email protected]> wrote:
> > The memory limits aren't good enough either: if you set them low
> > enough that memory-forkbombs are unperilous for
> > RLIMIT_NPROC*RLIMIT_DATA, it's probably too low for serious
> > applications.
>
> yes, if you want to run application like openoffice.org you need at
> least 200Mo. If you want that your system is usable, you need at least 40 process per user. So 40*200 = 8Go, and it don't think you have all this memory...
>
> I think per user limit could be a solution.
>
> attached a small fork-memory bombing.

Doesn't do anything on my machine:

# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 4095
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 100
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

it tops at 100 processes and eats a little CPU... although the system
is under load, it's completely responsive.

2005-03-31 06:55:44

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Thu, 2005-03-31 at 01:46 +0200, Felipe Alfaro Solana wrote:
> On Mon, 28 Mar 2005 19:28:20 +0200, Matthieu Castet
> <[email protected]> wrote:
> > > The memory limits aren't good enough either: if you set them low
> > > enough that memory-forkbombs are unperilous for
> > > RLIMIT_NPROC*RLIMIT_DATA, it's probably too low for serious
> > > applications.
> >
> > yes, if you want to run application like openoffice.org you need at
> > least 200Mo. If you want that your system is usable, you need at least 40 process per user. So 40*200 = 8Go, and it don't think you have all this memory...
> >
> > I think per user limit could be a solution.
> >
> > attached a small fork-memory bombing.
>
> Doesn't do anything on my machine:
>
> # ulimits -a
...

> it tops at 100 processes and eats a little CPU... although the system
> is under load, it's completely responsive.

100 processes is low. I often have over 150.

I use the patch mentioned here:
http://marc.theaimsgroup.com/?l=linux-kernel&m=111209980932023&w=2
(it sets the default max_threads and RLIMIT_NPROC to half of the current
default)

and my system survived.

ncopa@nc ~ $ ulimit -u
4093

(I have 1 GiB RAM)

--
Natanael Copa


2005-03-31 08:08:38

by Jacek Luczak

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

Natanael Copa wrote:
> On Thu, 2005-03-31 at 01:46 +0200, Felipe Alfaro Solana wrote:
>
>>On Mon, 28 Mar 2005 19:28:20 +0200, Matthieu Castet
>><[email protected]> wrote:
>>
>>>>The memory limits aren't good enough either: if you set them low
>>>>enough that memory-forkbombs are unperilous for
>>>>RLIMIT_NPROC*RLIMIT_DATA, it's probably too low for serious
>>>>applications.
>>>
>>>yes, if you want to run application like openoffice.org you need at
>>>least 200Mo. If you want that your system is usable, you need at least 40 process per user. So 40*200 = 8Go, and it don't think you have all this memory...
>>>
>>>I think per user limit could be a solution.
>>>
>>>attached a small fork-memory bombing.
>>
>>Doesn't do anything on my machine:
>>
>># ulimits -a
>
> ...
>
>
>>it tops at 100 processes and eats a little CPU... although the system
>>is under load, it's completely responsive.
>
>
> 100 processes is low. I often have over 150.

On a desktop system 150 processes is low too. 250 is a safe and
sufficient value.

> I use the patch mentioned here:
> http://marc.theaimsgroup.com/?l=linux-kernel&m=111209980932023&w=2
> (it set the default max_threads and RLIMIT_NPROC to half of the current
> default)
>
> and my system survived.

Hmmm.... mine didn't when nearly all users started forkbombing!

I think that changing the default max_threads is not a good idea. It
might solve many problems, but forkbombing requires something more universal.

Jacek

2005-03-31 10:01:11

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Wed, 2005-03-30 at 19:40 +0200, Jacek Łuczak wrote:
> Hi
>
> I made some tests and almost all Linux distros go down while freebsd
> survives! Forkbombing is a big problem, but I don't think that something like
>
> max_threads = mempages / (16 * THREAD_SIZE / PAGE_SIZE);
>
> is a good solution!!!
> How about adding max_user_threads to the kernel? It could be tunable via
> the proc filesystem. The limit is set only for users.
> I made a fast :) patch - see below - and tested it on 2.6.11,
> 2.6.11ac4, 2.6.12rc1... works great!!! New forks are stopped in
> copy_process() before dup_task_struct() and EAGAIN is returned. The system
> works without any problems and root can killall -9 the forkbomb.
>

I really liked this approach because:

* it is similar to other *nixes (freebsd, openbsd)

* it is easily tunable (/proc or sysctl)

* it is stupid simple - small chance that things can go wrong

* it solves *many* things in comparison to the possible problems it
causes.

The only thing that could be a problem, that I can think of, is that you
cannot raise the limit through /etc/security/limits.conf or similar. E.g.
you might want all setuid() services/daemons to run with a low limit but
want to give user Bob more processes. (I don't know if this is a realistic
situation though.)

The default value could be something like:

max_user_threads = max_threads / 2

or:

max_user_threads = max_threads / 4;

With a lower limit to 20 or something, just like max_threads (in case
you try run Linux on 2MiB RAM)

If a fixed value (like 300, 512, 2000) is used, then systems with a low
amount of RAM will probably be vulnerable to the forkbomb attack.

--
Natanael Copa


2005-03-31 17:12:01

by Lee Revell

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Thu, 2005-03-31 at 12:00 +0200, Natanael Copa wrote:
> On Wed, 2005-03-30 at 19:40 +0200, Jacek Łuczak wrote:
> >
> > I made some tests and almost all Linux distros go down while freebsd
> > survives! Forkbombing is a big problem, but I don't think that something like

> I really liked this approach because:

Christ, why is this idiotic thread still going? No one is going to
change the kernel, because the problem is trivial to solve in userspace!
Didn't you ever look up what a ulimit is?

If you consider your distro's default ulimits unreasonable, file a bug
report with them. But no one is going to make Linux "restrictive by
default" to make life easier for people who don't bother to RTFM.

Lee


2005-04-05 09:54:00

by Natanael Copa

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

On Thu, 2005-03-31 at 12:11 -0500, Lee Revell wrote:

> Didn't you ever look up what a ulimit is?

Of course I did. I just think that ulimit (or other userspace tools)
should be used to *raise* the limit if you need more, not the reverse.

> If you consider your distro's default ulimits unreasonable, file a bug
> report with them. But no one is going to make Linux "restrictive by
> default" to make life easier for people who don't bother to RTFM.

I already suggested ulimit solutions for my distro. They think that if
this is needed the kernel devs would do something (i.e. it's a kernel
problem), while the kernel devs say this is a userspace problem.

I wouldn't bother if this was a problem for one or two distros only.
Now, almost all distros seem to be vulnerable by default.

I wouldn't bother if other *nixes set this limit in userspace.
(The BSDs set the limit lower in the kernel and let users who need more
raise it with userland tools.)

I wouldn't bother if this didn't give Linux a bad reputation.

I'm sorry if I made some people upset.

--
Natanael Copa

2005-04-05 10:22:29

by Jacek Luczak

[permalink] [raw]
Subject: Re: forkbombing Linux distributions

Natanael Copa napisa?(a):
> On Thu, 2005-03-31 at 12:11 -0500, Lee Revell wrote:
>
>
>>Didn't you ever look up what a ulimit is?
>
>
> ofcourse i did. I just think that ulimit (or other userspace tools)
> should be used to *raise* the limit if you need more. Not the reverse.
>
>
>>If you consider your distro's default ulimits unreasonable, file a bug
>>report with them. But no one is going to make Linux "restrictive by
>>default" to make life easier for people who don't bother to RTFM.
>
>
> I already suggested ulimit solutions for my distro. They think that if
> this is needed the kernel dev's would do something (ie its a kernel
> problem) while the kernel dev's says this is a userspace prob.
>
> I wouldn't bother if this was a problem for one or two distros only.
> Now, almost all distros seems to be vulnerable by default.
>
> I wouldn't bother if other *nixes would set this limit in userspace.
> (the BSD's set the limit lower in kernel and let users who need more
> raise with userland tools)
>
> I wouldn't bother if this wouldn't give Linux a bad reputation.
>
> I'm Sorry if I made some people upset.
>
> --
> Natanael Copa
>
>
You are absolutely right!!! Even if a 'good' ulimit is set, there isn't
anything bad in adding a ulimit-like mechanism to the kernel.

Long live ... hmm... the kernel, not ulimit :)

Best regards,
Jacek