Hi,
:(){ :|:&};:
Paste that into bash and watch linux die. (2.4.21 stock)
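For anyone squinting at the line noise: it is just a shell function that
pipes into itself and backgrounds the result. Written with a longer name it
looks like this (same effect, so don't run it on a box you care about):

  bomb() {
      # each call starts two more copies and returns immediately
      bomb | bomb &
  }
  bomb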
I've seen some methods of dealing with infinitely forking processes, but
short of solving the Halting Problem I doubt we will ever find a perfect
solution to _preventing_ them. So I had a few ideas that might help an
admin _deal_ with a fork storm while it is occurring, so that the S-U-B
approach (Alt-SysRq Sync, Unmount, reBoot) can be avoided.
I also found it interesting that alt-sysrq-S took about 5 minutes to
complete the sync. Is there some sort of priority issue there? I would
think that kernel operations should take priority over all the little
annoying processes going crazy. Also, eventually, the OOM killer started killing
off stuff, but I noticed that it would repeatedly attempt to kill the
same pid, such as gpm's pid, up to 10 times or so. Was it not getting
enough CPU time to die, or something?
Anyway, here are my half-baked ideas, maybe someone else has more
suggestions:
1) Alt-SysRq-<x>, then type the name of a process and hit enter. All
processes matching this name are killed. Drawback -- if you use this to
kill e.g. bash, all your login shells will die too, putting a desktop
user back at a login prompt. This is ok for servers, not for desktops.
This would solve shell bombs but not compiled bombs -- a process would
just overwrite argv[0] after it forks with random gibberish to defeat
it.
2) Alt-SysRq-<x> - Kill all processes that share the most popular
process size in the system table. This way even if the name is changed,
if there is a process making infinite copies of itself, since all the
processes are carrying out the same action, they may have the same size.
This is speculation and may be wrong.
3) Alt-SysRq-<x> - Kill the process that has the most descendant
processes. This could be made "smart" so that it only kills off the
part of the process tree where it really starts branching off, which
is a likely candidate for where the fork bomb started.
4) Since processes are created with increasing pids, a "killall" against
a fork bomb does nothing. It simply starts killing processes matching
that name starting at the lowest pid. But the processes which are
forking at higher pids eventually wrap around and get lower pids again,
which makes you end up with a forkbomb ring buffer. Not too effective
at getting rid of the problem.
What about some sort of reverse killall (working from the highest pid
down), or a killall with specific capabilities tailored to taking out
fork bombs? (A rough sketch of the former follows after idea 6.) My
roommate suggested perhaps a "killall-bomb" may be in order: a killall
that forks infinitely just like the bomb does, but also works to kill off
the bomb by filling up the process table itself. Eventually the predators
should exhaust their prey, and then expire themselves with nothing left
to eat.
5) Alt-SysRq-<x> - Until this key combination is pressed again, when a
process tries to fork(), kill it instead. After a couple seconds, all
the forking annoyances should be gone. You may lose some legitimate
processes that try to fork within that interval, but you will most likely
retain control of your system with little interruption. (?)
6) A fork flag in a process header? Perhaps, like the digital copy flag
used to impose restrictions on consumer devices, a process should only
be allowed to fork a set number of times before any further fork returns
-1.
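For (4), a poor man's reverse killall can be faked today with ps and sort,
working from the highest pid downwards. This assumes the bomb still shows
up under a known name ("bash" here), and of course when the table is
completely full you may not even be able to fork ps, which is the whole
problem:

  ps -eo pid,comm | awk '$2 == "bash" { print $1 }' | sort -rn | xargs kill -9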
When I am in sysadmin mode, the very last thing on earth I want to do is
admit defeat to errant programs running on my system. Perhaps the Linux
kernel can be made more resilient to fork bomb behavior in the first
place, but if not, it would certainly help to be able to take care of
the problem once it is already happening, aside from a punch of the reset
button.
Comments appreciated!
See ya,
--
Ryan Underwood, <nemesis at icequake.net>, icq=10317253
Ryan Underwood <[email protected]> wrote:
>
> [Fork bomb] and watch linux die. (2.4.21 stock)
That's what per-user process limits are for. Doesn't matter if it's a
shellscript or something else; any system without limits set is vulnerable.
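For example, something along these lines in the user's environment (the
numbers are illustrative only; pick values that fit your workload):

  ulimit -Su 256      # soft limit on processes for this user
  ulimit -Hu 256      # hard limit, so the user cannot raise it back up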
Charles
--
-----------------------------------------------------------------------
Charles Cazabon <[email protected]>
GPL'ed software available at: http://www.qcc.ca/~charlesc/software/
-----------------------------------------------------------------------
Ryan Underwood wrote:
>:(){ :|:&};:
>
>Paste that into bash and watch linux die. (2.4.21 stock)
It's a base Red Hat kernel; after the "cannot allocate memory" errors, my
system returned to normal operation and it didn't die.
Is this the type of behavior you were looking for, or am I off base?
Linux sloth 2.4.20-8 #1 Thu Mar 13 17:54:28 EST 2003 i686 i686 i386
GNU/Linux
$ :(){ :|:&};:
[1] 3071
$
[1]+ Done : | :
$ -bash: fork: Cannot allocate memory
-bash: fork: Cannot allocate memory
-bash: fork: Cannot allocate memory
-bash: fork: Cannot allocate memory
Jon
Hi,
> That's what per-user process limits are for. Doesn't matter if it's a
> shellscript or something else; any system without limits set is
> vulnerable.
I agree, but it would also be nice to have a way to clean up after the
fact without giving up the box. My limit is set at 2047 processes
which, while being a lot, doesn't seem like enough to guarantee a dead
box. (Don't many busy systems have more than this number running at any
given time?)
> It's a base redhat kernel, after the cannot allocate memory, my system
> returned to normal operation and it didnt die.
> Is this the type of behavior you were looking for? or am i off base?
>
Nope, on my system running stock 2.4.21, after hitting enter and waiting
about 2 seconds, the system is frozen. Telnet connects but never gets a
shell. None of the SysRq process-killing combos have any effect. After
a few failed killalls (which eventually killed the one shell I was able
to get), and with Alt-SysRq-S never completing the sync, I gave up and
hit Alt-SysRq-B.
What does ulimit -u say on your system? 2047 on mine.
--
Ryan Underwood, <nemesis at icequake.net>, icq=10317253
Ryan Underwood wrote:
>What does ulimit -u say on your system? 2047 on mine.
$ ulimit -u
3072
Have you tried this on any 2.5.x kernels? Just curious to see what it
does, I plan on giving it a go later.
Interesting! Kills a Slackware 9.0 stock system nicely, and kills the
same system nicely with 2.5.74 configured via make oldconfig, answering
no to all questions.
--
/"\ / For information and quotes, email us at
\ / ASCII RIBBON CAMPAIGN / [email protected]
X AGAINST HTML MAIL / http://www.lrsehosting.com/
/ \ AND POSTINGS / [email protected]
-------------------------------------------------------------------------
Ryan Underwood wrote:
> :(){ :|:&};:
> Paste that into bash and watch linux die. (2.4.21 stock)
1023 here. System still died. It is a PII 233-ECC with 256 megs of ram
though.
--
/"\ / For information and quotes, email us at
\ / ASCII RIBBON CAMPAIGN / [email protected]
X AGAINST HTML MAIL / http://www.lrsehosting.com/
/ \ AND POSTINGS / [email protected]
-------------------------------------------------------------------------
Ryan Underwood wrote:
> What does ulimit -u say on your system? 2047 on mine.
Hi,
On Tue, Jul 08, 2003 at 04:43:18PM -0400, jhigdon wrote:
>
> Have you tried this on any 2.5.x kernels? Just curious to see what it
> does, I plan on giving it a go later.
I haven't, but a previous poster indicated that they had (2.5.74) with
the same results.
I wonder if we could find an upper limit on the number of allowable
processes that would leave the box in a workable state? Unfortunately,
I don't have a spare box to test such things on at the moment. ;)
Thanks,
--
Ryan Underwood, <nemesis at icequake.net>, icq=10317253
Hello,
Debian 3.0 (woody) stable. Kernel 2.4.21 ...
bash 2.05a
$ ulimit -u
1791
---
No problems here. In 1-2 seconds the processes die and all is OK; I can
work with the shell without any problem.
Seeya
On Maw, 2003-07-08 at 19:55, Charles Cazabon wrote:
> Ryan Underwood <[email protected]> wrote:
> >
> > [Fork bomb] and watch linux die. (2.4.21 stock)
>
> That's what per-user process limits are for. Doesn't matter if it's a
> shellscript or something else; any system without limits set is vulnerable.
In general turning on vm overcommit protection on a -ac tree should
be sufficient - per user limits are better
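Roughly (mode 2 being the no-overcommit mode in the accounting patches,
if memory serves; the bomb's forks then start failing with ENOMEM long
before the box becomes unusable):

  echo 2 > /proc/sys/vm/overcommit_memory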
I set ulimit -u 1791 and the box keeps running (2.4.20-gentoo-r5), but we
still need the problem corrected; any other user can run their DoS and
crash the box. Is there any way to set ulimits for all users, fixed, not
by sourcing a bashrc or something like that? Because the user can delete
the line in .bashrc and that's it.
Max
--
uname -a: Linux garaged 2.4.20-gentoo-r5 #6 SMP Wed Jun 4 15:32:53 Local time zone must be set--see zic m i686 Pentium III (Coppermine) GenuineIntel GNU/Linux
-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GS/ d-s:a-28C++ILHA+++P+L++>+++E---W++N*o--K-w++++O-M--V--PS+PEY--PGP++t5XRtv++b++DI--D-G++e++h-r+y**
------END GEEK CODE BLOCK------
gpg-key: http://garaged.homeip.net/gpg-key.txt
> From: Max Valdez [mailto:[email protected]]
>
> Is there any way to set ulimits for all users, fixed, not by sourcing a
> bashrc or something like that?
/etc/profile.
This is an admin/distro problem.
Iñaky Pérez-González -- Not speaking for Intel -- all opinions are my own (and my fault)
On Tuesday, 8 July 2003, 19:18, Max Valdez wrote:
> Is there any way to set ulimits for all users, fixed, not by sourcing a
> bashrc or something like that?
/etc/security/limits.conf, on my box.
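Roughly like this (values made up, adjust to taste); pam_limits has to be
enabled in the relevant /etc/pam.d service files for it to apply at login:

  # /etc/security/limits.conf
  # hard cap, so users cannot raise it back up themselves
  *       hard    nproc   256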
- Svein Ove Aas
On Maw, 2003-07-08 at 18:18, Max Valdez wrote:
> Is there any way to set ulimits for all users, fixed, not by sourcing a
> bashrc or something like that?
You can set the limits using the pam limits module and set the hard
limit so the user cannot revert it. Or with -ac just set no overcommit
and it seems fine
HP/UX and some other systems have kernel parameters that can be set for
the maximum number of processes and/or threads. I'm not a big fan of it
myself, since there are instances where what looks like a 'fork attack'
is actually legitimate threads or forks.
Not to mention it is just tacky... A better solution is:
Know thy users, .... and thy baseball bat.
-- Sir Ace
On Tue, 8 Jul 2003, Ryan Underwood wrote:
>
> I wonder if we could find an upper limit on the number of allowable
> processes that would leave the box in a workable state?
On an Athlon 600 running 2.4.20, with ulimit -u 2047, the box recovers
from the fork bomb with no problem.
On my Celeron 800 running 2.4.21, with ulimit -u 1500, the box recovers
after 4-5 minutes.
With ulimit -u 2047 (Debian's default), the box fights for 5-10 minutes
and no longer responds after 10. I can see the fork errors in the
terminal as usual, but the machine no longer responds. (I waited more
than half an hour.)
--
Ryan Underwood, <nemesis at icequake.net>, icq=10317253
> Nope, on my system running stock 2.4.21, after hitting enter, wait about 2
> seconds, and the system is frozen. Telnet connects but never gets a
> shell. None of the SysRq process-killing combos have any effect. After
> a few failed killalls (which eventually killed the one shell I was able
> to get), and Alt-SysRq-S never completing the sync, I gave up and
> Alt-SysRq-B.
I've used killall -STOP several times when something fills up the process
table. Once they're all in T (stopped) state, I killall -KILL them. Seems
to work for me.
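i.e. something like the following, assuming the bomb is running under a
known name ("bash" here, adjust as needed):

  killall -STOP bash    # freeze every copy first so they stop forking
  killall -KILL bash    # now reap them; stopped processes can't fork away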
--
Lab tests show that use of micro$oft causes cancer in lab animals
The problem can be attacked in two steps:
1. Stop new forks from being created
2. Kill the process causing the forks
The current ulimit implementation, AFAIK, can only control processes
created from the current moment onwards. Processes which have already
started will continue creating forks; new processes created by those
forks will inherit the limit. Basically, it does not ensure that the
first step is completely carried out. So if your rate of killing is lower
than the rate at which processes are being created, and resources are
exhausted, your system hangs.
There was an RFC patch, "[RFC][PATCH 2.5.70] Dynamically tunable
maxusers, maxuprc and max_pt_cnt", posted on 2003-06-06. It implements
maxuprc (the maximum number of processes per user) as a dynamically
tunable parameter. It can be useful to overcome this problem: by setting
maxuprc to a very low value, new process creation is stopped, and then
root can kill the erring processes (the limit does not apply to root).
There is no race against time, since there is no chance of a new process
being created once this value has been reduced.
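The cleanup would then be something like the following, done as root (the
actual tunable name and path depend on how the patch exposes it -- treat
this as pseudocode rather than the real interface):

  # hypothetical sysctl path; see the RFC patch for the real one
  echo 1 > /proc/sys/kernel/maxuprc      # stop per-user process creation
  pkill -KILL -u victim                  # root is exempt, clean up at leisure
  echo 2047 > /proc/sys/kernel/maxuprc   # restore the normal limit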
cheers ..
Arvind ...
On Tuesday 08 July 2003 22:28, Ryan Underwood wrote:
> Nope, on my system running stock 2.4.21, after hitting enter and waiting
> about 2 seconds, the system is frozen.
>
> What does ulimit -u say on your system? 2047 on mine.
mb@lfs:~> ulimit -u
2047
my system doesn't freeze.
It becomes _very_ slow, but I can kill the forking processes.
After killing, the system runs just fine.
I'm using 2.4.21.
--
Regards Michael Buesch
http://www.8ung.at/tuxsoft
13:34:31 up 12 min, 2 users, load average: 1.32, 1.22, 0.74
No such thing exists. I can have 10,000 processes doing nothing and
have a load average of 0.00. I can have 100 processes each sucking cpu
as fast as the electrons flow and have a dead box.
Learn how to manage resource limits and you can tuck another feather
into your fledgeling sysadmin hat ;)
david
Ryan Underwood wrote:
>I wonder if we could find an upper limit on the number of allowable
>processes that would leave the box in a workable state? Unfortunately,
>I don't have a spare box to test such things on at the moment. ;)
Hi David,
On Wed, Jul 09, 2003 at 11:07:55AM -0400, David Ford wrote:
> No such thing exists. I can have 10,000 processes doing nothing and
> have a load average of 0.00. I can have 100 processes each sucking cpu
> as fast as the electrons flow and have a dead box.
Well, like I said, in this specific case we are talking about a fork bomb,
not a bunch of idle processes. My question is what upper limit to set,
in order to ensure that processes that do nothing but "while (1)
fork();" cannot take down the system. Apparently 2047 is too high for
2.4.21, at least on my system. But a slower box manages a 2047 ulimit
fine with a 2.4.20 kernel.
> Learn how to manage resource limits and you can tuck another feather
> into your fledgeling sysadmin hat ;)
I already know how to manage the limits, but I am asking why the system
seems to hang indefinitely when a maximum of 2047 is set, but not when
e.g. 1500 is set. Do you have any idea? Why would there be such a
large change in behavior with such a small change in parameter?
Furthermore, why does my slower system (600 MHz vs. 800 MHz) running
2.4.20 kill off a fork bomb at a 2047 ulimit almost instantly, while
2.4.21 takes half an hour or more, at which point I give up?
--
Ryan Underwood, <nemesis at icequake.net>, icq=10317253
On Tue, 2003-07-08 at 17:43, jhigdon wrote:
> Have you tried this on any 2.5.x kernels? Just curious to see what it
> does, I plan on giving it a go later.
Athlon 1.0GHz with 1GB of memory running 2.5.74-bk6:
$ ulimit -u
2047
The system does not freeze; it becomes slow, but after a few
seconds it returns to normal.
--
Luiz Fernando N. Capitulino
<[email protected]>
<http://www.telecentros.sp.gov.br>
On Sunday 13 July 2003 05:10, Riley Williams wrote:
>Hi all.
>
>It sounds like what is required is some way of basically saying
>"Don't permit new processes to be created if CPU usage > 75%"
>(where the 75% is configurable but less than 100%).
>
Which would immediately cause problems for anyone running setiathome.
My CPU usage has been 100% for 5 years.
--
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz 512M
99.26% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.
On Sun, 2003-07-13 10:10:22 +0100, Riley Williams <[email protected]>
wrote in message <[email protected]>:
> Hi all.
>
> It sounds like what is required is some way of basically saying
> "Don't permit new processes to be created if CPU usage > 75%"
> (where the 75% is configurable but less than 100%).
That won't allow a load > 0.75. Bad idea.
MfG, JBG
--
Jan-Benedict Glaw [email protected] . +49-172-7608481
"Eine Freie Meinung in einem Freien Kopf | Gegen Zensur | Gegen Krieg
fuer einen Freien Staat voll Freier Bürger" | im Internet! | im Irak!
ret = do_actions((curr | FREE_SPEECH) & ~(IRAQ_WAR_2 | DRM | TCPA));
Hi all.
It sounds like what is required is some way of basically saying
"Don't permit new processes to be created if CPU usage > 75%"
(where the 75% is configurable but less than 100%).
Best wishes from Riley.
---
* Nothing as pretty as a smile, nothing as ugly as a frown.