Hello all,
I've got a basic question:
Would it be possible to kill only the process which consumes the most
memory in the last delta t?
Or does somebody have a better idea?
Background:
rsync seems to have problems when it is broken off (by an error or by
CTRL-C). After the break-off, rsync eats up all the memory of the
machine until it is killed by the kernel.
Unfortunately, the kernel didn't kill rsync first, but too late, so
that a lot of other, very important processes were killed before it.
E.g.:
Mar 24 11:46:05 susi kernel: Out of Memory: Killed process 732 (kdeinit).
Mar 24 11:46:06 susi kernel: Out of Memory: Killed process 734 (kdeinit).
Mar 24 11:46:06 susi kernel: Out of Memory: Killed process 734 (kdeinit).
Mar 24 11:46:21 susi kernel: Out of Memory: Killed process 704 (konsole).
Mar 24 11:46:29 susi kernel: Out of Memory: Killed process 729 (kdeinit).
Mar 24 11:46:35 susi kernel: Out of Memory: Killed process 738 (kdeinit).
Mar 24 11:46:43 susi kernel: Out of Memory: Killed process 576 (kppp).
Mar 24 11:46:49 susi kernel: Out of Memory: Killed process 268 (squid).
Mar 24 11:46:50 susi squid[266]: Squid Parent: child process 268 exited
due to signal 9
Mar 24 11:46:55 susi squid[266]: Squid Parent: child process 1935 started
Mar 24 11:47:01 susi kernel: Out of Memory: Killed process 700 (ktail).
Mar 24 11:47:08 susi kernel: Out of Memory: Killed process 200 (innd).
Mar 24 11:47:18 susi kernel: Out of Memory: Killed process 214 (httpd).
Mar 24 11:47:24 susi kernel: Out of Memory: Killed process 215 (httpd).
Mar 24 11:47:30 susi kernel: Out of Memory: Killed process 216 (httpd).
Mar 24 11:47:36 susi kernel: Out of Memory: Killed process 217 (httpd).
Mar 24 11:47:42 susi kernel: Out of Memory: Killed process 218 (httpd).
Mar 24 11:47:42 susi kernel: Out of Memory: Killed process 218 (httpd).
Mar 24 11:47:47 susi kernel: Out of Memory: Killed process 266 (squid).
Mar 24 11:47:53 susi kernel: Out of Memory: Killed process 1935 (squid).
Mar 24 11:47:53 susi kernel: Out of Memory: Killed process 1935 (squid).
Mar 24 11:48:03 susi kernel: Out of Memory: Killed process 114 (named).
Mar 24 11:48:03 susi kernel: Out of Memory: Killed process 114 (named).
Mar 24 11:48:12 susi kernel: Out of Memory: Killed process 1936 (httpd).
Mar 24 11:48:12 susi kernel: Out of Memory: Killed process 1936 (httpd).
Mar 24 11:48:17 susi kernel: Out of Memory: Killed process 1937 (httpd).
Mar 24 11:48:17 susi kernel: Out of Memory: Killed process 1937 (httpd).
Mar 24 11:48:22 susi kernel: Out of Memory: Killed process 1939 (httpd).
Mar 24 11:48:22 susi kernel: Out of Memory: Killed process 1939 (httpd).
Mar 24 11:48:27 susi kernel: Out of Memory: Killed process 1938 (httpd).
Mar 24 11:48:27 susi kernel: Out of Memory: Killed process 1938 (httpd).
Mar 24 11:48:33 susi kernel: Out of Memory: Killed process 1940 (httpd).
Mar 24 11:48:33 susi kernel: Out of Memory: Killed process 1940 (httpd).
Mar 24 11:48:40 susi kernel: Out of Memory: Killed process 1941 (httpd).
Mar 24 11:48:40 susi kernel: Out of Memory: Killed process 1941 (httpd).
Mar 24 11:48:44 susi kernel: Out of Memory: Killed process 1942 (httpd).
Mar 24 11:48:44 susi kernel: Out of Memory: Killed process 1942 (httpd).
Mar 24 11:48:50 susi kernel: Out of Memory: Killed process 1943 (httpd).
Mar 24 11:48:55 susi kernel: Out of Memory: Killed process 581 (bash).
Mar 24 11:49:00 susi kernel: Out of Memory: Killed process 1944 (httpd).
Mar 24 11:49:06 susi kernel: Out of Memory: Killed process 923 (rsync).
Mar 24 11:49:06 susi kernel: Out of Memory: Killed process 923 (rsync).
Mar 24 11:49:06 susi kernel: VM: killing process rsync
Any other process could have been killed before the evil-doer was
removed from memory.
Fortunately, sshd wasn't killed, so I could restart all the other
processes again.
rsync is just a current example of the problem I described. It could be
any other process eating up the memory. The kernel then wildly kills some
processes until the right one is killed - and the machine is probably
unavailable in the meantime.
Regards,
Andreas Hartmann
François Cami wrote:
>
> just wondering : what version of rsync are you using ?
>
Versions 2.5.2 and 2.5.4 (I haven't tried 2.5.3)
Regards,
Andreas Hartmann
François Cami wrote:
> Andreas Hartmann wrote:
>
>> François Cami wrote:
>>
>>>
>>> just wondering : what version of rsync are you using ?
>>>
>>
>> Versions 2.5.2 and 2.5.4 (I haven't tried 2.5.3)
>>
>> Regards,
>> Andreas Hartmann
>>
>>
>
> okay... on my server (openbsd) 2.5.4 behaves a lot better than 2.5.2
I can't see any difference.
>
> 2.4 didn't have that kind of problem
> I'm wondering if they'll fix it before I go back to 2.4 :-)
That's right - but there is a security issue, which is why you shouldn't
use earlier versions.
Regards,
Andreas Hartmann
On Sun, 24 Mar 2002, andreas wrote:
> I've got a basic question:
> Would it be possible to kill only the process which consumes the most
> memory in the last delta t?
> rsync is just a current example of the problem I described. It could be
> any other process eating up the memory. The kernel then wildly kills some
> processes until the right one is killed - and the machine is probably
> unavailable in the meantime.
The problem is that 'rsync' might as well have been 'scientific
calculation that ran for 3 days'.
One 'solution' could be to let the OOM killer ignore CPU usage
of less than say 1 hour, but it'll always be heuristics that
can go wrong in some scenario.
regards,
Rik
--
Bravely reimplemented by the knights who say "NIH".
http://www.surriel.com/ http://distro.conectiva.com/
Rik van Riel wrote:
> On Sun, 24 Mar 2002, andreas wrote:
>
>
>>I've got a basic question:
>>Would it be possible to kill only the process which consumes the most
>>memory in the last delta t?
>
>
>>rsync is just a current example of the problem I described. It could be
>>any other process eating up the memory. The kernel then wildly kills some
>>processes until the right one is killed - and the machine is probably
>>unavailable in the meantime.
>
>
> The problem is that 'rsync' might as well have been 'scientific
> calculation that ran for 3 days'.
>
> One 'solution' could be to let the OOM killer ignore CPU usage
> of less than say 1 hour, but it'll always be heuristics that
> can go wrong in some scenario.
In that particular case, rsync eats up the memory within seconds - not
just 5 MB but hundreds of MB. Wouldn't it be possible to detect such
behaviour and kill such processes if they consume all the memory (I mean
RAM and swap space) very fast, that is, if no more virtual memory is
left, or, say, if 99% of virtual memory is used?
Processes that consume memory over a long-term run cannot be detected
with this heuristic.
Maybe it would be possible to sort all known processes by their memory
usage and combine that with the speed of their memory requests.
If memory gets low, and there is a process which suddenly requests a
lot of memory, this process gets killed, even if there is another
process which has three times more memory allocated than the "fast
growing" process. If all processes are growing nearly equally and memory
gets low, the process with the highest memory usage gets killed - because
with this process, the kernel achieves the target (to free memory) best.
The advantage of combining consumption speed and memory usage per process
would be that processes which are obviously broken could be filtered out.
If the behaviour of the process is correct, then the machine simply hasn't
enough memory. But that is a problem which cannot be handled by the kernel.
Regards,
Andreas Hartmann
andreas wrote:
> Hello all,
>
> I've got a basic question:
> Would it be possible to kill only the process which consumes the most
> memory in the last delta t?
> Or does somebody have a better idea?
I had a patch for 2.4.something which would allow you to configure which
processes were killed first by the OOM killer. You basically gave
processes an oom_nice value, either by pid or process name, and that was
taken into account by the oom killer. You could also protect a process
completely from the oom killer, which would be good to do for your sshd
process in the example you give.
Look at http://www.uwsg.iu.edu/hypermail/linux/kernel/0011.1/0453.html
chris
> The advantage of combining consumption speed and memory usage per process
> would be that processes which are obviously broken could be filtered out.
> If the behaviour of the process is correct, then the machine simply hasn't
> enough memory. But that is a problem which cannot be handled by the kernel.
With 2.4.19pre3-ac3+ you don't need a heuristic. Do
echo "2" >/proc/sys/vm/overcommit_memory
The system will then fail allocations before they can cause an OOM status.
It might be interesting to add "except root" modes to this.
Alan
> I've got a basic question:
> Would it be possible to kill only the process which consumes the most
> memory in the last delta t?
> Or does somebody have a better idea?
At the point you hit OOM every possible heuristic is simply handwaving that
will work for a subset of the user base. Fix the real problem and it goes
away. My box doesn't OOM, the worst case (which I've never seen happen) is
a task being killed by a stack growth failing to get memory.
Alan Cox wrote:
>>The advantage of combining consumption speed and memory usage per process
>>would be that processes which are obviously broken could be filtered out.
>>If the behaviour of the process is correct, then the machine simply hasn't
>>enough memory. But that is a problem which cannot be handled by the kernel.
>
>
> With 2.4.19pre3-ac3+ you don't need a heuristic. Do
>
> echo "2" >/proc/sys/vm/overcommit_memory
>
> The system will then fail allocations before they can cause an OOM status.
> It might be interesting to add "except root" modes to this.
>
> Alan
If I added the option "except root", I would have the same problem,
because rsync must run as root to do a full backup :-(.
If I understand this feature right, the process that gets into trouble
is always the one that wants memory when there is no more free - and
that could be the wrong process, too.
But if a process goes wild like rsync did in this situation, it's very
likely that rsync is the first that doesn't get any more memory. But
what happens afterwards if it can't get any more memory?
If the process which wants so much memory isn't stopped (or
doesn't stop itself), the memory situation doesn't get better. Other
processes, which are working correctly, will probably fail while the
broken process eats up the newly freed memory again.
I think that a broken process like rsync should be killed in order to
prevent other processes from being damaged indirectly.
Regards,
Andreas Hartmann
On Sun, 24 Mar 2002, Alan Cox wrote:
> > I've got a basic question:
> > Would it be possible to kill only the process which consumes the most
> > memory in the last delta t?
> > Or does somebody have a better idea?
>
> At the point you hit OOM every possible heuristic is simply handwaving that
> will work for a subset of the user base. Fix the real problem and it goes
> away. My box doesn't OOM, the worst case (which I've never seen happen) is
> a task being killed by a stack growth failing to get memory.
Would it be hard to do some memory allocation statistics, so that if some
process at one point (as rsync did) goes crazy eating all memory, it
would be detected?
I'm quite sure other OSes have similar functionality, such as AIX.
roy
--
Roy Sigurd Karlsbakk, Datavaktmester
Computers are like air conditioners.
They stop working when you open Windows.
Alan Cox wrote:
>>I've got a basic question:
>>Would it be possible to kill only the process which consumes the most
>>memory in the last delta t?
>>Or does somebody have a better idea?
>
>
> At the point you hit OOM every possible heuristic is simply handwaving that
> will work for a subset of the user base. Fix the real problem and it goes
> away.
This would and must be the first solution. I agree with you.
On the other hand - nobody is perfect and there can be such situations.
Why shouldn't the kernel be the ultimate checkpoint to prevent greater
damage? That's what I'm thinking.
It's not easy, and it probably takes resources (processor and RAM) to do
such checks. The idea would be to do such checks only when memory usage is
above a defined value, e.g. 60% or later. It would be best if this were
freely configurable (whether to do the checks at all, and at which point
they begin).
I suggested one heuristic; maybe there are better ones. What I want to
say is that there should be a mechanism to detect and kill, as accurately
as possible, a process which wants to have all the memory and
even more - before memory is used to 100%.
Regards,
Andreas Hartmann
On Sun, 24 Mar 2002, Roy Sigurd Karlsbakk wrote:
> Would it be hard to do some memory allocation statistics, so that if some
> process at one point (as rsync did) goes crazy eating all memory, it
> would be detected?
No. What I doubt however is whether it would be worth it,
since most machines never run OOM.
regards,
Rik
--
Bravely reimplemented by the knights who say "NIH".
http://www.surriel.com/ http://distro.conectiva.com/
Rik van Riel wrote:
> On Sun, 24 Mar 2002, Roy Sigurd Karlsbakk wrote:
> > Would it be hard to do some memory allocation statistics, so that if some
> > process at one point (as rsync did) goes crazy eating all memory, it
> > would be detected?
>
> No. What I doubt however is whether it would be worth it,
> since most machines never run OOM.
Well, I think it could be worth it in terms of security, because a local user
could use a bad memory-eating program to produce a denial of service against
other processes.
Unfortunately, detecting a program written to cause harm is harder than
detecting a crazy one.
greetings
Christian
On Sun, 24 Mar 2002, Christian Bornträger wrote:
> Rik van Riel wrote:
> > On Sun, 24 Mar 2002, Roy Sigurd Karlsbakk wrote:
> > > Would it be hard to do some memory allocation statistics, so that if some
> > > process at one point (as rsync did) goes crazy eating all memory, it
> > > would be detected?
> >
> > No. What I doubt however is whether it would be worth it,
> > since most machines never run OOM.
>
> Well, I think it could be worth it in terms of security, because a local user
> could use a bad memory-eating program to produce a denial of service against
> other processes.
If you don't trust your users you should do some editing
of /etc/security/limits.conf instead of relying on the
safety net in the kernel.
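For illustration, a limits.conf entry capping per-process address space might look like this (the @users group and the 256 MB value are made-up examples; pam_limits takes the "as" item in kilobytes):

```
# /etc/security/limits.conf
# <domain>  <type>  <item>  <value>
# cap each user's address space at ~256 MB (value in KB)
@users      hard    as      262144
```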
regards,
Rik
--
Bravely reimplemented by the knights who say "NIH".
http://www.surriel.com/ http://distro.conectiva.com/
Christian Bornträger <[email protected]> wrote:
>Well, I think it could be worth it in terms of security, because a local user
>could use a bad memory-eating program to produce a denial of service against
>other processes.
man setrlimit
>Unfortunately, detecting a program written to cause harm is harder than
>detecting a crazy one.
ACK.
Bernd
--
Bernd Petrovitsch Email : [email protected]
g.a.m.s gmbh Fax : +43 1 205255-900
Prinz-Eugen-Straße 8 A-1040 Vienna/Austria/Europe
LUGA : http://www.luga.at
> > At the point you hit OOM every possible heuristic is simply handwaving that
> > will work for a subset of the user base. Fix the real problem and it goes
> > away.
>
> On the other hand - nobody is perfect and there can be such situations.
My system cannot (short of a bug) go OOM. That's what the new overcommit
modes 2/3 ensure
> > away. My box doesn't OOM, the worst case (which I've never seen happen) is
> > a task being killed by a stack growth failing to get memory.
>
> Would it be hard to do some memory allocation statistics, so that if some
> process at one point (as rsync did) goes crazy eating all memory, it
> would be detected?
man ulimit
> Well, I think it could be worth it in terms of security, because a local user
> could use a bad memory-eating program to produce a denial of service against
> other processes.
>
> Unfortunately, detecting a program written to cause harm is harder than
> detecting a crazy one.
It's not about detection, it's about containment. That's what the beancounter
patches covered.
Hello Alan , Is there any documentation for the 'new overcommit
mode 2/3' functionality ? Tia , JimL
On Sun, 24 Mar 2002, Alan Cox wrote:
> > > At the point you hit OOM every possible heuristic is simply handwaving that
> > > will work for a subset of the user base. Fix the real problem and it goes
> > > away.
> > On the other hand - nobody is perfect and there can be such situations.
> My system cannot (short of a bug) go OOM. That's what the new overcommit
> modes 2/3 ensure
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network Engineer | P.O. Box 854 | Give me Linux |
| [email protected] | Coudersport PA 16915 | only on AXP |
+------------------------------------------------------------------+
Alan Cox wrote:
>>>At the point you hit OOM every possible heuristic is simply handwaving that
>>>will work for a subset of the user base. Fix the real problem and it goes
>>>away.
>>
>>On the other hand - nobody is perfect and there can be such situations.
>
>
> My system cannot (short of a bug) go OOM. That's what the new overcommit
> modes 2/3 ensure
How does a process react when it can't get any more memory?
Regards,
Andreas Hartmann
Christian Bornträger wrote:
> Rik van Riel wrote:
>
>>On Sun, 24 Mar 2002, Roy Sigurd Karlsbakk wrote:
>>
>>>Would it be hard to do some memory allocation statistics, so that if some
>>>process at one point (as rsync did) goes crazy eating all memory, it
>>>would be detected?
>>
>>No. What I doubt however is whether it would be worth it,
>>since most machines never run OOM.
>
>
> Well, I think it could be worth it in terms of security, because a local user
> could use a bad memory-eating program to produce a denial of service against
> other processes.
That's what I fear.
Take the current example: you're running a server to which people can
connect with rsync. Somebody breaks off rsync - and the rsync process on
the server goes crazy - that's the situation I described at the
beginning.
Now the httpd process on the server is killed, then named, ... .
It's a perfect DoS attack.
Regards,
Andreas Hartmann
> Would it be hard to do some memory allocation statistics, so that if some
> process at one point (as rsync did) goes crazy eating all memory, it
> would be detected?
For a good reason, systems tend to kill the process which has the
largest VM. In some cases it's a process that just went astray. In the
more common case it would be the process that's been working on
an important simulation for the last month, or just your X server.
> I'm quite sure other OSes have similar functionality, such as AIX.
AIX 3.x had this feature and IMHO it was causing too many problems.
IIRC an option to change the default behavior came only in 4.x.
AIX also has two signals that are sent to all processes when the
swap space fills above certain thresholds, thus warning processes
that they are getting close to an OOM condition. IIRC, a process that
catches SIGWARN is considered "well behaved" by the system, so it does
not get the SIGKILL that follows.
-- Itai
> Hello Alan , Is there any documentation for the 'new overcommit
> mode 2/3' functionality ? Tia , JimL
Yes, it's in the Documentation dir of the kernel.
> > My system cannot (short of a bug) go OOM. That's what the new overcommit
> > modes 2/3 ensure
>
> How does a process react when it can't get any more memory?
That's up to the process. If a program doesn't handle malloc/mmap/etc
failures then it's junk anyway
Hello Alan , Uh ? , OK . Please show me where the values greater than 1
for overcommit_memory are described ? Tia , JimL
Linux filesrv1 2.4.19-pre3 #1 SMP Mon Mar 18 11:49:42 EST 2002 i686 unknown
From: filesystems/proc.txt
overcommit_memory
-----------------
This file contains one value. The following algorithm is used to decide if
there's enough memory: if the value of overcommit_memory is positive, then
there's always enough memory. This is a useful feature, since programs often
malloc() huge amounts of memory 'just in case', while they only use a small
part of it. Leaving this value at 0 will lead to the failure of such a huge
malloc(), when in fact the system has enough memory for the program to run.
On the other hand, enabling this feature can cause you to run out of memory
and thrash the system to death, so large and/or important servers will want to
set this value to 0.
From: sysctl/vm.txt
overcommit_memory:
This value contains a flag that enables memory overcommitment.
When this flag is 0, the kernel checks before each malloc()
to see if there's enough memory left. If the flag is nonzero,
the system pretends there's always enough memory.
This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.
Look at: mm/mmap.c::vm_enough_memory() for more information.
From: mm/mmap.c::vm_enough_memory()
...
int sysctl_overcommit_memory;
int max_map_count = DEFAULT_MAX_MAP_COUNT;
/* Check that a process has enough memory to allocate a
* new virtual mapping.
*/
int vm_enough_memory(long pages)
{
	/* Stupid algorithm to decide if we have enough memory: while
	 * simple, it hopefully works in most obvious cases.. Easy to
	 * fool it, but this should catch most mistakes.
	 */
	/* 23/11/98 NJC: Somewhat less stupid version of algorithm,
	 * which tries to do "TheRightThing". Instead of using half of
	 * (buffers+cache), use the minimum values. Allow an extra 2%
	 * of num_physpages for safety margin.
	 */
	unsigned long free;

	/* Sometimes we want to use more memory than we have. */
	if (sysctl_overcommit_memory)
		return 1;
	...
On Sun, 24 Mar 2002, Alan Cox wrote:
> > Hello Alan , Is there any documentation for the 'new overcommit
> > mode 2/3' functionality ? Tia , JimL
>
> Yes, it's in the Documentation dir of the kernel.
>
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network Engineer | P.O. Box 854 | Give me Linux |
| [email protected] | Coudersport PA 16915 | only on AXP |
+------------------------------------------------------------------+
> Hello Alan , Uh ? , OK . Please show me where the values greater than 1
> for overcommit_memory are described ? Tia , JimL
>
> Linux filesrv1 2.4.19-pre3 #1 SMP Mon Mar 18 11:49:42 EST 2002 i686 unknown
You need to upgrade to a -ac kernel.
Replying to Alan Cox:
> That's up to the process. If a program doesn't handle malloc/mmap/etc
> failures then it's junk anyway
The recent junk I'm fighting with to take full advantage of overcommit
accounting is squid.
Very popular junk. Maybe rsync uses the same 'secret technique' to handle
malloc failures? :)))
Btw, overcommit handling is not very good yet.
Squid hits the limit, then bails out. Then a shell script tries to start a new
instance of squid (actually trying to sleep before restarting), but gets
'fork - cannot allocate memory'. It seems that memory isn't deallocated
from the already-exited process's space :(
--
Paul P 'Stingray' Komkoff 'Greatest' Jr // (icq)23200764 // (irc)Spacebar
PPKJ1-RIPE // (smtp)[email protected] // (http)stingr.net // (pgp)0xA4B4ECA4
Thunder from the hill wrote:
> Hi,
>
>> Maybe it would be possible to sort all known processes by their
>> memory usage and combine that with the speed of their memory requests.
>> If memory gets low, and there is a process which suddenly requests a
>> lot of memory, this process gets killed, even if there is another
>> process which has three times more memory allocated than the "fast
>> growing" process. If all processes are growing nearly equally and memory
>> gets low, the process with the highest memory usage gets killed -
>> because with this process, the kernel achieves the target (to free
>> memory) best.
>
> So what if I want to malloc() say 100 MiBs at once? I'll get into
> trouble then, because if I don't malloc() with sleep()s I get killed.
> That's perfect performance then.
Only if there's not enough free memory. You can't get more memory than
the machine has.
The mechanism becomes active only when there isn't enough free memory for
your request.
Regards,
Andreas Hartmann
Hello Alan and Paul.
On Sunday 24 March 2002 20:50, Alan Cox wrote:
> > How does a process react when it can't get any more memory?
>
> That's up to the process. If a program doesn't handle malloc/mmap/etc
> failures then it's junk anyway
Agreed 101%.
On Sunday 24 March 2002 21:43, Paul P Komkoff Jr wrote:
> Replying to Alan Cox:
> > That's up to the process. If a program doesn't handle malloc/mmap/etc
> > failures then it's junk anyway
>
> The recent junk I'm fighting with to take full advantage of overcommit
> accounting is squid.
> Very popular junk. Maybe rsync uses the same 'secret technique' to handle
> malloc failures? :)))
>
> Btw, overcommit handling is not very good yet.
> Squid hits the limit, then bails out. Then a shell script tries to start a new
> instance of squid (actually trying to sleep before restarting), but gets
> 'fork - cannot allocate memory'. It seems that memory isn't deallocated
> from the already-exited process's space :(
In all the code I've written I always check whether malloc() and friends
succeed. In some instances the code can even handle a failure gracefully, so
a malloc failure doesn't cause a fatal application error.
Now if the kernel kills my process, it doesn't matter whether I can handle the
out-of-memory condition or not. The kernel has already made a decision for me
instead of my application... and never mind the data I have cached in
application buffers... the kernel also decided it wasn't worth writing to disk.
--------------------------------------------------------------------------
Alan.. I would find it acceptable if a process is killed when it tries to
malloc() memory *after* a previous malloc has already failed and there is
still an out-of-memory condition.
--
best regards,
Rok Papež.
On Sun, 24 Mar 2002, Alan Cox wrote:
> > > My system cannot (short of a bug) go OOM. That's what the new overcommit
> > > modes 2/3 ensure
> >
> > How does a process react when it can't get any more memory?
>
> That's up to the process. If a program doesn't handle malloc/mmap/etc
> failures then it's junk anyway
what's the point if you're just going to get signal delivery when you
least want it, even when malloc returns non-NULL? it could even be due to
stack growth, which is under the compiler's control and has no exception
mechanism available. i personally prefer to take the signal and exit,
it's guaranteed to work in all cases. (hence, apache-1.3 and other
multiprocess daemon superiority over threaded and event-driven code, tee
hee :)
-dean
> what's the point if you're just going to get signal delivery when you
> least want it, even when malloc returns non-NULL? it could even be due to
If you are running with no overcommit you'll always (for any statistically
interesting case) get the malloc NULL and no signals
> it's guaranteed to work in all cases. (hence, apache-1.3 and other
> multiprocess daemon superiority over threaded and event-driven code, tee
> hee :)
thttpd -> 1000 hits/second on a 32Mb pentium
I don't hear you 8)