2009-09-01 13:24:19

by Lasse Kärkkäinen

[permalink] [raw]
Subject: Avoiding crash in out-of-memory situations

Currently, a handful of trivial while (1) malloc(n); processes can bring
down a system even when resource limits are in place: limits apply only
to individual processes (not to a user, nor to userspace as a whole), so
any otherwise reasonable nproc and memory limits can be circumvented
simply by spawning more processes.

The OOM killer is supposed to act as a fallback in these situations, but
unfortunately the system still becomes completely unresponsive for about
10 minutes whenever the OOM killer runs. This seems to happen because
the kernel first evicts all buffers and caches, slowing everything to a
crawl, and the OOM killer only activates once nothing else can be done.

In a more complex situation (e.g. the one we just had on our server
after accidentally running too many valgrind processes) this hung state
can last very long, essentially requiring a hard reset of the server.

As there is, AFAIK, no existing remedy for this problem, I would suggest
implementing (a) per-user limits, (b) a memory reserve for the kernel
(e.g. reserving 100 MB for the kernel/buffers/caches and giving
userspace correspondingly less to allocate, even if that means having to
kill processes), or (c) both.

Or perhaps there is something I missed?

P.S. using or not using swap doesn't really affect the fundamental
problem nor its symptoms, so please don't suggest that either way.


2009-09-02 01:13:50

by Kamezawa Hiroyuki

[permalink] [raw]
Subject: Re: Avoiding crash in out-of-memory situations

On Tue, 01 Sep 2009 16:24:09 +0300
Lasse Kärkkäinen <[email protected]> wrote:

> Currently, a handful of trivial while (1) malloc(n); processes can bring
> down a system even when resource limits are in place: limits apply only
> to individual processes (not to a user, nor to userspace as a whole), so
> any otherwise reasonable nproc and memory limits can be circumvented
> simply by spawning more processes.
>
> The OOM killer is supposed to act as a fallback in these situations, but
> unfortunately the system still becomes completely unresponsive for about
> 10 minutes whenever the OOM killer runs. This seems to happen because
> the kernel first evicts all buffers and caches, slowing everything to a
> crawl, and the OOM killer only activates once nothing else can be done.
>
> In a more complex situation (e.g. the one we just had on our server
> after accidentally running too many valgrind processes) this hung state
> can last very long, essentially requiring a hard reset of the server.
>
> As there is, AFAIK, no existing remedy for this problem, I would suggest
> implementing (a) per-user limits, (b) a memory reserve for the kernel
> (e.g. reserving 100 MB for the kernel/buffers/caches and giving
> userspace correspondingly less to allocate, even if that means having to
> kill processes), or (c) both.
>
> Or perhaps there is something I missed?
>
If a per-user limit is acceptable, how about the memory cgroup?

Documentation/cgroups/memory.txt
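
A minimal sketch with the (v1) memory controller; the mount point, group
name, limit, and $SHELL_PID are all illustrative:

```shell
# mount the cgroup filesystem with the memory controller enabled
mkdir -p /cgroup/memory
mount -t cgroup -o memory none /cgroup/memory

# one group per user, capped at 512 MB for all its tasks combined
mkdir /cgroup/memory/user1000
echo 512M > /cgroup/memory/user1000/memory.limit_in_bytes

# move the user's login shell (and thus its children) into the group
echo $SHELL_PID > /cgroup/memory/user1000/tasks
```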

thx,
-Kame


> P.S. using or not using swap doesn't really affect the fundamental
> problem nor its symptoms, so please don't suggest that either way.
>