2005-09-25 14:19:12

by Al Boldi

Subject: Resource limits


Resource limits in Linux, when available, are currently very limited.

i.e.:
Too many process forks and your system may crash.
This can be capped with threads-max, but may lead you into a lock-out.

What is needed is a soft, hard, and a special emergency limit that would
allow you to use the resource for a limited time to circumvent a lock-out.

Would this be difficult to implement?

Thanks!

--
Al


2005-09-26 03:36:57

by Rik van Riel

Subject: Re: Resource limits

On Sun, 25 Sep 2005, Al Boldi wrote:

> Resource limits in Linux, when available, are currently very limited.
>
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
>
> What is needed is a soft, hard, and a special emergency limit that would
> allow you to use the resource for a limited time to circumvent a lock-out.
>
> Would this be difficult to implement?

How would you reclaim the resource after that limited time is
over? Kill processes?

--
All Rights Reversed

2005-09-26 12:28:43

by Neil Horman

Subject: Re: Resource limits

On Sun, Sep 25, 2005 at 05:12:42PM +0300, Al Boldi wrote:
>
> Resource limits in Linux, when available, are currently very limited.
>
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
>
> What is needed is a soft, hard, and a special emergency limit that would
> allow you to use the resource for a limited time to circumvent a lock-out.
>
What's insufficient about the per-user limits that can be imposed by
the ulimit syscall?
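
For reference, a minimal sketch of that kind of cap, using the
setrlimit() call behind the shell's ulimit builtin (the 100/200 values
below are made up for illustration):

/* Illustrative only: cap the number of processes this user may run.
 * setrlimit() is the syscall behind the shell's "ulimit" builtin;
 * the 100/200 values are arbitrary. */
#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}
	if (rl.rlim_max > 200)
		rl.rlim_max = 200;	/* hard limit: only root can raise it again */
	rl.rlim_cur = rl.rlim_max < 100 ? rl.rlim_max : 100;	/* soft limit */

	if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}
	printf("RLIMIT_NPROC: soft=%lu hard=%lu\n",
	       (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
	return 0;
}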


> Would this be difficult to implement?
>
> Thanks!
>
> --
> Al

--
/***************************************************
*Neil Horman
*Software Engineer
*gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
***************************************************/

2005-09-26 14:39:04

by Roger Heflin

Subject: RE: Resource limits


While we are talking about limits: one of my customers reports that if
they set "ulimit -d" to, say, 8GB, and a program then attempts to
allocate 16GB in one shot, the process hangs on the 16GB allocation
because the machine does not have enough memory+swap to satisfy it.
The process is unkillable at that point; the customer's method of
killing it is to send the process a kill signal and then create enough
swap to meet the request. Once the request is filled, the process
terminates.

It would seem that the best thing to do would be to abort on
allocations that will by themselves exceed the limit.
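
Roughly the behaviour I have in mind, sketched with scaled-down numbers
and RLIMIT_AS (how much of this "ulimit -d" itself covers depends on
how the allocation is made):

/* Sketch only: with an address-space limit in place, a single request
 * that by itself exceeds the limit should fail up front, not hang.
 * 256MB/512MB stand in for the real 8GB/16GB case. */
#include <sys/resource.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
	struct rlimit rl = { 256UL << 20, 256UL << 20 };	/* 256MB soft/hard */
	void *p;

	if (setrlimit(RLIMIT_AS, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}
	p = malloc(512UL << 20);	/* one 512MB request, over the limit */
	if (!p) {
		printf("allocation refused up front, as it should be\n");
		return 0;
	}
	printf("allocation unexpectedly succeeded\n");
	free(p);
	return 0;
}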

This was a custom build of an earlier 2.6 kernel, but I would bet that
this behaviour has not changed in quite a while.

Roger


> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Al Boldi
> Sent: Sunday, September 25, 2005 9:13 AM
> To: [email protected]
> Subject: Resource limits
>
>
> Resource limits in Linux, when available, are currently very limited.
>
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
>
> What is needed is a soft, hard, and a special emergency limit
> that would allow you to use the resource for a limited time
> to circumvent a lock-out.
>
> Would this be difficult to implement?
>
> Thanks!
>
> --
> Al

2005-09-26 14:45:07

by Al Boldi

Subject: Re: Resource limits

Rik van Riel wrote:
> On Sun, 25 Sep 2005, Al Boldi wrote:
> > Resource limits in Linux, when available, are currently very limited.
> >
> > i.e.:
> > Too many process forks and your system may crash.
> > This can be capped with threads-max, but may lead you into a lock-out.
> >
> > What is needed is a soft, hard, and a special emergency limit that would
> > allow you to use the resource for a limited time to circumvent a
> > lock-out.
> >
> > Would this be difficult to implement?
>
> How would you reclaim the resource after that limited time is
> over ? Kill processes?

That's one way, but really, the issue needs some deep thought.
Leaving Linux exposed to a lock-out is rather frightening.

Neil Horman wrote:
> Whats insufficient about the per-user limits that can be imposed by the
> ulimit syscall?

Are they system-wide or per-user?

--
Al

2005-09-26 15:57:03

by Neil Horman

Subject: Re: Resource limits

On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> Rik van Riel wrote:
> > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > Resource limits in Linux, when available, are currently very limited.
> > >
> > > i.e.:
> > > Too many process forks and your system may crash.
> > > This can be capped with threads-max, but may lead you into a lock-out.
> > >
> > > What is needed is a soft, hard, and a special emergency limit that would
> > > allow you to use the resource for a limited time to circumvent a
> > > lock-out.
> > >
> > > Would this be difficult to implement?
> >
> > How would you reclaim the resource after that limited time is
> > over ? Kill processes?
>
> That's one way, but really, the issue needs some deep thought.
> Leaving Linux exposed to a lock-out is rather frightening.
>
What exactly is it that you're worried about here? Do you have a
particular concern that a process won't be able to fork or create a
thread? Resources that can be allocated to user-space processes always
run the risk that their allocation will not succeed. It's up to the
application to deal with that.

> Neil Horman wrote:
> > Whats insufficient about the per-user limits that can be imposed by the
> > ulimit syscall?
>
> Are they system wide or per-user?
>
ulimits are per-user.
Neil

> --
> Al

--
/***************************************************
*Neil Horman
*Software Engineer
*gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
***************************************************/

2005-09-26 16:44:44

by Alan

Subject: RE: Resource limits

On Llu, 2005-09-26 at 09:44 -0500, Roger Heflin wrote:
> While talking about limits, one of my customers report that if
> they set "ulimit -d" to be say 8GB, and then a program goes and

The kernel doesn't yet support rlimit64() - glibc does, but it emulates
it on a best-effort basis. That's a good intro project for someone.

> It would seem that the best thing to do would be to abort on
> allocates that will by themselves exceed the limit.

2.6 supports "no overcommit" modes.

Alan

2005-09-26 17:33:33

by Al Boldi

Subject: Re: Resource limits

Alan Cox wrote:
> On Llu, 2005-09-26 at 09:44 -0500, Roger Heflin wrote:
> > While talking about limits, one of my customers report that if
> > they set "ulimit -d" to be say 8GB, and then a program goes and
>
> The kernel doesn't yet support rlimit64() - glibc does but it emulates
> it best effort. Thats a good intro project for someone
>
> > It would seem that the best thing to do would be to abort on
> > allocates that will by themselves exceed the limit.
>
> 2.6 supports "no overcommit" modes.

By name only; see the "Kswapd flaw" thread.

Thanks!

--
Al

2005-09-26 17:35:10

by Al Boldi

Subject: Re: Resource limits

Neil Horman wrote:
> On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > Rik van Riel wrote:
> > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > Resource limits in Linux, when available, are currently very
> > > > limited.
> > > >
> > > > i.e.:
> > > > Too many process forks and your system may crash.
> > > > This can be capped with threads-max, but may lead you into a
> > > > lock-out.
> > > >
> > > > What is needed is a soft, hard, and a special emergency limit that
> > > > would allow you to use the resource for a limited time to circumvent
> > > > a lock-out.
> > > >
> > > > Would this be difficult to implement?
> > >
> > > How would you reclaim the resource after that limited time is
> > > over ? Kill processes?
> >
> > That's one way, but really, the issue needs some deep thought.
> > Leaving Linux exposed to a lock-out is rather frightening.
>
> What exactly is it that you're worried about here? Do you have a
> particular concern that a process won't be able to fork or create a
> thread? Resources that can be allocated to user space processes always
> run the risk that their allocation will not succede. Its up to the
> application to deal with that.

Think about a DoS attack.

Thanks!

--
Al

2005-09-26 17:52:14

by Neil Horman

Subject: Re: Resource limits

On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > Rik van Riel wrote:
> > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > Resource limits in Linux, when available, are currently very
> > > > > limited.
> > > > >
> > > > > i.e.:
> > > > > Too many process forks and your system may crash.
> > > > > This can be capped with threads-max, but may lead you into a
> > > > > lock-out.
> > > > >
> > > > > What is needed is a soft, hard, and a special emergency limit that
> > > > > would allow you to use the resource for a limited time to circumvent
> > > > > a lock-out.
> > > > >
> > > > > Would this be difficult to implement?
> > > >
> > > > How would you reclaim the resource after that limited time is
> > > > over ? Kill processes?
> > >
> > > That's one way, but really, the issue needs some deep thought.
> > > Leaving Linux exposed to a lock-out is rather frightening.
> >
> > What exactly is it that you're worried about here? Do you have a
> > particular concern that a process won't be able to fork or create a
> > thread? Resources that can be allocated to user space processes always
> > run the risk that their allocation will not succede. Its up to the
> > application to deal with that.
>
> Think about a DoS attack.
>
> Thanks!
>
Be more specific. Are you talking about a fork bomb, an ICMP flood,
what? Preventing resource starvation/exhaustion is often handled in a
way that's dovetailed to the semantics of how that resource is
allocated (i.e. you prevent syn-flood attacks differently than you
manage excessive disk usage).

Regards
Neil

> --
> Al

--
/***************************************************
*Neil Horman
*Software Engineer
*gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
***************************************************/

2005-09-26 19:43:56

by Matt Helsley

Subject: Re: Resource limits

On Sun, 2005-09-25 at 17:12 +0300, Al Boldi wrote:
> Resource limits in Linux, when available, are currently very limited.
>
> i.e.:
> Too many process forks and your system may crash.
> This can be capped with threads-max, but may lead you into a lock-out.
>
> What is needed is a soft, hard, and a special emergency limit that would
> allow you to use the resource for a limited time to circumvent a lock-out.
>
> Would this be difficult to implement?
>
> Thanks!
>
> --
> Al

Have you looked at Class-Based Kernel Resource Management (CKRM)
(http://ckrm.sf.net) to see if it fits your needs? My initial thought
is that the CKRM numtasks controller may help limit forks in the way
you describe.

If you have any questions about it please join the CKRM-Tech mailing
list ([email protected]) or chat with folks on the OFTC
IRC #ckrm channel.

Cheers,
-Matt Helsley

2005-09-26 20:27:38

by Al Boldi

Subject: Re: Resource limits

Neil Horman wrote:
> On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > Rik van Riel wrote:
> > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > Too many process forks and your system may crash.
> > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > lock-out.
> > > > > >
> > > > > > What is needed is a soft, hard, and a special emergency limit
> > > > > > that would allow you to use the resource for a limited time to
> > > > > > circumvent a lock-out.
> > > > >
> > > > > How would you reclaim the resource after that limited time is
> > > > > over ? Kill processes?
> > > >
> > > > That's one way, but really, the issue needs some deep thought.
> > > > Leaving Linux exposed to a lock-out is rather frightening.
> > >
> > > What exactly is it that you're worried about here?
> >
> > Think about a DoS attack.
>
> Be more specific. Are you talking about a fork bomb, a ICMP flood, what?

How would you deal with a situation where the system hit the threads-max
ceiling?

> preventing resource starvation/exhaustion is often handled in a way thats
> dovetailed to the semantics of how that resources is allocated (i.e. you
> prevent syn-flood attacks differently than you manage excessive disk
> usage).

The issue here is a general lack of proper kernel support for resource
limits. The fork problem is just an example.

Thanks!

--
Al

2005-09-26 21:16:34

by Roger Heflin

Subject: RE: Resource limits



> On Llu, 2005-09-26 at 09:44 -0500, Roger Heflin wrote:
> > While talking about limits, one of my customers report that if they
> > set "ulimit -d" to be say 8GB, and then a program goes and
>
> The kernel doesn't yet support rlimit64() - glibc does but it
> emulates it best effort. Thats a good intro project for someone
>
> > It would seem that the best thing to do would be to abort
> on allocates
> > that will by themselves exceed the limit.
>
> 2.6 supports "no overcommit" modes.
>
> Alan
>

Ah.

So any limit over 4GB is emulated through glibc, which means the fix
would need to be in the emulation, outside of the kernel.

And I think they were setting the limit to more like 32 or 48GB, and
having single allocations go over that. Some of the machines in
question have 32GB of RAM, others 64GB, both with fair amounts of
swap, and when the event happens they need to create enough additional
swap to satisfy the request.

The overcommit thing may do what they want.

Thanks.
Roger

2005-09-27 01:03:09

by Neil Horman

Subject: Re: Resource limits

On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > Rik van Riel wrote:
> > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > Too many process forks and your system may crash.
> > > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > > lock-out.
> > > > > > >
> > > > > > > What is needed is a soft, hard, and a special emergency limit
> > > > > > > that would allow you to use the resource for a limited time to
> > > > > > > circumvent a lock-out.
> > > > > >
> > > > > > How would you reclaim the resource after that limited time is
> > > > > > over ? Kill processes?
> > > > >
> > > > > That's one way, but really, the issue needs some deep thought.
> > > > > Leaving Linux exposed to a lock-out is rather frightening.
> > > >
> > > > What exactly is it that you're worried about here?
> > >
> > > Think about a DoS attack.
> >
> > Be more specific. Are you talking about a fork bomb, a ICMP flood, what?
>
> How would you deal with a situation where the system hit the threads-max
> ceiling?
>
Nominally I would log the inability to successfully create a new
process/thread, attempt to free some of my application's resources, and
try again.
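
Something along these lines, as a sketch:

/* Sketch of the "log, free what you can, retry" approach to a fork()
 * that fails because a process limit has been hit (EAGAIN). */
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

static pid_t spawn_with_retry(int max_tries)
{
	int i;

	for (i = 0; i < max_tries; i++) {
		pid_t pid = fork();

		if (pid >= 0)
			return pid;	/* 0 in the child, child's pid in the parent */
		if (errno != EAGAIN) {
			perror("fork");
			break;		/* some other, unrecoverable error */
		}
		fprintf(stderr, "fork: process limit hit, retrying\n");
		/* free whatever resources the application can spare here */
		sleep(1);
	}
	return -1;
}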

> > preventing resource starvation/exhaustion is often handled in a way thats
> > dovetailed to the semantics of how that resources is allocated (i.e. you
> > prevent syn-flood attacks differently than you manage excessive disk
> > usage).
>
> The issue here is a general lack of proper kernel support for resource
> limits. The fork problem is just an example.
>
That's not really true. As Mr. Helsley pointed out, CKRM is available
to provide a level of class-based resource management if you need it.
By default you can also create a level of resource limitation with
ulimits, as I mentioned. But no matter what you do, the only way you
can guarantee that a system will be able to provide the resources your
workload needs is to limit the amount of resources your workload asks
for, and, in the event it asks for too much, make sure it can handle
the denial of the resource gracefully.

Thanks and regards
Neil

> Thanks!
>
> --
> Al

--
/***************************************************
*Neil Horman
*Software Engineer
*gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
***************************************************/

2005-09-27 03:50:15

by Coywolf Qi Hunt

Subject: Re: Resource limits

On 9/26/05, Roger Heflin <[email protected]> wrote:
>
> While talking about limits, one of my customers report that if
> they set "ulimit -d" to be say 8GB, and then a program goes and
> attempts to allocate 16GB (in one shot), that the process will
> hang on the 16GB allocate as the machine does not have enough
> memory+swap to handle this, the process is at this time unkillable,
> the customers method to kill the process is to send the process
> a kill signal, and then create enough swap to be able to meet
> the request, after the request is filled the process terminates.
>
> It would seem that the best thing to do would be to abort on
> allocates that will by themselves exceed the limit.
>
> This was a custom version of a earlier version of the 2.6 kernel,
> I would bet that this has not changed in quite a while.
>
> Roger

It's simple. Set /proc/sys/vm/overcommit_memory to 2 (IIRC) to get
around this `bug'.
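
For completeness, the same thing done from C (needs root; mode 2 also
honours /proc/sys/vm/overcommit_ratio):

/* Switch the VM to strict overcommit accounting (mode 2). */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");

	if (!f) {
		perror("overcommit_memory");
		return 1;
	}
	fputs("2\n", f);
	return fclose(f) == 0 ? 0 : 1;
}
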
--
Coywolf Qi Hunt
http://sosdg.org/~coywolf/

2005-09-27 05:11:04

by Al Boldi

Subject: Re: Resource limits

Neil Horman wrote:
> On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > Neil Horman wrote:
> > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > Rik van Riel wrote:
> > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > Too many process forks and your system may crash.
> > > > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > > > lock-out.
> > > > > > > >
> > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > limit that would allow you to use the resource for a limited
> > > > > > > > time to circumvent a lock-out.
> > > > > > >
> > > > > > > How would you reclaim the resource after that limited time is
> > > > > > > over ? Kill processes?
> > > > > >
> > > > > > That's one way, but really, the issue needs some deep thought.
> > > > > > Leaving Linux exposed to a lock-out is rather frightening.
> > > > >
> > > > > What exactly is it that you're worried about here?
> > > >
> > > > Think about a DoS attack.
> > >
> > > Be more specific. Are you talking about a fork bomb, a ICMP flood,
> > > what?
> >
> > How would you deal with a situation where the system hit the threads-max
> > ceiling?
>
> Nominally I would log the inability to successfully create a new
> process/thread, attempt to free some of my applications resources, and try
> again.

Consider this dilemma:
Runaway procs hit the limit.
Try to kill some and you are denied due to the resource limit.
Use some previously running app like top, hope it hasn't been killed by
some OOM situation, try killing some procs, and another one takes its
place because of the runaway situation.
Raise the limit, and it gets filled by the runaways.
You are pretty much stuck.

You may get around the problem with a user-space solution, but that
will always run the risks associated with user-space.

> > The issue here is a general lack of proper kernel support for resource
> > limits. The fork problem is just an example.
>
> Thats not really true. As Mr. Helsley pointed out, CKRM is available

Matthew Helsley wrote:
> Have you looked at Class-Based Kernel Resource Managment (CKRM)
> (http://ckrm.sf.net) to see if it fits your needs? My initial thought is
> that the CKRM numtasks controller may help limit forks in the way you
> describe.

Thanks for the link! CKRM is great!

Is there a CKRM-lite version? That would make it easier to include in
the mainline: something that concentrates on the pressing issues, like
lock-out prevention, and leaves all the management features as an
option.

Thanks!

--
Al

2005-09-27 12:09:04

by Neil Horman

Subject: Re: Resource limits

On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > Neil Horman wrote:
> > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > Rik van Riel wrote:
> > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > This can be capped with threads-max, but may lead you into a
> > > > > > > > > lock-out.
> > > > > > > > >
> > > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > > limit that would allow you to use the resource for a limited
> > > > > > > > > time to circumvent a lock-out.
> > > > > > > >
> > > > > > > > How would you reclaim the resource after that limited time is
> > > > > > > > over ? Kill processes?
> > > > > > >
> > > > > > > That's one way, but really, the issue needs some deep thought.
> > > > > > > Leaving Linux exposed to a lock-out is rather frightening.
> > > > > >
> > > > > > What exactly is it that you're worried about here?
> > > > >
> > > > > Think about a DoS attack.
> > > >
> > > > Be more specific. Are you talking about a fork bomb, a ICMP flood,
> > > > what?
> > >
> > > How would you deal with a situation where the system hit the threads-max
> > > ceiling?
> >
> > Nominally I would log the inability to successfully create a new
> > process/thread, attempt to free some of my applications resources, and try
> > again.
>
> Consider this dilemma:
> Runaway proc/s hit the limit.
> Try to kill some and you are denied due to the resource limit.
> Use some previously running app like top, hope it hasn't been killed by some
> OOM situation, try killing some procs and another one takes it's place
> because of the runaway situation.
> Raise the limit, and it gets filled by the runaways.
> You are pretty much stuck.
>
Not really; this is the sort of thing ulimit is meant for: to keep
processes from any one user from running away. It lets you limit the
damage they can do until such time as you can regain control and fix
the runaway application.

> You may get around the problem by a user-space solution, but this will always
> run the risks associated with user-space.
>
Ulimit isn't a user-space solution, it's a user-_based_ restriction
mechanism for resources. It allows you to prevent user X (or group X,
IIRC) from creating more than A MB of files, or B processes, or
allocating C KB of memory, etc. See man 3 ulimit.
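
Each of those is just a soft/hard pair on the task; for example, you
can dump a few of them like this (sketch):

/* Print a few of the per-process soft/hard limit pairs managed by
 * getrlimit()/setrlimit().  RLIM_INFINITY prints as a very large number. */
#include <sys/resource.h>
#include <stdio.h>

static void show(const char *name, int resource)
{
	struct rlimit rl;

	if (getrlimit(resource, &rl) != 0) {
		perror(name);
		return;
	}
	printf("%-14s soft=%-22lu hard=%lu\n", name,
	       (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
}

int main(void)
{
	show("RLIMIT_NPROC",  RLIMIT_NPROC);	/* processes per user */
	show("RLIMIT_FSIZE",  RLIMIT_FSIZE);	/* largest file, in bytes */
	show("RLIMIT_DATA",   RLIMIT_DATA);	/* data segment size */
	show("RLIMIT_NOFILE", RLIMIT_NOFILE);	/* open file descriptors */
	return 0;
}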


> > > The issue here is a general lack of proper kernel support for resource
> > > limits. The fork problem is just an example.
> >
> > Thats not really true. As Mr. Helsley pointed out, CKRM is available
>
> Matthew Helsley wrote:
> > Have you looked at Class-Based Kernel Resource Managment (CKRM)
> > (http://ckrm.sf.net) to see if it fits your needs? My initial thought is
> > that the CKRM numtasks controller may help limit forks in the way you
> > describe.
>
> Thanks for the link! CKRM is great!
>
> Is there a CKRM-lite version? This would make it easier to be included into
> the mainline, something that would concentrate on the pressing issues, like
> lock-out prevention, and leave all the management features as an option.
>
> Thanks!
>
> --
> Al

--
/***************************************************
*Neil Horman
*Software Engineer
*gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
***************************************************/

2005-09-27 13:43:43

by Al Boldi

Subject: Re: Resource limits

Neil Horman wrote:
> On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > Neil Horman wrote:
> > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > Neil Horman wrote:
> > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > Rik van Riel wrote:
> > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > This can be capped with threads-max, but may lead you
> > > > > > > > > > into a lock-out.
> > > > > > > > > >
> > > > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > > > limit that would allow you to use the resource for a
> > > > > > > > > > limited time to circumvent a lock-out.
> > > > > > > > >
> > > > > > > > > How would you reclaim the resource after that limited time
> > > > > > > > > is over ? Kill processes?
> > > > > > > >
> > > > > > > > That's one way, but really, the issue needs some deep
> > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > frightening.
> > > > > > >
> > > > > > > What exactly is it that you're worried about here?
> > > > > >
> > > > > > Think about a DoS attack.
> > > > >
> > > > > Be more specific. Are you talking about a fork bomb, a ICMP
> > > > > flood, what?
> >
> > Consider this dilemma:
> > Runaway proc/s hit the limit.
> > Try to kill some and you are denied due to the resource limit.
> > Use some previously running app like top, hope it hasn't been killed by
> > some OOM situation, try killing some procs and another one takes it's
> > place because of the runaway situation.
> > Raise the limit, and it gets filled by the runaways.
> > You are pretty much stuck.
>
> Not really, this is the sort of thing ulimit is meant for. To keep
> processes from any one user from running away. It lets you limit the
> damage it can do, until such time as you can control it and fix the
> runaway application.

threads-max = 1024
ulimit = 100 forks
11 runaway procs hitting the threads-max limit

This example is extreme, but it's possible, and there should be a safe and
easy way out.

What do you think?

Thanks!
--
Al

2005-09-27 14:37:17

by Neil Horman

Subject: Re: Resource limits

On Tue, Sep 27, 2005 at 04:42:07PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > > Neil Horman wrote:
> > > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > > Neil Horman wrote:
> > > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > > Rik van Riel wrote:
> > > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > > This can be capped with threads-max, but may lead you
> > > > > > > > > > > into a lock-out.
> > > > > > > > > > >
> > > > > > > > > > > What is needed is a soft, hard, and a special emergency
> > > > > > > > > > > limit that would allow you to use the resource for a
> > > > > > > > > > > limited time to circumvent a lock-out.
> > > > > > > > > >
> > > > > > > > > > How would you reclaim the resource after that limited time
> > > > > > > > > > is over ? Kill processes?
> > > > > > > > >
> > > > > > > > > That's one way, but really, the issue needs some deep
> > > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > > frightening.
> > > > > > > >
> > > > > > > > What exactly is it that you're worried about here?
> > > > > > >
> > > > > > > Think about a DoS attack.
> > > > > >
> > > > > > Be more specific. Are you talking about a fork bomb, a ICMP
> > > > > > flood, what?
> > >
> > > Consider this dilemma:
> > > Runaway proc/s hit the limit.
> > > Try to kill some and you are denied due to the resource limit.
> > > Use some previously running app like top, hope it hasn't been killed by
> > > some OOM situation, try killing some procs and another one takes it's
> > > place because of the runaway situation.
> > > Raise the limit, and it gets filled by the runaways.
> > > You are pretty much stuck.
> >
> > Not really, this is the sort of thing ulimit is meant for. To keep
> > processes from any one user from running away. It lets you limit the
> > damage it can do, until such time as you can control it and fix the
> > runaway application.
>
> threads-max = 1024
> ulimit = 100 forks
> 11 runaway procs hitting the threads-max limit
>
This is incorrect. If you ulimit a user to 100 forks, and 11 processes
running with that uid start to fork repeatedly, they will get fork
failures after they have, in aggregate, called fork 89 times. That user
can have no more than 100 processes running in the system at any given
time. Another user (or root) can still fork a process to kill one of
the runaways.

If you have a user process that for some reason legitimately needs to
try to use every process resource available in the system, and you have
no way of killing those processes from a controlling terminal, then
yes, you are prone to a lock-out. In those conditions I would set the
process ulimit for the user running that workload to something less
than threads-max, so that I have some wiggle room to get out of the
situation. I would of course also file a bug report with the
application author, but that's another discussion :).
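
i.e. something along these lines (a sketch; the 10% margin is just an
example):

/* Sketch: keep the per-user process cap below the system-wide
 * threads-max, so there is always room left to fork a shell and
 * clean up.  The 10% margin is only an example. */
#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/threads-max", "r");
	unsigned long threads_max;
	struct rlimit rl;

	if (!f || fscanf(f, "%lu", &threads_max) != 1) {
		perror("threads-max");
		return 1;
	}
	fclose(f);

	rl.rlim_cur = rl.rlim_max = threads_max - threads_max / 10;
	if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}
	printf("per-user process cap: %lu (threads-max is %lu)\n",
	       (unsigned long)rl.rlim_cur, threads_max);
	return 0;
}
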
Regards
Neil

> This example is extreme, but it's possible, and there should be a safe and
> easy way out.
>
> What do you think?
>
> Thanks!
> --
> Al

--
/***************************************************
*Neil Horman
*Software Engineer
*gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
***************************************************/

2005-09-27 15:52:21

by Al Boldi

Subject: Re: Resource limits

Neil Horman wrote:
> On Tue, Sep 27, 2005 at 04:42:07PM +0300, Al Boldi wrote:
> > Neil Horman wrote:
> > > On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > > > Neil Horman wrote:
> > > > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > > > Neil Horman wrote:
> > > > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > > > Neil Horman wrote:
> > > > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > > > Rik van Riel wrote:
> > > > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > > > This can be capped with threads-max, but may lead
> > > > > > > > > > > > you into a lock-out.
> > > > > > > > > > > >
> > > > > > > > > > > > What is needed is a soft, hard, and a special
> > > > > > > > > > > > emergency limit that would allow you to use the
> > > > > > > > > > > > resource for a limited time to circumvent a
> > > > > > > > > > > > lock-out.
> > > > > > > > > > >
> > > > > > > > > > > How would you reclaim the resource after that limited
> > > > > > > > > > > time is over ? Kill processes?
> > > > > > > > > >
> > > > > > > > > > That's one way, but really, the issue needs some deep
> > > > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > > > frightening.
> > > > > > > > >
> > > > > > > > > What exactly is it that you're worried about here?
> > > > > > > >
> > > > > > > > Think about a DoS attack.
> > > > > > >
> > > > > > > Be more specific. Are you talking about a fork bomb, a ICMP
> > > > > > > flood, what?
> > > >
> > > > Consider this dilemma:
> > > > Runaway proc/s hit the limit.
> > > > Try to kill some and you are denied due to the resource limit.
> > > > Use some previously running app like top, hope it hasn't been killed
> > > > by some OOM situation, try killing some procs and another one takes
> > > > it's place because of the runaway situation.
> > > > Raise the limit, and it gets filled by the runaways.
> > > > You are pretty much stuck.
> > >
> > > Not really, this is the sort of thing ulimit is meant for. To keep
> > > processes from any one user from running away. It lets you limit the
> > > damage it can do, until such time as you can control it and fix the
> > > runaway application.
> >
> > threads-max = 1024
> > ulimit = 100 forks
> > 11 runaway procs hitting the threads-max limit
>
> This is incorrect. If you ulimit a user to 100 forks, and 11 processes
> running with that uid

Different uid.

> If you have a user process that for some reason legitimately needs to try
> use every process resource available in the system, then yes, you are prone
> to a lock out condition

Couldn't this be easily fixed in kernel-space?

Thanks!

--
Al

2005-09-27 17:25:40

by Neil Horman

Subject: Re: Resource limits

On Tue, Sep 27, 2005 at 06:50:01PM +0300, Al Boldi wrote:
> Neil Horman wrote:
> > On Tue, Sep 27, 2005 at 04:42:07PM +0300, Al Boldi wrote:
> > > Neil Horman wrote:
> > > > On Tue, Sep 27, 2005 at 08:08:21AM +0300, Al Boldi wrote:
> > > > > Neil Horman wrote:
> > > > > > On Mon, Sep 26, 2005 at 11:26:10PM +0300, Al Boldi wrote:
> > > > > > > Neil Horman wrote:
> > > > > > > > On Mon, Sep 26, 2005 at 08:32:14PM +0300, Al Boldi wrote:
> > > > > > > > > Neil Horman wrote:
> > > > > > > > > > On Mon, Sep 26, 2005 at 05:18:17PM +0300, Al Boldi wrote:
> > > > > > > > > > > Rik van Riel wrote:
> > > > > > > > > > > > On Sun, 25 Sep 2005, Al Boldi wrote:
> > > > > > > > > > > > > Too many process forks and your system may crash.
> > > > > > > > > > > > > This can be capped with threads-max, but may lead
> > > > > > > > > > > > > you into a lock-out.
> > > > > > > > > > > > >
> > > > > > > > > > > > > What is needed is a soft, hard, and a special
> > > > > > > > > > > > > emergency limit that would allow you to use the
> > > > > > > > > > > > > resource for a limited time to circumvent a
> > > > > > > > > > > > > lock-out.
> > > > > > > > > > > >
> > > > > > > > > > > > How would you reclaim the resource after that limited
> > > > > > > > > > > > time is over ? Kill processes?
> > > > > > > > > > >
> > > > > > > > > > > That's one way, but really, the issue needs some deep
> > > > > > > > > > > thought. Leaving Linux exposed to a lock-out is rather
> > > > > > > > > > > frightening.
> > > > > > > > > >
> > > > > > > > > > What exactly is it that you're worried about here?
> > > > > > > > >
> > > > > > > > > Think about a DoS attack.
> > > > > > > >
> > > > > > > > Be more specific. Are you talking about a fork bomb, a ICMP
> > > > > > > > flood, what?
> > > > >
> > > > > Consider this dilemma:
> > > > > Runaway proc/s hit the limit.
> > > > > Try to kill some and you are denied due to the resource limit.
> > > > > Use some previously running app like top, hope it hasn't been killed
> > > > > by some OOM situation, try killing some procs and another one takes
> > > > > it's place because of the runaway situation.
> > > > > Raise the limit, and it gets filled by the runaways.
> > > > > You are pretty much stuck.
> > > >
> > > > Not really, this is the sort of thing ulimit is meant for. To keep
> > > > processes from any one user from running away. It lets you limit the
> > > > damage it can do, until such time as you can control it and fix the
> > > > runaway application.
> > >
> > > threads-max = 1024
> > > ulimit = 100 forks
> > > 11 runaway procs hitting the threads-max limit
> >
> > This is incorrect. If you ulimit a user to 100 forks, and 11 processes
> > running with that uid
>
> Different uid.
>
Then yes, if you set a system-wide limit that is less than the sum of
the limits imposed on each accountable part of the system, you can have
lock-out (in your example, eleven uids allowed 100 processes each can
together exceed a threads-max of 1024). But that's your fault for
misconfiguring the system. Don't do that.

> > If you have a user process that for some reason legitimately needs to try
> > use every process resource available in the system, then yes, you are prone
> > to a lock out condition
>
> Couldn't this be easily fixed in kernel-space?
>
You're not getting it. The resource limits applied by ulimit (and CKRM,
as far as I know) _are_ enforced in kernel space. The ulimit library
call and its corresponding setrlimit system call set resource limits in
the rlim array that is part of each task struct. These limits are
queried whenever an instance of the corresponding resource is requested
by a user-space process; if the requesting process is over its limit,
the request is denied.
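
A quick way to see that the enforcement happens in the kernel, not in
any library (run this as an unprivileged user that already has a couple
of processes, e.g. a shell):

/* Lower RLIMIT_NPROC below the user's current process count and watch
 * the kernel deny fork() with EAGAIN.  Sketch only. */
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
	struct rlimit rl = { 1, 1 };	/* at most one process for this uid */
	pid_t pid;

	if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}
	pid = fork();
	if (pid < 0) {
		printf("fork denied by the kernel: %s\n",
		       errno == EAGAIN ? "EAGAIN" : "other error");
		return 0;
	}
	if (pid == 0)
		_exit(0);	/* only reached if the limit was not yet exceeded */
	printf("fork unexpectedly succeeded\n");
	return 0;
}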

Regards
Neil

> Thanks!
>
> --
> Al

--
/***************************************************
*Neil Horman
*Software Engineer
*gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
***************************************************/

2005-09-27 21:35:09

by Chandra Seetharaman

Subject: Re: Resource limits

On Tue, 2005-09-27 at 08:08 +0300, Al Boldi wrote:
<snip>
> Consider this dilemma:
> Runaway proc/s hit the limit.
> Try to kill some and you are denied due to the resource limit.
> Use some previously running app like top, hope it hasn't been killed by some
> OOM situation, try killing some procs and another one takes it's place
> because of the runaway situation.
> Raise the limit, and it gets filled by the runaways.
> You are pretty much stuck.

CKRM can solve this problem nicely. You can define classes (for
example, you can define a class, attach it to a user, and associate
resources with that class). Limits will be applied only to that class
(user), failures will be seen only by that class (user), and the rest
of the system will be free to operate without getting into the
situation stated above.

> You may get around the problem by a user-space solution, but this will always
> run the risks associated with user-space.
>
> > > The issue here is a general lack of proper kernel support for resource
> > > limits. The fork problem is just an example.
> >
> > Thats not really true. As Mr. Helsley pointed out, CKRM is available
>
> Matthew Helsley wrote:
> > Have you looked at Class-Based Kernel Resource Managment (CKRM)
> > (http://ckrm.sf.net) to see if it fits your needs? My initial thought is
> > that the CKRM numtasks controller may help limit forks in the way you
> > describe.
>
> Thanks for the link! CKRM is great!

Thank you!! :)
>
> Is there a CKRM-lite version? This would make it easier to be included into

We are currently working on reducing the code size and complexity of
CKRM; it will be a lot thinner and less complex than what was in the
-mm tree a while ago. The development is underway, and you can follow
the progress of the f-series on the ckrm-tech mailing list.

> the mainline, something that would concentrate on the pressing issues, like
> lock-out prevention, and leave all the management features as an option.
>

You are welcome to join the mailing list and provide feedback on how the
f-series shapes up.

Thanks,

chandra
> Thanks!
>
> --
> Al
--

----------------------------------------------------------------------
Chandra Seetharaman | Be careful what you choose....
- [email protected] | .......you may get it.
----------------------------------------------------------------------