2004-11-23 09:25:54

by Andrew Daviel

Subject: towards dynamic resource quotas


Is there any mechanism within Linux to limit the fraction of a resource
(CPU, memory, network bandwidth) which may be allocated to a particular
process or UID? If not, perhaps there should be. The idea is to
ensure that critical tasks can always run, regardless of resource
depletion by other tasks.

As I understand things, at present there is a disk quota system, plus
the rlimit mechanism, which may be used to limit memory, total CPU
time, number of processes, etc. Not all of these are feasible for
limiting the resources of a server process such as httpd: a CPU limit
would cause the process to die after N seconds of accumulated CPU time,
while what is desired is to cap the process at X% of CPU.
- I attended a conference recently on embedded systems, where the
penetration of Linux (and XP) into the embedded computing market was
discussed, along with the enhancements for pre-emptibility in 2.6 and
security issues (patchability of embedded systems, sharing a code base
with ubiquitous desktops & servers). It occurred to me that it would be
advantageous to have a system that could continue to function reliably,
even if infected with malware such as viruses or DDoS agents that would
otherwise saturate resources such as network connectivity or CPU.

I could imagine the following scenario: a device is developed which is
responsible for monitoring an implanted pacemaker and for administering
drugs as required to a heart patient. It has networking capability to
call a physician if required; let's make it smart enough to make an
"intelligent" call anywhere in the world. Let's add a server (sshd
perhaps) for the physician to log in and check the history, and, though
it would be naive to trust a perimeter firewall in a hospital, let's make
it a wireless device and place it on an untrusted network. What happens
if the ssh server is exploited by an intruder to send spam, or a
programming error causes the emergency calling routine to loop if it
encounters a busy signal in the Guatemala phone system? The primary
task, dispensing drugs, must continue to run even though another process
attempts to use 100% of CPU or 100% of network bandwidth.
It is not clear to me that existing mechanisms such as nice, rlimit, or
pre-emption can guarantee this.
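
As a minimal sketch of why nice alone is not enough (the endless loop
stands in for the runaway process):

#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
        /* nice 19 is only a relative priority: whenever the critical
         * task sleeps (on a timer, on I/O), this loop is handed the
         * CPU again, so its long-term share is unbounded - and
         * priority says nothing about memory or network bandwidth. */
        setpriority(PRIO_PROCESS, 0, 19);

        for (;;)
                ;       /* runaway work */
}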


--
Andrew Daviel, TRIUMF, Canada
Tel. +1 (604) 222-7376
[email protected]


2004-11-23 11:04:50

by Amon Ott

Subject: Re: towards dynamic resource quotas

On Tuesday, 23 November 2004 10:25, Andrew Daviel wrote:
> Is there any mechanism within Linux to limit the fraction of a resource
> (CPU, memory, network bandwidth) which may be allocated to a particular
> process or UID? If not, perhaps there should be. The idea is to
> ensure that critical tasks can always run, regardless of resource
> depletion by other tasks.

The Class-based Kernel Resource Management (CKRM) project at
http://ckrm.sourceforge.net/ is working on this.

Amon.
--
http://www.rsbac.org - GnuPG: 2048g/5DEAAA30 2002-10-22