Hi,
I would like to limit the maximum resident memory size
of a process within a threshold, i.e. if its virtual
memory footprint exceeds this threshold, it needs to
swap out pages *only* from within its VM space.
First, is there a way this can be done at the application
level? The setrlimit interface seems to contain an option
for specifying the maximum resident set size, but it
doesn't seem to be implemented as of 2.4 -- am I wrong?
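For concreteness, the call I have in mind is something like the
following (a minimal sketch only; the 64 MB figure is just an
example, and whether 2.4 actually enforces the limit is exactly
what I am unsure about):

    /* Minimal sketch: request a 64 MB resident-set limit via
     * setrlimit().  RLIMIT_RSS is accepted by Linux, but as far
     * as I can tell 2.4 does not enforce it during page
     * replacement. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
            struct rlimit rl;

            rl.rlim_cur = 64UL * 1024 * 1024;   /* soft limit (bytes) */
            rl.rlim_max = 64UL * 1024 * 1024;   /* hard limit (bytes) */

            if (setrlimit(RLIMIT_RSS, &rl) != 0) {
                    perror("setrlimit(RLIMIT_RSS)");
                    return 1;
            }

            /* ... run or exec the memory-hungry work here ... */
            return 0;
    }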
If the kernel doesn't currently support it, is there an
efficient way (data structure, etc.) to traverse the
resident set of a *process* in LRU fashion? All the page
replacement and swapping code works on the global page
lists -- is there any simple way to group these per process?
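What I had in mind is something along these lines (a rough,
untested sketch against roughly 2.4-era page-table macros; the
function name is only illustrative, a real version would hold
mm->page_table_lock, and this just enumerates resident pages
rather than ordering them by recency of use):

    /* Rough sketch: visit/count the resident pages of a task by
     * walking its VMAs and page tables.  Based loosely on 2.4-era
     * interfaces (pgd_offset/pmd_offset/pte_offset); untested and
     * meant only to illustrate the idea. */
    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <asm/pgtable.h>

    static unsigned long count_resident_pages(struct task_struct *tsk)
    {
            struct mm_struct *mm = tsk->mm;
            struct vm_area_struct *vma;
            unsigned long addr, resident = 0;

            if (!mm)
                    return 0;       /* kernel thread: no user pages */

            for (vma = mm->mmap; vma; vma = vma->vm_next) {
                    for (addr = vma->vm_start; addr < vma->vm_end;
                         addr += PAGE_SIZE) {
                            pgd_t *pgd = pgd_offset(mm, addr);
                            pmd_t *pmd;
                            pte_t *pte;

                            if (pgd_none(*pgd) || pgd_bad(*pgd))
                                    continue;
                            pmd = pmd_offset(pgd, addr);
                            if (pmd_none(*pmd) || pmd_bad(*pmd))
                                    continue;
                            pte = pte_offset(pmd, addr);
                            if (pte_present(*pte))
                                    resident++;     /* page is in core */
                    }
            }
            return resident;
    }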
thanks for any pointers,
Muthian.
> I would like to limit the maximum resident memory size
> of a process within a threshold, i.e. if its virtual
> memory footprint exceeds this threshold, it needs to
> swap out pages *only* from within its VM space.
Why? If you think this is a good way to be nice to other processes, you're
wrong.
> First, is there a way this can be done at application
> level ? The setrlimit interface seems to contain an
> option for specifying max resident set size, but it
> doesnt seem like it is implemented as of 2.4 -- am I
> wrong ?
> If the kernel doesnt currently support it, is there an
> efficient way (data structure etc) to traverse the
> resident set of a *process* in lru fashion ? All the
> page replacement and swapping code work on the entire
> page lists -- is there any simple way to group these
> per process ?
One process paging and swapping excessively will hurt other processes that
aren't. What's your outer problem? What you're trying to do doesn't seem to
have any rational purpose.
DS
David Schwartz wrote:
>>I would like to limit the maximum resident memory size
>>of a process within a threshold, i.e. if its virtual
>>memory footprint exceeds this threshold, it needs to
>>swap out pages *only* from within its VM space.
>
>
> Why? If you think this is a good way to be nice to other processes, you're
> wrong.
Why is he wrong?
Well, the goal is to enforce strict upper bounds on how many
resources a process can consume, including memory, disk
bandwidth, etc. I understand that this may not give the best
aggregate system performance, but neither does any proportional
sharing scheme. The impact of swapping/paging on the other
processes can be minimized by rate-limiting the disk I/O that
the process does, for swapping or anything else.
Muthian.
--- David Schwartz <[email protected]> wrote:
>
> > I would like to limit the maximum resident memory size
> > of a process within a threshold, i.e. if its virtual
> > memory footprint exceeds this threshold, it needs to
> > swap out pages *only* from within its VM space.
>
> Why? If you think this is a good way to be nice to
> other processes, you're wrong.
>
> > First, is there a way this can be done at the application
> > level? The setrlimit interface seems to contain an
> > option for specifying the maximum resident set size, but
> > it doesn't seem to be implemented as of 2.4 -- am I wrong?
>
> > If the kernel doesn't currently support it, is there an
> > efficient way (data structure, etc.) to traverse the
> > resident set of a *process* in LRU fashion? All the
> > page replacement and swapping code works on the global
> > page lists -- is there any simple way to group these
> > per process?
>
> One process paging and swapping excessively will hurt
> other processes that aren't. What's your outer problem?
> What you're trying to do doesn't seem to have any
> rational purpose.
>
> DS
> David Schwartz wrote:
> >>I would like to limit the maximum resident memory size
> >>of a process within a threshold, i.e. if its virtual
> >>memory footprint exceeds this threshold, it needs to
> >>swap out pages *only* from within its VM space.
> > Why? If you think this is a good way to be nice to other
> > processes, you're wrong.
> Why is he wrong?
Because increasing the amount of swapping and paging will slow the system
down overall. Other processes will be interrupted more frequently and cache
effectiveness will decline. If the disks are shared, the additional disk
access will slow down other processes on the system as well.
It's also not clear how shared pages should be handled. If this process
causes large chunks of a shared library to be resident that wouldn't be
otherwise, should this be charged against the process or not? If you exempt
all shared memory, you not only create a hole a malicious process could
drive a truck through, but you also don't measure accurately.
If the process has a limited amount of work to do, it's much more sensible
to just let it get done using the memory it needs to run quickly so it can
get out of the way of other processes. If the process has an unlimited
amount of work to do, it makes more sense to control its use of processor
resources, which will inherently limit its resident set size.
Basically, which pages should be resident is just one of those things the
system knows better than you. Trying to make things better for one process
may wind up making them worse as the system as a whole bogs down.
Overall, this just doesn't strike me as a sensible thing to do. Depending
upon what effect he's trying to achieve, there are probably more sensible
ways to do it.
DS
On Thu, 12 Jun 2003, David Schwartz wrote:
> > I would like to limit the maximum resident memory size
> > of a process within a threshold, i.e. if its virtual
> > memory footprint exceeds this threshold, it needs to
> > swap out pages *only* from within its VM space.
>
> Why? If you think this is a good way to be nice to other
> processes, you're wrong.
RSS limits are a good idea, provided that they are only
enforced when the system is low on memory. Once the system
starts swapping and is into the "lots of disk IO" territory
anyway, it can be a good idea to have the processes that
exceed their RSS limit suffer more than the ones that don't.
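To illustrate the policy (this is only a sketch, not existing
kernel code; rss_limit below is a hypothetical per-mm field that
an RSS-limit patch would have to add), the reclaim path could
prefer pages from over-limit address spaces only when memory is
actually tight:

    /* Illustrative policy sketch only -- not existing kernel code.
     * mm->rss exists in 2.4; rss_limit is a hypothetical per-mm
     * field an RSS-limit patch would introduce. */
    static int should_steal_from(struct mm_struct *mm, int memory_pressure)
    {
            if (!memory_pressure)
                    return 0;   /* plenty of memory: enforce nothing   */
            if (mm->rss > mm->rss_limit)
                    return 1;   /* over its limit: reclaim here first  */
            return 0;           /* under its limit: spare it for now   */
    }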
On Thu, Jun 12, 2003 at 03:15:13PM -0700, David Schwartz wrote:
>
> > I would like to limit the maximum resident memory size
> > of a process within a threshold, i.e. if its virtual
> > memory footprint exceeds this threshold, it needs to
> > swap out pages *only* from within its VM space.
>
> Why? If you think this is a good way to be nice to other
> processes, you're wrong.
I have to disagree. I used to use a Digital Unix system, which had this
feature, to do software development. The program I was working on was
large, and linking it required more memory than the 128M that was installed
on the computer. All my makes ended with a 10-minute swap storm during
which the computer was virtually useless.
I discovered that if I limited the RSS of the link process so that it left
a few megs of memory free then I could read mail or look around the web
while the link was running. This of course slowed down the link, but I
was surprised by how little it suffered. It might have been 10% slower,
and the trade-off was that I could use the machine while it was working
rather than sitting there looking at it.
Thanks,
Jim
> Because increasing the amount of swapping and paging will slow
> the system down overall. Other processes will be interrupted
> more frequently and cache effectiveness will decline. If the
> disks are shared, the additional disk access will slow down
> other processes on the system as well.
The goals of resource isolation and optimal overall performance
are often in conflict, and there are definitely cases where one
really needs performance isolation across processes in spite of
the slight performance hit this may entail.
> It's also not clear how shared pages should be handled. If this
> process causes large chunks of a shared library to be resident
> that wouldn't be otherwise, should this be charged against the
> process or not? If you exempt all shared memory, you not only
> create a hole a malicious process could drive a truck through,
> but you also don't measure accurately.
>
I agree that's an issue -- how to treat shared pages is a policy
decision which may vary with the exact requirements, but the
general mechanism of being able to monitor resident sets on a
per-process basis still seems useful. How to charge shared pages
could be a user-specified parameter.
> If the process has a limited amount of work to do, it's much
> more sensible to just let it get done using the memory it needs
> to run quickly so it can get out of the way of other processes.
> If the process has an unlimited amount of work to do, it makes
> more sense to control its use of processor resources, which
> will inherently limit its resident set size.
>
I am concerned about reasonably long-running processes -- in
those cases, limiting CPU usage need not necessarily lead to
bounds on the resident set size. What about a process that reads
in a large file about the size of main memory? It doesn't need
much CPU, but it can fill up your memory.
> Basically, which pages should be resident is just one of those
> things the system knows better than you. Trying to make things
> better for one process may wind up making them worse as the
> system as a whole bogs down.
>
Again, I disagree. Though that may be true in certain cases,
there are situations where a user knows the relative importance
of the processes he runs better than the system does. Wherever
performance isolation is desired, having per-process control
over resource usage is definitely a useful mechanism.
Muthian.