2011-03-28 11:10:05

by Kamezawa Hiroyuki

Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Mon, 28 Mar 2011 11:39:57 +0200
Michal Hocko <[email protected]> wrote:

> Hi all,
>
> Memory cgroups can currently be used to throttle the memory usage of a group
> of processes. They cannot, however, be used to isolate processes from the rest
> of the system, because all the pages that belong to the group are also placed
> on the global LRU lists and so are eligible for global memory reclaim.
>
> This patchset aims at providing an opt-in memory cgroup isolation. This
> means that a cgroup can be configured to be isolated from the rest of the
> system by means of cgroup virtual filesystem (/dev/memctl/group/memory.isolated).
>
> An isolated mem cgroup can be particularly helpful in deployments where we have
> a primary service which needs certain guarantees for memory
> resources (e.g. a database server) and we want to shield it from the
> rest of the system (e.g. a burst of memory activity in another group). This is
> currently possible only by mlocking memory that is essential for the
> application(s), or by a rather hacky configuration where the primary app is in
> the root mem cgroup while all the other system activity happens in other
> groups.
>
> mlocking is not an ideal solution all the time, because sometimes the working
> set is very large and depends on the workload (e.g. the number of incoming
> requests), so it can end up not fitting into memory (leading to the OOM
> killer). If we use mem cgroup isolation instead, we keep the memory resident,
> and if the working set goes wild we can still do per-cgroup reclaim, so the
> service is less prone to being OOM killed.
>
> The patch series is split into 3 patches. The first one adds a new flag to the
> mem_cgroup structure which controls whether the group is isolated (false by
> default) and a cgroup fs interface to set it.
> The second patch implements the interaction with the global LRU. The current
> semantic is that we put a page on the global LRU only if the mem cgroup
> LRU functions say they do not want the page for themselves.
> The last patch prevents soft reclaim if the group is isolated.
>
> I have tested the patches with a simple memory consumer (allocating
> private and shared anon memory and SYSV SHM).
>
> One instance (call it the big consumer) runs in the group, pages in
> memory (>90% of the cgroup limit), and then sleeps for the rest of its
> life. A pool of consumers runs in the same cgroup, each paging in a
> smaller amount of memory in a loop to simulate in-group memory
> pressure (call them sharks).
> The sum of the consumed memory is more than memory.limit_in_bytes, so some
> portion of the memory is swapped out.
> One more consumer runs in the root cgroup in parallel and puts pressure
> on memory (to trigger background reclaim).
>
> The rss+cache of the group drops significantly (to ~66% of the limit) if the
> group is not isolated. On the other hand, if we isolate the group, we still
> saturate the group (~97% of the limit). I can show more
> comprehensive results if somebody is interested.
>
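For concreteness, the opt-in knob described in the quoted posting would presumably be driven through the cgroup virtual filesystem roughly as follows. This is only a sketch based on the description above: the mount point matches the one in the posting, and the `memory.isolated` file exists only with the RFC patches applied.

```shell
# Mount the memory controller at the path used in the posting.
mkdir -p /dev/memctl
mount -t cgroup -o memory memctl /dev/memctl

# Create a group for the primary service and cap its usage.
mkdir /dev/memctl/group
echo 1G > /dev/memctl/group/memory.limit_in_bytes

# Opt the group out of global reclaim (knob added by the RFC patches).
echo 1 > /dev/memctl/group/memory.isolated

# Move the service into the isolated group.
echo $SERVICE_PID > /dev/memctl/group/tasks
```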

Isn't this the same result as the case where no cgroup is used at all?
What is the problem?
Why is this not a configuration problem?
IIUC, you can put all logins into some cgroup by using cgroupd/libcgroup.
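A sketch of that configuration, using plain cgroupfs writes instead of cgroupd/libcgroup (the mount point and limit are arbitrary):

```shell
# Mount the memory controller and create a capped group for login sessions.
mkdir -p /cgroups/memory
mount -t cgroup -o memory none /cgroups/memory
mkdir /cgroups/memory/logins
echo 512M > /cgroups/memory/logins/memory.limit_in_bytes

# Move each fresh login session into the group; libcgroup's
# cgclassify / pam_cgroup would automate this step.
echo $LOGIN_PID > /cgroups/memory/logins/tasks
```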

> Thanks for comments.
>


Maybe you just want a "guarantee".
At first thought, this approach has 3 problems, and memcg is designed
never to prevent global VM scans:

1. This cannot be used as a "guarantee"; it is just a way to say "don't steal from me!!!"
This just implements a "first come, first served" system.
I guess this can be used in server designs... but only with very, very careful play.
If an application exits and loses its memory, there is no guarantee anymore.

2. Even with isolation, a task in a memcg can be killed by the OOM killer on
global memory shortage.

3. It seems this will add more page fragmentation if implemented poorly; IOW,
can this work with compaction?



I think of other approaches.

1. Enhance cpuset + node hotplug.
At boot, hide most of the memory from the system by a boot option.
You could rename the node id of "all unused memory" and create arbitrary nodes
if the kernel had an interface for it. You could add virtual nodes and move
pages between nodes by renaming them.

This would allow you to create a safe box dynamically. If you move pages in
units of MAX_ORDER, you don't add any fragmentation.
(But with this approach, you need to take care of tasks in the root cgroup, too.)


2. Allow a mount option to link the ROOT cgroup's LRU and add a limit for the
root cgroup. Then softlimit will work well.
(If softlimit doesn't work, it's a bug. That will be an enhancement point.)
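The softlimit knob mentioned here is an existing memcg interface. A minimal sketch of using it, assuming the memory controller is mounted at /cgroups/memory (paths and values are illustrative):

```shell
# Soft limits only take effect under global memory pressure:
# groups above their soft limit are reclaimed from first.
mkdir /cgroups/memory/background
echo 256M > /cgroups/memory/background/memory.soft_limit_in_bytes
echo $NOISY_PID > /cgroups/memory/background/tasks
```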


Thanks,
-Kame

2011-03-28 11:44:34

by Michal Hocko

Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Mon 28-03-11 20:03:32, KAMEZAWA Hiroyuki wrote:
> On Mon, 28 Mar 2011 11:39:57 +0200
> Michal Hocko <[email protected]> wrote:
[...]
>
> Isn't it the same result with the case where no cgroup is used ?

Yes, and that is the point of the patchset. Memory cgroups will not give
you anything beyond the top limit with respect to global memory activity.

> What is the problem ?

That we cannot prevent the memory of process(es) from being paged out because
of unrelated memory activity, even though we have intentionally isolated them
in a group (read: we do not have any other way to get the isolation).

> Why it's not a problem of configuration ?
> IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.

Yes, but this still doesn't bring isolation.

> Maybe you just want "guarantee".
> At 1st thought, this approarch has 3 problems. And memcg is desgined
> never to prevent global vm scans,
>
> 1. This cannot be used as "guarantee". Just a way for "don't steal from me!!!"
> This just implements a "first come, first served" system.
> I guess this can be used for server desgines.....only with very very careful play.
> If an application exits and lose its memory, there is no guarantee anymore.

Yes, but once it has got the memory and needs to have it (or benefits from
having it resident whatever happens around it), then there is no solution
other than mlocking the memory, which is not an ideal solution all the
time, as I have described already.

>
> 2. Even with isolation, a task in memcg can be killed by OOM-killer at
> global memory shortage.

Yes, it can, but I think this is a different problem. Once you are that
short of memory you can hardly ask for any guarantees.
There is no 100% guarantee about anything in the system.

>
> 3. it seems this will add more page fragmentation if implemented poorly, IOW,
> can this be work with compaction ?

Why would it add any fragmentation? We compact memory based on
pfn-range scanning rather than by walking the global LRU list, don't we?

> I think of other approaches.
>
> 1. cpuset+nodehotplug enhances.
> At boot, hide most of memory from the system by boot option.
> You can rename node-id of "all unused memory" and create arbitrary nodes
> if the kernel has an interface. You can add a virtual nodes and move
> pages between nodes by renaming it.
>
> This will allow you to create a safe box dynamically.

This sounds as if it requires completely new infrastructure in many
parts of the VM code.

> If you move pages in
> the order of MAX_ORDER, you don't add any fragmentation.
> (But with this way, you need to avoid tasks in root cgrou, too.)
>
>
> 2. allow a mount option to link ROOT cgroup's LRU and add limit for
> root cgroup. Then, softlimit will work well.
> (If softlimit doesn't work, it's bug. That will be an enhancement point.)

So you mean that the root cgroup would be a normal group like any other?

Thanks
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-29 00:15:48

by Kamezawa Hiroyuki

Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Mon, 28 Mar 2011 13:44:30 +0200
Michal Hocko <[email protected]> wrote:

> On Mon 28-03-11 20:03:32, KAMEZAWA Hiroyuki wrote:
> > On Mon, 28 Mar 2011 11:39:57 +0200
> > Michal Hocko <[email protected]> wrote:
> [...]
> >
> > Isn't it the same result with the case where no cgroup is used ?
>
> Yes and that is the point of the patchset. Memory cgroups will not give
> you anything else but the top limit wrt. to the global memory activity.
>
> > What is the problem ?
>
> That we cannot prevent from paging out memory of process(es), even though
> we have intentionaly isolated them in a group (read as we do not have
> any other possibility for the isolation), because of unrelated memory
> activity.
>
Because the design of the memory cgroup is not about "defending" but about
"never attacking the other guys".


> > Why it's not a problem of configuration ?
> > IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.
>
> Yes, but this still doesn't bring the isolation.
>

Please explain this more.
Why don't you move all tasks under /root/default <- this has some limit?


> > Maybe you just want "guarantee".
> > At 1st thought, this approarch has 3 problems. And memcg is desgined
> > never to prevent global vm scans,
> >
> > 1. This cannot be used as "guarantee". Just a way for "don't steal from me!!!"
> > This just implements a "first come, first served" system.
> > I guess this can be used for server desgines.....only with very very careful play.
> > If an application exits and lose its memory, there is no guarantee anymore.
>
> Yes, but once it got the memory and it needs to have it or benefits from
> having it resindent what-ever happens around then there is no other
> solution than mlocking the memory which is not ideal solution all the
> time as I have described already.
>

Yes; and then almost all mm guys' answer has been "please use mlock".



> >
> > 2. Even with isolation, a task in memcg can be killed by OOM-killer at
> > global memory shortage.
>
> Yes it can but I think this is a different problem. Once you are that
> short of memory you can hardly ask from any guarantees.
> There is no 100% guarantee about anything in the system.
>

I think you should move the tasks in the root cgroup somewhere else. That works
perfectly against OOM. And if memory is hidden by isolation, OOM will happen more easily.


> >
> > 3. it seems this will add more page fragmentation if implemented poorly, IOW,
> > can this be work with compaction ?
>
> Why would it add any fragmentation. We are compacting memory based on
> the pfn range scanning rather than walking global LRU list, aren't we?
>

Please forget, I misunderstood.




> > I think of other approaches.
> >
> > 1. cpuset+nodehotplug enhances.
> > At boot, hide most of memory from the system by boot option.
> > You can rename node-id of "all unused memory" and create arbitrary nodes
> > if the kernel has an interface. You can add a virtual nodes and move
> > pages between nodes by renaming it.
> >
> > This will allow you to create a safe box dynamically.
>
> This sounds as it requires a completely new infrastructure for many
> parts of VM code.
>

Not so many parts, I guess. I think I can write a prototype in a week,
if I have time.


> > If you move pages in
> > the order of MAX_ORDER, you don't add any fragmentation.
> > (But with this way, you need to avoid tasks in root cgrou, too.)
> >
> >
> > 2. allow a mount option to link ROOT cgroup's LRU and add limit for
> > root cgroup. Then, softlimit will work well.
> > (If softlimit doesn't work, it's bug. That will be an enhancement point.)
>
> So you mean that the root cgroup would be a normal group like any other?
>

If necessary. The root cgroup has no limit/LRU/etc... just for gaining performance.
If admins can accept the cost (2-5% now?), I think we can add knobs as a boot
option or some such.

Anyway, for softlimit etc. to work in an ideal way, the admin should put all
tasks into some memcg which has limits.

Thanks,
-Kame

2011-03-29 07:32:39

by Michal Hocko

Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue 29-03-11 09:09:24, KAMEZAWA Hiroyuki wrote:
> On Mon, 28 Mar 2011 13:44:30 +0200
> Michal Hocko <[email protected]> wrote:
>
> > On Mon 28-03-11 20:03:32, KAMEZAWA Hiroyuki wrote:
> > > On Mon, 28 Mar 2011 11:39:57 +0200
> > > Michal Hocko <[email protected]> wrote:
> > [...]
> > >
> > > Isn't it the same result with the case where no cgroup is used ?
> >
> > Yes and that is the point of the patchset. Memory cgroups will not give
> > you anything else but the top limit wrt. to the global memory activity.
> >
> > > What is the problem ?
> >
> > That we cannot prevent from paging out memory of process(es), even though
> > we have intentionaly isolated them in a group (read as we do not have
> > any other possibility for the isolation), because of unrelated memory
> > activity.
> >
> Because the design of memory cgroup is not for "defending" but for
> "never attack some other guys".

Yes, I am aware of the current state of the implementation. But as the
patchset shows, it is not quite trivial to implement the other
(defending) part as well.

>
>
> > > Why it's not a problem of configuration ?
> > > IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.
> >
> > Yes, but this still doesn't bring the isolation.
> >
>
> Please explain this more.
> Why don't you move all tasks under /root/default <- this has some limit ?

OK, I have tried to explain that in the description of the 2nd patch.
If I move all tasks from the root group to other group(s) and keep the
primary application in the root group, I would achieve some isolation as
well. That is very much true. But then there is only one such group.
What if we need more such groups? I see this solution more as a misuse
of the current implementation of the (special) root cgroup.

> > > Maybe you just want "guarantee".
> > > At 1st thought, this approarch has 3 problems. And memcg is desgined
> > > never to prevent global vm scans,
> > >
> > > 1. This cannot be used as "guarantee". Just a way for "don't steal from me!!!"
> > > This just implements a "first come, first served" system.
> > > I guess this can be used for server desgines.....only with very very careful play.
> > > If an application exits and lose its memory, there is no guarantee anymore.
> >
> > Yes, but once it got the memory and it needs to have it or benefits from
> > having it resindent what-ever happens around then there is no other
> > solution than mlocking the memory which is not ideal solution all the
> > time as I have described already.
> >
>
> Yes, then, almost all mm guys answer has been "please use mlock".

Yes. As I have already tried to explain, mlock is not the remedy all the
time. It gets very tricky when you balance on the edge of the limit of
the available memory, resp. the cgroup limit. Sometimes you would rather
have something swapped out than be killed (or fail due to ENOMEM).
The important thing about "swapped out" above is that with the isolation
it happens only per-cgroup.

> > > 2. Even with isolation, a task in memcg can be killed by OOM-killer at
> > > global memory shortage.
> >
> > Yes it can but I think this is a different problem. Once you are that
> > short of memory you can hardly ask from any guarantees.
> > There is no 100% guarantee about anything in the system.
> >
>
> I think you should put tasks in root cgroup to somewhere. It works perfect
> against OOM. And if memory are hidden by isolation, OOM will happen easier.

Why do you think it would happen more easily? Isn't it similar (from the OOM
POV) to somebody having mlocked that memory?

Thanks for comments
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-29 07:57:49

by Kamezawa Hiroyuki

Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue, 29 Mar 2011 09:32:32 +0200
Michal Hocko <[email protected]> wrote:

> On Tue 29-03-11 09:09:24, KAMEZAWA Hiroyuki wrote:
> > On Mon, 28 Mar 2011 13:44:30 +0200
> > Michal Hocko <[email protected]> wrote:
> >
> > > On Mon 28-03-11 20:03:32, KAMEZAWA Hiroyuki wrote:
> > > > On Mon, 28 Mar 2011 11:39:57 +0200
> > > > Michal Hocko <[email protected]> wrote:
> > > [...]
> > > >
> > > > Isn't it the same result with the case where no cgroup is used ?
> > >
> > > Yes and that is the point of the patchset. Memory cgroups will not give
> > > you anything else but the top limit wrt. to the global memory activity.
> > >
> > > > What is the problem ?
> > >
> > > That we cannot prevent from paging out memory of process(es), even though
> > > we have intentionaly isolated them in a group (read as we do not have
> > > any other possibility for the isolation), because of unrelated memory
> > > activity.
> > >
> > Because the design of memory cgroup is not for "defending" but for
> > "never attack some other guys".
>
> Yes, I am aware of the current state of implementation. But as the
> patchset show there is not quite trivial to implement also the other
> (defending) part.
>

My opinion is that enhancing softlimit is better.


> >
> >
> > > > Why it's not a problem of configuration ?
> > > > IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.
> > >
> > > Yes, but this still doesn't bring the isolation.
> > >
> >
> > Please explain this more.
> > Why don't you move all tasks under /root/default <- this has some limit ?
>
> OK, I have tried to explain that in one of the (2nd) patch description.
> If I move all task from the root group to other group(s) and keep the
> primary application in the root group I would achieve some isolation as
> well. That is very much true.

Okay, then the current implementation works well.

> But then there is only one such a group.

I can't catch what you mean. You can create a limitless cgroup anywhere,
can't you?

> What if we need more such groups? I see this solution more as a misuse
> of the current implementation of the (special) root cgroup.
>

Make a limitless cgroup and set its softlimit properly, if necessary.
But as I said in another e-mail, softlimit should be improved.


> > > > Maybe you just want "guarantee".
> > > > At 1st thought, this approarch has 3 problems. And memcg is desgined
> > > > never to prevent global vm scans,
> > > >
> > > > 1. This cannot be used as "guarantee". Just a way for "don't steal from me!!!"
> > > > This just implements a "first come, first served" system.
> > > > I guess this can be used for server desgines.....only with very very careful play.
> > > > If an application exits and lose its memory, there is no guarantee anymore.
> > >
> > > Yes, but once it got the memory and it needs to have it or benefits from
> > > having it resindent what-ever happens around then there is no other
> > > solution than mlocking the memory which is not ideal solution all the
> > > time as I have described already.
> > >
> >
> > Yes, then, almost all mm guys answer has been "please use mlock".
>
> Yes. As I already tried to explain, mlock is not the remedy all the
> time. It gets very tricky when you balance on the edge of the limit of
> the available memory resp. cgroup limit. Sometimes you rather want to
> have something swapped out than being killed (or fail due to ENOMEM).
> The important thing about swapped out above is that with the isolation
> it is only per-cgroup.
>

IMHO, doing isolation by hiding is not a good idea. Because we're kernel
engineers, we should do isolation by scheduling; the kernel is the art of
scheduling, not separation. I think we should start from some scheduling
mechanism such as softlimit. Then, as an extreme case of scheduling, 'complete
isolation' could be achieved. If that seems impossible after trying to make
softlimit better, okay, we should consider something else.

BTW, if you want, please post a patch to enable limit/softlimit on the ROOT
cgroup, with performance measurements.
I myself have no requirements...


> > > > 2. Even with isolation, a task in memcg can be killed by OOM-killer at
> > > > global memory shortage.
> > >
> > > Yes it can but I think this is a different problem. Once you are that
> > > short of memory you can hardly ask from any guarantees.
> > > There is no 100% guarantee about anything in the system.
> > >
> >
> > I think you should put tasks in root cgroup to somewhere. It works perfect
> > against OOM. And if memory are hidden by isolation, OOM will happen easier.
>
> Why do you think that it would happen easier? Isn't it similar (from OOM
> POV) as if somebody mlocked that memory?
>

If the global LRU scan cannot find victim memory, OOM happens.

Thanks,
-Kame



2011-03-29 08:59:46

by Michal Hocko

Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue 29-03-11 16:51:17, KAMEZAWA Hiroyuki wrote:
> On Tue, 29 Mar 2011 09:32:32 +0200
> Michal Hocko <[email protected]> wrote:
>
> > On Tue 29-03-11 09:09:24, KAMEZAWA Hiroyuki wrote:
> > > On Mon, 28 Mar 2011 13:44:30 +0200
> > > Michal Hocko <[email protected]> wrote:
> > >
> > > > On Mon 28-03-11 20:03:32, KAMEZAWA Hiroyuki wrote:
> > > > > On Mon, 28 Mar 2011 11:39:57 +0200
> > > > > Michal Hocko <[email protected]> wrote:
> > > > [...]
> > > > >
> > > > > Isn't it the same result with the case where no cgroup is used ?
> > > >
> > > > Yes and that is the point of the patchset. Memory cgroups will not give
> > > > you anything else but the top limit wrt. to the global memory activity.
> > > >
> > > > > What is the problem ?
> > > >
> > > > That we cannot prevent from paging out memory of process(es), even though
> > > > we have intentionaly isolated them in a group (read as we do not have
> > > > any other possibility for the isolation), because of unrelated memory
> > > > activity.
> > > >
> > > Because the design of memory cgroup is not for "defending" but for
> > > "never attack some other guys".
> >
> > Yes, I am aware of the current state of implementation. But as the
> > patchset show there is not quite trivial to implement also the other
> > (defending) part.
> >
>
> My opinions is to enhance softlimit is better.

I will look at how softlimit can be enhanced to match the expectations, but
I'm kind of suspicious whether it can handle workloads where the heuristics
simply cannot guess that the resident memory is important even though it
hasn't been touched for a long time.

> > > > > Why it's not a problem of configuration ?
> > > > > IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.
> > > >
> > > > Yes, but this still doesn't bring the isolation.
> > > >
> > >
> > > Please explain this more.
> > > Why don't you move all tasks under /root/default <- this has some limit ?
> >
> > OK, I have tried to explain that in one of the (2nd) patch description.
> > If I move all task from the root group to other group(s) and keep the
> > primary application in the root group I would achieve some isolation as
> > well. That is very much true.
>
> Okay, then, current works well.
>
> > But then there is only one such a group.
>
> I can't catch what you mean. you can create limitless cgroup, anywhere.
> Can't you ?

This is not about limits. This is about global vs. per-cgroup reclaim
and how much they interact.

The everything-in-groups approach with the "primary" service in the root
group (or call it unlimited) works just because all the memory activity
(except for the primary service) is capped by the limits, so the rest of
the memory can be used by the service. Moreover, in order for this to work,
the limits of the other groups would have to be smaller than the working
set of the primary service.

Even if you created a limitless group for another important service, the
two would still interact, and if one went wild the other would
suffer from it.

But, well, I might be wrong about this; I will play with it and see how it
works.

[...]
> > > Yes, then, almost all mm guys answer has been "please use mlock".
> >
> > Yes. As I already tried to explain, mlock is not the remedy all the
> > time. It gets very tricky when you balance on the edge of the limit of
> > the available memory resp. cgroup limit. Sometimes you rather want to
> > have something swapped out than being killed (or fail due to ENOMEM).
> > The important thing about swapped out above is that with the isolation
> > it is only per-cgroup.
> >
>
> IMHO, doing isolation by hiding is not good idea.

It depends on what you want to guarantee.

> Because we're kernel engineer, we should do isolation by
> scheduling. The kernel is art of shceduling, not separation.

Well, I would disagree with this statement (to some extent, of course).
Cgroups are quite often used for separation (e.g. cpusets basically
hide tasks from CPUs that are not configured for them).

You are certainly right that memory management is about proper
scheduling and about balancing needs vs. demands. And it has turned out
to work fine in many (maybe even most) workloads (modulo bugs
which are fixed over time). But if an application has more specific
requirements for its memory usage, then it is quite limited in the ways
it can achieve them (mlock is one way to pin the memory, but there
are cases where it is not appropriate).
The kernel will simply never know the complete picture and has to rely on
heuristics which will never suit everybody.


> I think we should start from some scheduling as softlimit. Then,
> as an extreme case of scheduling, 'complete isolation' should be
> archived. If it seems impossible after trial of making softlimit
> better, okay, we should consider some.

As I have already tried to point out, whatever scheduling does, it has no
way to guess that somebody needs to be isolated unless he tells the
kernel so.
Anyway, I will have a look at whether softlimit can be used and how helpful
it would be.

[...]
> > > I think you should put tasks in root cgroup to somewhere. It works perfect
> > > against OOM. And if memory are hidden by isolation, OOM will happen easier.
> >
> > Why do you think that it would happen easier? Isn't it similar (from OOM
> > POV) as if somebody mlocked that memory?
> >
>
> if global lru scan cannot find victim memory, oom happens.

Yes, but this will happen with mlocked memory as well, right?

--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-29 09:48:07

by Kamezawa Hiroyuki

Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue, 29 Mar 2011 10:59:43 +0200
Michal Hocko <[email protected]> wrote:

> On Tue 29-03-11 16:51:17, KAMEZAWA Hiroyuki wrote:
> > On Tue, 29 Mar 2011 09:32:32 +0200
> > Michal Hocko <[email protected]> wrote:
> >
> > > On Tue 29-03-11 09:09:24, KAMEZAWA Hiroyuki wrote:
> > > > On Mon, 28 Mar 2011 13:44:30 +0200
> > > > Michal Hocko <[email protected]> wrote:
> > > >
> > > > > On Mon 28-03-11 20:03:32, KAMEZAWA Hiroyuki wrote:
> > > > > > On Mon, 28 Mar 2011 11:39:57 +0200
> > > > > > Michal Hocko <[email protected]> wrote:
> > > > > [...]
> > > > > >
> > > > > > Isn't it the same result with the case where no cgroup is used ?
> > > > >
> > > > > Yes and that is the point of the patchset. Memory cgroups will not give
> > > > > you anything else but the top limit wrt. to the global memory activity.
> > > > >
> > > > > > What is the problem ?
> > > > >
> > > > > That we cannot prevent from paging out memory of process(es), even though
> > > > > we have intentionaly isolated them in a group (read as we do not have
> > > > > any other possibility for the isolation), because of unrelated memory
> > > > > activity.
> > > > >
> > > > Because the design of memory cgroup is not for "defending" but for
> > > > "never attack some other guys".
> > >
> > > Yes, I am aware of the current state of implementation. But as the
> > > patchset show there is not quite trivial to implement also the other
> > > (defending) part.
> > >
> >
> > My opinions is to enhance softlimit is better.
>
> I will look how softlimit can be enhanced to match the expectations but
> I'm kind of suspicious it can handle workloads where heuristics simply
> cannot guess that the resident memory is important even though it wasn't
> touched for a long time.
>

I think we usually recommend mlock() or hugetlbfs to pin an application's
work area. And mm guys have done hard work to make mm behave better, even
without memory cgroups, under realistic workloads.

If your workload is realistic but _important_ anonymous memory is swapped out,
it's a problem of the global VM rather than of memcg.

If you add 'isolate' per process, okay, I'll agree to add isolate per memcg.



> > > > > > Why it's not a problem of configuration ?
> > > > > > IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.
> > > > >
> > > > > Yes, but this still doesn't bring the isolation.
> > > > >
> > > >
> > > > Please explain this more.
> > > > Why don't you move all tasks under /root/default <- this has some limit ?
> > >
> > > OK, I have tried to explain that in one of the (2nd) patch description.
> > > If I move all task from the root group to other group(s) and keep the
> > > primary application in the root group I would achieve some isolation as
> > > well. That is very much true.
> >
> > Okay, then, current works well.
> >
> > > But then there is only one such a group.
> >
> > I can't catch what you mean. you can create limitless cgroup, anywhere.
> > Can't you ?
>
> This is not about limits. This is about global vs. per-cgroup reclaim
> and how much they interact together.
>
> The everything-in-groups approach with the "primary" service in the root
> group (or call it unlimited) works just because all the memory activity
> (but the primary service) is caped with the limits so the rest of the
> memory can be used by the service. Moreover, in order this to work the
> limit for other groups would be smaller then the working set of the
> primary service.
>
> Even if you created a limitless group for other important service they
> would still interact together and if one goes wild the other would
> suffer from that.
>

...I can't understand what the problem is when global reclaim
runs just because an application wasn't limited... or memory is
overcommitted.




> [...]
> > > > Yes, then, almost all mm guys answer has been "please use mlock".
> > >
> > > Yes. As I already tried to explain, mlock is not the remedy all the
> > > time. It gets very tricky when you balance on the edge of the limit of
> > > the available memory resp. cgroup limit. Sometimes you rather want to
> > > have something swapped out than being killed (or fail due to ENOMEM).
> > > The important thing about swapped out above is that with the isolation
> > > it is only per-cgroup.
> > >
> >
> > IMHO, doing isolation by hiding is not good idea.
>
> It depends on what you want to guarantee.
>
> > Because we're kernel engineer, we should do isolation by
> > scheduling. The kernel is art of shceduling, not separation.
>
> Well, I would disagree with this statement (to some extend of course).
> Cgroups are quite often used for separation (e.g. cpusets basically
> hide tasks from CPUs that are not configured for them).
>
> You are certainly right that the memory management is about proper
> scheduling and balancing needs vs. demands. And it turned out to be
> working fine in many (maybe even most of) workloads (modulo bugs
> which are fixed over time). But if an application has more specific
> requirements for its memory usage then it is quite limited in ways how
> it can achieve them (mlock is one way how to pin the memory but there
> are cases where it is not appropriate).
> Kernel will simply never know the complete picture and have to rely on
> heuristics which will never fit in with everybody.
>

That's what MM guys are trying to do.

IIUC, there have been many papers on 'hinting LRU' in OS research,
but none has made it into Linux successfully. I'm not sure whether there
has been no trial or whether they were all rejected.



>
> > I think we should start from some scheduling as softlimit. Then,
> > as an extreme case of scheduling, 'complete isolation' should be
> > archived. If it seems impossible after trial of making softlimit
> > better, okay, we should consider some.
>
> As I already tried to point out what-ever will scheduling do it has no
> way to guess that somebody needs to be isolated unless he says that to
> kernel.
> Anyway, I will have a look whether softlimit can be used and how helpful
> it would be.
>

If softlimit (after some improvement) isn't enough, please add something else.

What I think of is:

1. We need to "guarantee" memory usage in the future.
"First come, first served" is not good for admins.

2. We need to handle zone memory shortage. Memory migration
between zones will be necessary to avoid pageout.

3. We need a knob to say "please reclaim from my own cgroup rather than
affecting others (if usage > some (soft) limit)."


> [...]
> > > > I think you should put tasks in root cgroup to somewhere. It works perfect
> > > > against OOM. And if memory are hidden by isolation, OOM will happen easier.
> > >
> > > Why do you think that it would happen easier? Isn't it similar (from OOM
> > > POV) as if somebody mlocked that memory?
> > >
> >
> > if global lru scan cannot find victim memory, oom happens.
>
> Yes, but this will happen with mlocked memory as well, right?
>
Yes, of course.

Anyway, I'll NACK a simple "first come, first served" isolation.
Please implement a guarantee which is reliable and which admins can use safely.

mlock() has a similar problem, so I recommend hugetlbfs to customers;
an admin can schedule it at boot time.
(The number of users of hugetlbfs tends to be one app (Oracle).)

I'll be absent, tomorrow.

I think you'll come to the LSF/MM summit, and from the schedule, you'll
have a joint session with Ying on "Memcg LRU management and isolation".

IIUC, "LRU management" is Google's performance-improvement topic.

It's OK with me to talk only about 'isolation' first in the earlier session.
If you want, please ask James to move the session to overlap with the first
memory cgroup session. (I think you saw the e-mail from James.)

Thanks,
-Kame

2011-03-29 11:19:06

by Michal Hocko

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue 29-03-11 18:41:19, KAMEZAWA Hiroyuki wrote:
> On Tue, 29 Mar 2011 10:59:43 +0200
> Michal Hocko <[email protected]> wrote:
>
> > On Tue 29-03-11 16:51:17, KAMEZAWA Hiroyuki wrote:
[...]
> > > My opinions is to enhance softlimit is better.
> >
> > I will look how softlimit can be enhanced to match the expectations but
> > I'm kind of suspicious it can handle workloads where heuristics simply
> > cannot guess that the resident memory is important even though it wasn't
> > touched for a long time.
> >
>
> I think we usually recommend mlock() or hugetlbfs to pin an application's
> work area. And mm guys have done hard work to make mm behave better, even
> without memory cgroup, under realistic workloads.

Agreed. Whenever this approach is possible we recommend the same thing.

> If your workload is realistic but _important_ anonymous memory is swapped
> out, it's a problem of the global VM rather than memcg.

I would disagree with you on that. The point is that "important" can be
defined from many perspectives. One is the kernel's, which considers
long-unused memory as not _that_ important, and that makes perfect sense
for most workloads.
For an application, important memory can be memory whose page-out (be it
to swap or to storage) would considerably increase latency, because it
contains pre-computed data with a big initial cost.
Note that there is no mention of time from the application's POV, because
it can depend on the incoming requests, which you cannot control.

> If you add 'isolate' per process, okay, I'll agree to add isolate per memcg.

What do you mean by isolate per process?

[...]
> > > > OK, I have tried to explain that in one of the (2nd) patch description.
> > > > If I move all task from the root group to other group(s) and keep the
> > > > primary application in the root group I would achieve some isolation as
> > > > well. That is very much true.
> > >
> > > Okay, then, current works well.
> > >
> > > > But then there is only one such a group.
> > >
> > > I can't catch what you mean. you can create limitless cgroup, anywhere.
> > > Can't you ?
> >
> > This is not about limits. This is about global vs. per-cgroup reclaim
> > and how much they interact together.
> >
> > The everything-in-groups approach with the "primary" service in the root
> > group (or call it unlimited) works just because all the memory activity
> > (but the primary service) is caped with the limits so the rest of the
> > memory can be used by the service. Moreover, in order this to work the
> > limit for other groups would be smaller then the working set of the
> > primary service.
> >
> > Even if you created a limitless group for other important service they
> > would still interact together and if one goes wild the other would
> > suffer from that.
> >
>
> ......... I can't understand what the problem is when global reclaim
> runs just because an application wasn't limited, or memory is
> overcommitted.

I am not sure I understand, but what I see as a problem is when unrelated
memory activity triggers reclaim and pushes out the memory of a process
group just because the heuristics used by the reclaim algorithm do not
pick the right memory - and honestly, no heuristic will fit all
requirements. Isolation can protect from unrelated activity without new
heuristics.

[...]
> If softlimit (after some improvement) isn't enough, please add some other.
>
> What I think of is
>
> 1. need to "guarantee" memory usages in future.
> "first come, first served" is not good for admins.

This is not in the scope of this patchset, but I agree that it would be
nice to have this guarantee.

> 2. need to handle zone memory shortage. Using memory migration
> between zones will be necessary to avoid pageout.

I am not sure I understand.

>
> 3. need a knob to say "please reclaim from my own cgroup rather than
> affecting others (if usage > some(soft)limit)."

Isn't this handled already and enhanced by the per-cgroup background
reclaim patches?

>
> > [...]
> > > > > I think you should put tasks in root cgroup to somewhere. It works perfect
> > > > > against OOM. And if memory are hidden by isolation, OOM will happen easier.
> > > >
> > > > Why do you think that it would happen easier? Isn't it similar (from OOM
> > > > POV) as if somebody mlocked that memory?
> > > >
> > >
> > > if global lru scan cannot find victim memory, oom happens.
> >
> > Yes, but this will happen with mlocked memory as well, right?
> >
> Yes, of course.
>
> Anyway, I'll NACK a simple "first come, first served" isolation.
> Please implement a guarantee which is reliable and which admins can use safely.

Isolation is not about a future guarantee. It is rather that once you have
the memory, you can rely on it staying resident unless in-group activity
pushes it out.

> mlock() has a similar problem, so I recommend hugetlbfs to customers;
> an admin can schedule it at boot time.
> (The number of users of hugetlbfs tends to be one app (Oracle).)

What if we decide that hugetlbfs won't be pinned into memory in the future?

>
> I'll be absent, tomorrow.
>
> I think you'll come LSF/MM summit and from the schedule, you'll have
> a joint session with Ying as "Memcg LRU management and isolation".

I didn't have plans to do a session actively, but I can certainly join
to talk and will be happy to discuss this topic.

>
> IIUC, "LRU management" is a google's performance improvement topic.
>
> It's ok for me to talk only about 'isolation' 1st in earlier session.
> If you want, please ask James to move session and overlay 1st memory
> cgroup session. (I think you saw e-mail from James.)

Yeah, I can do that.

Thanks
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-29 13:16:21

by Zhu Yanhai

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

Michal,
Maybe what we need here is some kind of trade-off?
Let's say a new configurable parameter reserve_limit; for the cgroups
which want some guarantee of memory resources, we have:

limit_in_bytes > soft_limit > reserve_limit

MEM[limit_in_bytes..soft_limit] are the bytes that I'm willing to contribute
to the others if they are short of memory.

MEM[soft_limit..reserve_limit] are the bytes that I can afford if the others
are still eager for memory after I gave them MEM[limit_in_bytes..soft_limit].

MEM[reserve_limit..0] are the bytes which is a must for me to guarantee QoS.
Nobody is allowed to steal them.

And reserve_limit is 0 by default for the cgroups which don't care about QoS.

Then the reclaim path also needs some changes, i.e., in balance_pgdat():
1) Call mem_cgroup_soft_limit_reclaim(); if nr_reclaimed is met, goto finish.
2) Shrink the global LRU list, skipping the pages which belong to cgroups
that have set a reserve_limit; if nr_reclaimed is met, goto finish.
3) Shrink the cgroups that have set a reserve_limit, leaving them with only
the reserve_limit bytes they need; if nr_reclaimed is met, goto finish.
4) OOM.

Does it make sense?

Thanks,
Zhu Yanhai


2011/3/29 Michal Hocko <[email protected]>:
> On Tue 29-03-11 18:41:19, KAMEZAWA Hiroyuki wrote:
>> On Tue, 29 Mar 2011 10:59:43 +0200
>> Michal Hocko <[email protected]> wrote:
>>
>> > On Tue 29-03-11 16:51:17, KAMEZAWA Hiroyuki wrote:
> [...]
>> > > My opinions is to enhance softlimit is better.
>> >
>> > I will look how softlimit can be enhanced to match the expectations but
>> > I'm kind of suspicious it can handle workloads where heuristics simply
>> > cannot guess that the resident memory is important even though it wasn't
>> > touched for a long time.
>> >
>>
>> I think we recommend mlock() or hugepagefs to pin application's work area
>> in usual. And mm guyes have did hardwork to work mm better even without
>> memory cgroup under realisitic workloads.
>
> Agreed. Whenever this approach is possible we recomend the same thing.
>
>> If your worload is realistic but _important_ anonymous memory is swapped out,
>> it's problem of global VM rather than memcg.
>
> I would disagree with you on that. The important thing is that it can be
> defined from many perspectives. One is the kernel which considers long
> unused memory as not _that_ important. And it makes a perfect sense for
> most workloads.
> An important memory for an application can be something that would
> considerably increase the latency just because the memory got paged out
> (be it swap or the storage) because it contains pre-computed
> data that have a big initial costs.
> As you can see there is no mention about the time from the application
> POV because it can depend on the incoming requests which you cannot
> control.
>
>> If you add 'isolate' per process, okay, I'll agree to add isolate per memcg.
>
> What do you mean by isolate per process?
>
> [...]
>> > > > OK, I have tried to explain that in one of the (2nd) patch description.
>> > > > If I move all task from the root group to other group(s) and keep the
>> > > > primary application in the root group I would achieve some isolation as
>> > > > well. That is very much true.
>> > >
>> > > Okay, then, current works well.
>> > >
>> > > > But then there is only one such a group.
>> > >
>> > > I can't catch what you mean. you can create limitless cgroup, anywhere.
>> > > Can't you ?
>> >
>> > This is not about limits. This is about global vs. per-cgroup reclaim
>> > and how much they interact together.
>> >
>> > The everything-in-groups approach with the "primary" service in the root
>> > group (or call it unlimited) works just because all the memory activity
>> > (but the primary service) is caped with the limits so the rest of the
>> > memory can be used by the service. Moreover, in order this to work the
>> > limit for other groups would be smaller then the working set of the
>> > primary service.
>> >
>> > Even if you created a limitless group for other important service they
>> > would still interact together and if one goes wild the other would
>> > suffer from that.
>> >
>>
>> .........I can't understad what is the problem when global reclaim
>> runs just because an application wasn't limited ...or memory are
>> overcomitted.
>
> I am not sure I understand but what I see as a problem is when unrelated
> memory activity triggers reclaim and it pushes out the memory of a
> process group just because the heuristics done by the reclaim algorithm
> do not pick up the right memory - and honestly, no heuristic will fit
> all requirements. Isolation can protect from an unrelated activity
> without new heuristics.
>
> [...]
>> If softlimit (after some improvement) isn't enough, please add some other.
>>
>> What I think of is
>>
>> 1. need to "guarantee" memory usages in future.
>>    "first come, first served" is not good for admins.
>
> this is not in scope of these patchsets but I agree that it would be
> nice to have this guarantee
>
>> 2. need to handle zone memory shortage. Using memory migration
>>    between zones will be necessary to avoid pageout.
>
> I am not sure I understand.
>
>>
>> 3. need a knob to say "please reclaim from my own cgroup rather than
>>    affecting others (if usage > some(soft)limit)."
>
> Isn't this handled already and enhanced by the per-cgroup background
> reclaim patches?
>
>>
>> > [...]
>> > > > > I think you should put tasks in root cgroup to somewhere. It works perfect
>> > > > > against OOM. And if memory are hidden by isolation, OOM will happen easier.
>> > > >
>> > > > Why do you think that it would happen easier? Isn't it similar (from OOM
>> > > > POV) as if somebody mlocked that memory?
>> > > >
>> > >
>> > > if global lru scan cannot find victim memory, oom happens.
>> >
>> > Yes, but this will happen with mlocked memory as well, right?
>> >
>> Yes, of course.
>>
>> Anyway, I'll Nack to simple "first come, first served" isolation.
>> Please implement garantee, which is reliable and admin can use safely.
>
> Isolation is not about future guarantee. It is rather after you have it
> you can rely it will stay in unless in-group activity pushes it out.
>
>> mlock() has similar problem, So, I recommend hugetlbfs to customers,
>> admin can schedule it at boot time.
>> (the number of users of hugetlbfs is tend to be one app. (oracle))
>
> What if we decide that hugetlbfs won't be pinned into memory in future?
>
>>
>> I'll be absent, tomorrow.
>>
>> I think you'll come LSF/MM summit and from the schedule, you'll have
>> a joint session with Ying as "Memcg LRU management and isolation".
>
> I didn't have plans to do a session actively, but I can certainly join
> to talk and will be happy to discuss this topic.
>
>>
>> IIUC, "LRU management" is a google's performance improvement topic.
>>
>> It's ok for me to talk only about 'isolation'  1st in earlier session.
>> If you want, please ask James to move session and overlay 1st memory
>> cgroup session. (I think you saw e-mail from James.)
>
> Yeah, I can do that.
>
> Thanks
> --
> Michal Hocko
> SUSE Labs
> SUSE LINUX s.r.o.
> Lihovarska 1060/12
> 190 00 Praha 9
> Czech Republic
>

2011-03-29 13:42:26

by Michal Hocko

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue 29-03-11 21:15:59, Zhu Yanhai wrote:
> Michal,

Hi,

> Maybe what we need here is some kind of trade-off?
> Let's say a new configuable parameter reserve_limit, for the cgroups
> which want to
> have some guarantee in the memory resource, we have:
>
> limit_in_bytes > soft_limit > reserve_limit
>
> MEM[limit_in_bytes..soft_limit] are the bytes that I'm willing to contribute
> to the others if they are short of memory.
>
> MEM[soft_limit..reserve_limit] are the bytes that I can afford if the others
> are still eager for memory after I gave them MEM[limit_in_bytes..soft_limit].
>
> MEM[reserve_limit..0] are the bytes which is a must for me to guarantee QoS.
> Nobody is allowed to steal them.
>
> And reserve_limit is 0 by default for the cgroups who don't care about Qos.
>
> Then the reclaim path also needs some changes, i.e, balance_pgdat():
> 1) call mem_cgroup_soft_limit_reclaim(), if nr_reclaimed is meet, goto finish.
> 2) shrink the global LRU list, and skip the pages which belong to the cgroup
> who have set a reserve_limit. if nr_reclaimed is meet, goto finish.

Isn't this an overhead that would slow the whole thing down? Consider
that you would need to look up the page_cgroup for every page and touch
the mem_cgroup to get the limit.
The point of the isolation is to not touch the global reclaim path at
all.

> 3) shrink the cgroups who have set a reserve_limit, and leave them with only
> the reserve_limit bytes they need. if nr_reclaimed is meet, goto finish.
> 4) OOM
>
> Does it make sense?

It sounds like a good thing - in that regard it is more generic than
a simple flag - but I am afraid that with such an implementation it
wouldn't be easy to preserve the performance and keep the balance
between groups. But maybe it can be done without too much cost.

Thanks
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-29 14:02:44

by Zhu Yanhai

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

Hi,

2011/3/29 Michal Hocko <[email protected]>:
> Isn't this an overhead that would slow the whole thing down. Consider
> that you would need to lookup page_cgroup for every page and touch
> mem_cgroup to get the limit.

The current code already does such things; take the direct reclaim path:
shrink_inactive_list()
  ->isolate_pages_global()
    ->isolate_lru_pages()
      ->mem_cgroup_del_lru() (for each page it wants to isolate)
and in mem_cgroup_del_lru() we have:
[code]
	pc = lookup_page_cgroup(page);
	/*
	 * Used bit is set without atomic ops but after smp_wmb().
	 * For making pc->mem_cgroup visible, insert smp_rmb() here.
	 */
	smp_rmb();
	/* unused or root page is not rotated. */
	if (!PageCgroupUsed(pc) || mem_cgroup_is_root(pc->mem_cgroup))
		return;
[/code]
By calling mem_cgroup_is_root(pc->mem_cgroup) we have already brought the
struct mem_cgroup into the cache, so at least things probably won't get
worse.

Thanks,
Zhu Yanhai

> The point of the isolation is to not touch the global reclaim path at
> all.
>
>> 3) shrink the cgroups who have set a reserve_limit, and leave them with only
>> the reserve_limit bytes they need. if nr_reclaimed is meet, goto finish.
>> 4) OOM
>>
>> Does it make sense?
>
> It sounds like a good thing - in that regard it is more generic than
> a simple flag - but I am afraid that the implementation wouldn't be
> that easy to preserve the performance and keep the balance between
> groups. But maybe it can be done without too much cost.
>
> Thanks
> --
> Michal Hocko
> SUSE Labs
> SUSE LINUX s.r.o.
> Lihovarska 1060/12
> 190 00 Praha 9
> Czech Republic
>

2011-03-29 14:08:21

by Zhu Yanhai

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

2011/3/29 Zhu Yanhai <[email protected]>:
> Hi,
>
> 2011/3/29 Michal Hocko <[email protected]>:
>> Isn't this an overhead that would slow the whole thing down. Consider
>> that you would need to lookup page_cgroup for every page and touch
>> mem_cgroup to get the limit.
>
> Current almost has did such things, say the direct reclaim path:
> shrink_inactive_list()
>   ->isolate_pages_global()
>      ->isolate_lru_pages()
>         ->mem_cgroup_del_lru(for each page it wants to isolate)
>            and in mem_cgroup_del_lru() we have:
Oops, the code below is from mem_cgroup_rotate_lru_list(), not
mem_cgroup_del_lru(); the correct one should be:
[code]
	pc = lookup_page_cgroup(page);
	/* can happen while we handle swapcache. */
	if (!TestClearPageCgroupAcctLRU(pc))
		return;
	VM_BUG_ON(!pc->mem_cgroup);
	/*
	 * We don't check PCG_USED bit. It's cleared when the "page" is finally
	 * removed from global LRU.
	 */
	mz = page_cgroup_zoneinfo(pc);
	MEM_CGROUP_ZSTAT(mz, lru) -= 1;
	if (mem_cgroup_is_root(pc->mem_cgroup))
		return;
[/code]
Anyway, the point still stands.

-zyh
> [code]
>        pc = lookup_page_cgroup(page);
>        /*
>         * Used bit is set without atomic ops but after smp_wmb().
>         * For making pc->mem_cgroup visible, insert smp_rmb() here.
>         */
>        smp_rmb();
>        /* unused or root page is not rotated. */
>        if (!PageCgroupUsed(pc) || mem_cgroup_is_root(pc->mem_cgroup))
>                return;
> [/code]
> By calling mem_cgroup_is_root(pc->mem_cgroup) we already brought the
> struct mem_cgroup into cache.
> So probably things won't get worse at least.
>
> Thanks,
> Zhu Yanhai
>
>> The point of the isolation is to not touch the global reclaim path at
>> all.
>>
>>> 3) shrink the cgroups who have set a reserve_limit, and leave them with only
>>> the reserve_limit bytes they need. if nr_reclaimed is meet, goto finish.
>>> 4) OOM
>>>
>>> Does it make sense?
>>
>> It sounds like a good thing - in that regard it is more generic than
>> a simple flag - but I am afraid that the implementation wouldn't be
>> that easy to preserve the performance and keep the balance between
>> groups. But maybe it can be done without too much cost.
>>
>> Thanks
>> --
>> Michal Hocko
>> SUSE Labs
>> SUSE LINUX s.r.o.
>> Lihovarska 1060/12
>> 190 00 Praha 9
>> Czech Republic
>>
>

2011-03-29 15:53:18

by Balbir Singh

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On 03/28/11 16:33, KAMEZAWA Hiroyuki wrote:
> On Mon, 28 Mar 2011 11:39:57 +0200
> Michal Hocko <[email protected]> wrote:
>
>> Hi all,
>>
>> Memory cgroups can be currently used to throttle memory usage of a group of
>> processes. It, however, cannot be used for an isolation of processes from
>> the rest of the system because all the pages that belong to the group are
>> also placed on the global LRU lists and so they are eligible for the global
>> memory reclaim.
>>
>> This patchset aims at providing an opt-in memory cgroup isolation. This
>> means that a cgroup can be configured to be isolated from the rest of the
>> system by means of cgroup virtual filesystem (/dev/memctl/group/memory.isolated).
>>
>> Isolated mem cgroup can be particularly helpful in deployments where we have
>> a primary service which needs to have a certain guarantees for memory
>> resources (e.g. a database server) and we want to shield it off the
>> rest of the system (e.g. a burst memory activity in another group). This is
>> currently possible only with mlocking memory that is essential for the
>> application(s) or a rather hacky configuration where the primary app is in
>> the root mem cgroup while all the other system activity happens in other
>> groups.
>>
>> mlocking is not an ideal solution all the time because sometimes the working
>> set is very large and it depends on the workload (e.g. number of incoming
>> requests) so it can end up not fitting in into memory (leading to a OOM
>> killer). If we use mem. cgroup isolation instead we are keeping memory resident
>> and if the working set goes wild we can still do per-cgroup reclaim so the
>> service is less prone to be OOM killed.
>>
>> The patch series is split into 3 patches. First one adds a new flag into
>> mem_cgroup structure which controls whether the group is isolated (false by
>> default) and a cgroup fs interface to set it.
>> The second patch implements interaction with the global LRU. The current
>> semantic is that we are putting a page into a global LRU only if mem cgroup
>> LRU functions say they do not want the page for themselves.
>> The last patch prevents from soft reclaim if the group is isolated.
>>
>> I have tested the patches with the simple memory consumer (allocating
>> private and shared anon memory and SYSV SHM).
>>
>> One instance (call it big consumer) running in the group and paging in the
>> memory (>90% of cgroup limit) and sleeping for the rest of its life. Then I
>> had a pool of consumers running in the same cgroup which page in smaller
>> amount of memory and paging them in the loop to simulate in group memory
>> pressure (call them sharks).
>> The sum of consumed memory is more than memory.limit_in_bytes so some
>> portion of the memory is swapped out.
>> There is one consumer running in the root cgroup running in parallel which
>> makes a pressure on the memory (to trigger background reclaim).
>>
>> Rss+cache of the group drops down significantly (~66% of the limit) if the
>> group is not isolated. On the other hand if we isolate the group we are
>> still saturating the group (~97% of the limit). I can show more
>> comprehensive results if somebody is interested.
>>
>
> Isn't it the same result with the case where no cgroup is used ?
> What is the problem ?
> Why it's not a problem of configuration ?
> IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.
>

I agree with Kame; I am still at a loss in terms of understanding the use
case. I should probably see the rest of the patches.

>> Thanks for comments.
>>
>
>
> Maybe you just want a "guarantee".
> At first thought, this approach has 3 problems. And memcg is designed
> never to prevent global vm scans.
>
> 1. This cannot be used as a "guarantee". It is just a way to say "don't
> steal from me!!!" and implements a "first come, first served" system.
> I guess this can be used for server designs... only with very, very careful play.
> If an application exits and loses its memory, there is no guarantee anymore.
>
> 2. Even with isolation, a task in a memcg can be killed by the OOM killer
> on global memory shortage.
>
> 3. It seems this will add more page fragmentation if implemented poorly; IOW,
> can this work with compaction?
>

Good points

Balbir

2011-03-30 05:32:44

by Ying Han

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue, Mar 29, 2011 at 2:41 AM, KAMEZAWA Hiroyuki
<[email protected]> wrote:
> On Tue, 29 Mar 2011 10:59:43 +0200
> Michal Hocko <[email protected]> wrote:
>
>> On Tue 29-03-11 16:51:17, KAMEZAWA Hiroyuki wrote:
>> > On Tue, 29 Mar 2011 09:32:32 +0200
>> > Michal Hocko <[email protected]> wrote:
>> >
>> > > On Tue 29-03-11 09:09:24, KAMEZAWA Hiroyuki wrote:
>> > > > On Mon, 28 Mar 2011 13:44:30 +0200
>> > > > Michal Hocko <[email protected]> wrote:
>> > > >
>> > > > > On Mon 28-03-11 20:03:32, KAMEZAWA Hiroyuki wrote:
>> > > > > > On Mon, 28 Mar 2011 11:39:57 +0200
>> > > > > > Michal Hocko <[email protected]> wrote:
>> > > > > [...]
>> > > > > >
>> > > > > > Isn't it the same result with the case where no cgroup is used ?
>> > > > >
>> > > > > Yes and that is the point of the patchset. Memory cgroups will not give
>> > > > > you anything else but the top limit wrt. to the global memory activity.
>> > > > >
>> > > > > > What is the problem ?
>> > > > >
>> > > > > That we cannot prevent from paging out memory of process(es), even though
>> > > > > we have intentionaly isolated them in a group (read as we do not have
>> > > > > any other possibility for the isolation), because of unrelated memory
>> > > > > activity.
>> > > > >
>> > > > Because the design of memory cgroup is not for "defending" but for
>> > > > "never attack some other guys".
>> > >
>> > > Yes, I am aware of the current state of implementation. But as the
>> > > patchset show there is not quite trivial to implement also the other
>> > > (defending) part.
>> > >
>> >
>> > My opinions is to enhance softlimit is better.
>>
>> I will look how softlimit can be enhanced to match the expectations but
>> I'm kind of suspicious it can handle workloads where heuristics simply
>> cannot guess that the resident memory is important even though it wasn't
>> touched for a long time.
>>
>
> I think we recommend mlock() or hugepagefs to pin application's work area
> in usual. And mm guyes have did hardwork to work mm better even without
> memory cgroup under realisitic workloads.
>
> If your worload is realistic but _important_ anonymous memory is swapped out,
> it's problem of global VM rather than memcg.
>
> If you add 'isolate' per process, okay, I'll agree to add isolate per memcg.
>
>
>
>> > > > > > Why it's not a problem of configuration ?
>> > > > > > IIUC, you can put all logins to some cgroup by using cgroupd/libgcgroup.
>> > > > >
>> > > > > Yes, but this still doesn't bring the isolation.
>> > > > >
>> > > >
>> > > > Please explain this more.
>> > > > Why don't you move all tasks under /root/default <- this has some limit ?
>> > >
>> > > OK, I have tried to explain that in one of the (2nd) patch description.
>> > > If I move all task from the root group to other group(s) and keep the
>> > > primary application in the root group I would achieve some isolation as
>> > > well. That is very much true.
>> >
>> > Okay, then, current works well.
>> >
>> > > But then there is only one such a group.
>> >
>> > I can't catch what you mean. you can create limitless cgroup, anywhere.
>> > Can't you ?
>>
>> This is not about limits. This is about global vs. per-cgroup reclaim
>> and how much they interact together.
>>
>> The everything-in-groups approach with the "primary" service in the root
>> group (or call it unlimited) works just because all the memory activity
>> (but the primary service) is caped with the limits so the rest of the
>> memory can be used by the service. Moreover, in order this to work the
>> limit for other groups would be smaller then the working set of the
>> primary service.
>>
>> Even if you created a limitless group for other important service they
>> would still interact together and if one goes wild the other would
>> suffer from that.
>>
>
> .........I can't understad what is the problem when global reclaim
> runs just because an application wasn't limited ...or memory are
> overcomitted.

I guess the problem here is not triggering global reclaim, but rather
what its expected outcome is. We cannot prevent global memory pressure
from happening in an over-committed environment; however, we should do
only targeted reclaim when that happens.

Hopefully an example helps explain the problem we are trying to solve here.

Here are the currently supported mechanisms for memcg limits:
1. limit_in_bytes:
If usage_in_bytes goes over the limit, the memcg gets throttled or
OOM killed.

2. soft_limit_in_bytes:
If usage_in_bytes goes over the limit, the memory is best-effort.
Otherwise, no memory pressure is expected in the memcg.
This serves as a "guarantee" in some sense.

Here is a configuration memcg users might consider:
On a host with 32G of RAM, we would like to over-commit the machine
but also provide guarantees to individual memcgs.

memcg-A/ -- limit_in_bytes = 20G, soft_limit_in_bytes = 15G
memcg-B/ -- limit_in_bytes = 20G, soft_limit_in_bytes = 15G

The expectations of this configuration are:
a) Either memcg-A or memcg-B can grow usage_in_bytes up to 20G as long
as there is no system memory contention.
b) Both memcg-A and memcg-B have a memory guarantee of 15G, and no
memory pressure should be applied to a memcg while its usage_in_bytes
is below that value.
c) If there is global memory pressure, whoever allocated memory above
the guarantee (soft_limit) needs to push pages out.
d) Either memcg-A or memcg-B will be throttled or OOM killed if the
usage_in_bytes goes above the limit_in_bytes.

In order to achieve that, we need the following:
a) Improve the current soft_limit reclaim mechanism. Right now it is
designed to be best-effort, working with global background reclaim. I
can easily generate scenarios where it does not pick the "right"
cgroup to reclaim from each time. ("Right" here stands for the
efficiency of the reclaim.)

b) When global reclaim happens (both background and ttfp), we need
to rely on soft_limit targeted reclaim instead of picking pages from the
global LRU. The latter just blindly throws pages away regardless of
the cgroup configuration. In this case, the configuration means
"guarantee".

c) Of course, we will have a per-memcg background reclaim patch. It
will do more targeted reclaim proactively, before global memory
contention sets in.

Overall, I don't see why we should scan the global LRU at all,
especially once the improvements above are in place.

--Ying

>
>> [...]
>> > > > Yes, then, almost all mm guys answer has been "please use mlock".
>> > >
>> > > Yes. As I already tried to explain, mlock is not the remedy all the
>> > > time. It gets very tricky when you balance on the edge of the limit of
>> > > the available memory resp. cgroup limit. Sometimes you rather want to
>> > > have something swapped out than being killed (or fail due to ENOMEM).
>> > > The important thing about swapped out above is that with the isolation
>> > > it is only per-cgroup.
>> > >
>> >
>> > IMHO, doing isolation by hiding is not good idea.
>>
>> It depends on what you want to guarantee.
>>
>> > Because we're kernel engineers, we should do isolation by
>> > scheduling. The kernel is the art of scheduling, not separation.
>>
>> Well, I would disagree with this statement (to some extent, of course).
>> Cgroups are quite often used for separation (e.g. cpusets basically
>> hide tasks from CPUs that are not configured for them).
>>
>> You are certainly right that the memory management is about proper
>> scheduling and balancing needs vs. demands. And it turned out to be
>> working fine in many (maybe even most of) workloads (modulo bugs
>> which are fixed over time). But if an application has more specific
>> requirements for its memory usage then it is quite limited in ways how
>> it can achieve them (mlock is one way how to pin the memory but there
>> are cases where it is not appropriate).
>> Kernel will simply never know the complete picture and have to rely on
>> heuristics which will never fit in with everybody.
>>
>
> That's what MM guys are trying.
>
> IIUC, there have been many papers on 'hinting LRU' in OS research,
> but none has been merged into Linux successfully. I'm not sure
> whether there were no attempts or whether they were rejected.
>
>>
>> > I think we should start from some scheduling such as softlimit. Then,
>> > as an extreme case of scheduling, 'complete isolation' should be
>> > achieved. If it seems impossible after trying to make softlimit
>> > better, okay, we should consider something else.
>>
>> As I already tried to point out, whatever scheduling does, it has no
>> way to guess that somebody needs to be isolated unless they tell the
>> kernel so.
>> Anyway, I will have a look whether softlimit can be used and how helpful
>> it would be.
>>
>
> If softlimit (after some improvement) isn't enough, please add some other.
>
> What I am thinking of is:
>
> 1. need to "guarantee" memory usage in the future.
>    "first come, first served" is not good for admins.
>
> 2. need to handle zone memory shortage. Using memory migration
>    between zones will be necessary to avoid pageout.
>
> 3. need a knob to say "please reclaim from my own cgroup rather than
>    affecting others (if usage > some (soft) limit)."
>
>
>> [...]
>> > > > I think you should move tasks in the root cgroup to somewhere else. It works perfectly
>> > > > against OOM. And if memory is hidden by isolation, OOM will happen more easily.
>> > >
>> > > Why do you think that it would happen easier? Isn't it similar (from OOM
>> > > POV) as if somebody mlocked that memory?
>> > >
>> >
>> > if the global LRU scan cannot find victim memory, OOM happens.
>>
>> Yes, but this will happen with mlocked memory as well, right?
>>
> Yes, of course.
>
> Anyway, I'll NACK a simple "first come, first served" isolation.
> Please implement a guarantee which is reliable and which admins can use safely.
>
> mlock() has a similar problem, so I recommend hugetlbfs to customers;
> admins can schedule it at boot time.
> (the number of users of hugetlbfs tends to be one app (Oracle))
>
> I'll be absent, tomorrow.
>
> I think you'll come to the LSF/MM summit and, from the schedule,
> you'll have a joint session with Ying on "Memcg LRU management and isolation".
>
> IIUC, "LRU management" is a Google performance improvement topic.
>
> It's ok for me to talk only about 'isolation' first in the earlier
> session. If you want, please ask James to move the session to overlay
> the 1st memory cgroup session. (I think you saw the e-mail from James.)
>
> Thanks,
> -Kame
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to [email protected]. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
> Don't email: [email protected]
>

2011-03-30 07:42:34

by Michal Hocko

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue 29-03-11 22:02:23, Zhu Yanhai wrote:
> Hi,
>
> 2011/3/29 Michal Hocko <[email protected]>:
> > Isn't this an overhead that would slow the whole thing down. Consider
> > that you would need to lookup page_cgroup for every page and touch
> > mem_cgroup to get the limit.
>
> The current code already does much the same; take the direct reclaim path:
> shrink_inactive_list()
> ->isolate_pages_global()
> ->isolate_lru_pages()
> ->mem_cgroup_del_lru(for each page it wants to isolate)
> and in mem_cgroup_del_lru() we have:
> [code]
> pc = lookup_page_cgroup(page);
> /*
> * Used bit is set without atomic ops but after smp_wmb().
> * For making pc->mem_cgroup visible, insert smp_rmb() here.
> */
> smp_rmb();
> /* unused or root page is not rotated. */
> if (!PageCgroupUsed(pc) || mem_cgroup_is_root(pc->mem_cgroup))
> return;
> [/code]
> By calling mem_cgroup_is_root(pc->mem_cgroup) we have already brought
> the struct mem_cgroup into the cache.
> So things probably won't get worse, at least.

But we would still potentially have to isolate and put back a lot of
pages. If those pages are not on the list, we skip them automatically.

--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-30 08:18:58

by Michal Hocko

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Tue 29-03-11 21:23:10, Balbir Singh wrote:
> On 03/28/11 16:33, KAMEZAWA Hiroyuki wrote:
> > On Mon, 28 Mar 2011 11:39:57 +0200
> > Michal Hocko <[email protected]> wrote:
[...]
> > Isn't it the same result as the case where no cgroup is used?
> > What is the problem?
> > Why is it not a problem of configuration?
> > IIUC, you can put all logins into some cgroup by using cgroupd/libcgroup.
> >
>
> I agree with Kame, I am still at a loss in terms of understanding the
> use case; I should probably see the rest of the patches.

OK, it looks like I am really bad at explaining the use case. Let's
try it again (hopefully in a better way).

Consider a service which serves requests based on in-memory
precomputed or preprocessed data.
Let's assume that getting the data into memory is a rather costly
operation which considerably increases the latency of request
processing. Memory access can be considered random from the system POV
because we never know which requests will come from outside.
This workload will benefit from having the memory resident as long and
as much as possible, because the data then have a higher chance of
being reused and so the initial costs pay off.
Why is mlock not the right thing to do here? Well, if the memory were
locked and the working set grew (again, this depends on the incoming
requests) then the application would have to unlock some portions of
the memory or risk OOM because it basically cannot overcommit.
On the other hand, if the memory is not mlocked and there is global
memory pressure, some part of the costly memory can get swapped or
paged out, which will increase request latencies. If the application
is placed into an isolated cgroup, though, the global (or other
cgroups') activity does not influence its cgroup and thus the working
set of the application.
If we compare that to mlock, we benefit from per-group reclaim when we
get over the limit (or soft limit). So we do not start evicting the
memory unless somebody really puts pressure on the _application_.
Cgroup limits would, of course, need to be selected carefully.

There might be other examples where the kernel simply cannot know
which memory is important for the process and the longest-unused
memory is not the ideal choice.

Makes sense?
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-30 17:59:26

by Ying Han

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Wed, Mar 30, 2011 at 1:18 AM, Michal Hocko <[email protected]> wrote:
> [...]

Michal,

Reading through your example, it sounds to me like you can accomplish
the "guarantee" for the high-priority service using existing memcg
mechanisms.

Assume you have the service in a cgroup named cgroup-A which needs a
memory "guarantee". Meanwhile we want to launch cgroup-B with no
memory "guarantee". What you want is to have cgroup-B use the slack
memory (not allocated by cgroup-A), but also volunteer to give it up
under system memory pressure.

So, continuing with my previous post, you can consider the following
configuration on a 32G machine. We can only have a resident size of
cgroup-A as large as the machine capacity.

cgroup-A : limit_in_bytes = 32G  soft_limit_in_bytes = 32G
cgroup-B : limit_in_bytes = 20G  soft_limit_in_bytes = 0G

To be a little bit extreme, there shouldn't be memory pressure on
cgroup-A unless it grows above the machine capacity. If the global
memory contention is triggered by cgroup-B, we should always steal
pages from it.
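The "always steal pages from cgroup-B" behaviour amounts to a victim-selection rule like the following sketch (illustrative Python only; the kernel's actual softlimit reclaim, IIRC, walks a tree of memcgs sorted by soft-limit excess, not code like this, and the function name here is made up):

```python
# Illustrative victim selection for soft_limit-based targeted reclaim:
# under global pressure, reclaim from the group exceeding its soft
# limit by the largest amount, instead of scanning the global LRU.

def pick_reclaim_victim(groups):
    """groups: dict name -> (usage, soft_limit), sizes in GiB.
    Returns the group with the largest soft-limit excess, or None."""
    over = {name: usage - soft
            for name, (usage, soft) in groups.items() if usage > soft}
    if not over:
        return None  # nobody is above its guarantee; no targeted victim
    return max(over, key=over.get)

# The configuration above on a 32G machine:
groups = {"cgroup-A": (30, 32),  # under its 32G soft limit -> protected
          "cgroup-B": (10, 0)}   # soft limit 0 -> always the victim
assert pick_reclaim_victim(groups) == "cgroup-B"
```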

However, the current implementation of soft_limit needs to be improved
for the example above. Especially when we start having lots of cgroups
running with different limit settings, we need soft_limit to be
efficient so that we can eliminate the global LRU scanning. The latter
breaks the isolation.

--Ying


2011-03-31 09:53:10

by Michal Hocko

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Wed 30-03-11 10:59:21, Ying Han wrote:
> [...]
>
> Michal,
>
> Reading through your example, sounds to me you can accomplish the
> "guarantee" of the high priority service using existing
> memcg mechanisms.
>
> Assume you have the service named cgroup-A which needs memory
> "guarantee". Meantime we want to launch cgroup-B with no memory
> "guarantee". What you want is to have cgroup-B uses the slack memory
> (not being allocated by cgroup-A), but also volunteer to give up under
> system memory pressure.

This would require a "guarantee" that no pages are reclaimed from a
group if that group is under its soft limit, right? I am wondering
whether we can achieve that without too many corner cases when the
cgroups' (processes') accounted memory doesn't leave much for other
memory used by the kernel.
That was my concern, so I made the isolation rather opt-in, without
modifying the current reclaim logic too much (there are, of course,
parts that can be improved).

> So continue w/ my previous post, you can consider the following
> configuration in 32G machine. We can only have resident size of
> cgroup-A as much as the machine capacity.
>
> cgroup-A : limit_in_bytes =32G soft_limit_in_bytes = 32G
> cgroup-B : limit_in_bytes =20G soft_limit_in_bytes = 0G
>
> To be a little bit extreme, there shouldn't be memory pressure on
> cgroup-A unless it grows above the machine capacity. If the global
> memory contention is triggered by cgroup-B, we should steal pages from
> it always.
>
> However, the current implementation of soft_limit needs to be improved
> for the example above. Especially when we start having lots of cgroups
> running w/ different limit setting, we need to have soft_limit being
> efficient and we can eliminate the global lru scanning.

Lots of groups is really an issue because we can end up in a situation
where everybody is under their limit while there is not much memory
left for the kernel. Maybe a sum(soft_limit) < kernel_threshold
condition would solve this.

> The later one breaks the isolation.

Sorry, I don't understand. Why would elimination of the global lru
scanning break isolation? Or am I misreading you?

Thanks
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic

2011-03-31 10:01:59

by Balbir Singh

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

* Michal Hocko <[email protected]> [2011-03-30 10:18:53]:

> [...]

I think one important aspect is what percentage of the memory needs to
be isolated/locked. If you expect really large parts, then we are in
trouble, unless we are aware of the exact memory requirements and know
what else will run on the system.

> If we compare that to mlock we will benefit from per-group reclaim when
> we get over the limit (or soft limit). So we do not start evicting the
> memory unless somebody makes really pressure on the _application_.
> Cgroup limits would, of course, need to be selected carefully.
>
> There might be other examples when simply kernel cannot know which
> memory is important for the process and the long unused memory is not
> the ideal choice.
>

There are other watermark-based approaches that would work better,
given that memory management is already complicated by topology and
zones, and we have non-reclaimable memory being used in the kernel on
behalf of applications. I am not ruling out a solution, just sharing
ideas.
NOTE: In the longer run, we want to account for kernel usage and look
at potential reclaim of slab pages.

--
Three Cheers,
Balbir

2011-03-31 18:10:08

by Ying Han

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Thu, Mar 31, 2011 at 2:53 AM, Michal Hocko <[email protected]> wrote:
> On Wed 30-03-11 10:59:21, Ying Han wrote:
>> [...]
>>
>> Michal,
>>
>> Reading through your example, sounds to me you can accomplish the
>> "guarantee" of the high priority service using existing
>> memcg mechanisms.
>>
>> Assume you have the service named cgroup-A which needs memory
>> "guarantee". Meantime we want to launch cgroup-B with no memory
>> "guarantee". What you want is to have cgroup-B uses the slack memory
>> (not being allocated by cgroup-A), but also volunteer to give up under
>> system memory pressure.
>
> This would require a "guarantee" that no pages are reclaimed from a
> group if that group is under its soft limit, right?

yes.

> I am thinking if we
> can achieve that without too many corner cases when cgroups (process's
> accounted memory) don't leave out much for other memory used by the
> kernel.

> That was my concern so I made that isolation rather opt-in without
> modifying the current reclaim logic too much (there are, of course,
> parts that can be improved).

So far we are discussing the memory limit only for user pages. Later
we will definitely need kernel memory (slab) accounting and reclaim as
well. If we put them together, do you still have the concern? Sorry, I
guess I am just trying to understand the concern with an example.

>
>> So continue w/ my previous post, you can consider the following
>> configuration in 32G machine. We can only have resident size of
>> cgroup-A as much as the machine capacity.
>>
>> cgroup-A : limit_in_bytes = 32G  soft_limit_in_bytes = 32G
>> cgroup-B : limit_in_bytes = 20G  soft_limit_in_bytes = 0G
>>
>> To be a little bit extreme, there shouldn't be memory pressure on
>> cgroup-A unless it grows above the machine capacity. If the global
>> memory contention is triggered by cgroup-B, we should steal pages from
>> it always.
>>
>> However, the current implementation of soft_limit needs to be improved
>> for the example above. Especially when we start having lots of cgroups
>> running w/ different limit setting, we need to have soft_limit being
>> efficient and we can eliminate the global lru scanning.
>
> Lots of groups is really an issue because we can end up in a situation
> when everybody is under the limit while there is not much memory left
> for the kernel. Maybe a sum(soft_limit) < kernel_threshold condition would
> solve this.
Most of the kernel memory is allocated on behalf of processes in a
cgroup. One way of doing that (after having kernel memory accounting)
is to count kernel memory in usage_in_bytes. So we have the following:

1) limit_in_bytes: cap of memory allocation (user + kernel) for cgroup-A
2) soft_limit_in_bytes: guarantee of memory allocation (user +
kernel) for cgroup-A
3) usage_in_bytes: user pages + kernel pages (allocated on behalf of
the memcg)

The above needs kernel memory accounting and targeted reclaim. Then we
have sum(soft_limit) < machine capacity. Hope we can talk about this a
bit at LSF too.
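The three counters and the sum(soft_limit) < capacity condition can be sketched as follows (illustrative Python only; the function names and the 4K page size are assumptions for the example, not memcg interfaces):

```python
# Sketch of the proposed accounting: usage_in_bytes counts user + kernel
# pages, and the guarantees are only meaningful if the soft limits fit
# in the machine together.

def usage_in_bytes(user_pages, kernel_pages, page_size=4096):
    # 3) usage = user pages + kernel pages allocated on behalf of the memcg
    return (user_pages + kernel_pages) * page_size

def guarantees_sane(soft_limits, capacity):
    # sum(soft_limit) < machine capacity; otherwise the "guarantees"
    # cannot all be honoured at once
    return sum(soft_limits) < capacity

GiB = 1 << 30
assert usage_in_bytes(1024, 256) == 1280 * 4096
assert guarantees_sane([15 * GiB, 15 * GiB], 32 * GiB)        # 15G + 15G < 32G
assert not guarantees_sane([32 * GiB, 20 * GiB], 32 * GiB)    # over-promised
```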

>> The later one breaks the isolation.
>
> Sorry, I don't understand. Why would elimination of the global lru
> scanning break isolation? Or am I misreading you?

Sorry, I meant the other way around. So we agree on this.

--Ying

2011-04-01 14:04:49

by Michal Hocko

[permalink] [raw]
Subject: Re: [RFC 0/3] Implementation of cgroup isolation

On Thu 31-03-11 11:10:00, Ying Han wrote:
> On Thu, Mar 31, 2011 at 2:53 AM, Michal Hocko <[email protected]> wrote:
> > On Wed 30-03-11 10:59:21, Ying Han wrote:
[...]
> > That was my concern so I made that isolation rather opt-in without
> > modifying the current reclaim logic too much (there are, of course,
> > parts that can be improved).
>
> So far we are discussing the memory limit only for user pages. Later
> we definitely need a kernel memory slab accounting and also for
> reclaim. If we put them together, do you still have the concern? Sorry
> guess I am just trying to understand the concern w/ example.

If we account the kernel memory then it should be less problematic, I
guess.

[...]
> > Lots of groups is really an issue because we can end up in a situation
> > when everybody is under the limit while there is not much memory left
> > for the kernel. Maybe a sum(soft_limit) < kernel_threshold condition would
> > solve this.
> most of the kernel memory are allocated on behalf of processes in
> cgroup. One way of doing that (after having kernel memory accounting)
> is to count in kernel memory into usage_in_bytes. So we have the
> following:
>
> 1) limit_in_bytes: cap of memory allocation (user + kernel) for cgroup-A
> 2) soft_limit_in_bytes: guarantee of memory allocation (user +
> kernel) for cgroup-A
> 3) usage_in_bytes: user pages + kernel pages (allocated on behalf of the memcg)
>
> The above need kernel memory accounting and targeting reclaim. Then we
> have sum(soft_limit) < machine capacity. Hope we can talk a bit in the
> LSF on this too.

Sure. I am looking forward.

> >> The later one breaks the isolation.
> >
> > Sorry, I don't understand. Why would elimination of the global lru
> > scanning break isolation? Or am I misreading you?
>
> Sorry, i meant the other way around. So we agree on this .

Makes more sense now ;)

Thanks
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic