2017-03-20 18:19:29

by Tejun Heo

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hello,

On Tue, Feb 28, 2017 at 02:38:38PM +0000, Patrick Bellasi wrote:
> This patch extends the CPU controller by adding a couple of new
> attributes, capacity_min and capacity_max, which can be used to enforce
> bandwidth boosting and capping. More specifically:
>
> - capacity_min: defines the minimum capacity which should be granted
> (by schedutil) when a task in this group is running,
> i.e. the task will run at least at that capacity
>
> - capacity_max: defines the maximum capacity which can be granted
> (by schedutil) when a task in this group is running,
> i.e. the task can run up to that capacity

cpu.capacity.min and cpu.capacity.max are the more conventional names.
I'm not sure about the name capacity as it doesn't encode what it does
and is difficult to tell apart from cpu bandwidth limits. I think
it'd be better to represent what it controls more explicitly.

> These attributes:
> a) are tunable at all hierarchy levels, i.e. root group too

This usually is problematic because there should be a non-cgroup way
of configuring the feature in case cgroup isn't configured or used,
and it becomes awkward to have two separate mechanisms configuring the
same thing. Maybe the feature is cgroup specific enough that it makes
sense here but this needs more explanation / justification.

> b) allow to create subgroups of tasks which are not violating the
> capacity constraints defined by the parent group.
> Thus, tasks on a subgroup can only be more boosted and/or more

For both limits and protections, the parent caps the maximum the
children can get. At least that's what memcg does for memory.low.
Doing that makes sense for memcg because for memory the parent can
still do protections regardless of what its children are doing and it
makes delegation safe by default.

I understand why you would want a property like capacity to be the
other direction as that way you get more specific as you walk down the
tree for both limits and protections; however, I think we need to
think a bit more about it and ensure that the resulting interface
isn't confusing. Would it work for capacity to behave the other
direction - ie. a parent's min restricting the highest min that its
descendants can get? It's completely fine if that's weird.

Thanks.

--
tejun


2017-03-20 17:36:17

by Tejun Heo

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

On Mon, Mar 20, 2017 at 01:15:11PM -0400, Tejun Heo wrote:
> > a) are tunable at all hierarchy levels, i.e. root group too
>
> This usually is problematic because there should be a non-cgroup way
> of configuring the feature in case cgroup isn't configured or used,
> and it becomes awkward to have two separate mechanisms configuring the
> same thing. Maybe the feature is cgroup specific enough that it makes
> sense here but this needs more explanation / justification.

A related issue here is what the non-cgroup interface should be and
how it should interact with cgroup. In the long term, I think it's
better to have a generic non-cgroup interface for these new features.
We've gotten it wrong, or at least inconsistent, across different
settings - most don't affect API-accessible settings and just confine
the configuration requested by the application inside the cgroup
constraints; however, cpuset does it the other way and overwrites
configurations set by individual applications.

If we agree that exposing this only through cgroup is fine, this isn't
a concern, but, given that this is a thread property and can obviously
be useful outside cgroups, that seems debatable at the very least.

Thanks.

--
tejun

2017-03-20 18:08:50

by Patrick Bellasi

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

On 20-Mar 13:15, Tejun Heo wrote:
> Hello,
>
> On Tue, Feb 28, 2017 at 02:38:38PM +0000, Patrick Bellasi wrote:
> > This patch extends the CPU controller by adding a couple of new
> > attributes, capacity_min and capacity_max, which can be used to enforce
> > bandwidth boosting and capping. More specifically:
> >
> > - capacity_min: defines the minimum capacity which should be granted
> > (by schedutil) when a task in this group is running,
> > i.e. the task will run at least at that capacity
> >
> > - capacity_max: defines the maximum capacity which can be granted
> > (by schedutil) when a task in this group is running,
> > i.e. the task can run up to that capacity
>
> cpu.capacity.min and cpu.capacity.max are the more conventional names.

Ok, should be an easy renaming.

> I'm not sure about the name capacity as it doesn't encode what it does
> and is difficult to tell apart from cpu bandwidth limits. I think
> it'd be better to represent what it controls more explicitly.

In the scheduler jargon, capacity represents the amount of computation
that a CPU can provide and it's usually defined to be 1024 for the
biggest CPU (on non SMP systems) running at the highest OPP (i.e.
maximum frequency).

It's true that it kind of overlaps with the concept of "bandwidth".
However, the main difference here is that "bandwidth" is not frequency
(and architecture) scaled.
Thus, for example, assuming we have only one CPU with these two OPPs:

 OPP | Frequency | Capacity
  1  |   500MHz  |      512
  2  |     1GHz  |     1024

a task running 60% of the time on that CPU, when the CPU is configured
to run at 500MHz, is using 60% bandwidth from the bandwidth standpoint
but only 30% of the available capacity from the capacity standpoint.

IOW, bandwidth is purely time based while capacity factors in both
frequency and architectural differences.
Thus, while a "bandwidth" constraint limits the amount of time a task
can use a CPU, independently of the "actual computation" performed,
with the new "capacity" constraints we can enforce how much "actual
computation" a task can perform in the "unit of time".

> > These attributes:
> > a) are tunable at all hierarchy levels, i.e. root group too
>
> This usually is problematic because there should be a non-cgroup way
> of configuring the feature in case cgroup isn't configured or used,
> and it becomes awkward to have two separate mechanisms configuring the
> same thing. Maybe the feature is cgroup specific enough that it makes
> sense here but this needs more explanation / justification.

In the previous proposal I used to expose global tunables under
procfs, e.g.:

/proc/sys/kernel/sched_capacity_min
/proc/sys/kernel/sched_capacity_max

which can be used to define tunable root constraints when CGroups are
not available, and become RO when CGroups are.

Could this eventually be an acceptable option?

In any case I think that this feature will be mainly targeting CGroup
based systems. Indeed, one of the main goals is to collect
"application specific" information from "informed run-times". Being
"application specific" means that we need a way to classify
applications depending on the runtime context... and that capability
in Linux is ultimately provided via the CGroup interface.

> > b) allow to create subgroups of tasks which are not violating the
> > capacity constraints defined by the parent group.
> > Thus, tasks on a subgroup can only be more boosted and/or more
>
> For both limits and protections, the parent caps the maximum the
> children can get. At least that's what memcg does for memory.low.
> Doing that makes sense for memcg because for memory the parent can
> still do protections regardless of what its children are doing and it
> makes delegation safe by default.

Just to be more clear, the current proposal enforces:

- capacity_max_child <= capacity_max_parent

Since, if a task is constrained to get only up to a certain amount
of capacity, then its children cannot use more than that... eventually
they can only be further constrained.

- capacity_min_child >= capacity_min_parent

Since, if a task has been boosted to run at least that fast, then its
children cannot be constrained to go slower without eventually
impacting the parent's performance.
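
The two rules above can be summarized with a minimal sketch
(illustrative only, not the RFC code): a child group's range is
accepted only if it stays within its parent's range.

/* illustrative sketch of the parent/child rules above, not the RFC code */
struct capacity_range {
	unsigned int min;	/* 0..1024, SCHED_CAPACITY_SCALE units */
	unsigned int max;	/* 0..1024 */
};

static int capacity_range_valid(const struct capacity_range *child,
				const struct capacity_range *parent)
{
	if (child->max > parent->max)
		return 0;	/* children can only be further capped */
	if (child->min < parent->min)
		return 0;	/* children can only be further boosted */
	return child->min <= child->max;
}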

> I understand why you would want a property like capacity to be the
> other direction as that way you get more specific as you walk down the
> tree for both limits and protections;

Right, the protection scheme is defined in such a way that it never
affects parent constraints.

> however, I think we need to
> think a bit more about it and ensure that the resulting interface
> isn't confusing.

Sure.

> Would it work for capacity to behave the other
> direction - ie. a parent's min restricting the highest min that its
> descendants can get? It's completely fine if that's weird.

I had thought about that possibility and it was not convincing from
the use-case standpoint, at least for the ones I've considered.

The reason is that capacity_min is used to implement a concept of
"boosting" where, let's say, we want to "run a task faster than a
minimum frequency". Such a constraint is defined because we know that
this task, and likely all its descendant threads, need at least that
capacity level to perform according to expectations.

In that case, "refining down the hierarchy" can require boosting some
threads further, but likely not less.

Does this make sense?

To me this seems to match at least the Android/ChromeOS specific
use-cases quite well. I'm not sure whether there can be other,
different use-cases in the domain of, for example, managed containers.


> Thanks.
>
> --
> tejun

--
#include <best/regards.h>

Patrick Bellasi

2017-03-23 00:29:00

by Joel Fernandes

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hi,

On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi
<[email protected]> wrote:
> On 20-Mar 13:15, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Feb 28, 2017 at 02:38:38PM +0000, Patrick Bellasi wrote:
[..]
>> > These attributes:
>> > a) are tunable at all hierarchy levels, i.e. root group too
>>
>> This usually is problematic because there should be a non-cgroup way
>> of configuring the feature in case cgroup isn't configured or used,
>> and it becomes awkward to have two separate mechanisms configuring the
>> same thing. Maybe the feature is cgroup specific enough that it makes
>> sense here but this needs more explanation / justification.
>
> In the previous proposal I used to expose global tunables under
> procfs, e.g.:
>
> /proc/sys/kernel/sched_capacity_min
> /proc/sys/kernel/sched_capacity_max
>

But then we would lose out on being able to attach capacity
constraints to specific tasks or groups of tasks?

> which can be used to defined tunable root constraints when CGroups are
> not available, and becomes RO when CGroups are.
>
> Can this be eventually an acceptable option?
>
> In any case I think that this feature will be mainly targeting CGroup
> based systems. Indeed, one of the main goals is to collect
> "application specific" information from "informed run-times". Being
> "application specific" means that we need a way to classify
> applications depending on the runtime context... and that capability
> in Linux is ultimately provided via the CGroup interface.

I think the concern raised is more about whether CGroups is the right
interface to use for attaching capacity constraints to tasks or groups
of tasks, or whether there is a better way to attach such constraints.

I am actually looking at a workload where it's desirable to attach such
constraints to only one thread or task; in this case it would be a bit
of an overkill to use CGroups to attach such a property just for one
task with specific constraints, and it would be beneficial if, along
with the CGroup interface, there were also an interface to attach it to
individual tasks. The other advantage of such an interface is that we
don't have to create a separate CGroup for every new constraint limit
and can have several tasks with different unique constraints.

Regards,
Joel

2017-03-23 10:33:05

by Patrick Bellasi

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

On 22-Mar 17:28, Joel Fernandes (Google) wrote:
> Hi,
>
> On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi
> <[email protected]> wrote:
> > On 20-Mar 13:15, Tejun Heo wrote:
> >> Hello,
> >>
> >> On Tue, Feb 28, 2017 at 02:38:38PM +0000, Patrick Bellasi wrote:
> [..]
> >> > These attributes:
> >> > a) are tunable at all hierarchy levels, i.e. root group too
> >>
> >> This usually is problematic because there should be a non-cgroup way
> >> of configuring the feature in case cgroup isn't configured or used,
> >> and it becomes awkward to have two separate mechanisms configuring the
> >> same thing. Maybe the feature is cgroup specific enough that it makes
> >> sense here but this needs more explanation / justification.
> >
> > In the previous proposal I used to expose global tunables under
> > procfs, e.g.:
> >
> > /proc/sys/kernel/sched_capacity_min
> > /proc/sys/kernel/sched_capacity_max
> >
>
> But then we would lose out on being able to attach capacity
> constraints to specific tasks or groups of tasks?

Yes, right. If CGroups are not available then you cannot specify
per-task constraints; this is just a system-wide global tunable.

The question is: does this overall proposal make sense outside the
scope of task group classification? (more on that afterwards)

> > which can be used to defined tunable root constraints when CGroups are
> > not available, and becomes RO when CGroups are.
> >
> > Can this be eventually an acceptable option?
> >
> > In any case I think that this feature will be mainly targeting CGroup
> > based systems. Indeed, one of the main goals is to collect
> > "application specific" information from "informed run-times". Being
> > "application specific" means that we need a way to classify
> > applications depending on the runtime context... and that capability
> > in Linux is ultimately provided via the CGroup interface.
>
> I think the concern raised is more about whether CGroups is the right
> interface to use for attaching capacity constraints to task or groups
> of tasks, or is there a better way to attach such constraints?

Notice that CGroup-based classification makes it easy to enforce the
concept of "delegation containment". I think this feature would be
nice to have whatever interface we choose.

However, we could potentially define a proper per-task API; are you
thinking of something specific?

> I am actually looking at a workload where its desirable to attach such
> constraints to only 1 thread or task, in this case it would be a bit
> overkill to use CGroups to attach such property just for 1 task with
> specific constraints

Well, perhaps it depends on how and when CGroups are created.
If we think about using a proper "Organize Once and Control" model
(i.e. every app gets its own CGroup at creation time and lives there
for the rest of its lifetime, while the user-space run-time eventually
tunes the assigned resources), then run-time overheads should not be a
major concern.

AFAIK, CGroups' main overheads are associated with task migration and
tuning. Tuning will not be an issue for the kind of actions required
by capacity clamping. As for migrations, they should be avoided as
much as possible when CGroups are in use.

> and it would be beneficial that along with the
> CGroup interface, there's also an interface to attach it to individual
> tasks.

IMO a dual interface to do the same thing would be confusing at the
very least, and also more complicated to maintain.

> The other advantage of such interface is we don't have to
> create a separate CGroup for every new constraint limit and can have
> several tasks with different unique constraints.

That's still possible using CGroups and IMO it will not be the "most
common case".
Don't you think that in general we will need to set constraints at the
application level, thus for groups of tasks?

As a general rule we should probably go for an interface which makes
the most common case easy.

> Regards,
> Joel

--
#include <best/regards.h>

Patrick Bellasi

2017-03-23 16:01:18

by Tejun Heo

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hello,

On Thu, Mar 23, 2017 at 10:32:54AM +0000, Patrick Bellasi wrote:
> > But then we would lose out on being able to attach capacity
> > constraints to specific tasks or groups of tasks?
>
> Yes, right. If CGroups are not available than you cannot specify
> per-task constraints. This is just a system-wide global tunable.
>
> Question is: does this overall proposal makes sense outside the scope
> of task groups classification? (more on that afterwards)

I think it does, given that it's a per-thread property which requires
internal application knowledge to tune.

> > I think the concern raised is more about whether CGroups is the right
> > interface to use for attaching capacity constraints to task or groups
> > of tasks, or is there a better way to attach such constraints?
>
> Notice that CGroups based classification allows to easily enforce
> the concept of "delegation containment". I think this feature should
> be nice to have whatever interface we choose.
>
> However, potentially we can define a proper per-task API; are you
> thinking to something specifically?

I don't think the overall outcome was too good when we used cgroup as
the direct way of configuring certain attributes - it either excludes
the possibility of an easily accessible API from the application side
or conflicts with the attributes set through such an API. It's a lot
clearer when cgroup just sets what's allowed under the hierarchy.

This is also in line with the aspect that cgroup for the most part is
a scoping mechanism - it's the most straight-forward to implement and
use when the behavior inside cgroup matches a system without cgroup,
just scoped. It shows up here too. If you take out the cgroup part,
you're left with an interface which is hardly useful. cgroup isn't
scoping the global system here. It's becoming the primary interface
for this feature which most likely isn't a good sign.

So, my suggestion is to implement it as a per-task API. If the
feature calls for scoped restrictions, we definitely can add cgroup
support for that but I'm really not convinced about using cgroup as
the primary interface for this.

Thanks.

--
tejun

2017-03-23 18:15:39

by Patrick Bellasi

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

On 23-Mar 12:01, Tejun Heo wrote:
> Hello,

Hi Tejun,

> On Thu, Mar 23, 2017 at 10:32:54AM +0000, Patrick Bellasi wrote:
> > > But then we would lose out on being able to attach capacity
> > > constraints to specific tasks or groups of tasks?
> >
> > Yes, right. If CGroups are not available than you cannot specify
> > per-task constraints. This is just a system-wide global tunable.
> >
> > Question is: does this overall proposal makes sense outside the scope
> > of task groups classification? (more on that afterwards)
>
> I think it does, given that it's a per-thread property which requires
> internal application knowledge to tune.

Yes and no... perhaps I'm biased towards some specific usage scenarios,
but where I find this interface most useful is not when apps tune
themselves but rather when an "external actor" (which I usually call an
"informed run-time") controls these apps.

> > > I think the concern raised is more about whether CGroups is the right
> > > interface to use for attaching capacity constraints to task or groups
> > > of tasks, or is there a better way to attach such constraints?
> >
> > Notice that CGroups based classification allows to easily enforce
> > the concept of "delegation containment". I think this feature should
> > be nice to have whatever interface we choose.
> >
> > However, potentially we can define a proper per-task API; are you
> > thinking to something specifically?
>
> I don't think the overall outcome was too good when we used cgroup as
> the direct way of configuring certain attributes - it either excludes
> the possibility of easily accessible API from application side or

That's actually one of the main points: does it make sense to expose
such an API to applications at all?

What we are after is a properly defined interface where kernel-space and
user-space can potentially close this control loop:

a) a "privileged" user-space, which has much more a priori information
about task requirements, can feed some constraints to kernel-space

b) kernel-space, which has optimized and efficient mechanisms, enforces
these constraints on a per-task basis

Here is a graphical representation of these concepts:

   +-------------+     +-------------+     +-------------+
   | App1 Tasks ++     | App2 Tasks ++     | App3 Tasks ++
   |            ||     |            ||     |            ||
   +--------------|    +--------------|    +--------------|
     +-------------+     +-------------+     +-------------+
           |                   |                   |
+----------------------------------------------------------+
|                                                          |
|      +--------------------------------------------+      |
|      |  +-------------------------------------+   |      |
|      |  |      Run-Time Optimized Services    |   |      |
|      |  |      (e.g. execution model)         |   |      |
|      |  +-------------------------------------+   |      |
|      |                                            |      |
|      |     Informed Run-Time Resource Manager     |      |
|      |   (Android, ChromeOS, Kubernets, etc...)   |      |
|      +------------------------------------------^-+      |
|        |                                        |        |
|        |Constraints                             |        |
|        |(OPP and Task Placement biasing)        |        |
|        |                                        |        |
|        |                             Monitoring |        |
|      +-v------------------------------------------+      |
|      |               Linux Kernel                 |      |
|      |         (Scheduler, schedutil, ...)        |      |
|      +--------------------------------------------+      |
|                                                          |
|           Closed control and optimization loop           |
+----------------------------------------------------------+

What is important to notice is that there is a middleware in between
the kernel and the applications. This is a special kind of user-space
where it is still safe for the kernel to delegate some "decisions".

The ultimate user of the proposed interface will be such a middleware,
not each and every application. That's why I think the "containment"
feature provided by CGroups is a good fit for this kind of design.

> conflicts with the attributes set through such API.

In this "run-time resource management" schema, generic applications do not
access the proposed API, which is reserved to the privileged user-space.

Applications eventually can request better services to the middleware, using a
completely different and more abstract API, which can also be domain specific.


> It's a lot clearer when cgroup just sets what's allowed under the hierarchy.
> This is also in line with the aspect that cgroup for the most part is
> a scoping mechanism - it's the most straight-forward to implement and
> use when the behavior inside cgroup matches a system without cgroup,
> just scoped.

I like this concept of "CGroups being a scoping mechanism" and I think it
perfectly matches this use-case as well...

> It shows up here too. If you take out the cgroup part,
> you're left with an interface which is hardly useful. cgroup isn't
> scoping the global system here.

It is, indeed:

1) Applications never see CGroups.
They use whatever resources are available when CGroups are not in use.

2) When an "Informed Run-time Resource Manager" schema is used, the same
applications are scoped in the sense that they become "managed
applications".

Managed applications are still completely "unaware" of the CGroup
interface; they do not rely on that interface for what they have to do.
However, in this scenario, there is a supervisor which knows how much an
application can get at each and every instant.

> It's becoming the primary interface
> for this feature which most likely isn't a good sign.

It's a primary interface yes, but not for apps, only for an (optional)
run-time resource manager.

What we want to enable with this interface is exactly the possibility for a
privileged user-space entity to "scope" different applications.

Described like that, one could argue that we could still implement this
model using a custom per-task API. However, this proposal is about
"tuning/partitioning" a resource which is already (I would say only)
controllable using the CPU controller.
That's also why the proposed interface has now been defined as an
extension of the CPU controller, in such a way as to keep a consistent
view.

This controller is already used by run-times like Android to "scope"
apps by constraining the amount of CPU resources they are getting.
Is that not a legitimate usage of the cpu controller?

What we are doing here is just extending it a bit in such a way that, while:

{cfs,rt}_{period,runtime}_us limits the amount of TIME we can use a CPU

we can also use:

capacity_{min,max} to limit the actual COMPUTATIONAL BANDWIDTH we can use
during that time.
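
For illustration only, here is a sketch of how a run-time could set both
kinds of limits on a group; the cgroup mount point, the group name and
the capacity_{min,max} attribute file names are assumptions based on
this RFC, not a final ABI.

#include <stdio.h>

/* illustrative only: paths and attribute names are assumed, not an ABI */
static int cgroup_write(const char *attr, const char *value)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/fs/cgroup/cpu/top-app/%s", attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%s\n", value);
	return fclose(f);
}

int main(void)
{
	cgroup_write("cpu.cfs_period_us", "100000");	/* time-based limits */
	cgroup_write("cpu.cfs_quota_us",  "50000");	/* 50% of CPU time */
	cgroup_write("cpu.capacity_min",  "512");	/* proposed: boost floor */
	cgroup_write("cpu.capacity_max",  "1024");	/* proposed: capacity cap */
	return 0;
}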

> So, my suggestion is to implement it as a per-task API. If the
> feature calls for scoped restrictions, we definitely can add cgroup
> support for that but I'm really not convinced about using cgroup as
> the primary interface for this.

Given this viewpoint, I can definitely see a "scoped restrictions"
usage, as well as the idea that this can be a unique and primary
interface. Again, it is not exposed generically to apps, but targets a
proper integration with user-space run-time resource managers.

I hope this helps to clarify the scope better. Do you still see the
CGroup API as not the best fit for such a usage?

> Thanks.
>
> --
> tejun

Cheers Patrick

--
#include <best/regards.h>

Patrick Bellasi

2017-03-23 18:39:42

by Tejun Heo

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hello, Patrick.

On Thu, Mar 23, 2017 at 06:15:33PM +0000, Patrick Bellasi wrote:
> What is important to notice is that there is a middleware, in between
> the kernel and the applications. This is a special kind of user-space
> where it is still safe for the kernel to delegate some "decisions".
>
> The ultimate user of the proposed interface will be such a middleware, not each
> and every application. That's why the "containment" feature provided by CGroups
> I think is a good fitting for the kind of design.

cgroup isn't required for this type of use. We've always had this
sort of usage in combination with mechanisms to restrict what
non-priv applications can do. The usage is perfectly valid but
whether to use cgroup as the sole interface is a different issue.

Yes, the cgroup interface can be used this way; however, it excludes,
or at least makes pretty cumbersome, different use cases which can be
served by a regular API. And that isn't the case when we approach it
from the other direction.

> I like this concept of "CGroups being a scoping mechanism" and I think it
> perfectly matches this use-case as well...
>
> > It shows up here too. If you take out the cgroup part,
> > you're left with an interface which is hardly useful. cgroup isn't
> > scoping the global system here.
>
> It is, indeed:
>
> 1) Applications do not see CGroups, never.
> They use whatever resources are available when CGroups are not in use.
>
> 2) When an "Informed Run-time Resource Manager" schema is used, then the same
> applications are scoped in the sense that they becomes "managed applications".
>
> Managed applications are still completely "unaware" about the CGroup
> interface, they do not relay on that interface for what they have to do.
> However, in this scenario, there is a supervisor which know how much an
> application can get each and every instant.

But it isn't useful if you take cgroup out of the picture. cgroup
isn't scoping a feature. The feature is buried in the cgroup itself.
I don't think it's useful to argue over the fine semantics. Please
see below.

> > It's becoming the primary interface
> > for this feature which most likely isn't a good sign.
>
> It's a primary interface yes, but not for apps, only for an (optional)
> run-time resource manager.
>
> What we want to enable with this interface is exactly the possibility for a
> privileged user-space entity to "scope" different applications.
>
> Described like that we can argue that we can still implement this model using a
> custom per-task API. However, this proposal is about "tuning/partitioning" a
> resource which is already (would say only) controllable using the CPU
> controller.
> That's also why the proposed interface has now been defined as a extension of
> the CPU controller in such a way to keep a consistent view.
>
> This controller is already used by run-times like Android to "scope" apps by
> constraining the amount of CPUs resource they are getting.
> Is that not a legitimate usage of the cpu controller?
>
> What we are doing here is just extending it a bit in such a way that, while:
>
> {cfs,rt}_{period,runtime}_us limits the amount of TIME we can use a CPU
>
> we can also use:
>
> capacity_{min,max} to limit the actual COMPUTATIONAL BANDWIDTH we can use
> during that time.

Yes, we do have bandwidth restriction as a cgroup-only feature, which
is different from how we handle nice levels and weights. Given the
nature of bandwidth limits, if necessary, it is straightforward to
expose a per-task interface.

capacity min/max isn't the same thing. It isn't a limit on countable
units of a specific resource, and that's why the interface you
suggested for .min is different. It restricts the attribute set which
can be picked in the subhierarchy rather than controlling the
distribution of atoms of the resource.

That's also why we're gonna have a problem if we later decide we need a
thread-based API for it. Once we make cgroup the primary owner of the
attribute, it's not straightforward to add another owner.

> > So, my suggestion is to implement it as a per-task API. If the
> > feature calls for scoped restrictions, we definitely can add cgroup
> > support for that but I'm really not convinced about using cgroup as
> > the primary interface for this.
>
> Given this viewpoint, I can definitively see a "scoped restrictions" usage, as
> well as the idea that this can be a unique and primary interface.
> Again, not exposed generically to apps but targeting a proper integration
> of user-space run-time resource managers.
>
> I hope this contributed to clarify better the scope. Do you still see the
> CGroup API not as the best fit for such a usage?

Yes, I still think so. It'd be best to first figure out how the
attribute should be configured, inherited and restricted using the
normal APIs and then layer scoped restrictions on top with cgroup.
cgroup shouldn't be used as a way to bypass or get in the way of a
proper API.

Thanks.

--
tejun

2017-03-24 06:38:02

by Joel Fernandes

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hi Tejun,

>> That's also why the proposed interface has now been defined as a extension of
>> the CPU controller in such a way to keep a consistent view.
>>
>> This controller is already used by run-times like Android to "scope" apps by
>> constraining the amount of CPUs resource they are getting.
>> Is that not a legitimate usage of the cpu controller?
>>
>> What we are doing here is just extending it a bit in such a way that, while:
>>
>> {cfs,rt}_{period,runtime}_us limits the amount of TIME we can use a CPU
>>
>> we can also use:
>>
>> capacity_{min,max} to limit the actual COMPUTATIONAL BANDWIDTH we can use
>> during that time.
>
> Yes, we do have bandwidth restriction as a cgroup only feature, which
> is different from how we handle nice levels and weights. Given the
> nature of bandwidth limits, if necessary, it is straight-forward to
> expose per-task interface.
>
> capacity min/max isn't the same thing. It isn't a limit on countable
> units of a specific resource and that's why the interface you
> suggested for .min is different. It's restricting attribute set which
> can be picked in the subhierarchy rather than controlling distribution
> of atoms of the resource.
>
> That's also why we're gonna have problem if we later decide we need a
> thread based API for it. Once we make cgroup the primary owner of the
> attribute, it's not straight forward to add another owner.

Sorry, I don't immediately see why it is not straightforward to have a
per-task API later, once the CGroup interface is added. Maybe if you
don't mind giving an example, that will help?

I can start with an example. Say you have a single-level hierarchy
(top-app, in Android terms, is the set of tasks that are user facing
and for which we'd like to enforce some capacity minimums; background,
on the other hand, is the opposite):

                    ROOT (min = 0, max = 1024)
                     /                    \
                    /                      \
TOP-APP (min = 200, max = 1024)    BACKGROUND (min = 0, max = 500)

If, in the future, we want to have a per-task API to individually
configure a task with these limits, it seems it will be straightforward
to implement, IMO.
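
For illustration, here is a hypothetical sketch (not part of the RFC) of
how such a later per-task value could coexist with the group ranges in
the example above: the task's own request would simply be clamped into
the range of the group it belongs to.

/* hypothetical helper, not part of the RFC */
static unsigned int effective_capacity_min(unsigned int task_min,
					   unsigned int group_min,
					   unsigned int group_max)
{
	if (task_min < group_min)
		return group_min;	/* e.g. the TOP-APP floor of 200 */
	if (task_min > group_max)
		return group_max;	/* e.g. the BACKGROUND cap of 500 */
	return task_min;
}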

As Patrick mentioned, all of the use cases needing this right now
involve an informed runtime placing a task in a group of tasks, without
needing to set attributes for each individual task. We are already
placing tasks in individual CGroups in Android based on the information
the runtime has, so adding in the capacity constraints will make it fit
naturally while leaving the door open for any future per-task API
additions, IMO.

Thanks,

Joel

2017-03-24 07:02:39

by Joel Fernandes

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hi Patrick,

On Thu, Mar 23, 2017 at 3:32 AM, Patrick Bellasi
<[email protected]> wrote:
[..]
>> > which can be used to defined tunable root constraints when CGroups are
>> > not available, and becomes RO when CGroups are.
>> >
>> > Can this be eventually an acceptable option?
>> >
>> > In any case I think that this feature will be mainly targeting CGroup
>> > based systems. Indeed, one of the main goals is to collect
>> > "application specific" information from "informed run-times". Being
>> > "application specific" means that we need a way to classify
>> > applications depending on the runtime context... and that capability
>> > in Linux is ultimately provided via the CGroup interface.
>>
>> I think the concern raised is more about whether CGroups is the right
>> interface to use for attaching capacity constraints to task or groups
>> of tasks, or is there a better way to attach such constraints?
>
> Notice that CGroups based classification allows to easily enforce
> the concept of "delegation containment". I think this feature should
> be nice to have whatever interface we choose.
>
> However, potentially we can define a proper per-task API; are you
> thinking to something specifically?
>

I was thinking: how about adding per-task constraints to the resource
limits API, if that makes sense? There's already RLIMIT_CPU and
RLIMIT_NICE. An informed runtime could then modify the limits of tasks
using prlimit.
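
To make that suggestion concrete, here is a sketch assuming, purely
hypothetically, that a new RLIMIT_CAPACITY_MIN resource existed
alongside RLIMIT_CPU and RLIMIT_NICE; prlimit(2) itself is the existing
syscall an informed runtime would use to apply it to another task.

#define _GNU_SOURCE
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

/* hypothetical resource id: no such rlimit exists in any kernel today */
#define RLIMIT_CAPACITY_MIN	16

static int boost_task(pid_t pid, unsigned int capacity_min)
{
	struct rlimit lim = {
		.rlim_cur = capacity_min,	/* e.g. 200 out of 1024 */
		.rlim_max = 1024,
	};

	/* fails with EINVAL on current kernels: illustrative only */
	if (prlimit(pid, RLIMIT_CAPACITY_MIN, &lim, NULL) < 0) {
		perror("prlimit");
		return -1;
	}
	return 0;
}

int main(void)
{
	return boost_task(getpid(), 200);	/* boost the current task */
}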

>> The other advantage of such interface is we don't have to
>> create a separate CGroup for every new constraint limit and can have
>> several tasks with different unique constraints.
>
> That's still possible using CGroups and IMO it will not be the "most
> common case".
> Don't you think that in general we will need to set constraints at
> applications level, thus group of tasks?

Some applications could be a single task; also, not all tasks in an
application may need constraints, right?

> As a general rule we should probably go for an interface which makes
> easy the most common case.

I agree.

Thanks,
Joel

2017-03-24 15:00:59

by Tejun Heo

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hello,

On Thu, Mar 23, 2017 at 11:37:50PM -0700, Joel Fernandes (Google) wrote:
> > That's also why we're gonna have problem if we later decide we need a
> > thread based API for it. Once we make cgroup the primary owner of the
> > attribute, it's not straight forward to add another owner.
>
> Sorry I don't immediately see why it is not straight forward to have a
> per-task API later once CGroup interface is added? Maybe if you don't
> mind giving an example that will help?
>
> I can start with an example, say you have a single level hierarchy
> (Top-app in Android terms is the set of tasks that are user facing and
> we'd like to enforce some capacity minimums, background on the other
> hand is the opposite):
>
>                     ROOT (min = 0, max = 1024)
>                      /                    \
>                     /                      \
> TOP-APP (min = 200, max = 1024)    BACKGROUND (min = 0, max = 500)
>
> If in the future, if we want to have a per-task API to individually
> configure the task with these limits, it seems it will be straight
> forward to implement IMO.

Ah, you're right. I got fixated on controllers which control
single-value attributes (the net ones). Yeah, we can extend the same
range interface to the thread level. Sorry about the confusion.

Not necessarily specific to the API discussion, but in general a lower
level in the configuration hierarchy shouldn't be able to restrict what
the parents can do, so we need to think more about how delegation
should work.

> As Patrick mentioned, all of the usecases needing this right now is an
> informed runtime placing a task in a group of tasks and not needing to
> set attributes for each individual task. We are already placing tasks
> in individual CGroups in Android based on the information the runtime
> has so adding in the capacity constraints will make it fit naturally
> while leaving the door open for any future per-task API additions IMO.

But the point is that, while the use case can be served by cgroup as
the primary API, it doesn't have to be. This is still an attribute
range restriction controller rather than an actual hierarchical
resource distributor. All the controller does is restrict per-task
configuration, which is the only operative part. This is different
from bandwidth, which is an actual resource controller that
distributes cpu cycles hierarchically.

There can be some benefits to having cgroup restrictions on capacity
but they won't add anything functionally fundamental. So, I still
don't see why making cgroup the primary interface would be a good idea
here.

Thanks.

--
tejun

2017-03-30 21:13:38

by Paul Turner

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

There is one important, fundamental difference here:
{cfs,rt}_{period,runtime}_us is a property that applies to a group of
threads; it can be sub-divided. We can consume 100ms of quota either by
having one thread run for 100ms, or by having two threads each run for
50ms.

This is not true for capacity. It's a tag that affects the individual
threads it's applied to.
I'm also not sure it's a hard constraint. For example, suppose we set a
max that is smaller than the capacity of a "big" cpu on an asymmetric
system. In the case that the faster CPU is relatively busy, but still
opportunistically available, we would still want to schedule the task
there.

This definitely seems to make more sense as a per-thread interface in
its current form.

2017-03-30 21:16:33

by Paul Turner

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi
<[email protected]> wrote:
> On 20-Mar 13:15, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Feb 28, 2017 at 02:38:38PM +0000, Patrick Bellasi wrote:
>> > This patch extends the CPU controller by adding a couple of new
>> > attributes, capacity_min and capacity_max, which can be used to enforce
>> > bandwidth boosting and capping. More specifically:
>> >
>> > - capacity_min: defines the minimum capacity which should be granted
>> > (by schedutil) when a task in this group is running,
>> > i.e. the task will run at least at that capacity
>> >
>> > - capacity_max: defines the maximum capacity which can be granted
>> > (by schedutil) when a task in this group is running,
>> > i.e. the task can run up to that capacity
>>
>> cpu.capacity.min and cpu.capacity.max are the more conventional names.
>
> Ok, should be an easy renaming.
>
>> I'm not sure about the name capacity as it doesn't encode what it does
>> and is difficult to tell apart from cpu bandwidth limits. I think
>> it'd be better to represent what it controls more explicitly.
>
> In the scheduler jargon, capacity represents the amount of computation
> that a CPU can provide and it's usually defined to be 1024 for the
> biggest CPU (on non SMP systems) running at the highest OPP (i.e.
> maximum frequency).
>
> It's true that it kind of overlaps with the concept of "bandwidth".
> However, the main difference here is that "bandwidth" is not frequency
> (and architecture) scaled.
> Thus, for example, assuming we have only one CPU with these two OPPs:
>
> OPP | Frequency | Capacity
> 1 | 500MHz | 512
> 2 | 1GHz | 1024

I think exposing capacity in this manner is extremely challenging.
It's not normalized in any way between architectures, which places a
lot of the ABI in the API.

Have you considered any schemes for normalizing this in a reasonable fashion?
>
> a task running 60% of the time on that CPU when configured to run at
> 500MHz, from the bandwidth standpoint it's using 60% bandwidth but, from
> the capacity standpoint, is using only 30% of the available capacity.
>
> IOW, bandwidth is purely temporal based while capacity factors in both
> frequency and architectural differences.
> Thus, while a "bandwidth" constraint limits the amount of time a task
> can use a CPU, independently from the "actual computation" performed,
> with the new "capacity" constraints we can enforce much "actual
> computation" a task can perform in the "unit of time".
>
>> > These attributes:
>> > a) are tunable at all hierarchy levels, i.e. root group too
>>
>> This usually is problematic because there should be a non-cgroup way
>> of configuring the feature in case cgroup isn't configured or used,
>> and it becomes awkward to have two separate mechanisms configuring the
>> same thing. Maybe the feature is cgroup specific enough that it makes
>> sense here but this needs more explanation / justification.
>
> In the previous proposal I used to expose global tunables under
> procfs, e.g.:
>
> /proc/sys/kernel/sched_capacity_min
> /proc/sys/kernel/sched_capacity_max
>
> which can be used to defined tunable root constraints when CGroups are
> not available, and becomes RO when CGroups are.
>
> Can this be eventually an acceptable option?
>
> In any case I think that this feature will be mainly targeting CGroup
> based systems. Indeed, one of the main goals is to collect
> "application specific" information from "informed run-times". Being
> "application specific" means that we need a way to classify
> applications depending on the runtime context... and that capability
> in Linux is ultimately provided via the CGroup interface.
>
>> > b) allow to create subgroups of tasks which are not violating the
>> > capacity constraints defined by the parent group.
>> > Thus, tasks on a subgroup can only be more boosted and/or more
>>
>> For both limits and protections, the parent caps the maximum the
>> children can get. At least that's what memcg does for memory.low.
>> Doing that makes sense for memcg because for memory the parent can
>> still do protections regardless of what its children are doing and it
>> makes delegation safe by default.
>
> Just to be more clear, the current proposal enforces:
>
> - capacity_max_child <= capacity_max_parent
>
> Since, if a task is constrained to get only up to a certain amount
> of capacity, than its childs cannot use more than that... eventually
> they can only be further constrained.
>
> - capacity_min_child >= capacity_min_parent
>
> Since, if a task has been boosted to run at least as much fast, than
> its childs cannot be constrained to go slower without eventually
> impacting parent performance.
>
>> I understand why you would want a property like capacity to be the
>> other direction as that way you get more specific as you walk down the
>> tree for both limits and protections;
>
> Right, the protection schema is defined in such a way to never affect
> parent constraints.
>
>> however, I think we need to
>> think a bit more about it and ensure that the resulting interface
>> isn't confusing.
>
> Sure.
>
>> Would it work for capacity to behave the other
>> direction - ie. a parent's min restricting the highest min that its
>> descendants can get? It's completely fine if that's weird.
>
> I had a thought about that possibility and it was not convincing me
> from the use-cases standpoint, at least for the ones I've considered.
>
> Reason is that capacity_min is used to implement a concept of
> "boosting" where, let say we want to "run a task faster then a minimum
> frequency". Assuming that this constraint has been defined because we
> know that this task, and likely all its descendant threads, needs at
> least that capacity level to perform according to expectations.
>
> In that case the "refining down the hierarchy" can require to boost
> further some threads but likely not less.
>
> Does this make sense?
>
> To me this seems to match quite well at least Android/ChromeOS
> specific use-cases. I'm not sure if there can be other different
> use-cases in the domain for example of managed containers.
>
>
>> Thanks.
>>
>> --
>> tejun
>
> --
> #include <best/regards.h>
>
> Patrick Bellasi

2017-04-01 16:25:20

by Patrick Bellasi

Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

Hi Paul,

On 30-Mar 14:15, Paul Turner wrote:
> On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi
> <[email protected]> wrote:
> > On 20-Mar 13:15, Tejun Heo wrote:
> >> Hello,
> >>
> >> On Tue, Feb 28, 2017 at 02:38:38PM +0000, Patrick Bellasi wrote:
> >> > This patch extends the CPU controller by adding a couple of new
> >> > attributes, capacity_min and capacity_max, which can be used to enforce
> >> > bandwidth boosting and capping. More specifically:
> >> >
> >> > - capacity_min: defines the minimum capacity which should be granted
> >> > (by schedutil) when a task in this group is running,
> >> > i.e. the task will run at least at that capacity
> >> >
> >> > - capacity_max: defines the maximum capacity which can be granted
> >> > (by schedutil) when a task in this group is running,
> >> > i.e. the task can run up to that capacity
> >>
> >> cpu.capacity.min and cpu.capacity.max are the more conventional names.
> >
> > Ok, should be an easy renaming.
> >
> >> I'm not sure about the name capacity as it doesn't encode what it does
> >> and is difficult to tell apart from cpu bandwidth limits. I think
> >> it'd be better to represent what it controls more explicitly.
> >
> > In the scheduler jargon, capacity represents the amount of computation
> > that a CPU can provide and it's usually defined to be 1024 for the
> > biggest CPU (on non SMP systems) running at the highest OPP (i.e.
> > maximum frequency).
> >
> > It's true that it kind of overlaps with the concept of "bandwidth".
> > However, the main difference here is that "bandwidth" is not frequency
> > (and architecture) scaled.
> > Thus, for example, assuming we have only one CPU with these two OPPs:
> >
> > OPP | Frequency | Capacity
> > 1 | 500MHz | 512
> > 2 | 1GHz | 1024
>
> I think exposing capacity in this manner is extremely challenging.
> It's not normalized in any way between architectures, which places a
> lot of the ABI in the API.

Capacities of CPUs are already exposed, at least for ARM platforms, using
a platform independent definition which is documented here:

http://lxr.free-electrons.com/source/Documentation/devicetree/bindings/arm/cpus.txt#L245

As the notes in the documentation highlight, it's not a perfect
metric, but it still allows us to distinguish between the computational
capabilities of different (micro)architectures and/or OPPs.

Within the scheduler we use SCHED_CAPACITY_SCALE:
http://lxr.free-electrons.com/ident?i=SCHED_CAPACITY_SCALE
for everything related to actual CPU computational capabilities.

That's why in the current implementation we expose the same metric to
define capacity constraints.

We also considered the idea of exposing a more generic percentage
value [0..100]; do you think that could be better?
Consider that, in the end, we would still have to scale 100% to 1024.
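
For what it's worth, the mapping would be trivial; a minimal sketch
(illustrative only) of how a [0..100] percentage would still be scaled
to the scheduler's internal units:

#define SCHED_CAPACITY_SCALE	1024

/* illustrative mapping from a [0..100] percentage to capacity units */
static unsigned int capacity_from_percent(unsigned int pct)
{
	if (pct > 100)
		pct = 100;
	return pct * SCHED_CAPACITY_SCALE / 100;	/* 100% -> 1024 */
}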

> Have you considered any schemes for normalizing this in a reasonable fashion?

For each specific target platform, capacities are already normalized
to 1024, which is the capacity of the most capable CPU running at the
highest OPP. Thus, 1024 always represents 100% of the available
computational capability of the system's most performant CPU.

Perhaps I cannot completely get what you mean when you say that it
should be "normalized between architectures".
Can you explain better, maybe with an example?

[...]

Cheers Patrick

--
#include <best/regards.h>

Patrick Bellasi