2021-06-10 15:15:56

by Quentin Perret

Subject: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

There is currently nothing preventing tasks from changing their per-task
clamp values in any way they like. The rationale is probably that
system administrators are still able to limit those clamps thanks to the
cgroup interface. However, this causes pain in a system where both
per-task and per-cgroup clamp values are expected to be under the
control of core system components (as is the case for Android).

To fix this, let's require CAP_SYS_NICE to increase per-task clamp
values. This allows unprivileged tasks to lower their requests, but not
increase them, which is consistent with the existing behaviour for nice
values.

Signed-off-by: Quentin Perret <[email protected]>
---
kernel/sched/core.c | 55 +++++++++++++++++++++++++++++++++++++++------
1 file changed, 48 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1d4aedbbcf96..6e24daca8d53 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1430,6 +1430,11 @@ static int uclamp_validate(struct task_struct *p,
if (util_min != -1 && util_max != -1 && util_min > util_max)
return -EINVAL;

+ return 0;
+}
+
+static void uclamp_enable(void)
+{
/*
* We have valid uclamp attributes; make sure uclamp is enabled.
*
@@ -1438,8 +1443,32 @@ static int uclamp_validate(struct task_struct *p,
* scheduler locks.
*/
static_branch_enable(&sched_uclamp_used);
+}

- return 0;
+static bool uclamp_reduce(struct task_struct *p, const struct sched_attr *attr)
+{
+ if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) {
+ int util_min = p->uclamp_req[UCLAMP_MIN].value;
+
+ if (attr->sched_util_min + 1 > util_min + 1)
+ return false;
+
+ if (rt_task(p) && attr->sched_util_min == -1 &&
+ util_min < sysctl_sched_uclamp_util_min_rt_default)
+ return false;
+ }
+
+ if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) {
+ int util_max = p->uclamp_req[UCLAMP_MAX].value;
+
+ if (attr->sched_util_max + 1 > util_max + 1)
+ return false;
+
+ if (attr->sched_util_max == -1 && util_max < uclamp_none(UCLAMP_MAX))
+ return false;
+ }
+
+ return true;
}

static bool uclamp_reset(const struct sched_attr *attr,
@@ -1580,6 +1609,11 @@ static inline int uclamp_validate(struct task_struct *p,
{
return -EOPNOTSUPP;
}
+static inline void uclamp_enable(void) { }
+static bool uclamp_reduce(struct task_struct *p, const struct sched_attr *attr)
+{
+ return true;
+}
static void __setscheduler_uclamp(struct task_struct *p,
const struct sched_attr *attr) { }
static inline void uclamp_fork(struct task_struct *p) { }
@@ -6116,6 +6150,13 @@ static int __sched_setscheduler(struct task_struct *p,
(rt_policy(policy) != (attr->sched_priority != 0)))
return -EINVAL;

+ /* Update task specific "requested" clamps */
+ if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP) {
+ retval = uclamp_validate(p, attr);
+ if (retval)
+ return retval;
+ }
+
/*
* Allow unprivileged RT tasks to decrease priority:
*/
@@ -6165,6 +6206,10 @@ static int __sched_setscheduler(struct task_struct *p,
/* Normal users shall not reset the sched_reset_on_fork flag: */
if (p->sched_reset_on_fork && !reset_on_fork)
return -EPERM;
+
+ /* Can't increase util-clamps */
+ if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP && !uclamp_reduce(p, attr))
+ return -EPERM;
}

if (user) {
@@ -6176,12 +6221,8 @@ static int __sched_setscheduler(struct task_struct *p,
return retval;
}

- /* Update task specific "requested" clamps */
- if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP) {
- retval = uclamp_validate(p, attr);
- if (retval)
- return retval;
- }
+ if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
+ uclamp_enable();

if (pi)
cpuset_read_lock();
--
2.32.0.272.g935e593368-goog


2021-06-11 12:50:10

by Qais Yousef

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

On 06/10/21 15:13, Quentin Perret wrote:
> There is currently nothing preventing tasks from changing their per-task
> clamp values in anyway that they like. The rationale is probably that
> system administrators are still able to limit those clamps thanks to the
> cgroup interface. However, this causes pain in a system where both
> per-task and per-cgroup clamp values are expected to be under the
> control of core system components (as is the case for Android).
>
> To fix this, let's require CAP_SYS_NICE to increase per-task clamp
> values. This allows unprivileged tasks to lower their requests, but not
> increase them, which is consistent with the existing behaviour for nice
> values.

Hmmm. I'm not in favour of this.

So uclamp is a performance and power management mechanism; it has no impact on
fairness AFAICT, so making it a privileged operation doesn't make sense.

We thought about this in the past and didn't see any harm in a task (app)
wanting to self-manage. Yes, a task could ask to run at max performance and
waste power, but anyone can write a busy loop and waste power too.

Now that doesn't mean your use case is not valid. I agree that if there's a
system-wide framework that wants to explicitly manage the performance and
power of tasks via uclamp, then we can end up with two layers of control
overriding each other.

Would it make more sense to have a procfs/sysfs flag, disabled by default,
that allows a sys-admin to enforce privileged uclamp access?

Something like

/proc/sys/kernel/sched_uclamp_privileged

I think both usage scenarios are valid, and giving sys-admins the power to
enforce a behaviour makes more sense to me.

Unless there's a real concern in terms of security/fairness that we missed?


Cheers

--
Qais Yousef

2021-06-11 13:11:24

by Quentin Perret

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

Hi Qais,

On Friday 11 Jun 2021 at 13:48:20 (+0100), Qais Yousef wrote:
> On 06/10/21 15:13, Quentin Perret wrote:
> > There is currently nothing preventing tasks from changing their per-task
> > clamp values in anyway that they like. The rationale is probably that
> > system administrators are still able to limit those clamps thanks to the
> > cgroup interface. However, this causes pain in a system where both
> > per-task and per-cgroup clamp values are expected to be under the
> > control of core system components (as is the case for Android).
> >
> > To fix this, let's require CAP_SYS_NICE to increase per-task clamp
> > values. This allows unprivileged tasks to lower their requests, but not
> > increase them, which is consistent with the existing behaviour for nice
> > values.
>
> Hmmm. I'm not in favour of this.
>
> So uclamp is a performance and power management mechanism, it has no impact on
> fairness AFAICT, so it being a privileged operation doesn't make sense.
>
> We had a thought about this in the past and we didn't think there's any harm if
> a task (app) wants to self manage. Yes a task could ask to run at max
> performance and waste power, but anyone can generate a busy loop and waste
> power too.
>
> Now that doesn't mean your use case is not valid. I agree if there's a system
> wide framework that wants to explicitly manage performance and power of tasks
> via uclamp, then we can end up with 2 layers of controls overriding each
> others.

Right, that's the main issue. Also, the reality is that most of the time the
'right' clamps are platform-dependent, so most userspace apps are simply
not equipped to decide what their own clamps should be.

> Would it make more sense to have a procfs/sysfs flag that is disabled by
> default that allows sys-admin to enforce a privileged uclamp access?
>
> Something like
>
> /proc/sys/kernel/sched_uclamp_privileged

Hmm, dunno, I'm not aware of anything else having a behaviour like that,
so that feels a bit odd.

> I think both usage scenarios are valid and giving sys-admins the power to
> enforce a behavior makes more sense for me.

Yes, I wouldn't mind something like that in general. I originally wanted
to suggest introducing a dedicated capability for uclamp, but that felt
a bit overkill. Now if others think this should be the way to go I'm
happy to go implement it.

Thanks,
Quentin

2021-06-11 13:30:49

by Qais Yousef

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

Hi Quentin

On 06/11/21 13:08, Quentin Perret wrote:
> Hi Qais,
>
> On Friday 11 Jun 2021 at 13:48:20 (+0100), Qais Yousef wrote:
> > On 06/10/21 15:13, Quentin Perret wrote:
> > > There is currently nothing preventing tasks from changing their per-task
> > > clamp values in anyway that they like. The rationale is probably that
> > > system administrators are still able to limit those clamps thanks to the
> > > cgroup interface. However, this causes pain in a system where both
> > > per-task and per-cgroup clamp values are expected to be under the
> > > control of core system components (as is the case for Android).
> > >
> > > To fix this, let's require CAP_SYS_NICE to increase per-task clamp
> > > values. This allows unprivileged tasks to lower their requests, but not
> > > increase them, which is consistent with the existing behaviour for nice
> > > values.
> >
> > Hmmm. I'm not in favour of this.
> >
> > So uclamp is a performance and power management mechanism, it has no impact on
> > fairness AFAICT, so it being a privileged operation doesn't make sense.
> >
> > We had a thought about this in the past and we didn't think there's any harm if
> > a task (app) wants to self manage. Yes a task could ask to run at max
> > performance and waste power, but anyone can generate a busy loop and waste
> > power too.
> >
> > Now that doesn't mean your use case is not valid. I agree if there's a system
> > wide framework that wants to explicitly manage performance and power of tasks
> > via uclamp, then we can end up with 2 layers of controls overriding each
> > others.
>
> Right, that's the main issue. Also, the reality is that most of time the
> 'right' clamps are platform-dependent, so most userspace apps are simply
> not equipped to decide what their own clamps should be.

I'd argue this is true from both a framework and an app point of view. It
depends on the application and how it is used.

I can foresee, for example, an HTTP server wanting to use uclamp to guarantee
a QoS target, i.e. X requests per second or a maximum tail latency of Y. The
application can try to tune (calibrate) itself without having to have the
whole system tuned or pumped up on steroids.

Or a framework could manage this on behalf of the application. Both can use
uclamp with a feedback loop to calibrate the perf requirements of the tasks to
meet a given perf/power criterion.

If you want to do static management, a system framework would make more sense
in this case, true.

>
> > Would it make more sense to have a procfs/sysfs flag that is disabled by
> > default that allows sys-admin to enforce a privileged uclamp access?
> >
> > Something like
> >
> > /proc/sys/kernel/sched_uclamp_privileged
>
> Hmm, dunno, I'm not aware of anything else having a behaviour like that,
> so that feels a bit odd.

I think /proc/sys/kernel/perf_event_paranoid falls into this category.

>
> > I think both usage scenarios are valid and giving sys-admins the power to
> > enforce a behavior makes more sense for me.
>
> Yes, I wouldn't mind something like that in general. I originally wanted
> to suggest introducing a dedicated capability for uclamp, but that felt
> a bit overkill. Now if others think this should be the way to go I'm
> happy to go implement it.

Would be good to hear what others think for sure :)


Cheers

--
Qais Yousef

2021-06-11 13:50:52

by Quentin Perret

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

On Friday 11 Jun 2021 at 14:26:53 (+0100), Qais Yousef wrote:
> Hi Quentin
>
> On 06/11/21 13:08, Quentin Perret wrote:
> > Hi Qais,
> >
> > On Friday 11 Jun 2021 at 13:48:20 (+0100), Qais Yousef wrote:
> > > On 06/10/21 15:13, Quentin Perret wrote:
> > > > There is currently nothing preventing tasks from changing their per-task
> > > > clamp values in anyway that they like. The rationale is probably that
> > > > system administrators are still able to limit those clamps thanks to the
> > > > cgroup interface. However, this causes pain in a system where both
> > > > per-task and per-cgroup clamp values are expected to be under the
> > > > control of core system components (as is the case for Android).
> > > >
> > > > To fix this, let's require CAP_SYS_NICE to increase per-task clamp
> > > > values. This allows unprivileged tasks to lower their requests, but not
> > > > increase them, which is consistent with the existing behaviour for nice
> > > > values.
> > >
> > > Hmmm. I'm not in favour of this.
> > >
> > > So uclamp is a performance and power management mechanism, it has no impact on
> > > fairness AFAICT, so it being a privileged operation doesn't make sense.
> > >
> > > We had a thought about this in the past and we didn't think there's any harm if
> > > a task (app) wants to self manage. Yes a task could ask to run at max
> > > performance and waste power, but anyone can generate a busy loop and waste
> > > power too.
> > >
> > > Now that doesn't mean your use case is not valid. I agree if there's a system
> > > wide framework that wants to explicitly manage performance and power of tasks
> > > via uclamp, then we can end up with 2 layers of controls overriding each
> > > others.
> >
> > Right, that's the main issue. Also, the reality is that most of time the
> > 'right' clamps are platform-dependent, so most userspace apps are simply
> > not equipped to decide what their own clamps should be.
>
> I'd argue this is true for both a framework or an app point of view. It depends
> on the application and how it would be used.
>
> I can foresee for example and HTTP server wanting to use uclamp to guarantee
> a QoS target ie: X number of requests per second or a maximum of Y tail
> latency. The application can try to tune (calibrate) itself without having to
> have the whole system tuned or pumped on steroid.

Right, but the problem I see with this approach is that the app only
understands its own performance, and is unable to decide what is best for
the overall system health.

Anyway, it sounds like we agree that having _some_ way of limiting this
would be useful, so we're all good.

> Or a framework could manage this on behalf of the application. Both can use
> uclamp with a feedback loop to calibrate the perf requirement of the tasks to
> meet a given perf/power criteria.
>
> If you want to do a static management, system framework would make more sense
> in this case, true.
>
> >
> > > Would it make more sense to have a procfs/sysfs flag that is disabled by
> > > default that allows sys-admin to enforce a privileged uclamp access?
> > >
> > > Something like
> > >
> > > /proc/sys/kernel/sched_uclamp_privileged
> >
> > Hmm, dunno, I'm not aware of anything else having a behaviour like that,
> > so that feels a bit odd.
>
> I think /proc/sys/kernel/perf_event_paranoid falls into this category.

Aha, so I'm guessing this was introduced as a sysctl knob rather than a
CAP because it is a non-binary knob, but it's an interesting example.

> >
> > > I think both usage scenarios are valid and giving sys-admins the power to
> > > enforce a behavior makes more sense for me.
> >
> > Yes, I wouldn't mind something like that in general. I originally wanted
> > to suggest introducing a dedicated capability for uclamp, but that felt
> > a bit overkill. Now if others think this should be the way to go I'm
> > happy to go implement it.
>
> Would be good to hear what others think for sure :)

Thinking about it a bit more, a more involved option would be to have
this patch as is, but to also introduce a new RLIMIT_UCLAMP on top of
it. The semantics could be:

- if the clamp requested by the non-privileged task is lower than its
  existing clamp, then allow;
- otherwise, if the requested clamp is less than UCLAMP_RLIMIT, then
  allow;
- otherwise, deny.

And the same principle would apply to both uclamp.min and uclamp.max,
and UCLAMP_RLIMIT would default to 0.

Thoughts?

Thanks,
Quentin

2021-06-11 14:21:28

by Qais Yousef

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

On 06/11/21 13:49, Quentin Perret wrote:
> On Friday 11 Jun 2021 at 14:26:53 (+0100), Qais Yousef wrote:
> > Hi Quentin
> >
> > On 06/11/21 13:08, Quentin Perret wrote:
> > > Hi Qais,
> > >
> > > On Friday 11 Jun 2021 at 13:48:20 (+0100), Qais Yousef wrote:
> > > > On 06/10/21 15:13, Quentin Perret wrote:
> > > > > There is currently nothing preventing tasks from changing their per-task
> > > > > clamp values in anyway that they like. The rationale is probably that
> > > > > system administrators are still able to limit those clamps thanks to the
> > > > > cgroup interface. However, this causes pain in a system where both
> > > > > per-task and per-cgroup clamp values are expected to be under the
> > > > > control of core system components (as is the case for Android).
> > > > >
> > > > > To fix this, let's require CAP_SYS_NICE to increase per-task clamp
> > > > > values. This allows unprivileged tasks to lower their requests, but not
> > > > > increase them, which is consistent with the existing behaviour for nice
> > > > > values.
> > > >
> > > > Hmmm. I'm not in favour of this.
> > > >
> > > > So uclamp is a performance and power management mechanism, it has no impact on
> > > > fairness AFAICT, so it being a privileged operation doesn't make sense.
> > > >
> > > > We had a thought about this in the past and we didn't think there's any harm if
> > > > a task (app) wants to self manage. Yes a task could ask to run at max
> > > > performance and waste power, but anyone can generate a busy loop and waste
> > > > power too.
> > > >
> > > > Now that doesn't mean your use case is not valid. I agree if there's a system
> > > > wide framework that wants to explicitly manage performance and power of tasks
> > > > via uclamp, then we can end up with 2 layers of controls overriding each
> > > > others.
> > >
> > > Right, that's the main issue. Also, the reality is that most of time the
> > > 'right' clamps are platform-dependent, so most userspace apps are simply
> > > not equipped to decide what their own clamps should be.
> >
> > I'd argue this is true for both a framework or an app point of view. It depends
> > on the application and how it would be used.
> >
> > I can foresee for example and HTTP server wanting to use uclamp to guarantee
> > a QoS target ie: X number of requests per second or a maximum of Y tail
> > latency. The application can try to tune (calibrate) itself without having to
> > have the whole system tuned or pumped on steroid.
>
> Right, but the problem I see with this approach is that the app only
> understand its own performance, but is unable to decide what is best for
> the overall system health.

How do you define overall system health here? If the app would cause the
system to heat up massively, it can do that with or without uclamp, no?

The app has a better understanding of what tasks it creates, which of them are
important and which are not, and when certain tasks need an extra boost. How
would the system know any of that without some cooperation from the app anyway?

If you were referring to what would happen if two apps run simultaneously:
in my view the problem is no worse than what would happen without them using
uclamp. They could trip over each other in both cases. You could actually
argue that if both are doing a good job at self-regulating, you could end up
with a better overall result when using uclamp.

I agree one can do smarter things via a framework. I just don't find it a good
reason to limit the other use cases too.

>
> Anyway, it sounds like we agree that having _some_ way of limiting this
> would be useful, so we're all good.

Yes. If there's a framework that is smart and wants to be the master
controller of uclamp requests, then it makes sense for it to have a way to
ensure apps can't escape what it has imposed. Otherwise it's chaos.

>
> > Or a framework could manage this on behalf of the application. Both can use
> > uclamp with a feedback loop to calibrate the perf requirement of the tasks to
> > meet a given perf/power criteria.
> >
> > If you want to do a static management, system framework would make more sense
> > in this case, true.
> >
> > >
> > > > Would it make more sense to have a procfs/sysfs flag that is disabled by
> > > > default that allows sys-admin to enforce a privileged uclamp access?
> > > >
> > > > Something like
> > > >
> > > > /proc/sys/kernel/sched_uclamp_privileged
> > >
> > > Hmm, dunno, I'm not aware of anything else having a behaviour like that,
> > > so that feels a bit odd.
> >
> > I think /proc/sys/kernel/perf_event_paranoid falls into this category.
>
> Aha, so I'm guessing this was introduced as a sysfs knob rather than a
> CAP because it is a non-binary knob, but it's an interesting example.
>
> > >
> > > > I think both usage scenarios are valid and giving sys-admins the power to
> > > > enforce a behavior makes more sense for me.
> > >
> > > Yes, I wouldn't mind something like that in general. I originally wanted
> > > to suggest introducing a dedicated capability for uclamp, but that felt
> > > a bit overkill. Now if others think this should be the way to go I'm
> > > happy to go implement it.
> >
> > Would be good to hear what others think for sure :)
>
> Thinking about it a bit more, a more involved option would be to have
> this patch as is, but to also introduce a new RLIMIT_UCLAMP on top of
> it. The semantics could be:
>
> - if the clamp requested by the non-privileged task is lower than its
> existing clamp, then allow;
> - otherwise, if the requested clamp is less than UCLAMP_RLIMIT, then
> allow;
> - otherwise, deny,
>
> And the same principle would apply to both uclamp.min and uclamp.max,
> and UCLAMP_RLIMIT would default to 0.
>
> Thoughts?

That could work. But then I'd prefer your patch to go in as-is. I don't think
uclamp can do with this extra complexity in how it's used.

We basically want to specify whether or not we want to be paranoid about the
uclamp CAP. In my view that is simple, and I can't see why it would be a big
deal to have a procfs entry to define the level of paranoia the system wants
to impose. If it is a big deal though (I'd love to hear the arguments),
requiring apps that want to self-regulate to have CAP_SYS_NICE is the better
approach. Though I'd still prefer to keep uclamp ubiquitous and not enforce
a specific usage pattern.

Cheers

--
Qais Yousef

2021-06-11 14:47:17

by Quentin Perret

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

On Friday 11 Jun 2021 at 15:17:37 (+0100), Qais Yousef wrote:
> On 06/11/21 13:49, Quentin Perret wrote:
> > Thinking about it a bit more, a more involved option would be to have
> > this patch as is, but to also introduce a new RLIMIT_UCLAMP on top of
> > it. The semantics could be:
> >
> > - if the clamp requested by the non-privileged task is lower than its
> > existing clamp, then allow;
> > - otherwise, if the requested clamp is less than UCLAMP_RLIMIT, then
> > allow;
> > - otherwise, deny,
> >
> > And the same principle would apply to both uclamp.min and uclamp.max,
> > and UCLAMP_RLIMIT would default to 0.
> >
> > Thoughts?
>
> That could work. But then I'd prefer your patch to go as-is. I don't think
> uclamp can do with this extra complexity in using it.

Sorry I'm not sure what you mean here?

> We basically want to specify we want to be paranoid about uclamp CAP or not. In
> my view that is simple and can't see why it would be a big deal to have
> a procfs entry to define the level of paranoia the system wants to impose. If
> it is a big deal though (would love to hear the arguments);

Not saying it's a big deal, but I think there are a few arguments in
favor of using rlimit instead of a sysfs knob. It allows for much
finer-grained configuration -- constraints can be set per-task as well as
system-wide if needed, and it is the standard way of limiting resources
that tasks can ask for.

> requiring apps that
> want to self regulate to have CAP_SYS_NICE is better approach.

Rlimit wouldn't require that though, which is also nice as CAP_SYS_NICE
grants you a lot more power than just clamps ...

2021-06-14 15:05:03

by Qais Yousef

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

On 06/11/21 14:43, Quentin Perret wrote:
> On Friday 11 Jun 2021 at 15:17:37 (+0100), Qais Yousef wrote:
> > On 06/11/21 13:49, Quentin Perret wrote:
> > > Thinking about it a bit more, a more involved option would be to have
> > > this patch as is, but to also introduce a new RLIMIT_UCLAMP on top of
> > > it. The semantics could be:
> > >
> > > - if the clamp requested by the non-privileged task is lower than its
> > > existing clamp, then allow;
> > > - otherwise, if the requested clamp is less than UCLAMP_RLIMIT, then
> > > allow;
> > > - otherwise, deny,
> > >
> > > And the same principle would apply to both uclamp.min and uclamp.max,
> > > and UCLAMP_RLIMIT would default to 0.
> > >
> > > Thoughts?
> >
> > That could work. But then I'd prefer your patch to go as-is. I don't think
> > uclamp can do with this extra complexity in using it.
>
> Sorry I'm not sure what you mean here?

Hmm. I understood this as a new flag to the sched_setattr() syscall at first,
but now I get it. You want to use the getrlimit()/setrlimit()/prlimit() API to
impose a restriction. My comment was in regard to this being a syscall
extension, which it isn't. So please ignore it.

>
> > We basically want to specify we want to be paranoid about uclamp CAP or not. In
> > my view that is simple and can't see why it would be a big deal to have
> > a procfs entry to define the level of paranoia the system wants to impose. If
> > it is a big deal though (would love to hear the arguments);
>
> Not saying it's a big deal, but I think there are a few arguments in
> favor of using rlimit instead of a sysfs knob. It allows for a much
> finer grain configuration -- constraints can be set per-task as well as
> system wide if needed, and it is the standard way of limiting resources
> that tasks can ask for.

Is it system wide or per user?

>
> > requiring apps that
> > want to self regulate to have CAP_SYS_NICE is better approach.
>
> Rlimit wouldn't require that though, which is also nice as CAP_SYS_NICE
> grants you a lot more power than just clamps ...

Now I better understand your suggestion. It seems a viable option, I agree.
I need to digest it more still though. The devil is in the details :)

Shouldn't the default be RLIM_INFINITY, i.e. no limit?

We will need to add two limits, RLIMIT_UCLAMP_MIN/MAX, right?

We have the following hierarchy now:

1. System-wide (/proc/sys/kernel/sched_util_clamp_min/max)
2. Cgroup
3. Per-task

In that order of priority, where 1 limits/overrides 2 and 3, and
2 limits/overrides 3.

Where do you see the RLIMIT fitting into this hierarchy? It should be between
2 and 3, right? Cgroup settings should still win even if the user/process was
limited?

If the framework decided a user can't request any boost at all (can't increase
its uclamp_min above 0), then IIUC setting the hard limit of RLIMIT_UCLAMP_MIN
to 0 would achieve that, right?

Since the framework and the task itself would go through the same
sched_setattr() call, how would the framework circumvent this limit? IIUC it
has to raise RLIMIT_UCLAMP_MIN first and then perform sched_setattr() to
request the boost value, right? Would this overhead be acceptable? It looks
considerable to me.

Also, will prlimit() allow you to go outside what was set for the user via
setrlimit()? Reading the man pages it seems to override, so that should be
fine.

For the system-wide limits (1), sched_setattr() requests are accepted, but
the effective uclamp value is *capped by* the system-wide limit.

Were you thinking RLIMIT_UCLAMP* would behave similarly? If so, we'd have
behaviour consistent with how the current system-wide limits work; but this
would break your use case, because tasks could still change the requested
uclamp value of a task, albeit with the effective value limited.

RLIMIT_UCLAMP_MIN=512
p->uclamp[UCLAMP_MIN] = 800 // this request is allowed, but
                            // effective UCLAMP_MIN = 512

If not, then

RLIMIT_UCLAMP_MIN=no limit
p->uclamp[UCLAMP_MIN] = 800 // task changed its uclamp_min to 800
RLIMIT_UCLAMP_MIN=512       // limit was lowered for the task/user

what will happen to p->uclamp[UCLAMP_MIN] in this case? Will it be lowered to
match the new limit? That would be inconsistent with the current system-wide
limits we already have.

Sorry, too many questions; I was mainly thinking out loud. I need to spend
more time digging into the details of how RLIMITs are imposed to understand
whether this is a good fit. I already see some friction points that need more
thinking.

Thanks

--
Qais Yousef

2021-06-21 10:56:24

by Quentin Perret

Subject: Re: [PATCH v2 3/3] sched: Make uclamp changes depend on CAP_SYS_NICE

Hi Qais,

Apologies for the delayed reply, I was away last week.

On Monday 14 Jun 2021 at 16:03:27 (+0100), Qais Yousef wrote:
> On 06/11/21 14:43, Quentin Perret wrote:
> > On Friday 11 Jun 2021 at 15:17:37 (+0100), Qais Yousef wrote:
> > > On 06/11/21 13:49, Quentin Perret wrote:
> > > > Thinking about it a bit more, a more involved option would be to have
> > > > this patch as is, but to also introduce a new RLIMIT_UCLAMP on top of
> > > > it. The semantics could be:
> > > >
> > > > - if the clamp requested by the non-privileged task is lower than its
> > > > existing clamp, then allow;
> > > > - otherwise, if the requested clamp is less than UCLAMP_RLIMIT, then
> > > > allow;
> > > > - otherwise, deny,
> > > >
> > > > And the same principle would apply to both uclamp.min and uclamp.max,
> > > > and UCLAMP_RLIMIT would default to 0.
> > > >
> > > > Thoughts?
> > >
> > > That could work. But then I'd prefer your patch to go as-is. I don't think
> > > uclamp can do with this extra complexity in using it.
> >
> > Sorry I'm not sure what you mean here?
>
> Hmm. I understood this as a new flag to sched_setattr() syscall first, but now
> I get it. You want to use getrlimit()/setrlimit()/prlimit() API to impose
> a restriction. My comment was in regard to this being a sys call extension,
> which it isn't. So please ignore it.
>
> >
> > > We basically want to specify we want to be paranoid about uclamp CAP or not. In
> > > my view that is simple and can't see why it would be a big deal to have
> > > a procfs entry to define the level of paranoia the system wants to impose. If
> > > it is a big deal though (would love to hear the arguments);
> >
> > Not saying it's a big deal, but I think there are a few arguments in
> > favor of using rlimit instead of a sysfs knob. It allows for a much
> > finer grain configuration -- constraints can be set per-task as well as
> > system wide if needed, and it is the standard way of limiting resources
> > that tasks can ask for.
>
> Is it system wide or per user?

Right, so calling this 'system-wide' is probably an abuse, but IIRC
rlimits are per-process, and are inherited across fork/exec. So the
usual trick to have a default value is to set the rlimits on the init
task accordingly. Android for instance already does that for a few
things, and I would guess that systemd and friends have equivalents
(though admittedly I should check that).

> >
> > > requiring apps that
> > > want to self regulate to have CAP_SYS_NICE is better approach.
> >
> > Rlimit wouldn't require that though, which is also nice as CAP_SYS_NICE
> > grants you a lot more power than just clamps ...
>
> Now I better understand your suggestion. It seems a viable option I agree.
> I need to digest it more still though. The devil is in the details :)
>
> Shouldn't the default be RLIM_INIFINITY? ie: no limit?

I guess so yes.

> We will need to add two limit, RLIMIT_UCLAMP_MIN/MAX, right?

Not sure, but I was originally envisioning having only one that applies
to both min and max. In which cases would we need separate ones?

> We have the following hierarchy now:
>
> 1. System Wide (/proc/sys/kerenl/sched_util_clamp_min/max)
> 2. Cgroup
> 3. Per-Task
>
> In that order of priority where 1 limits/overrides 2 and 3. And
> 2 limits/overrides 3.
>
> Where do you see the RLIMIT fit in this hierarchy? It should be between 2 and
> 3, right? Cgroup settings should still win even if the user/processes were
> limited?

Yes, the rlimit stuff would just apply to the syscall interface.

> If the framework decided a user can't request any boost at all (can't increase
> its uclamp_min above 0). IIUC then setting the hard limit of RLIMIT_UCLAMP_MIN
> to 0 would achieve that, right?

Exactly.

> Since the framework and the task itself would go through the same
> sched_setattr() call, how would the framework circumvent this limit? IIUC it
> has to raise the RLIMIT_UCLAMP_MIN first then perform sched_setattr() to
> request the boost value, right? Would this overhead be acceptable? It looks
> considerable to me.

The framework needs CAP_SYS_NICE to change another process's clamps, and
rlimit checks generally don't apply to CAP_SYS_NICE-capable processes --
see __sched_setscheduler(). So I think we should be fine. IOW, rlimits just
constrain what unprivileged tasks are allowed to request for themselves,
IIUC.

> Also, Will prlimit() allow you to go outside what was set for the user via
> setrlimit()? Reading the man pages it seems to override, so that should be
> fine.

IIRC rlimits are per-process properties, not per-user, so I think we
should be fine here as well?

> For 1 (System Wide) limits, sched_setattr() requests are accepted, but the
> effective uclamp is *capped by* the system wide limit.
>
> Were you thinking RLIMIT_UCLAMP* will behave similarly?

Nope, I was actually thinking of having the syscall return -EPERM in
this case, as we already do for nice values or RT priorities.

> If they do, we have
> consistent behavior with how the current system wide limits work; but this will
> break your use case because tasks can change the requested uclamp value for
> a task, albeit the effective value will be limited.
>
> RLIMIT_UCLAMP_MIN=512
> p->uclamp[UCLAMP_min] = 800 // this request is allowed but
> // Effective UCLAMP_MIN = 512
>
> If not, then
>
> RLIMIT_UCLAMP_MIN=no limit
> p->uclamp[UCLAMP_min] = 800 // task changed its uclamp_min to 800
> RLIMIT_UCLAMP_MIN=512 // limit was lowered for task/user
>
> what will happen to p->uclamp[UCLAMP_MIN] in this case? Will it be lowered to
> match the new limit? And this will be inconsistent with the current system wide
> limits we already have.

As per the above, if the syscall returns -EPERM we can leave the
integration with system-wide defaults and such untouched I think.

> Sorry too many questions. I was mainly thinking loudly. I need to spend more
> time to dig into the details of how RLIMITs are imposed to understand how this
> could be a good fit. I already see some friction points that needs more
> thinking.

No need to apologize, this would be a new userspace-visible interface,
so you're right that we need to think it through.

Thanks for the feedback,
Quentin