2024-03-28 00:31:35

by Vitalii Bursov

Subject: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level

Change relax_domain_level checks so that it would be possible
to exclude all domains from newidle balancing.

This matches the behavior described in the documentation:
-1 no request. use system default or follow request of others.
0 no search.
1 search siblings (hyperthreads in a core).

"2" enables levels 0 and 1, level_max excludes the last (level_max)
level, and level_max+1 includes all levels.
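
For example, on a hypothetical machine with three domain levels
(0 = SMT, 1 = MC, 2 = PKG, so sched_domain_level_max == 2), the
requested value maps onto newidle balancing roughly as follows:

  0: no level is searched
  1: level 0 is searched
  2: levels 0 and 1 are searched
  3: all levels (0, 1 and 2) are searched (level_max + 1)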

Signed-off-by: Vitalii Bursov <[email protected]>
---
kernel/cgroup/cpuset.c | 2 +-
kernel/sched/debug.c | 1 +
kernel/sched/topology.c | 2 +-
3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 4237c874871..da24187c4e0 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
static int update_relax_domain_level(struct cpuset *cs, s64 val)
{
#ifdef CONFIG_SMP
- if (val < -1 || val >= sched_domain_level_max)
+ if (val < -1 || val > sched_domain_level_max + 1)
return -EINVAL;
#endif

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 8d5d98a5834..8454cd4e5e1 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -423,6 +423,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)

#undef SDM

+ debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
}
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 99ea5986038..3127c9b30af 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1468,7 +1468,7 @@ static void set_domain_attribute(struct sched_domain *sd,
} else
request = attr->relax_domain_level;

- if (sd->level > request) {
+ if (sd->level >= request) {
/* Turn off idle balance on this domain: */
sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
}
--
2.20.1



2024-03-28 05:56:34

by Shrikanth Hegde

Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level



On 3/28/24 6:00 AM, Vitalii Bursov wrote:
> Change relax_domain_level checks so that it would be possible
> to exclude all domains from newidle balancing.
>
> This matches the behavior described in the documentation:
> -1 no request. use system default or follow request of others.
> 0 no search.
> 1 search siblings (hyperthreads in a core).
>
> "2" enables levels 0 and 1, level_max excludes the last (level_max)
> level, and level_max+1 includes all levels.
>
> Signed-off-by: Vitalii Bursov <[email protected]>
> ---
> kernel/cgroup/cpuset.c | 2 +-
> kernel/sched/debug.c | 1 +
> kernel/sched/topology.c | 2 +-
> 3 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 4237c874871..da24187c4e0 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
> static int update_relax_domain_level(struct cpuset *cs, s64 val)
> {
> #ifdef CONFIG_SMP
> - if (val < -1 || val >= sched_domain_level_max)
> + if (val < -1 || val > sched_domain_level_max + 1)
> return -EINVAL;
> #endif
>
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 8d5d98a5834..8454cd4e5e1 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -423,6 +423,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
>
> #undef SDM
>
> + debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);

It would be better if the level can be placed after groups_flags, since it's a new addition?
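
Something like the following, keeping the new entry last (just a sketch
of the suggested ordering, reusing the lines from the hunk above):

	debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
	debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
	debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);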

> debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
> debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
> }
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 99ea5986038..3127c9b30af 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1468,7 +1468,7 @@ static void set_domain_attribute(struct sched_domain *sd,
> } else
> request = attr->relax_domain_level;
>
> - if (sd->level > request) {
> + if (sd->level >= request) {
> /* Turn off idle balance on this domain: */
> sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
> }

Other than the above change looks good.

2024-03-28 14:43:27

by Vincent Guittot

Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level

On Thu, 28 Mar 2024 at 01:31, Vitalii Bursov <[email protected]> wrote:
>
> Change relax_domain_level checks so that it would be possible
> to exclude all domains from newidle balancing.
>
> This matches the behavior described in the documentation:
> -1 no request. use system default or follow request of others.
> 0 no search.
> 1 search siblings (hyperthreads in a core).
>
> "2" enables levels 0 and 1, level_max excludes the last (level_max)
> level, and level_max+1 includes all levels.

I was about to say that max+1 is useless because it's the same as -1
but it's not exactly the same because it can supersede the system wide
default_relax_domain_level. I wonder if one should be able to enable
more levels than what the system has set by default.

>
> Signed-off-by: Vitalii Bursov <[email protected]>
> ---
> kernel/cgroup/cpuset.c | 2 +-
> kernel/sched/debug.c | 1 +
> kernel/sched/topology.c | 2 +-
> 3 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 4237c874871..da24187c4e0 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
> static int update_relax_domain_level(struct cpuset *cs, s64 val)
> {
> #ifdef CONFIG_SMP
> - if (val < -1 || val >= sched_domain_level_max)
> + if (val < -1 || val > sched_domain_level_max + 1)
> return -EINVAL;
> #endif
>
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 8d5d98a5834..8454cd4e5e1 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -423,6 +423,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
>
> #undef SDM
>
> + debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);

IMO, this should be a separate patch as it's not part of the fix

> debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
> debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
> }
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 99ea5986038..3127c9b30af 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1468,7 +1468,7 @@ static void set_domain_attribute(struct sched_domain *sd,
> } else
> request = attr->relax_domain_level;
>
> - if (sd->level > request) {
> + if (sd->level >= request) {

good catch, and worth adding:
Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on cpuset domain relax")


> /* Turn off idle balance on this domain: */
> sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
> }
> --
> 2.20.1
>

2024-03-28 16:36:08

by Vitalii Bursov

Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level


On 28.03.24 07:51, Shrikanth Hegde wrote:
>
>
> On 3/28/24 6:00 AM, Vitalii Bursov wrote:
>> Change relax_domain_level checks so that it would be possible
>> to exclude all domains from newidle balancing.
>>
>> This matches the behavior described in the documentation:
>> -1 no request. use system default or follow request of others.
>> 0 no search.
>> 1 search siblings (hyperthreads in a core).
>>
>> "2" enables levels 0 and 1, level_max excludes the last (level_max)
>> level, and level_max+1 includes all levels.
>>
>> Signed-off-by: Vitalii Bursov <[email protected]>
>> ---
>> kernel/cgroup/cpuset.c | 2 +-
>> kernel/sched/debug.c | 1 +
>> kernel/sched/topology.c | 2 +-
>> 3 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index 4237c874871..da24187c4e0 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
>> static int update_relax_domain_level(struct cpuset *cs, s64 val)
>> {
>> #ifdef CONFIG_SMP
>> - if (val < -1 || val >= sched_domain_level_max)
>> + if (val < -1 || val > sched_domain_level_max + 1)
>> return -EINVAL;
>> #endif
>>
>> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
>> index 8d5d98a5834..8454cd4e5e1 100644
>> --- a/kernel/sched/debug.c
>> +++ b/kernel/sched/debug.c
>> @@ -423,6 +423,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
>>
>> #undef SDM
>>
>> + debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
>
> It would be better if the level can be placed after groups_flags, since it's a new addition?

I'll change the order.
Thanks

>> debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
>> debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
>> }
>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> index 99ea5986038..3127c9b30af 100644
>> --- a/kernel/sched/topology.c
>> +++ b/kernel/sched/topology.c
>> @@ -1468,7 +1468,7 @@ static void set_domain_attribute(struct sched_domain *sd,
>> } else
>> request = attr->relax_domain_level;
>>
>> - if (sd->level > request) {
>> + if (sd->level >= request) {
>> /* Turn off idle balance on this domain: */
>> sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
>> }
>
> Other than the above change looks good.

2024-03-28 16:49:10

by Vitalii Bursov

Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level


On 28.03.24 16:43, Vincent Guittot wrote:
> On Thu, 28 Mar 2024 at 01:31, Vitalii Bursov <[email protected]> wrote:
>>
>> Change relax_domain_level checks so that it would be possible
>> to exclude all domains from newidle balancing.
>>
>> This matches the behavior described in the documentation:
>> -1 no request. use system default or follow request of others.
>> 0 no search.
>> 1 search siblings (hyperthreads in a core).
>>
>> "2" enables levels 0 and 1, level_max excludes the last (level_max)
>> level, and level_max+1 includes all levels.
>
> I was about to say that max+1 is useless because it's the same as -1
> but it's not exactly the same because it can supersede the system wide
> default_relax_domain_level. I wonder if one should be able to enable
> more levels than what the system has set by default.

I don't know if such systems exist, but cpusets.rst suggests that
increasing it beyond the default value is possible:
> If your situation is:
>
> - The migration costs between each cpu can be assumed considerably
> small(for you) due to your special application's behavior or
> special hardware support for CPU cache etc.
> - The searching cost doesn't have impact(for you) or you can make
> the searching cost enough small by managing cpuset to compact etc.
> - The latency is required even it sacrifices cache hit rate etc.
> then increasing 'sched_relax_domain_level' would benefit you.


>>
>> Signed-off-by: Vitalii Bursov <[email protected]>
>> ---
>> kernel/cgroup/cpuset.c | 2 +-
>> kernel/sched/debug.c | 1 +
>> kernel/sched/topology.c | 2 +-
>> 3 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index 4237c874871..da24187c4e0 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
>> static int update_relax_domain_level(struct cpuset *cs, s64 val)
>> {
>> #ifdef CONFIG_SMP
>> - if (val < -1 || val >= sched_domain_level_max)
>> + if (val < -1 || val > sched_domain_level_max + 1)
>> return -EINVAL;
>> #endif
>>
>> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
>> index 8d5d98a5834..8454cd4e5e1 100644
>> --- a/kernel/sched/debug.c
>> +++ b/kernel/sched/debug.c
>> @@ -423,6 +423,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
>>
>> #undef SDM
>>
>> + debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
>
> IMO, this should be a separate patch as it's not part of the fix

Thanks, I'll split it.

>> debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
>> debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
>> }
>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> index 99ea5986038..3127c9b30af 100644
>> --- a/kernel/sched/topology.c
>> +++ b/kernel/sched/topology.c
>> @@ -1468,7 +1468,7 @@ static void set_domain_attribute(struct sched_domain *sd,
>> } else
>> request = attr->relax_domain_level;
>>
>> - if (sd->level > request) {
>> + if (sd->level >= request) {
>
> good catch and worth :
> Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on
> cpuset domain relax")
>
Will add this.
Thanks.

>
>> /* Turn off idle balance on this domain: */
>> sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
>> }
>> --
>> 2.20.1
>>

2024-03-28 16:51:28

by Vincent Guittot

Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level

On Thu, 28 Mar 2024 at 17:27, Vitalii Bursov <[email protected]> wrote:
>
>
> On 28.03.24 16:43, Vincent Guittot wrote:
> > On Thu, 28 Mar 2024 at 01:31, Vitalii Bursov <[email protected]> wrote:
> >>
> >> Change relax_domain_level checks so that it would be possible
> >> to exclude all domains from newidle balancing.
> >>
> >> This matches the behavior described in the documentation:
> >> -1 no request. use system default or follow request of others.
> >> 0 no search.
> >> 1 search siblings (hyperthreads in a core).
> >>
> >> "2" enables levels 0 and 1, level_max excludes the last (level_max)
> >> level, and level_max+1 includes all levels.
> >
> > I was about to say that max+1 is useless because it's the same as -1
> > but it's not exactly the same because it can supersede the system wide
> > default_relax_domain_level. I wonder if one should be able to enable
> > more levels than what the system has set by default.
>
> I don't know if such systems exist, but cpusets.rst suggests that
> increasing it beyond the default value is possible:
> > If your situation is:
> >
> > - The migration costs between each cpu can be assumed considerably
> > small(for you) due to your special application's behavior or
> > special hardware support for CPU cache etc.
> > - The searching cost doesn't have impact(for you) or you can make
> > the searching cost enough small by managing cpuset to compact etc.
> > - The latency is required even it sacrifices cache hit rate etc.
> > then increasing 'sched_relax_domain_level' would benefit you.

Fair enough. The doc should be updated as we can now clear the flags
but not set them

>
>
> >>
> >> Signed-off-by: Vitalii Bursov <[email protected]>
> >> ---
> >> kernel/cgroup/cpuset.c | 2 +-
> >> kernel/sched/debug.c | 1 +
> >> kernel/sched/topology.c | 2 +-
> >> 3 files changed, 3 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> >> index 4237c874871..da24187c4e0 100644
> >> --- a/kernel/cgroup/cpuset.c
> >> +++ b/kernel/cgroup/cpuset.c
> >> @@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
> >> static int update_relax_domain_level(struct cpuset *cs, s64 val)
> >> {
> >> #ifdef CONFIG_SMP
> >> - if (val < -1 || val >= sched_domain_level_max)
> >> + if (val < -1 || val > sched_domain_level_max + 1)
> >> return -EINVAL;
> >> #endif
> >>
> >> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> >> index 8d5d98a5834..8454cd4e5e1 100644
> >> --- a/kernel/sched/debug.c
> >> +++ b/kernel/sched/debug.c
> >> @@ -423,6 +423,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
> >>
> >> #undef SDM
> >>
> >> + debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
> >
> > IMO, this should be a separate patch as it's not part of the fix
>
> Thanks, I'll split it.
>
> >> debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
> >> debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
> >> }
> >> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> >> index 99ea5986038..3127c9b30af 100644
> >> --- a/kernel/sched/topology.c
> >> +++ b/kernel/sched/topology.c
> >> @@ -1468,7 +1468,7 @@ static void set_domain_attribute(struct sched_domain *sd,
> >> } else
> >> request = attr->relax_domain_level;
> >>
> >> - if (sd->level > request) {
> >> + if (sd->level >= request) {
> >
> > good catch and worth :
> > Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on
> > cpuset domain relax")
> >
> Will add this.
> Thanks.
>
> >
> >> /* Turn off idle balance on this domain: */
> >> sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
> >> }
> >> --
> >> 2.20.1
> >>

2024-03-28 17:11:43

by Vitalii Bursov

Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level



On 28.03.24 18:48, Vincent Guittot wrote:
> On Thu, 28 Mar 2024 at 17:27, Vitalii Bursov <[email protected]> wrote:
>>
>>
>> On 28.03.24 16:43, Vincent Guittot wrote:
>>> On Thu, 28 Mar 2024 at 01:31, Vitalii Bursov <[email protected]> wrote:
>>>>
>>>> Change relax_domain_level checks so that it would be possible
>>>> to exclude all domains from newidle balancing.
>>>>
>>>> This matches the behavior described in the documentation:
>>>> -1 no request. use system default or follow request of others.
>>>> 0 no search.
>>>> 1 search siblings (hyperthreads in a core).
>>>>
>>>> "2" enables levels 0 and 1, level_max excludes the last (level_max)
>>>> level, and level_max+1 includes all levels.
>>>
>>> I was about to say that max+1 is useless because it's the same as -1
>>> but it's not exactly the same because it can supersede the system wide
>>> default_relax_domain_level. I wonder if one should be able to enable
>>> more levels than what the system has set by default.
>>
>> I don't know if such systems exist, but cpusets.rst suggests that
>> increasing it beyond the default value is possible:
>>> If your situation is:
>>>
>>> - The migration costs between each cpu can be assumed considerably
>>> small(for you) due to your special application's behavior or
>>> special hardware support for CPU cache etc.
>>> - The searching cost doesn't have impact(for you) or you can make
>>> the searching cost enough small by managing cpuset to compact etc.
>>> - The latency is required even it sacrifices cache hit rate etc.
>>> then increasing 'sched_relax_domain_level' would benefit you.
>
> Fair enough. The doc should be updated as we can now clear the flags
> but not set them
>

SD_BALANCE_NEWIDLE is always set by default in sd_init() and cleared
in set_domain_attribute() depending on default_relax_domain_level
("relax_domain_level" kernel parameter) and cgroup configuration
if it's present.
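
For reference, the flow in set_domain_attribute() is roughly the
following (paraphrased, with the comparison as patched):

static void set_domain_attribute(struct sched_domain *sd,
				 struct sched_domain_attr *attr)
{
	int request;

	if (!attr || attr->relax_domain_level < 0) {
		if (default_relax_domain_level < 0)
			return; /* -1 everywhere: nothing is cleared */
		request = default_relax_domain_level;
	} else
		request = attr->relax_domain_level;

	if (sd->level >= request) {
		/* Turn off idle balance on this domain: */
		sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
	}
}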

So it should work both ways - clearing the flags when the relax level
is decreased, and leaving them set when it is increased - shouldn't it?

Also, after a closer look at set_domain_attribute(), it looks like
default_relax_domain_level is -1 on all systems, so if a cgroup does
not set the relax level, no flags are cleared, which probably means
that level_max+1 is redundant today.

2024-03-28 17:39:00

by Vincent Guittot

Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with sched_relax_domain_level

On Thu, 28 Mar 2024 at 18:10, Vitalii Bursov <[email protected]> wrote:
>
>
>
> On 28.03.24 18:48, Vincent Guittot wrote:
> > On Thu, 28 Mar 2024 at 17:27, Vitalii Bursov <[email protected]> wrote:
> >>
> >>
> >> On 28.03.24 16:43, Vincent Guittot wrote:
> >>> On Thu, 28 Mar 2024 at 01:31, Vitalii Bursov <[email protected]> wrote:
> >>>>
> >>>> Change relax_domain_level checks so that it would be possible
> >>>> to exclude all domains from newidle balancing.
> >>>>
> >>>> This matches the behavior described in the documentation:
> >>>> -1 no request. use system default or follow request of others.
> >>>> 0 no search.
> >>>> 1 search siblings (hyperthreads in a core).
> >>>>
> >>>> "2" enables levels 0 and 1, level_max excludes the last (level_max)
> >>>> level, and level_max+1 includes all levels.
> >>>
> >>> I was about to say that max+1 is useless because it's the same as -1
> >>> but it's not exactly the same because it can supersede the system wide
> >>> default_relax_domain_level. I wonder if one should be able to enable
> >>> more levels than what the system has set by default.
> >>
> >> I don't know if such systems exist, but cpusets.rst suggests that
> >> increasing it beyond the default value is possible:
> >>> If your situation is:
> >>>
> >>> - The migration costs between each cpu can be assumed considerably
> >>> small(for you) due to your special application's behavior or
> >>> special hardware support for CPU cache etc.
> >>> - The searching cost doesn't have impact(for you) or you can make
> >>> the searching cost enough small by managing cpuset to compact etc.
> >>> - The latency is required even it sacrifices cache hit rate etc.
> >>> then increasing 'sched_relax_domain_level' would benefit you.
> >
> > Fair enough. The doc should be updated as we can now clear the flags
> > but not set them
> >
>
> SD_BALANCE_NEWIDLE is always set by default in sd_init() and cleared
> in set_domain_attribute() depending on default_relax_domain_level
> ("relax_domain_level" kernel parameter) and cgroup configuration
> if it's present.

Yes, I meant that before
9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on cpuset
domain relax")
The flags SD_BALANCE_NEWIDLE and SD_BALANCE_WAKE could also be set
even though sd_init() would not set them
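
Before that commit the check was roughly of the form (from memory, as
a sketch):

	if (request < sd->level) {
		/* Turn off idle balance on this domain: */
		sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
	} else {
		/* Turn on idle balance on this domain: */
		sd->flags |= (SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
	}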

>
> So, it should work both ways - clearing flags when relax level
> is decreasing, and not clearing the flag when it's increasing,
> isn't it?
>
> Also, after a closer look at set_domain_attribute(), it looks like
> default_relax_domain_level is -1 on all systems, so if cgroup does
> not set relax level, it won't clear any flags, which probably means
> that level_max+1 is redundant today.

Except if the boot parameter has set it to another level, which was my
point. Does it make sense to be able to set a relax_level to level_max
in one cgroup if we have "relax_domain_level=1" in the boot params, for
example? But this is out of the scope of this patch because it already
works for level_max-1, so why not for level_max.

So keep your change in update_relax_domain_level()

Thanks