2024-01-17 08:57:27

by alexs

Subject: [PATCH 1/5] sched/fair: add SD_CLUSTER in comments

From: Alex Shi <[email protected]>

SD_CLUSTER is omitted from the following TOPOLOGY_SD_FLAGS explanation;
add it to fill the gap.

Signed-off-by: Alex Shi <[email protected]>
To: Valentin Schneider <[email protected]>
To: Vincent Guittot <[email protected]>
To: Juri Lelli <[email protected]>
To: Peter Zijlstra <[email protected]>
To: Ingo Molnar <[email protected]>
---
kernel/sched/topology.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 10d1391e7416..c342c52b1f34 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1554,6 +1554,7 @@ static struct cpumask ***sched_domains_numa_masks;
* function:
*
* SD_SHARE_CPUCAPACITY - describes SMT topologies
+ * SD_CLUSTER - describes Cluster topologies
* SD_SHARE_PKG_RESOURCES - describes shared caches
* SD_NUMA - describes NUMA topologies
*
--
2.43.0



2024-01-17 08:57:42

by alexs

Subject: [PATCH 2/5] sched/fair: remove unused parameters

From: Alex Shi <[email protected]>

sds isn't used in sched_asym(), so remove it to clean up the code.

Signed-off-by: Alex Shi <[email protected]>
To: Valentin Schneider <[email protected]>
To: Vincent Guittot <[email protected]>
To: Juri Lelli <[email protected]>
To: Peter Zijlstra <[email protected]>
To: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 46ba8329b10a..8d70417f5125 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9750,7 +9750,6 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
/**
* sched_asym - Check if the destination CPU can do asym_packing load balance
* @env: The load balancing environment
- * @sds: Load-balancing data with statistics of the local group
* @sgs: Load-balancing statistics of the candidate busiest group
* @group: The candidate busiest group
*
@@ -9769,8 +9768,7 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
* otherwise.
*/
static inline bool
-sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
- struct sched_group *group)
+sched_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group *group)
{
/* Ensure that the whole local core is idle, if applicable. */
if (!sched_use_asym_prio(env->sd, env->dst_cpu))
@@ -9941,7 +9939,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
/* Check if dst CPU is idle and preferred to this group */
if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
- sched_asym(env, sds, sgs, group)) {
+ sched_asym(env, sgs, group)) {
sgs->group_asym_packing = 1;
}

--
2.43.0


2024-01-17 08:57:58

by alexs

Subject: [PATCH 3/5] sched/fair: cleanup sched_use_asym_prio

From: Alex Shi <[email protected]>

Simplify sched_use_asym_prio() into a single return statement. No
functional change.

Signed-off-by: Alex Shi <[email protected]>
To: Valentin Schneider <[email protected]>
To: Vincent Guittot <[email protected]>
To: Peter Zijlstra <[email protected]>
To: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8d70417f5125..ebd659af2d78 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9741,10 +9741,8 @@ group_type group_classify(unsigned int imbalance_pct,
*/
static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
{
- if (!sched_smt_active())
- return true;
-
- return sd->flags & SD_SHARE_CPUCAPACITY || is_core_idle(cpu);
+ return (!sched_smt_active()) ||
+ (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
}

/**
--
2.43.0


2024-01-17 08:58:29

by alexs

Subject: [PATCH 5/5] sched/fair: narrow the sched_use_asym_prio checking scenario

From: Alex Shi <[email protected]>

The current function doesn't match its comments: in fact, the core_idle
check is only meaningful in the non-SMT case.
So make the function match.

Signed-off-by: Alex Shi <[email protected]>
To: Valentin Schneider <[email protected]>
To: Vincent Guittot <[email protected]>
To: Peter Zijlstra <[email protected]>
To: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 96163ab69ae0..0a321f639c79 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9741,8 +9741,8 @@ group_type group_classify(unsigned int imbalance_pct,
*/
static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
{
- return (!sched_smt_active()) ||
- (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
+ return (sd->flags & SD_SHARE_CPUCAPACITY) ||
+ (!sched_smt_active() && is_core_idle(cpu));
}

static inline bool _sched_asym(struct sched_domain *sd,
--
2.43.0


2024-01-17 08:58:30

by alexs

Subject: [PATCH 4/5] sched/fair: add a func _sched_asym

From: Alex Shi <[email protected]>

Use this function in sched_asym() and other paths to simplify the code.
No functional change.

Signed-off-by: Alex Shi <[email protected]>
To: Valentin Schneider <[email protected]>
To: Vincent Guittot <[email protected]>
To: Peter Zijlstra <[email protected]>
To: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 27 +++++++++++++--------------
1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ebd659af2d78..96163ab69ae0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9745,6 +9745,14 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
(sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
}

+static inline bool _sched_asym(struct sched_domain *sd,
+ int dst_cpu, int repl_cpu)
+{
+ /* Ensure that the whole local core is idle, if applicable. */
+ return sched_use_asym_prio(sd, dst_cpu) &&
+ sched_asym_prefer(dst_cpu, repl_cpu);
+}
+
/**
* sched_asym - Check if the destination CPU can do asym_packing load balance
* @env: The load balancing environment
@@ -9768,20 +9776,13 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
static inline bool
sched_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group *group)
{
- /* Ensure that the whole local core is idle, if applicable. */
- if (!sched_use_asym_prio(env->sd, env->dst_cpu))
- return false;
-
/*
* CPU priorities does not make sense for SMT cores with more than one
* busy sibling.
*/
- if (group->flags & SD_SHARE_CPUCAPACITY) {
- if (sgs->group_weight - sgs->idle_cpus != 1)
- return false;
- }
-
- return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
+ return !(group->flags & SD_SHARE_CPUCAPACITY &&
+ sgs->group_weight - sgs->idle_cpus != 1) &&
+ _sched_asym(env->sd, env->dst_cpu, group->asym_prefer_cpu);
}

/* One group has more than one SMT CPU while the other group does not */
@@ -11036,8 +11037,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
* SMT cores with more than one busy sibling.
*/
if ((env->sd->flags & SD_ASYM_PACKING) &&
- sched_use_asym_prio(env->sd, i) &&
- sched_asym_prefer(i, env->dst_cpu) &&
+ _sched_asym(env->sd, i, env->dst_cpu) &&
nr_running == 1)
continue;

@@ -11907,8 +11907,7 @@ static void nohz_balancer_kick(struct rq *rq)
* preferred CPU must be idle.
*/
for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
- if (sched_use_asym_prio(sd, i) &&
- sched_asym_prefer(i, cpu)) {
+ if (_sched_asym(sd, i, cpu)) {
flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
goto unlock;
}
--
2.43.0


2024-01-23 08:47:30

by Shrikanth Hegde

Subject: Re: [PATCH 5/5] sched/fair: narrow the sched_use_asym_prio checking scenario



On 1/17/24 2:27 PM, [email protected] wrote:
> From: Alex Shi <[email protected]>
>
> Current function doesn't match it's comments, in fact, core_idle
> checking is only meaningful with non-SMT.
> So make the function right.
>
> Signed-off-by: Alex Shi <[email protected]>
> To: Valentin Schneider <[email protected]>
> To: Vincent Guittot <[email protected]>
> To: Peter Zijlstra <[email protected]>
> To: Ingo Molnar <[email protected]>
> ---
> kernel/sched/fair.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 96163ab69ae0..0a321f639c79 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9741,8 +9741,8 @@ group_type group_classify(unsigned int imbalance_pct,
> */
> static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
> {
> - return (!sched_smt_active()) ||
> - (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
> + return (sd->flags & SD_SHARE_CPUCAPACITY) ||
> + (!sched_smt_active() && is_core_idle(cpu));
> }

This seems wrong. This would always return false for higher than SMT domains
if smt is active.

Was this meant to be sched_smt_active() && is_core_idle(cpu)?

>
> static inline bool _sched_asym(struct sched_domain *sd,

2024-01-25 10:10:01

by Alex Shi

Subject: Re: [PATCH 5/5] sched/fair: narrow the sched_use_asym_prio checking scenario



On 1/23/24 4:47 PM, Shrikanth Hegde wrote:
>
>
> On 1/17/24 2:27 PM, [email protected] wrote:
>> From: Alex Shi <[email protected]>
>>
>> Current function doesn't match it's comments, in fact, core_idle
>> checking is only meaningful with non-SMT.
>> So make the function right.
>>
>> Signed-off-by: Alex Shi <[email protected]>
>> To: Valentin Schneider <[email protected]>
>> To: Vincent Guittot <[email protected]>
>> To: Peter Zijlstra <[email protected]>
>> To: Ingo Molnar <[email protected]>
>> ---
>> kernel/sched/fair.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 96163ab69ae0..0a321f639c79 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -9741,8 +9741,8 @@ group_type group_classify(unsigned int imbalance_pct,
>> */
>> static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
>> {
>> - return (!sched_smt_active()) ||
>> - (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
>> + return (sd->flags & SD_SHARE_CPUCAPACITY) ||
>> + (!sched_smt_active() && is_core_idle(cpu));
>> }
>
> This seems wrong. This would always return false for higher than SMT domains
> if smt is active.
>

Yes, thanks for pointing that out.

> Was this meant to be sched_smt_active() && is_core_idle(cpu)?

In theory, yes, it should be like this. But I have no asym device to test on. :(

Thanks!
Alex

>
>>
>> static inline bool _sched_asym(struct sched_domain *sd,

2024-01-26 00:05:13

by Ricardo Neri

Subject: Re: [PATCH 5/5] sched/fair: narrow the sched_use_asym_prio checking scenario

On Tue, Jan 23, 2024 at 02:17:00PM +0530, Shrikanth Hegde wrote:
>
>
> On 1/17/24 2:27 PM, [email protected] wrote:
> > From: Alex Shi <[email protected]>
> >
> > Current function doesn't match it's comments, in fact, core_idle
> > checking is only meaningful with non-SMT.

For SMT cores, we _do_ need to check for a whole core to be idle when
deciding to use asym_packing priorities when balancing between cores, but
not in SMT domains. This is what the function's documentation states.

> > So make the function right.
> >
> > Signed-off-by: Alex Shi <[email protected]>
> > To: Valentin Schneider <[email protected]>
> > To: Vincent Guittot <[email protected]>
> > To: Peter Zijlstra <[email protected]>
> > To: Ingo Molnar <[email protected]>
> > ---
> > kernel/sched/fair.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 96163ab69ae0..0a321f639c79 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -9741,8 +9741,8 @@ group_type group_classify(unsigned int imbalance_pct,
> > */
> > static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
> > {
> > - return (!sched_smt_active()) ||
> > - (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
> > + return (sd->flags & SD_SHARE_CPUCAPACITY) ||
> > + (!sched_smt_active() && is_core_idle(cpu));
> > }
>
> This seems wrong. This would always return false for higher than SMT domains
> if smt is active.

Agreed.

>
> Was this meant to be sched_smt_active() && is_core_idle(cpu)?

But this would not work if SMT is inactive; in that case, checking
for a whole idle core is pointless.


2024-01-26 01:14:50

by Ricardo Neri

Subject: Re: [PATCH 5/5] sched/fair: narrow the sched_use_asym_prio checking scenario

On Thu, Jan 25, 2024 at 05:35:32PM +0800, kuiliang Shi wrote:
>
>
> On 1/23/24 4:47 PM, Shrikanth Hegde wrote:
> >
> >
> > On 1/17/24 2:27 PM, [email protected] wrote:
> >> From: Alex Shi <[email protected]>
> >>
> >> Current function doesn't match it's comments, in fact, core_idle
> >> checking is only meaningful with non-SMT.
> >> So make the function right.
> >>
> >> Signed-off-by: Alex Shi <[email protected]>
> >> To: Valentin Schneider <[email protected]>
> >> To: Vincent Guittot <[email protected]>
> >> To: Peter Zijlstra <[email protected]>
> >> To: Ingo Molnar <[email protected]>
> >> ---
> >> kernel/sched/fair.c | 4 ++--
> >> 1 file changed, 2 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index 96163ab69ae0..0a321f639c79 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -9741,8 +9741,8 @@ group_type group_classify(unsigned int imbalance_pct,
> >> */
> >> static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
> >> {
> >> - return (!sched_smt_active()) ||
> >> - (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
> >> + return (sd->flags & SD_SHARE_CPUCAPACITY) ||
> >> + (!sched_smt_active() && is_core_idle(cpu));
> >> }
> >
> > This seems wrong. This would always return false for higher than SMT domains
> > if smt is active.
> >
>
> yes, thanks for point out.
>
> > Was this meant to be sched_smt_active() && is_core_idle(cpu)?
>
> In theory, yes, it should like this. But I have no ASYM device to test. :(

This would not work with !SMT and asym_packing.

I can test your patches on asym_packing + SMT systems if you post a new
version.


2024-01-26 01:58:10

by Alex Shi

Subject: Re: [PATCH 4/5] sched/fair: add a func _sched_asym



On 1/26/24 5:56 AM, Ricardo Neri wrote:
> On Wed, Jan 17, 2024 at 04:57:14PM +0800, [email protected] wrote:
>> From: Alex Shi <[email protected]>
>>
>> Use this func in sched_asym and other path to simply code.
>> No function change.
>>
>> Signed-off-by: Alex Shi <[email protected]>
>> To: Valentin Schneider <[email protected]>
>> To: Vincent Guittot <[email protected]>
>> To: Peter Zijlstra <[email protected]>
>> To: Ingo Molnar <[email protected]>
>> ---
>> kernel/sched/fair.c | 27 +++++++++++++--------------
>> 1 file changed, 13 insertions(+), 14 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index ebd659af2d78..96163ab69ae0 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -9745,6 +9745,14 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
>> (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
>> }
>>
>> +static inline bool _sched_asym(struct sched_domain *sd,
>> + int dst_cpu, int repl_cpu)
>
> What does repl_cpu mean? Maybe renaming to src_cpu?

Uh, src_cpu is better than a "replacing" CPU. Thanks!

>
>> +{
>> + /* Ensure that the whole local core is idle, if applicable. */
>> + return sched_use_asym_prio(sd, dst_cpu) &&
>> + sched_asym_prefer(dst_cpu, repl_cpu);
>
> The comment no longer applies to the whole expression. Perhaps rewording is
> in order. First we check for the whole idle core if applicable (i.e., when
> not balancing among SMT siblings). Then we check priorities.

Right, will fix this in v2.
>
> Also, indentation should be aligned with `return`, IMO.
>
>> +}
>> +
>> /**
>> * sched_asym - Check if the destination CPU can do asym_packing load balance
>> * @env: The load balancing environment
>> @@ -9768,20 +9776,13 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
>> static inline bool
>> sched_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group *group)
>> {
>> - /* Ensure that the whole local core is idle, if applicable. */
>> - if (!sched_use_asym_prio(env->sd, env->dst_cpu))
>> - return false;
>> -
>> /*
>> * CPU priorities does not make sense for SMT cores with more than one
>> * busy sibling.
>
> While here, it might be good to fix a syntax error above: s/does/do/.
>

Yes, thanks

>> */
>> - if (group->flags & SD_SHARE_CPUCAPACITY) {
>> - if (sgs->group_weight - sgs->idle_cpus != 1)
>> - return false;
>> - }
>> -
>> - return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
>> + return !(group->flags & SD_SHARE_CPUCAPACITY &&
>> + sgs->group_weight - sgs->idle_cpus != 1) &&
>> + _sched_asym(env->sd, env->dst_cpu, group->asym_prefer_cpu);
>
> Perhaps we can come up with a better name than _sched_asym(). After this
> patch the difference between sched_asym() and _sched_asym() is that the
> former considers the stats of the source group. Maybe sched_asym() can be
> renamed as sched_group_asym(); it is only used in update_sg_lb_stats().
> Your _sched_asym() can become sched_asym().

Thanks for the good suggestion! Will send v2 according to your suggestion.

Alex
>
>> }
>>
>> /* One group has more than one SMT CPU while the other group does not */
>> @@ -11036,8 +11037,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
>> * SMT cores with more than one busy sibling.
>> */
>> if ((env->sd->flags & SD_ASYM_PACKING) &&
>> - sched_use_asym_prio(env->sd, i) &&
>> - sched_asym_prefer(i, env->dst_cpu) &&
>> + _sched_asym(env->sd, i, env->dst_cpu) &&
>> nr_running == 1)
>> continue;
>>
>> @@ -11907,8 +11907,7 @@ static void nohz_balancer_kick(struct rq *rq)
>> * preferred CPU must be idle.
>> */
>> for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
>> - if (sched_use_asym_prio(sd, i) &&
>> - sched_asym_prefer(i, cpu)) {
>> + if (_sched_asym(sd, i, cpu)) {
>> flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
>> goto unlock;
>> }
>> --
>> 2.43.0
>>

2024-01-26 04:30:30

by Ricardo Neri

Subject: Re: [PATCH 4/5] sched/fair: add a func _sched_asym

On Wed, Jan 17, 2024 at 04:57:14PM +0800, [email protected] wrote:
> From: Alex Shi <[email protected]>
>
> Use this func in sched_asym and other path to simply code.
> No function change.
>
> Signed-off-by: Alex Shi <[email protected]>
> To: Valentin Schneider <[email protected]>
> To: Vincent Guittot <[email protected]>
> To: Peter Zijlstra <[email protected]>
> To: Ingo Molnar <[email protected]>
> ---
> kernel/sched/fair.c | 27 +++++++++++++--------------
> 1 file changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ebd659af2d78..96163ab69ae0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9745,6 +9745,14 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
> (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
> }
>
> +static inline bool _sched_asym(struct sched_domain *sd,
> + int dst_cpu, int repl_cpu)

What does repl_cpu mean? Maybe renaming to src_cpu?

> +{
> + /* Ensure that the whole local core is idle, if applicable. */
> + return sched_use_asym_prio(sd, dst_cpu) &&
> + sched_asym_prefer(dst_cpu, repl_cpu);

The comment no longer applies to the whole expression. Perhaps rewording is
in order. First we check for the whole idle core if applicable (i.e., when
not balancing among SMT siblings). Then we check priorities.

Also, indentation should be aligned with `return`, IMO.

> +}
> +
> /**
> * sched_asym - Check if the destination CPU can do asym_packing load balance
> * @env: The load balancing environment
> @@ -9768,20 +9776,13 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
> static inline bool
> sched_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group *group)
> {
> - /* Ensure that the whole local core is idle, if applicable. */
> - if (!sched_use_asym_prio(env->sd, env->dst_cpu))
> - return false;
> -
> /*
> * CPU priorities does not make sense for SMT cores with more than one
> * busy sibling.

While here, it might be good to fix a syntax error above: s/does/do/.

> */
> - if (group->flags & SD_SHARE_CPUCAPACITY) {
> - if (sgs->group_weight - sgs->idle_cpus != 1)
> - return false;
> - }
> -
> - return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
> + return !(group->flags & SD_SHARE_CPUCAPACITY &&
> + sgs->group_weight - sgs->idle_cpus != 1) &&
> + _sched_asym(env->sd, env->dst_cpu, group->asym_prefer_cpu);

Perhaps we can come up with a better name than _sched_asym(). After this
patch the difference between sched_asym() and _sched_asym() is that the
former considers the stats of the source group. Maybe sched_asym() can be
renamed as sched_group_asym(); it is only used in update_sg_lb_stats().
Your _sched_asym() can become sched_asym().

> }
>
> /* One group has more than one SMT CPU while the other group does not */
> @@ -11036,8 +11037,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> * SMT cores with more than one busy sibling.
> */
> if ((env->sd->flags & SD_ASYM_PACKING) &&
> - sched_use_asym_prio(env->sd, i) &&
> - sched_asym_prefer(i, env->dst_cpu) &&
> + _sched_asym(env->sd, i, env->dst_cpu) &&
> nr_running == 1)
> continue;
>
> @@ -11907,8 +11907,7 @@ static void nohz_balancer_kick(struct rq *rq)
> * preferred CPU must be idle.
> */
> for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
> - if (sched_use_asym_prio(sd, i) &&
> - sched_asym_prefer(i, cpu)) {
> + if (_sched_asym(sd, i, cpu)) {
> flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
> goto unlock;
> }
> --
> 2.43.0
>

2024-01-26 05:36:37

by Ricardo Neri

Subject: Re: [PATCH 2/5] sched/fair: remove unused parameters

On Wed, Jan 17, 2024 at 04:57:12PM +0800, [email protected] wrote:
> From: Alex Shi <[email protected]>
>
> sds isn't used in function sched_asym(), so remove it to cleanup code.
>
> Signed-off-by: Alex Shi <[email protected]>
> To: Valentin Schneider <[email protected]>
> To: Vincent Guittot <[email protected]>
> To: Juri Lelli <[email protected]>
> To: Peter Zijlstra <[email protected]>
> To: Ingo Molnar <[email protected]>

FWIW, Reviewed-by: Ricardo Neri <[email protected]>


2024-01-30 13:23:31

by Alex Shi

Subject: Re: [PATCH 5/5] sched/fair: narrow the sched_use_asym_prio checking scenario



On 1/26/24 8:06 AM, Ricardo Neri wrote:
> On Thu, Jan 25, 2024 at 05:35:32PM +0800, kuiliang Shi wrote:
>>
>>
>> On 1/23/24 4:47 PM, Shrikanth Hegde wrote:
>>>
>>>
>>> On 1/17/24 2:27 PM, [email protected] wrote:
>>>> From: Alex Shi <[email protected]>
>>>>
>>>> Current function doesn't match it's comments, in fact, core_idle
>>>> checking is only meaningful with non-SMT.
>>>> So make the function right.
>>>>
>>>> Signed-off-by: Alex Shi <[email protected]>
>>>> To: Valentin Schneider <[email protected]>
>>>> To: Vincent Guittot <[email protected]>
>>>> To: Peter Zijlstra <[email protected]>
>>>> To: Ingo Molnar <[email protected]>
>>>> ---
>>>> kernel/sched/fair.c | 4 ++--
>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index 96163ab69ae0..0a321f639c79 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -9741,8 +9741,8 @@ group_type group_classify(unsigned int imbalance_pct,
>>>> */
>>>> static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
>>>> {
>>>> - return (!sched_smt_active()) ||
>>>> - (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
>>>> + return (sd->flags & SD_SHARE_CPUCAPACITY) ||
>>>> + (!sched_smt_active() && is_core_idle(cpu));
>>>> }
>>>
>>> This seems wrong. This would always return false for higher than SMT domains
>>> if smt is active.
>>>
>>
>> yes, thanks for point out.
>>
>>> Was this meant to be sched_smt_active() && is_core_idle(cpu)?
>>
>> In theory, yes, it should like this. But I have no ASYM device to test. :(
>
> This would not work with !SMT and asym_packing.
>
> I can test your patches on asym_packing + SMT systems if you post a new
> version.
>

Hi Neri,

Thanks a lot for the generous offer! I'm not sure my understanding is right, but I've made my best guess in the v2 patch for you. :)

Many thanks for the help!

Best regards
Alex