2020-03-04 17:03:11

by Vineeth Remanan Pillai

Subject: [RFC PATCH 11/13] sched: migration changes for core scheduling

From: Aubrey Li <[email protected]>

- Don't migrate if there is a cookie mismatch
Load balancing tries to move a task from the busiest CPU to the
destination CPU. When core scheduling is enabled, if the
task's cookie does not match the destination CPU's core
cookie, the task is skipped by this CPU. This mitigates
the forced idle time on the destination CPU.

- Select a cookie-matched idle CPU
In the fast path of task wakeup, select the first cookie-matched
idle CPU instead of the first idle CPU.

- Find the cookie-matched idlest CPU
In the slow path of task wakeup, find the idlest CPU whose core
cookie matches the task's cookie.

- Don't migrate a task if the cookies do not match
For NUMA load balancing, don't migrate a task to a CPU whose
core cookie does not match the task's cookie.

Signed-off-by: Aubrey Li <[email protected]>
Signed-off-by: Tim Chen <[email protected]>
Signed-off-by: Vineeth Remanan Pillai <[email protected]>
---
kernel/sched/fair.c | 55 +++++++++++++++++++++++++++++++++++++++++---
kernel/sched/sched.h | 29 +++++++++++++++++++++++
2 files changed, 81 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1c9a80d8dbb8..f42ceecb749f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1789,6 +1789,15 @@ static void task_numa_find_cpu(struct task_numa_env *env,
if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
continue;

+#ifdef CONFIG_SCHED_CORE
+ /*
+ * Skip this cpu if source task's cookie does not match
+ * with CPU's core cookie.
+ */
+ if (!sched_core_cookie_match(cpu_rq(cpu), env->p))
+ continue;
+#endif
+
env->dst_cpu = cpu;
task_numa_compare(env, taskimp, groupimp, maymove);
}
@@ -5660,8 +5669,13 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this

/* Traverse only the allowed CPUs */
for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
+ struct rq *rq = cpu_rq(i);
+
+#ifdef CONFIG_SCHED_CORE
+ if (!sched_core_cookie_match(rq, p))
+ continue;
+#endif
if (available_idle_cpu(i)) {
- struct rq *rq = cpu_rq(i);
struct cpuidle_state *idle = idle_get_state(rq);
if (idle && idle->exit_latency < min_exit_latency) {
/*
@@ -5927,8 +5941,14 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
return si_cpu;
if (!cpumask_test_cpu(cpu, p->cpus_ptr))
continue;
+#ifdef CONFIG_SCHED_CORE
+ if (available_idle_cpu(cpu) &&
+ sched_core_cookie_match(cpu_rq(cpu), p))
+ break;
+#else
if (available_idle_cpu(cpu))
break;
+#endif
if (si_cpu == -1 && sched_idle_cpu(cpu))
si_cpu = cpu;
}
@@ -7264,8 +7284,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
* We do not migrate tasks that are:
* 1) throttled_lb_pair, or
* 2) cannot be migrated to this CPU due to cpus_ptr, or
- * 3) running (obviously), or
- * 4) are cache-hot on their current CPU.
+ * 3) task's cookie does not match with this CPU's core cookie, or
+ * 4) running (obviously), or
+ * 5) are cache-hot on their current CPU.
*/
if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
return 0;
@@ -7300,6 +7321,15 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
return 0;
}

+#ifdef CONFIG_SCHED_CORE
+ /*
+ * Don't migrate task if the task's cookie does not match
+ * with the destination CPU's core cookie.
+ */
+ if (!sched_core_cookie_match(cpu_rq(env->dst_cpu), p))
+ return 0;
+#endif
+
/* Record that we found atleast one task that could run on dst_cpu */
env->flags &= ~LBF_ALL_PINNED;

@@ -8498,6 +8528,25 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
p->cpus_ptr))
continue;

+#ifdef CONFIG_SCHED_CORE
+ if (sched_core_enabled(cpu_rq(this_cpu))) {
+ int i = 0;
+ bool cookie_match = false;
+
+ for_each_cpu(i, sched_group_span(group)) {
+ struct rq *rq = cpu_rq(i);
+
+ if (sched_core_cookie_match(rq, p)) {
+ cookie_match = true;
+ break;
+ }
+ }
+ /* Skip over this group if no cookie matched */
+ if (!cookie_match)
+ continue;
+ }
+#endif
+
local_group = cpumask_test_cpu(this_cpu,
sched_group_span(group));

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 552c80b70757..e4019a482f0e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1057,6 +1057,35 @@ static inline raw_spinlock_t *rq_lockp(struct rq *rq)
return &rq->__lock;
}

+/*
+ * Helper to check if the CPU's core cookie matches with the task's cookie
+ * when core scheduling is enabled.
+ * A special case is that the task's cookie always matches with CPU's core
+ * cookie if the CPU is in an idle core.
+ */
+static inline bool sched_core_cookie_match(struct rq *rq, struct task_struct *p)
+{
+ bool idle_core = true;
+ int cpu;
+
+ /* Ignore cookie match if core scheduler is not enabled on the CPU. */
+ if (!sched_core_enabled(rq))
+ return true;
+
+ for_each_cpu(cpu, cpu_smt_mask(cpu_of(rq))) {
+ if (!available_idle_cpu(cpu)) {
+ idle_core = false;
+ break;
+ }
+ }
+
+ /*
+ * A CPU in an idle core is always the best choice for tasks with
+ * cookies.
+ */
+ return idle_core || rq->core->core_cookie == p->core_cookie;
+}
+
extern void queue_core_balance(struct rq *rq);

void sched_core_add(struct rq *rq, struct task_struct *p);
--
2.17.1
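
For readers without the kernel tree at hand, the matching rule that the new
sched_core_cookie_match() helper encodes can be exercised in isolation. The
sketch below is not kernel code: mock_core, mock_task, cookie_match() and
NR_SIBLINGS are made-up names, and a 2-way SMT core is assumed. It only
restates the rule from the patch: a fully idle core always matches, otherwise
the core cookie must equal the task's cookie.

#include <stdbool.h>
#include <stdio.h>

#define NR_SIBLINGS 2	/* assume 2-way SMT */

struct mock_core {
	unsigned long core_cookie;		/* cookie currently running on the core */
	bool sibling_idle[NR_SIBLINGS];		/* idle state of each SMT sibling */
	bool core_sched_enabled;
};

struct mock_task {
	unsigned long core_cookie;
};

/* Mirror of the helper's decision: an idle core always matches, else compare cookies. */
static bool cookie_match(const struct mock_core *core, const struct mock_task *p)
{
	bool idle_core = true;
	int cpu;

	if (!core->core_sched_enabled)
		return true;

	for (cpu = 0; cpu < NR_SIBLINGS; cpu++) {
		if (!core->sibling_idle[cpu]) {
			idle_core = false;
			break;
		}
	}

	/* A fully idle core is always a valid target, whatever the cookies are. */
	return idle_core || core->core_cookie == p->core_cookie;
}

int main(void)
{
	struct mock_core busy = {
		.core_cookie = 0xbeef,
		.sibling_idle = { false, true },
		.core_sched_enabled = true,
	};
	struct mock_core idle = {
		.core_cookie = 0xbeef,
		.sibling_idle = { true, true },
		.core_sched_enabled = true,
	};
	struct mock_task p = { .core_cookie = 0xcafe };

	printf("busy core, mismatched cookie: %d\n", cookie_match(&busy, &p));	/* 0 */
	printf("idle core, mismatched cookie: %d\n", cookie_match(&idle, &p));	/* 1 */
	return 0;
}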


2020-06-12 13:23:51

by Joel Fernandes

Subject: Re: [RFC PATCH 11/13] sched: migration changes for core scheduling

On Wed, Mar 04, 2020 at 05:00:01PM +0000, vpillai wrote:
> [...]
> @@ -5927,8 +5941,14 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> return si_cpu;
> if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> continue;
> +#ifdef CONFIG_SCHED_CORE
> + if (available_idle_cpu(cpu) &&
> + sched_core_cookie_match(cpu_rq(cpu), p))
> + break;
> +#else

select_idle_cpu() is called only if no idle core could be found in the LLC by
select_idle_core().

So, would it be better here to just do the cookie equality check directly
instead of calling the sched_core_cookie_match() helper? More so, because
select_idle_sibling() is a fastpath.

AFAIR, that's what v4 did:

	if (available_idle_cpu(cpu))
#ifdef CONFIG_SCHED_CORE
		if (sched_core_enabled(cpu_rq(cpu)) &&
		    (p->core_cookie == cpu_rq(cpu)->core->core_cookie))
			break;
#else
		break;
#endif


Thoughts? thanks,

- Joel

2020-06-12 21:36:36

by Vineeth Remanan Pillai

Subject: Re: [RFC PATCH 11/13] sched: migration changes for core scheduling

On Fri, Jun 12, 2020 at 9:21 AM Joel Fernandes <[email protected]> wrote:
>
> > +#ifdef CONFIG_SCHED_CORE
> > + if (available_idle_cpu(cpu) &&
> > + sched_core_cookie_match(cpu_rq(cpu), p))
> > + break;
> > +#else
>
> select_idle_cpu() is called only if no idle core could be found in the LLC by
> select_idle_core().
>
> So, would it be better here to just do the cookie equality check directly
> instead of calling the sched_core_cookie_match() helper? More so, because
> select_idle_sibling() is a fastpath.
>
Agree, this makes sense to me.

> AFAIR, that's what v4 did:
>
> if (available_idle_cpu(cpu))
> #ifdef CONFIG_SCHED_CORE
> if (sched_core_enabled(cpu_rq(cpu)) &&
> (p->core_cookie == cpu_rq(cpu)->core->core_cookie))
> break;
> #else
> break;
> #endif
>
This patch was not part of v4; it is a merge of 4 patches suggested
post-v4. In the initial round, the code looked like the above. But since
there appeared to be code duplication across the different migration paths,
it was consolidated into sched_core_cookie_match(), which brought this
extra logic into this specific code path. As you mentioned, I also feel
we do not need to check for core idleness in this path.

Thanks,
Vineeth
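
One possible shape for the simplified fast-path check discussed above,
sketched against the select_idle_cpu() hunk from the patch (illustrative
only, not the actual v6 change; it reuses the field and helper names from
this RFC):

	if (available_idle_cpu(cpu)) {
#ifdef CONFIG_SCHED_CORE
		/*
		 * select_idle_core() has already failed to find an idle core
		 * in the LLC, so the per-sibling idle scan done by
		 * sched_core_cookie_match() is redundant here; a direct
		 * cookie comparison is enough.
		 */
		if (!sched_core_enabled(cpu_rq(cpu)) ||
		    p->core_cookie == cpu_rq(cpu)->core->core_cookie)
			break;
#else
		break;
#endif
	}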

2020-06-13 02:29:48

by Joel Fernandes

Subject: Re: [RFC PATCH 11/13] sched: migration changes for core scheduling

On Fri, Jun 12, 2020 at 05:32:01PM -0400, Vineeth Remanan Pillai wrote:
> > AFAIR, that's what v4 did:
> >
> > if (available_idle_cpu(cpu))
> > #ifdef CONFIG_SCHED_CORE
> > if (sched_core_enabled(cpu_rq(cpu)) &&
> > (p->core_cookie == cpu_rq(cpu)->core->core_cookie))
> > break;
> > #else
> > break;
> > #endif
> >
> This patch was not part of v4; it is a merge of 4 patches suggested
> post-v4. In the initial round, the code looked like the above. But since
> there appeared to be code duplication across the different migration paths,
> it was consolidated into sched_core_cookie_match(), which brought this
> extra logic into this specific code path. As you mentioned, I also feel
> we do not need to check for core idleness in this path.

Ok, so I take it that you will make it so in v6 then, unless of course
someone else objects.

thanks!

- Joel

2020-06-13 19:04:03

by Vineeth Remanan Pillai

Subject: Re: [RFC PATCH 11/13] sched: migration changes for core scheduling

On Fri, Jun 12, 2020 at 10:25 PM Joel Fernandes <[email protected]> wrote:
>
> Ok, so I take it that you will make it so in v6 then, unless of course
> someone else objects.
>
Yes, just wanted to hear from Aubrey, Tim and others as well to see
if we have not missed anything obvious. Will have this in v6 if
there are no objections.

Thanks for bringing this up!

~Vineeth

2020-06-15 02:13:44

by Li, Aubrey

Subject: Re: [RFC PATCH 11/13] sched: migration changes for core scheduling

On 2020/6/14 2:59, Vineeth Remanan Pillai wrote:
> On Fri, Jun 12, 2020 at 10:25 PM Joel Fernandes <[email protected]> wrote:
>>
>> Ok, so I take it that you will make it so in v6 then, unless of course
>> someone else objects.
>>
> Yes, just wanted to hear from Aubrey, Tim and others as well to see
> if we have not missed anything obvious. Will have this in v6 if
> there are no objections.
>
> Thanks for bringing this up!
>
> ~Vineeth
>
Yes, this makes sense to me; there is no need to find an idle core in
select_idle_cpu(). Thanks for catching this!

Thanks,
-Aubrey