2024-04-03 15:07:34

by Pierre Gondois

Subject: [PATCH 4/7] sched/fair: Move/add on_null_domain()/housekeeping_cpu() checks

Prepare for a following patch by moving/adding on_null_domain()
and housekeeping_runtime_test_cpu() checks.

In nohz_balance_enter_idle():
- The housekeeping_runtime_test_cpu(cpu, HKR_TYPE_SCHED) call is currently
  a no-op, as HKR_TYPE_SCHED is never configured. The call can thus be
  moved down.
- In the current code, an isolated CPU sets nohz.has_blocked but is not
  set in nohz.idle_cpus_mask. However, the for_each_cpu_wrap() loop in
  _nohz_idle_balance() only iterates over nohz.idle_cpus_mask CPUs.
  Move the on_null_domain() check up to avoid this, as illustrated by
  the sketch below.
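
As a rough illustration of the second point, the following standalone
userspace sketch (not kernel code; struct fake_state, enter_idle_old() and
enter_idle_new() are made-up stand-ins for the rq/nohz state and for the
two orderings) shows the difference, assuming a hypothetical state where
the tick is already flagged as stopped on an isolated CPU that is not in
nohz.idle_cpus_mask:

/* Build with: gcc -std=c99 -Wall nohz_order_sketch.c */
#include <stdbool.h>
#include <stdio.h>

struct fake_state {
	bool nohz_tick_stopped;	/* stand-in for rq->nohz_tick_stopped */
	bool in_idle_cpus_mask;	/* stand-in for this CPU's bit in nohz.idle_cpus_mask */
	bool nohz_has_blocked;	/* stand-in for nohz.has_blocked */
};

/* Old ordering: the null-domain check sits below the nohz_tick_stopped test. */
static void enter_idle_old(struct fake_state *s, bool isolated)
{
	if (!s->nohz_tick_stopped) {
		if (isolated)
			return;
		s->nohz_tick_stopped = true;
		s->in_idle_cpus_mask = true;
	}
	/* corresponds to the "out:" label */
	s->nohz_has_blocked = true;
}

/* New ordering: isolated CPUs bail out before touching any nohz global state. */
static void enter_idle_new(struct fake_state *s, bool isolated)
{
	if (isolated)
		return;
	if (!s->nohz_tick_stopped) {
		s->nohz_tick_stopped = true;
		s->in_idle_cpus_mask = true;
	}
	s->nohz_has_blocked = true;
}

int main(void)
{
	struct fake_state a = { .nohz_tick_stopped = true };
	struct fake_state b = { .nohz_tick_stopped = true };

	enter_idle_old(&a, true);	/* has_blocked set, CPU not in the mask */
	enter_idle_new(&b, true);	/* nohz state left untouched */

	printf("old order: has_blocked=%d in_mask=%d\n",
	       a.nohz_has_blocked, a.in_idle_cpus_mask);
	printf("new order: has_blocked=%d in_mask=%d\n",
	       b.nohz_has_blocked, b.in_idle_cpus_mask);
	return 0;
}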

In nohz_balance_exit_idle():
- The check against a NULL sd in:
    nohz_balance_enter_idle()
    \-if (on_null_domain())
      \-[returning here]
        \-rq->nohz_tick_stopped = 1
  prevents the rq's nohz_tick_stopped from being set, and in:
    sched_balance_trigger()
    \-if (on_null_domain())
      \-[returning here]
        \-nohz_balancer_kick()
          \-nohz_balance_exit_idle()
  prevents the nohz.[nr_cpus|idle_cpus_mask] variables from being reset.
  So the newly added on_null_domain() check does not change the current
  behaviour (see the sketch below). It does, however, prepare:
  - the use of the HKR_TYPE_SCHED isolation mask,
  - the removal of on_null_domain(),
  in a later patch.
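
To make the no-op argument concrete, here is a second standalone userspace
sketch (again not kernel code; struct fake_rq and the fake_*() helpers are
made-up stand-ins). Since the enter path never sets nohz_tick_stopped on a
null-domain CPU, the exit path bails out before doing any cleanup, with or
without the newly added check:

/* Build with: gcc -std=c99 -Wall nohz_exit_sketch.c */
#include <assert.h>
#include <stdbool.h>

struct fake_rq {
	bool null_domain;	/* stand-in for on_null_domain(rq) */
	bool nohz_tick_stopped;	/* stand-in for rq->nohz_tick_stopped */
};

/* Mirrors the enter path: a null-domain CPU never marks its tick as stopped. */
static void fake_enter_idle(struct fake_rq *rq)
{
	if (rq->null_domain)
		return;
	rq->nohz_tick_stopped = true;
}

/* Old exit path: only the nohz_tick_stopped test guards the cleanup. */
static bool fake_exit_idle_old(const struct fake_rq *rq)
{
	if (!rq->nohz_tick_stopped)
		return false;	/* nothing to clean up */
	return true;		/* would clear nohz.idle_cpus_mask, nohz.nr_cpus */
}

/* New exit path: the extra on_null_domain() check added by this patch. */
static bool fake_exit_idle_new(const struct fake_rq *rq)
{
	if (rq->null_domain)
		return false;
	if (!rq->nohz_tick_stopped)
		return false;
	return true;
}

int main(void)
{
	struct fake_rq rq = { .null_domain = true };

	fake_enter_idle(&rq);
	/* Both variants agree for a null-domain CPU: no cleanup ever happens. */
	assert(fake_exit_idle_old(&rq) == fake_exit_idle_new(&rq));
	return 0;
}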

Signed-off-by: Pierre Gondois <[email protected]>
---
kernel/sched/fair.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0665f5eb4703..3e0f2a0f153f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12039,6 +12039,10 @@ void nohz_balance_exit_idle(struct rq *rq)
 {
 	SCHED_WARN_ON(rq != this_rq());
 
+	/* If we're a completely isolated CPU, we don't play: */
+	if (on_null_domain(rq))
+		return;
+
 	if (likely(!rq->nohz_tick_stopped))
 		return;
 
@@ -12079,10 +12083,6 @@ void nohz_balance_enter_idle(int cpu)
 	if (!cpu_active(cpu))
 		return;
 
-	/* Spare idle load balancing on CPUs that don't want to be disturbed: */
-	if (!housekeeping_runtime_test_cpu(cpu, HKR_TYPE_SCHED))
-		return;
-
 	/*
 	 * Can be set safely without rq->lock held
 	 * If a clear happens, it will have evaluated last additions because
@@ -12090,6 +12090,14 @@ void nohz_balance_enter_idle(int cpu)
 	 */
 	rq->has_blocked_load = 1;
 
+	/* Spare idle load balancing on CPUs that don't want to be disturbed: */
+	if (!housekeeping_runtime_test_cpu(cpu, HKR_TYPE_SCHED))
+		return;
+
+	/* If we're a completely isolated CPU, we don't play: */
+	if (on_null_domain(rq))
+		return;
+
 	/*
 	 * The tick is still stopped but load could have been added in the
 	 * meantime. We set the nohz.has_blocked flag to trig a check of the
@@ -12099,10 +12107,6 @@ void nohz_balance_enter_idle(int cpu)
 	if (rq->nohz_tick_stopped)
 		goto out;
 
-	/* If we're a completely isolated CPU, we don't play: */
-	if (on_null_domain(rq))
-		return;
-
 	rq->nohz_tick_stopped = 1;
 
 	cpumask_set_cpu(cpu, nohz.idle_cpus_mask);
--
2.25.1