2023-06-01 06:07:02

by John Stultz

Subject: [PATCH v4 00/13] Generalized Priority Inheritance via Proxy Execution v3

After having to catch up on other work after OSPM[1], I've finally
gotten back to focusing on Proxy Execution and wanted to send out this
next iteration of the patch series for review, testing, and feedback.
(Many thanks to folks who provided feedback on the last revision!)

As mentioned previously, this Proxy Execution series has a long history:
it was first described in a paper[2] by Watkins, Straub, and Niehaus,
then prototyped in patches from Peter Zijlstra, and extended with lots of
work by Juri Lelli, Valentin Schneider, and Connor O'Brien. (And thank
you to Steven Rostedt for providing additional details here!)

So again, many thanks to those above, as all the credit for this series
really is due to them - while the mistakes are likely mine.

Overview:
-----------
Proxy Execution is a generalized form of priority inheritance. Classic
priority inheritance works well for real-time tasks where there is a
straightforward priority order to how things are run. But it breaks
down when used with CFS or DEADLINE tasks, as there are many
parameters involved beyond just the task’s nice value when
selecting the next task to run (via pick_next_task()). So ideally we
want to imbue the mutex holder with all the scheduler attributes of
the blocked waiting task.
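
As a toy illustration (this is not kernel code, just a sketch of the
problem): with a strict priority scheme, inheritance only has to donate
a single value, but a CFS or DEADLINE pick depends on a whole bundle of
state, which is why donating the *context* beats copying attributes:

  /* Toy model: the "scheduler attributes" feeding pick_next_task()
   * are much more than one priority value. */
  struct toy_sched_ctx {
          int prio;                       /* RT priority / nice */
          unsigned long weight;           /* CFS load weight */
          unsigned long long vruntime;    /* CFS virtual runtime */
          unsigned long long deadline;    /* SCHED_DEADLINE parameter */
  };

  /* Classic priority inheritance donates one dimension (and has to
   * undo it later). Generalizing that would mean copying and
   * restoring all of the state above, per scheduling class. */
  static void toy_boost(struct toy_sched_ctx *owner,
                        const struct toy_sched_ctx *waiter)
  {
          if (waiter->prio < owner->prio) /* lower value = higher prio */
                  owner->prio = waiter->prio;
  }

Proxy Execution sidesteps the copy/restore problem entirely by running
the owner under the blocked waiter's context, as described below.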

Proxy Execution does this via a few changes:
* Keeping tasks that are blocked on a mutex *on* the runqueue
* Keeping additional tracking of which mutex a task is blocked on, and
which task holds a specific mutex.
* Special handling for when we select a blocked task to run, so that we
instead run the mutex holder.

The first of these is the most difficult to grasp (I do get the mental
friction here: blocked tasks on the *run*queue sounds like nonsense!
Personally I like to think of the runqueue in this model more like a
“task-selection queue”).

By leaving blocked tasks on the runqueue, we allow pick_next_task() to
choose the task that should run next (even if it’s blocked waiting on a
mutex). If we do select a blocked task, we look at the task’s blocked_on
mutex and from there look at the mutex’s owner task. And in the simple
case, the task which owns the mutex is what we then choose to run,
allowing it to release the mutex.
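
As a rough sketch (heavily simplified; task_is_blocked() and
__mutex_owner() come from later patches in this series, and all of the
locking, sleeping-owner and cross-CPU handling is omitted), the pick
step becomes something like:

  next = pick_next_task(rq, prev, &rf);   /* may return a blocked task */
  while (task_is_blocked(next)) {
          struct mutex *m = next->blocked_on;

          /* Simple case only: the owner is runnable on this rq,
           * so run it on behalf of the task we picked. */
          next = __mutex_owner(m);
  }
  /* context_switch() to next, charging time to the picked task */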

This means that instead of just tracking “curr”, the scheduler needs to
track both the scheduler context (what was picked and all the state used
for scheduling decisions) and the execution context (what we’re actually
running).

In this way, the mutex owner is run “on behalf” of the blocked task
that was picked to run, essentially inheriting the scheduler context of
the blocked task.
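
In data structure terms (a simplified extract; the full change is in the
"Split scheduler execution context" patch later in the series), the
runqueue ends up carrying two task pointers:

  struct task_struct __rcu *curr;       /* Execution context */
  struct task_struct __rcu *curr_sched; /* Scheduling context (what was picked) */

with rq_selected(rq) returning curr_sched when CONFIG_PROXY_EXEC is
enabled (and plain rq->curr otherwise), so accounting, preemption checks
and similar decisions key off the picked task, while rq->curr remains
the task whose registers are actually on the CPU.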

As Connor outlined in a previous submission of this patch series, this
raises a number of complicated situations: The mutex owner might itself
be blocked on another mutex, or it could be sleeping, running on a
different CPU, in the process of migrating between CPUs, etc.

But the functionality provided by Proxy Execution is useful: in Android
we have a number of cases where we are seeing priority inversion (not
unbounded, but longer than we’d like) between “foreground” and
“background” SCHED_NORMAL applications, and a generalized solution would
help a great deal there.

New in v4:
-------
* Fixed deadlock that was caused by wait/wound mutexes having circular
blocked_on references by clearing the blocked_on pointer on the task
we are waking to wound/die.

* Tried to resolve an issue Dietmar raised with RT balancing, where
proxy migration and push_rt_task() were fighting each other,
ping-ponging tasks back and forth. This was caused by push_rt_task()
migrating tasks even if they
were in the chain that ends with the current running task. Though this
likely needs more work, as this change resulted in different migration
quirks (see below).

* Fixed a number of null-pointer traversals that the changes were
occasionally tripping on.

* Reworked the patch that exposes __mutex_owner() to the scheduler so
that it isn’t exposed any more than necessary, as suggested by Peter.

* To address some of Peter’s complaints, backed out the rq_curr()
wrapper, and reworked rq_selected() to be a macro to avoid needing
multiple accessors for READ_ONCE/rcu_dereference() type accesses.

* Removed verbose legacy comments from previous developers of the series
as Dietmar was finding them distracting when reviewing the diffs
(Though, to ensure I heed the warnings from previous experienced
travelers, I’ve preserved the comments/questions in a separate patch
in my own development tree).

* Dropped patch that added *_task_blocked_on() wrappers to check locking
correctness. Mostly as Peter didn’t seem happy with the wrappers in
other patches, but I still think it's useful for testing (and the
blocked_on locking still needs some work), so I’m carrying it in my
personal development tree.

Issues still to address:
-----------
* Occasionally getting NULL scheduler entities from pick_next_entity()
in CFS. I’m a little stumped as to where this is going awry just yet,
and it delayed sending this out, but I figured it was worth getting the
series out for review of the other issues while I chase this down.

* Better deadlock handling in proxy(): With the ww_mutex issues
resolved, we shouldn’t see circular blocked_on references, but a
number of the bugs I’ve been chasing recently come from getting stuck
with proxy() returning NULL, forcing a reselection over and over. These
are still bugs to address, but my current thinking is that if we get
stuck like this, we could start to remove the selected mutex-blocked
tasks from the rq and let them be woken from the mutex waiters list as
is done currently. Thoughts here would be appreciated.

* More work on migration correctness (RT/DL load balancing, etc.). I’m
still seeing occasional trouble as cpu counts go up, which seems to be
due to a bunch of tasks being proxy-migrated to a cpu, then having to
migrate them all away at once (seeing lots of pick-again iterations).
This may actually be correct, due to chain migration, but it ends up
looking similar to a deadlock.

* “rq_selected()” naming. Peter doesn’t like it, but I’ve not thought of
a better name. Open to suggestions.

* As discussed at OSPM, I want to split pick_next_task() up into two
phases, selecting and setting the next task, as currently
pick_next_task() assumes the returned task will be run, which results
in various side effects in sched class logic when it is called. This
causes trouble should proxy() require us to re-select a task due to
migration or other edge cases.

* CFS load balancing. Blocked tasks may carry forward load (PELT) to the
lock owner's CPU, so the CPU may look like it is overloaded.

* I still want to push down the split scheduler and execution context
awareness further through the scheduling code, as lots of logic still
assumes there’s only a single “rq->curr” task.

* Optimization to avoid migrating blocked tasks (allowing for optimistic
spinning) if the runnable lock-owner at the end of the blocked_on chain
is already running.


Performance:
-----------
This patch series switches mutexes to use handoff mode rather than
optimistic spinning. This is a potential concern where locks are under
high contention. However, so far in our initial performance analysis (on
both x86 and mobile devices) we’ve not seen major regressions. That
said, Chenyu did report a regression[3], which we’ll need to look
further into. As mentioned above, there may be some optimizations that
can help here, but my focus is on getting the code working well before I
spend time optimizing.

Review and feedback would be greatly appreciated!

If folks find it easier to test/tinker with, this patch series can also
be found here (along with some debug patches):
https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v4-6.4-rc3

Thanks so much!
-john

[1] https://youtu.be/QEWqRhVS3lI (video of my OSPM talk)
[2] https://static.lwn.net/images/conf/rtlws11/papers/proc/p38.pdf
[3] https://lore.kernel.org/lkml/Y7vVqE0M%2FAoDoVbj@chenyu5-mobl1/

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]

Connor O'Brien (1):
sched: Attempt to fix rt/dl load balancing via chain level balance

John Stultz (3):
sched: Unnest ttwu_runnable in prep for proxy-execution
sched: Fix runtime accounting w/ proxy-execution
sched: Fixups to find_exec_ctx

Juri Lelli (2):
locking/mutex: make mutex::wait_lock irq safe
locking/mutex: Expose __mutex_owner()

Peter Zijlstra (6):
sched: Unify runtime accounting across classes
locking/ww_mutex: Remove wakeups from under mutex::wait_lock
locking/mutex: Rework task_struct::blocked_on
locking/mutex: Add task_struct::blocked_lock to serialize changes to
the blocked_on state
sched: Split scheduler execution context
sched: Add proxy execution

Valentin Schneider (1):
sched/rt: Fix proxy/current (push,pull)ability

include/linux/sched.h | 10 +-
include/linux/ww_mutex.h | 3 +
init/Kconfig | 7 +
init/init_task.c | 1 +
kernel/Kconfig.locks | 2 +-
kernel/fork.c | 6 +-
kernel/locking/mutex-debug.c | 9 +-
kernel/locking/mutex.c | 113 ++++--
kernel/locking/mutex.h | 25 ++
kernel/locking/ww_mutex.h | 54 ++-
kernel/sched/core.c | 719 +++++++++++++++++++++++++++++++++--
kernel/sched/cpudeadline.c | 12 +-
kernel/sched/cpudeadline.h | 3 +-
kernel/sched/cpupri.c | 28 +-
kernel/sched/cpupri.h | 6 +-
kernel/sched/deadline.c | 187 +++++----
kernel/sched/fair.c | 99 +++--
kernel/sched/rt.c | 242 +++++++-----
kernel/sched/sched.h | 75 +++-
kernel/sched/stop_task.c | 13 +-
20 files changed, 1284 insertions(+), 330 deletions(-)

--
2.41.0.rc0.172.g3f132b7071-goog



2023-06-01 06:07:26

by John Stultz

Subject: [PATCH v4 01/13] sched: Unify runtime accounting across classes

From: Peter Zijlstra <[email protected]>

All classes use sched_entity::exec_start to track runtime and have
copies of the exact same code around to compute runtime.

Collapse all that.
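
With the common helper, each class's update_curr_*() ends up following
the same rough pattern (a sketch only; see the hunks below for the real
code):

  delta_exec = update_curr_common(rq);    /* now - se->exec_start, clamped */
  if (unlikely(delta_exec <= 0))
          return;

  trace_sched_stat_runtime(curr, delta_exec, 0);
  /* ... class-specific bandwidth/runtime handling continues ... */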

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[fix conflicts, fold in update_current_exec_runtime]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: rebased, resolving minor conflicts]
Signed-off-by: John Stultz <[email protected]>
---
NOTE: This patch is a general cleanup and, if no one objects, could
be merged at this point. I'll resend it separately if it isn't
picked up on its own.
---
include/linux/sched.h | 2 +-
kernel/sched/deadline.c | 13 +++-------
kernel/sched/fair.c | 56 ++++++++++++++++++++++++++++++----------
kernel/sched/rt.c | 13 +++-------
kernel/sched/sched.h | 12 ++-------
kernel/sched/stop_task.c | 13 +---------
6 files changed, 52 insertions(+), 57 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index eed5d65b8d1f..37dd571a1246 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -521,7 +521,7 @@ struct sched_statistics {
u64 block_max;
s64 sum_block_runtime;

- u64 exec_max;
+ s64 exec_max;
u64 slice_max;

u64 nr_migrations_cold;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5a9a4b81c972..f6f746d52410 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1308,9 +1308,8 @@ static void update_curr_dl(struct rq *rq)
{
struct task_struct *curr = rq->curr;
struct sched_dl_entity *dl_se = &curr->dl;
- u64 delta_exec, scaled_delta_exec;
+ s64 delta_exec, scaled_delta_exec;
int cpu = cpu_of(rq);
- u64 now;

if (!dl_task(curr) || !on_dl_rq(dl_se))
return;
@@ -1323,21 +1322,15 @@ static void update_curr_dl(struct rq *rq)
* natural solution, but the full ramifications of this
* approach need further study.
*/
- now = rq_clock_task(rq);
- delta_exec = now - curr->se.exec_start;
- if (unlikely((s64)delta_exec <= 0)) {
+ delta_exec = update_curr_common(rq);
+ if (unlikely(delta_exec <= 0)) {
if (unlikely(dl_se->dl_yielded))
goto throttle;
return;
}

- schedstat_set(curr->stats.exec_max,
- max(curr->stats.exec_max, delta_exec));
-
trace_sched_stat_runtime(curr, delta_exec, 0);

- update_current_exec_runtime(curr, now, delta_exec);
-
if (dl_entity_is_special(dl_se))
return;

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 373ff5f55884..bf9e8f29398e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -891,23 +891,17 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq)
}
#endif /* CONFIG_SMP */

-/*
- * Update the current task's runtime statistics.
- */
-static void update_curr(struct cfs_rq *cfs_rq)
+static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
{
- struct sched_entity *curr = cfs_rq->curr;
- u64 now = rq_clock_task(rq_of(cfs_rq));
- u64 delta_exec;
-
- if (unlikely(!curr))
- return;
+ u64 now = rq_clock_task(rq);
+ s64 delta_exec;

delta_exec = now - curr->exec_start;
- if (unlikely((s64)delta_exec <= 0))
- return;
+ if (unlikely(delta_exec <= 0))
+ return delta_exec;

curr->exec_start = now;
+ curr->sum_exec_runtime += delta_exec;

if (schedstat_enabled()) {
struct sched_statistics *stats;
@@ -917,9 +911,43 @@ static void update_curr(struct cfs_rq *cfs_rq)
max(delta_exec, stats->exec_max));
}

- curr->sum_exec_runtime += delta_exec;
- schedstat_add(cfs_rq->exec_clock, delta_exec);
+ return delta_exec;
+}
+
+/*
+ * Used by other classes to account runtime.
+ */
+s64 update_curr_common(struct rq *rq)
+{
+ struct task_struct *curr = rq->curr;
+ s64 delta_exec;

+ delta_exec = update_curr_se(rq, &curr->se);
+ if (unlikely(delta_exec <= 0))
+ return delta_exec;
+
+ account_group_exec_runtime(curr, delta_exec);
+ cgroup_account_cputime(curr, delta_exec);
+
+ return delta_exec;
+}
+
+/*
+ * Update the current task's runtime statistics.
+ */
+static void update_curr(struct cfs_rq *cfs_rq)
+{
+ struct sched_entity *curr = cfs_rq->curr;
+ s64 delta_exec;
+
+ if (unlikely(!curr))
+ return;
+
+ delta_exec = update_curr_se(rq_of(cfs_rq), curr);
+ if (unlikely(delta_exec <= 0))
+ return;
+
+ schedstat_add(cfs_rq->exec_clock, delta_exec);
curr->vruntime += calc_delta_fair(delta_exec, curr);
update_min_vruntime(cfs_rq);

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 00e0e5074115..0d0b276c447d 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1046,24 +1046,17 @@ static void update_curr_rt(struct rq *rq)
{
struct task_struct *curr = rq->curr;
struct sched_rt_entity *rt_se = &curr->rt;
- u64 delta_exec;
- u64 now;
+ s64 delta_exec;

if (curr->sched_class != &rt_sched_class)
return;

- now = rq_clock_task(rq);
- delta_exec = now - curr->se.exec_start;
- if (unlikely((s64)delta_exec <= 0))
+ delta_exec = update_curr_common(rq);
+ if (unlikely(delta_exec < 0))
return;

- schedstat_set(curr->stats.exec_max,
- max(curr->stats.exec_max, delta_exec));
-
trace_sched_stat_runtime(curr, delta_exec, 0);

- update_current_exec_runtime(curr, now, delta_exec);
-
if (!rt_bandwidth_enabled())
return;

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ec7b3e0a2b20..4a1ef64449b2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2166,6 +2166,8 @@ struct affinity_context {
unsigned int flags;
};

+extern s64 update_curr_common(struct rq *rq);
+
struct sched_class {

#ifdef CONFIG_UCLAMP_TASK
@@ -3242,16 +3244,6 @@ extern int sched_dynamic_mode(const char *str);
extern void sched_dynamic_update(int mode);
#endif

-static inline void update_current_exec_runtime(struct task_struct *curr,
- u64 now, u64 delta_exec)
-{
- curr->se.sum_exec_runtime += delta_exec;
- account_group_exec_runtime(curr, delta_exec);
-
- curr->se.exec_start = now;
- cgroup_account_cputime(curr, delta_exec);
-}
-
#ifdef CONFIG_SCHED_MM_CID

#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 85590599b4d6..7595494ceb6d 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -70,18 +70,7 @@ static void yield_task_stop(struct rq *rq)

static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
{
- struct task_struct *curr = rq->curr;
- u64 now, delta_exec;
-
- now = rq_clock_task(rq);
- delta_exec = now - curr->se.exec_start;
- if (unlikely((s64)delta_exec < 0))
- delta_exec = 0;
-
- schedstat_set(curr->stats.exec_max,
- max(curr->stats.exec_max, delta_exec));
-
- update_current_exec_runtime(curr, now, delta_exec);
+ update_curr_common(rq);
}

/*
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:10:12

by John Stultz

Subject: [PATCH v4 04/13] locking/mutex: Rework task_struct::blocked_on

From: Peter Zijlstra <[email protected]>

Track the blocked-on relation for mutexes, this allows following this
relation at schedule time.

    task
      | blocked-on
      v
    mutex
      | owner
      v
    task
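
In C terms (using the blocked_on field added below, plus the
__mutex_owner() helper exposed later in the series), one hop of that
chain is simply:

  struct mutex *m = READ_ONCE(p->blocked_on);  /* NULL if p isn't blocked */
  struct task_struct *owner = m ? __mutex_owner(m) : NULL;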

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[minor changes while rebasing]
Signed-off-by: Juri Lelli <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Fix blocked_on tracking in __mutex_lock_common in error paths]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Fixed blocked_on tracking in error paths that was causing crashes
v4:
* Ensure we clear blocked_on when waking ww_mutexes to die or wound.
This is critical so we don't get circular blocked_on relationships
that can't be resolved.
---
include/linux/sched.h | 5 +----
kernel/fork.c | 3 +--
kernel/locking/mutex-debug.c | 9 +++++----
kernel/locking/mutex.c | 7 +++++++
kernel/locking/ww_mutex.h | 16 ++++++++++++++--
5 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 37dd571a1246..a312a2ff47bf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1141,10 +1141,7 @@ struct task_struct {
struct rt_mutex_waiter *pi_blocked_on;
#endif

-#ifdef CONFIG_DEBUG_MUTEXES
- /* Mutex deadlock detection: */
- struct mutex_waiter *blocked_on;
-#endif
+ struct mutex *blocked_on; /* lock we're blocked on */

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
int non_block_count;
diff --git a/kernel/fork.c b/kernel/fork.c
index ed4e01daccaa..9244c540bb13 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2461,9 +2461,8 @@ __latent_entropy struct task_struct *copy_process(
lockdep_init_task(p);
#endif

-#ifdef CONFIG_DEBUG_MUTEXES
p->blocked_on = NULL; /* not blocked yet */
-#endif
+
#ifdef CONFIG_BCACHE
p->sequential_io = 0;
p->sequential_io_avg = 0;
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index bc8abb8549d2..7228909c3e62 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -52,17 +52,18 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
{
lockdep_assert_held(&lock->wait_lock);

- /* Mark the current thread as blocked on the lock: */
- task->blocked_on = waiter;
+ /* Current thread can't be already blocked (since it's executing!) */
+ DEBUG_LOCKS_WARN_ON(task->blocked_on);
}

void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct task_struct *task)
{
+ struct mutex *blocked_on = READ_ONCE(task->blocked_on);
+
DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
DEBUG_LOCKS_WARN_ON(waiter->task != task);
- DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
- task->blocked_on = NULL;
+ DEBUG_LOCKS_WARN_ON(blocked_on && blocked_on != lock);

INIT_LIST_HEAD(&waiter->list);
waiter->task = NULL;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a528e7f42caa..d7a202c35ebe 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -646,6 +646,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err_early_kill;
}

+ current->blocked_on = lock;
set_current_state(state);
trace_contention_begin(lock, LCB_F_MUTEX);
for (;;) {
@@ -683,6 +684,10 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas

first = __mutex_waiter_is_first(lock, &waiter);

+ /*
+ * Gets reset by ttwu_runnable().
+ */
+ current->blocked_on = lock;
set_current_state(state);
/*
* Here we order against unlock; we must either see it change
@@ -720,6 +725,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
debug_mutex_free_waiter(&waiter);

skip_wait:
+ current->blocked_on = NULL;
/* got the lock - cleanup and rejoice! */
lock_acquired(&lock->dep_map, ip);
trace_contention_end(lock, 0);
@@ -734,6 +740,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
return 0;

err:
+ current->blocked_on = NULL;
__set_current_state(TASK_RUNNING);
__mutex_remove_waiter(lock, &waiter);
err_early_kill:
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 984a4e0bff36..7d623417b496 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -291,6 +291,12 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
debug_mutex_wake_waiter(lock, waiter);
#endif
wake_q_add(&ww_ctx->wake_q, waiter->task);
+ /*
+ * When waking up the task to die, be sure to clear the
+ * blocked_on pointer. Otherwise we can see circular
+ * blocked_on relationships that can't resolve.
+ */
+ waiter->task->blocked_on = NULL;
}

return true;
@@ -336,9 +342,15 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
* it's wounded in __ww_mutex_check_kill() or has a
* wakeup pending to re-read the wounded state.
*/
- if (owner != current)
+ if (owner != current) {
wake_q_add(&ww_ctx->wake_q, owner);
-
+ /*
+ * When waking up the task to wound, be sure to clear the
+ * blocked_on pointer. Otherwise we can see circular
+ * blocked_on relationships that can't resolve.
+ */
+ owner->blocked_on = NULL;
+ }
return true;
}

--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:14:02

by John Stultz

Subject: [PATCH v4 06/13] locking/mutex: Expose __mutex_owner()

From: Juri Lelli <[email protected]>

Implementing proxy execution requires that scheduler code be able to
identify the current owner of a mutex. Expose __mutex_owner() for
this purpose (alone!).
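
For context, __mutex_owner() works because task_struct pointers are
aligned to at least L1_CACHE_BYTES, leaving the bottom bits of
mutex::owner free for flags. A standalone illustration of the masking
(made-up address, not kernel code):

  #include <stdint.h>
  #include <stdio.h>

  #define MUTEX_FLAG_WAITERS 0x01
  #define MUTEX_FLAG_HANDOFF 0x02
  #define MUTEX_FLAG_PICKUP  0x04
  #define MUTEX_FLAGS        0x07

  int main(void)
  {
          /* Pretend 0x1000 is the (well aligned) address of a task_struct. */
          uintptr_t owner_field = 0x1000 | MUTEX_FLAG_WAITERS | MUTEX_FLAG_HANDOFF;

          /* __mutex_owner()-style recovery: mask the low flag bits away. */
          uintptr_t task_ptr = owner_field & ~(uintptr_t)MUTEX_FLAGS;

          printf("owner %#lx, flags %#lx\n",
                 (unsigned long)task_ptr,
                 (unsigned long)(owner_field & MUTEX_FLAGS));
          return 0;
  }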

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Juri Lelli <[email protected]>
[Removed the EXPORT_SYMBOL]
Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Reworked per Peter's suggestions]
Signed-off-by: John Stultz <[email protected]>
---
v4:
* Move __mutex_owner() to kernel/locking/mutex.h instead of
adding a new globally available accessor function to keep
the exposure of this low, along with keeping it an inline
function, as suggested by PeterZ
---
kernel/locking/mutex.c | 25 -------------------------
kernel/locking/mutex.h | 25 +++++++++++++++++++++++++
2 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ac3d2e350fac..8c9f9dffe473 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -56,31 +56,6 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
}
EXPORT_SYMBOL(__mutex_init);

-/*
- * @owner: contains: 'struct task_struct *' to the current lock owner,
- * NULL means not owned. Since task_struct pointers are aligned at
- * at least L1_CACHE_BYTES, we have low bits to store extra state.
- *
- * Bit0 indicates a non-empty waiter list; unlock must issue a wakeup.
- * Bit1 indicates unlock needs to hand the lock to the top-waiter
- * Bit2 indicates handoff has been done and we're waiting for pickup.
- */
-#define MUTEX_FLAG_WAITERS 0x01
-#define MUTEX_FLAG_HANDOFF 0x02
-#define MUTEX_FLAG_PICKUP 0x04
-
-#define MUTEX_FLAGS 0x07
-
-/*
- * Internal helper function; C doesn't allow us to hide it :/
- *
- * DO NOT USE (outside of mutex code).
- */
-static inline struct task_struct *__mutex_owner(struct mutex *lock)
-{
- return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLAGS);
-}
-
static inline struct task_struct *__owner_task(unsigned long owner)
{
return (struct task_struct *)(owner & ~MUTEX_FLAGS);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 0b2a79c4013b..1c7d3d32def8 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -20,6 +20,31 @@ struct mutex_waiter {
#endif
};

+/*
+ * @owner: contains: 'struct task_struct *' to the current lock owner,
+ * NULL means not owned. Since task_struct pointers are aligned at
+ * at least L1_CACHE_BYTES, we have low bits to store extra state.
+ *
+ * Bit0 indicates a non-empty waiter list; unlock must issue a wakeup.
+ * Bit1 indicates unlock needs to hand the lock to the top-waiter
+ * Bit2 indicates handoff has been done and we're waiting for pickup.
+ */
+#define MUTEX_FLAG_WAITERS 0x01
+#define MUTEX_FLAG_HANDOFF 0x02
+#define MUTEX_FLAG_PICKUP 0x04
+
+#define MUTEX_FLAGS 0x07
+
+/*
+ * Internal helper function; C doesn't allow us to hide it :/
+ *
+ * DO NOT USE (outside of mutex & scheduler code).
+ */
+static inline struct task_struct *__mutex_owner(struct mutex *lock)
+{
+ return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLAGS);
+}
+
#ifdef CONFIG_DEBUG_MUTEXES
extern void debug_mutex_lock_common(struct mutex *lock,
struct mutex_waiter *waiter);
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:17:17

by John Stultz

Subject: [PATCH v4 10/13] sched/rt: Fix proxy/current (push,pull)ability

From: Valentin Schneider <[email protected]>

Proxy execution forms atomic pairs of tasks: a proxy (scheduling context)
and an owner (execution context). The proxy, along with the rest of the
blocked chain, follows the owner wrt CPU placement.

They can be the same task, in which case push/pull doesn't need any
modification. When they are different, however,
FIFO1 & FIFO42:

                  ,-> RT42
                  |     | blocked-on
                  |     v
    blocked_donor |   mutex
                  |     | owner
                  |     v
                  `--  RT1

    RT1
    RT42

        CPU0            CPU1
         ^                ^
         |                |
     overloaded      !overloaded
    rq prio = 42     rq prio = 0

RT1 is eligible to be pushed to CPU1, but should that happen it will
"carry" RT42 along. Clearly here neither RT1 nor RT42 must be seen as
push/pullable.

Furthermore, tasks becoming blocked on a mutex don't need an explicit
dequeue/enqueue cycle to be made (push/pull)able: they have to be running
to block on a mutex, thus they will eventually hit put_prev_task().

XXX: pinned tasks becoming unblocked should be removed from the push/pull
lists, but those don't get to see __schedule() straight away.
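
Condensing the above into one predicate (this just restates the
enqueue_task_rt()/put_prev_task_rt() conditions in the hunks below; the
helper name here is made up):

  /* Should p sit on the RT pushable list? */
  static bool rt_task_pushable(struct rq *rq, struct task_struct *p)
  {
          /* Neither the execution context nor the selected
           * (scheduling) context can be pushed away. */
          if (task_current(rq, p) || task_current_selected(rq, p))
                  return false;

          /* Blocked tasks follow their lock owner, so their own
           * affinity doesn't matter; anything else needs more than
           * one allowed CPU to be worth pushing. */
          return task_is_blocked(p) || p->nr_cpus_allowed > 1;
  }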

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
Signed-off-by: John Stultz <[email protected]>
---
v3:
* Tweaked comments & commit message

TODO: Rework the wording of the commit message to match the rq_selected
renaming. (XXX Maybe "Delegator" for the task being proxied for?)
---
kernel/sched/core.c | 36 +++++++++++++++++++++++++++---------
kernel/sched/rt.c | 22 +++++++++++++++++-----
2 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 328776421c7a..c56921dc427e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6989,12 +6989,29 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
WARN_ON_ONCE(owner && !owner->on_rq);
return owner;
}
+
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)
+{
+ /*
+ * pick_next_task() calls set_next_task() on the selected task
+ * at some point, which ensures it is not push/pullable.
+ * However, the selected task *and* the mutex owner form an
+ * atomic pair wrt push/pull.
+ *
+ * Make sure owner is not pushable. Unfortunately we can only
+ * deal with that by means of a dequeue/enqueue cycle. :-/
+ */
+ dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
+ enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
+}
#else /* PROXY_EXEC */
static struct task_struct *
proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
{
return next;
}
+
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next) { }
#endif /* PROXY_EXEC */

/*
@@ -7043,6 +7060,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
unsigned long prev_state;
struct rq_flags rf;
struct rq *rq;
+ bool proxied;
int cpu;
bool preserve_need_resched = false;

@@ -7116,19 +7134,11 @@ static void __sched notrace __schedule(unsigned int sched_mode)
atomic_inc(&rq->nr_iowait);
delayacct_blkio_start();
}
- } else {
- /*
- * Let's make this task, which is blocked on
- * a mutex, (push/pull)able (RT/DL).
- * Unfortunately we can only deal with that by
- * means of a dequeue/enqueue cycle. :-/
- */
- dequeue_task(rq, prev, 0);
- enqueue_task(rq, prev, 0);
}
switch_count = &prev->nvcsw;
}

+ proxied = !!prev->blocked_donor;
pick_again:
/*
* If picked task is actually blocked it means that it can act as a
@@ -7165,6 +7175,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
* changes to task_struct made by pick_next_task().
*/
RCU_INIT_POINTER(rq->curr, next);
+
+ if (unlikely(!task_current_selected(rq, next)))
+ proxy_tag_curr(rq, next);
+
/*
* The membarrier system call requires each architecture
* to have a full memory barrier after updating
@@ -7189,6 +7203,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
/* Also unlocks the rq: */
rq = context_switch(rq, prev, next, &rf);
} else {
+ /* In case next was already curr but just got a blocked_donor */
+ if (unlikely(!proxied && next->blocked_donor))
+ proxy_tag_curr(rq, next);
+
rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);

rq_unpin_lock(rq, &rf);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f5b1075e8170..d6bffcf31de0 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1537,9 +1537,21 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)

enqueue_rt_entity(rt_se, flags);

- if (!task_current(rq, p) && p->nr_cpus_allowed > 1 &&
- !task_is_blocked(p))
- enqueue_pushable_task(rq, p);
+ /*
+ * Current can't be pushed away. Proxy is tied to current, so don't
+ * push it either.
+ */
+ if (task_current(rq, p) || task_current_selected(rq, p))
+ return;
+
+ /*
+ * Pinned tasks can't be pushed.
+ * Affinity of blocked tasks doesn't matter.
+ */
+ if (!task_is_blocked(p) && p->nr_cpus_allowed == 1)
+ return;
+
+ enqueue_pushable_task(rq, p);
}

static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
@@ -1828,9 +1840,9 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)

/*
* The previous task needs to be made eligible for pushing
- * if it is still active
+ * if it is still active. Affinity of blocked task doesn't matter.
*/
- if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
+ if (on_rt_rq(&p->rt) && (p->nr_cpus_allowed > 1 || task_is_blocked(p)))
enqueue_pushable_task(rq, p);
}

--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:24:13

by John Stultz

Subject: [PATCH v4 07/13] sched: Split scheduler execution context

From: Peter Zijlstra <[email protected]>

Let's define the scheduling context as all the scheduler state in
task_struct and the execution context as all state required to run
the task.

Currently both are intertwined in task_struct. We want to logically
split these such that we can run the execution context of one task
with the scheduling context of another.

To this purpose introduce rq_selected() macro to point to the
task_struct used for scheduler state and preserve rq->curr to
denote the execution context.

NOTE: Peter previously mentioned he didn't like the name
"rq_selected()", but I've not come up with a better alternative.
I'm very open to other name proposals.

Question for Peter: Dietmar suggested you'd prefer I drop the
conditionalization of the scheduler context pointer on the rq
(so rq_selected() would be open coded as rq->curr_sched or
whatever we agree on for a name), but I'd think in the
!CONFIG_PROXY_EXEC case we'd want to avoid the wasted pointer
and its use (since curr_sched would always be == curr)?
If I'm wrong I'm fine switching this, but would appreciate
clarification.

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Juri Lelli <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
[add additional comments and update more sched_class code to use
rq::proxy]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Rebased and resolved minor collisions, reworked to use
accessors, tweaked update_curr_common to use rq_proxy fixing rt
scheduling issues]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Reworked to use accessors
* Fixed update_curr_common to use proxy instead of curr
v3:
* Tweaked wrapper names
* Swapped proxy for selected for clarity
v4:
* Minor variable name tweaks for readability
* Use a macro instead of an inline function and drop
other helper functions as suggested by Peter.
* Remove verbose comments/questions to avoid review
distractions, as suggested by Dietmar
---
kernel/sched/core.c | 38 +++++++++++++++++++++++++-------------
kernel/sched/deadline.c | 35 ++++++++++++++++++-----------------
kernel/sched/fair.c | 18 +++++++++---------
kernel/sched/rt.c | 41 ++++++++++++++++++++---------------------
kernel/sched/sched.h | 37 +++++++++++++++++++++++++++++++++++--
5 files changed, 107 insertions(+), 62 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a68d1276bab0..ace75aadb90b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -793,7 +793,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)

rq_lock(rq, &rf);
update_rq_clock(rq);
- rq->curr->sched_class->task_tick(rq, rq->curr, 1);
+ rq_selected(rq)->sched_class->task_tick(rq, rq_selected(rq), 1);
rq_unlock(rq, &rf);

return HRTIMER_NORESTART;
@@ -2200,16 +2200,18 @@ static inline void check_class_changed(struct rq *rq, struct task_struct *p,

void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
{
- if (p->sched_class == rq->curr->sched_class)
- rq->curr->sched_class->check_preempt_curr(rq, p, flags);
- else if (sched_class_above(p->sched_class, rq->curr->sched_class))
+ struct task_struct *curr = rq_selected(rq);
+
+ if (p->sched_class == curr->sched_class)
+ curr->sched_class->check_preempt_curr(rq, p, flags);
+ else if (sched_class_above(p->sched_class, curr->sched_class))
resched_curr(rq);

/*
* A queue event has occurred, and we're going to schedule. In
* this case, we can save a useless back to back clock update.
*/
- if (task_on_rq_queued(rq->curr) && test_tsk_need_resched(rq->curr))
+ if (task_on_rq_queued(curr) && test_tsk_need_resched(rq->curr))
rq_clock_skip_update(rq);
}

@@ -2599,7 +2601,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
lockdep_assert_held(&p->pi_lock);

queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ running = task_current_selected(rq, p);

if (queued) {
/*
@@ -5535,7 +5537,7 @@ unsigned long long task_sched_runtime(struct task_struct *p)
* project cycles that may never be accounted to this
* thread, breaking clock_gettime().
*/
- if (task_current(rq, p) && task_on_rq_queued(p)) {
+ if (task_current_selected(rq, p) && task_on_rq_queued(p)) {
prefetch_curr_exec_start(p);
update_rq_clock(rq);
p->sched_class->update_curr(rq);
@@ -5603,7 +5605,8 @@ void scheduler_tick(void)
{
int cpu = smp_processor_id();
struct rq *rq = cpu_rq(cpu);
- struct task_struct *curr = rq->curr;
+ /* accounting goes to the selected task */
+ struct task_struct *curr = rq_selected(rq);
struct rq_flags rf;
unsigned long thermal_pressure;
u64 resched_latency;
@@ -5701,6 +5704,13 @@ static void sched_tick_remote(struct work_struct *work)
if (cpu_is_offline(cpu))
goto out_unlock;

+ /*
+ * Since this is a remote tick for full dynticks mode, we are
+ * always sure that there is no proxy (only a single task is
+ * running).
+ */
+ SCHED_WARN_ON(rq->curr != rq_selected(rq));
+
update_rq_clock(rq);

if (!is_idle_task(curr)) {
@@ -6631,6 +6641,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
}

next = pick_next_task(rq, prev, &rf);
+ rq_set_selected(rq, next);
clear_tsk_need_resched(prev);
clear_preempt_need_resched();
#ifdef CONFIG_SCHED_DEBUG
@@ -7097,7 +7108,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)

prev_class = p->sched_class;
queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ running = task_current_selected(rq, p);
if (queued)
dequeue_task(rq, p, queue_flag);
if (running)
@@ -7185,7 +7196,7 @@ void set_user_nice(struct task_struct *p, long nice)
goto out_unlock;
}
queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ running = task_current_selected(rq, p);
if (queued)
dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_NOCLOCK);
if (running)
@@ -7749,7 +7760,7 @@ static int __sched_setscheduler(struct task_struct *p,
}

queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ running = task_current_selected(rq, p);
if (queued)
dequeue_task(rq, p, queue_flags);
if (running)
@@ -9249,6 +9260,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
rcu_read_unlock();

rq->idle = idle;
+ rq_set_selected(rq, idle);
rcu_assign_pointer(rq->curr, idle);
idle->on_rq = TASK_ON_RQ_QUEUED;
#ifdef CONFIG_SMP
@@ -9351,7 +9363,7 @@ void sched_setnuma(struct task_struct *p, int nid)

rq = task_rq_lock(p, &rf);
queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ running = task_current_selected(rq, p);

if (queued)
dequeue_task(rq, p, DEQUEUE_SAVE);
@@ -10478,7 +10490,7 @@ void sched_move_task(struct task_struct *tsk)

update_rq_clock(rq);

- running = task_current(rq, tsk);
+ running = task_current_selected(rq, tsk);
queued = task_on_rq_queued(tsk);

if (queued)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index f6f746d52410..d41d562df078 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1179,7 +1179,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
#endif

enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
- if (dl_task(rq->curr))
+ if (dl_task(rq_selected(rq)))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
@@ -1306,7 +1306,7 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
*/
static void update_curr_dl(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_selected(rq);
struct sched_dl_entity *dl_se = &curr->dl;
s64 delta_exec, scaled_delta_exec;
int cpu = cpu_of(rq);
@@ -1819,7 +1819,7 @@ static int find_later_rq(struct task_struct *task);
static int
select_task_rq_dl(struct task_struct *p, int cpu, int flags)
{
- struct task_struct *curr;
+ struct task_struct *curr, *selected;
bool select_rq;
struct rq *rq;

@@ -1830,6 +1830,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)

rcu_read_lock();
curr = READ_ONCE(rq->curr); /* unlocked access */
+ selected = READ_ONCE(rq_selected(rq));

/*
* If we are dealing with a -deadline task, we must
@@ -1840,9 +1841,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
* other hand, if it has a shorter deadline, we
* try to make it stay here, it might be important.
*/
- select_rq = unlikely(dl_task(curr)) &&
+ select_rq = unlikely(dl_task(selected)) &&
(curr->nr_cpus_allowed < 2 ||
- !dl_entity_preempt(&p->dl, &curr->dl)) &&
+ !dl_entity_preempt(&p->dl, &selected->dl)) &&
p->nr_cpus_allowed > 1;

/*
@@ -1905,7 +1906,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
* let's hope p can move out.
*/
if (rq->curr->nr_cpus_allowed == 1 ||
- !cpudl_find(&rq->rd->cpudl, rq->curr, NULL))
+ !cpudl_find(&rq->rd->cpudl, rq_selected(rq), NULL))
return;

/*
@@ -1944,7 +1945,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
int flags)
{
- if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
+ if (dl_entity_preempt(&p->dl, &rq_selected(rq)->dl)) {
resched_curr(rq);
return;
}
@@ -1954,7 +1955,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
* In the unlikely case current and p have the same deadline
* let us try to decide what's the best thing to do...
*/
- if ((p->dl.deadline == rq->curr->dl.deadline) &&
+ if ((p->dl.deadline == rq_selected(rq)->dl.deadline) &&
!test_tsk_need_resched(rq->curr))
check_preempt_equal_dl(rq, p);
#endif /* CONFIG_SMP */
@@ -1989,7 +1990,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
if (hrtick_enabled_dl(rq))
start_hrtick_dl(rq, p);

- if (rq->curr->sched_class != &dl_sched_class)
+ if (rq_selected(rq)->sched_class != &dl_sched_class)
update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);

deadline_queue_push_tasks(rq);
@@ -2306,8 +2307,8 @@ static int push_dl_task(struct rq *rq)
* can move away, it makes sense to just reschedule
* without going further in pushing next_task.
*/
- if (dl_task(rq->curr) &&
- dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
+ if (dl_task(rq_selected(rq)) &&
+ dl_time_before(next_task->dl.deadline, rq_selected(rq)->dl.deadline) &&
rq->curr->nr_cpus_allowed > 1) {
resched_curr(rq);
return 0;
@@ -2432,7 +2433,7 @@ static void pull_dl_task(struct rq *this_rq)
* deadline than the current task of its runqueue.
*/
if (dl_time_before(p->dl.deadline,
- src_rq->curr->dl.deadline))
+ rq_selected(src_rq)->dl.deadline))
goto skip;

if (is_migration_disabled(p)) {
@@ -2471,9 +2472,9 @@ static void task_woken_dl(struct rq *rq, struct task_struct *p)
if (!task_on_cpu(rq, p) &&
!test_tsk_need_resched(rq->curr) &&
p->nr_cpus_allowed > 1 &&
- dl_task(rq->curr) &&
+ dl_task(rq_selected(rq)) &&
(rq->curr->nr_cpus_allowed < 2 ||
- !dl_entity_preempt(&p->dl, &rq->curr->dl))) {
+ !dl_entity_preempt(&p->dl, &rq_selected(rq)->dl))) {
push_dl_tasks(rq);
}
}
@@ -2636,12 +2637,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
return;
}

- if (rq->curr != p) {
+ if (rq_selected(rq) != p) {
#ifdef CONFIG_SMP
if (p->nr_cpus_allowed > 1 && rq->dl.overloaded)
deadline_queue_push_tasks(rq);
#endif
- if (dl_task(rq->curr))
+ if (dl_task(rq_selected(rq)))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
@@ -2670,7 +2671,7 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
if (!rq->dl.overloaded)
deadline_queue_pull_task(rq);

- if (task_current(rq, p)) {
+ if (task_current_selected(rq, p)) {
/*
* If we now have a earlier deadline task than p,
* then reschedule, provided p is still on this
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bf9e8f29398e..62c3c1762004 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -919,7 +919,7 @@ static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
*/
s64 update_curr_common(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_selected(rq);
s64 delta_exec;

delta_exec = update_curr_se(rq, &curr->se);
@@ -964,7 +964,7 @@ static void update_curr(struct cfs_rq *cfs_rq)

static void update_curr_fair(struct rq *rq)
{
- update_curr(cfs_rq_of(&rq->curr->se));
+ update_curr(cfs_rq_of(&rq_selected(rq)->se));
}

static inline void
@@ -6230,7 +6230,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
s64 delta = slice - ran;

if (delta < 0) {
- if (task_current(rq, p))
+ if (task_current_selected(rq, p))
resched_curr(rq);
return;
}
@@ -6245,7 +6245,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
*/
static void hrtick_update(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_selected(rq);

if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class)
return;
@@ -7882,7 +7882,7 @@ static void set_skip_buddy(struct sched_entity *se)
*/
static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_selected(rq);
struct sched_entity *se = &curr->se, *pse = &p->se;
struct cfs_rq *cfs_rq = task_cfs_rq(curr);
int scale = cfs_rq->nr_running >= sched_nr_latency;
@@ -7916,7 +7916,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
* prevents us from potentially nominating it as a false LAST_BUDDY
* below.
*/
- if (test_tsk_need_resched(curr))
+ if (test_tsk_need_resched(rq->curr))
return;

/* Idle tasks are by definition preempted by non-idle tasks. */
@@ -8915,7 +8915,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
* update_load_avg() can call cpufreq_update_util(). Make sure that RT,
* DL and IRQ signals have been updated before updating CFS.
*/
- curr_class = rq->curr->sched_class;
+ curr_class = rq_selected(rq)->sched_class;

thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));

@@ -12162,7 +12162,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
* our priority decreased, or if we are not currently running on
* this runqueue and our priority is higher than the current's
*/
- if (task_current(rq, p)) {
+ if (task_current_selected(rq, p)) {
if (p->prio > oldprio)
resched_curr(rq);
} else
@@ -12307,7 +12307,7 @@ static void switched_to_fair(struct rq *rq, struct task_struct *p)
* kick off the schedule if running, otherwise just see
* if we can still preempt the current task.
*/
- if (task_current(rq, p))
+ if (task_current_selected(rq, p))
resched_curr(rq);
else
check_preempt_curr(rq, p, 0);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 0d0b276c447d..3ba24c3fce20 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -574,7 +574,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)

static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
{
- struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
+ struct task_struct *curr = rq_selected(rq_of_rt_rq(rt_rq));
struct rq *rq = rq_of_rt_rq(rt_rq);
struct sched_rt_entity *rt_se;

@@ -1044,7 +1044,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
*/
static void update_curr_rt(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_selected(rq);
struct sched_rt_entity *rt_se = &curr->rt;
s64 delta_exec;

@@ -1591,7 +1591,7 @@ static int find_lowest_rq(struct task_struct *task);
static int
select_task_rq_rt(struct task_struct *p, int cpu, int flags)
{
- struct task_struct *curr;
+ struct task_struct *curr, *selected;
struct rq *rq;
bool test;

@@ -1603,6 +1603,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

rcu_read_lock();
curr = READ_ONCE(rq->curr); /* unlocked access */
+ selected = READ_ONCE(rq_selected(rq));

/*
* If the current task on @p's runqueue is an RT task, then
@@ -1631,8 +1632,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
* systems like big.LITTLE.
*/
test = curr &&
- unlikely(rt_task(curr)) &&
- (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
+ unlikely(rt_task(selected)) &&
+ (curr->nr_cpus_allowed < 2 || selected->prio <= p->prio);

if (test || !rt_task_fits_capacity(p, cpu)) {
int target = find_lowest_rq(p);
@@ -1662,12 +1663,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
{
- /*
- * Current can't be migrated, useless to reschedule,
- * let's hope p can move out.
- */
if (rq->curr->nr_cpus_allowed == 1 ||
- !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+ !cpupri_find(&rq->rd->cpupri, rq_selected(rq), NULL))
return;

/*
@@ -1710,7 +1707,9 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
*/
static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
{
- if (p->prio < rq->curr->prio) {
+ struct task_struct *curr = rq_selected(rq);
+
+ if (p->prio < curr->prio) {
resched_curr(rq);
return;
}
@@ -1728,7 +1727,7 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flag
* to move current somewhere else, making room for our non-migratable
* task.
*/
- if (p->prio == rq->curr->prio && !test_tsk_need_resched(rq->curr))
+ if (p->prio == curr->prio && !test_tsk_need_resched(rq->curr))
check_preempt_equal_prio(rq, p);
#endif
}
@@ -1753,7 +1752,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
* utilization. We only care of the case where we start to schedule a
* rt task
*/
- if (rq->curr->sched_class != &rt_sched_class)
+ if (rq_selected(rq)->sched_class != &rt_sched_class)
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);

rt_queue_push_tasks(rq);
@@ -2033,7 +2032,7 @@ static struct task_struct *pick_next_pushable_task(struct rq *rq)
struct task_struct, pushable_tasks);

BUG_ON(rq->cpu != task_cpu(p));
- BUG_ON(task_current(rq, p));
+ BUG_ON(task_current(rq, p) || task_current_selected(rq, p));
BUG_ON(p->nr_cpus_allowed <= 1);

BUG_ON(!task_on_rq_queued(p));
@@ -2066,7 +2065,7 @@ static int push_rt_task(struct rq *rq, bool pull)
* higher priority than current. If that's the case
* just reschedule current.
*/
- if (unlikely(next_task->prio < rq->curr->prio)) {
+ if (unlikely(next_task->prio < rq_selected(rq)->prio)) {
resched_curr(rq);
return 0;
}
@@ -2419,7 +2418,7 @@ static void pull_rt_task(struct rq *this_rq)
* p if it is lower in priority than the
* current task on the run queue
*/
- if (p->prio < src_rq->curr->prio)
+ if (p->prio < rq_selected(src_rq)->prio)
goto skip;

if (is_migration_disabled(p)) {
@@ -2461,9 +2460,9 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
bool need_to_push = !task_on_cpu(rq, p) &&
!test_tsk_need_resched(rq->curr) &&
p->nr_cpus_allowed > 1 &&
- (dl_task(rq->curr) || rt_task(rq->curr)) &&
+ (dl_task(rq_selected(rq)) || rt_task(rq_selected(rq))) &&
(rq->curr->nr_cpus_allowed < 2 ||
- rq->curr->prio <= p->prio);
+ rq_selected(rq)->prio <= p->prio);

if (need_to_push)
push_rt_tasks(rq);
@@ -2547,7 +2546,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
rt_queue_push_tasks(rq);
#endif /* CONFIG_SMP */
- if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
+ if (p->prio < rq_selected(rq)->prio && cpu_online(cpu_of(rq)))
resched_curr(rq);
}
}
@@ -2562,7 +2561,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
if (!task_on_rq_queued(p))
return;

- if (task_current(rq, p)) {
+ if (task_current_selected(rq, p)) {
#ifdef CONFIG_SMP
/*
* If our priority decreases while running, we
@@ -2588,7 +2587,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
* greater than the current running task
* then reschedule.
*/
- if (p->prio < rq->curr->prio)
+ if (p->prio < rq_selected(rq)->prio)
resched_curr(rq);
}
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4a1ef64449b2..29597a6fd65b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1008,7 +1008,10 @@ struct rq {
*/
unsigned int nr_uninterruptible;

- struct task_struct __rcu *curr;
+ struct task_struct __rcu *curr; /* Execution context */
+#ifdef CONFIG_PROXY_EXEC
+ struct task_struct __rcu *curr_sched; /* Scheduling context (policy) */
+#endif
struct task_struct *idle;
struct task_struct *stop;
unsigned long next_balance;
@@ -1207,6 +1210,22 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
#define raw_rq() raw_cpu_ptr(&runqueues)

+#ifdef CONFIG_PROXY_EXEC
+#define rq_selected(rq) ((rq)->curr_sched)
+#define cpu_curr_selected(cpu) (cpu_rq(cpu)->curr_sched)
+static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
+{
+ rcu_assign_pointer(rq->curr_sched, t);
+}
+#else
+#define rq_selected(rq) ((rq)->curr)
+#define cpu_curr_selected(cpu) (cpu_rq(cpu)->curr)
+static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
+{
+ /* Do nothing */
+}
+#endif
+
struct sched_group;
#ifdef CONFIG_SCHED_CORE
static inline struct cpumask *sched_group_span(struct sched_group *sg);
@@ -2068,11 +2087,25 @@ static inline u64 global_rt_runtime(void)
return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
}

+/*
+ * Is p the current execution context?
+ */
static inline int task_current(struct rq *rq, struct task_struct *p)
{
return rq->curr == p;
}

+/*
+ * Is p the current scheduling context?
+ *
+ * Note that it might be the current execution context at the same time if
+ * rq->curr == rq_selected() == p.
+ */
+static inline int task_current_selected(struct rq *rq, struct task_struct *p)
+{
+ return rq_selected(rq) == p;
+}
+
static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
{
#ifdef CONFIG_SMP
@@ -2234,7 +2267,7 @@ struct sched_class {

static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
{
- WARN_ON_ONCE(rq->curr != prev);
+ WARN_ON_ONCE(rq_selected(rq) != prev);
prev->sched_class->put_prev_task(rq, prev);
}

--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:24:21

by John Stultz

Subject: [PATCH v4 03/13] locking/mutex: make mutex::wait_lock irq safe

From: Juri Lelli <[email protected]>

mutex::wait_lock might be nested under rq->lock.

Make it irq safe then.
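
Concretely, every wait_lock critical section moves to the
irqsave/irqrestore form. A rough sketch of the pattern the diff below
applies throughout (illustrative only):

	unsigned long flags;

	/* was: raw_spin_lock(&lock->wait_lock); */
	raw_spin_lock_irqsave(&lock->wait_lock, flags);
	/* ... manipulate lock->wait_list, handoff state, etc ... */
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
	/* was: raw_spin_unlock(&lock->wait_lock); */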

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Juri Lelli <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[rebase & fix {un,}lock_wait_lock helpers in ww_mutex.h]
Signed-off-by: Connor O'Brien <[email protected]>
Signed-off-by: John Stultz <[email protected]>
---
v3:
* Re-added this patch after it was dropped in v2 which
caused lockdep warnings to trip.
---
kernel/locking/mutex.c | 18 ++++++++++--------
kernel/locking/ww_mutex.h | 22 ++++++++++++----------
2 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 1582756914df..a528e7f42caa 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -572,6 +572,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
{
struct mutex_waiter waiter;
struct ww_mutex *ww;
+ unsigned long flags;
int ret;

if (!use_ww_ctx)
@@ -614,7 +615,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
return 0;
}

- raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
/*
* After waiting to acquire the wait_lock, try again.
*/
@@ -675,7 +676,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err;
}

- raw_spin_unlock(&lock->wait_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
if (ww_ctx)
ww_ctx_wake(ww_ctx);
schedule_preempt_disabled();
@@ -698,9 +699,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
trace_contention_begin(lock, LCB_F_MUTEX);
}

- raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
}
- raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
acquired:
__set_current_state(TASK_RUNNING);

@@ -726,7 +727,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
if (ww_ctx)
ww_mutex_lock_acquired(ww, ww_ctx);

- raw_spin_unlock(&lock->wait_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
if (ww_ctx)
ww_ctx_wake(ww_ctx);
preempt_enable();
@@ -737,7 +738,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
__mutex_remove_waiter(lock, &waiter);
err_early_kill:
trace_contention_end(lock, ret);
- raw_spin_unlock(&lock->wait_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, ip);
if (ww_ctx)
@@ -909,6 +910,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
struct task_struct *next = NULL;
DEFINE_WAKE_Q(wake_q);
unsigned long owner;
+ unsigned long flags;

mutex_release(&lock->dep_map, ip);

@@ -935,7 +937,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
}
}

- raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
debug_mutex_unlock(lock);
if (!list_empty(&lock->wait_list)) {
/* get the first entry from the wait-list: */
@@ -953,7 +955,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
__mutex_handoff(lock, next);

preempt_disable();
- raw_spin_unlock(&lock->wait_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

wake_up_q(&wake_q);
preempt_enable();
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index e49ea5336473..984a4e0bff36 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -70,14 +70,14 @@ __ww_mutex_has_waiters(struct mutex *lock)
return atomic_long_read(&lock->owner) & MUTEX_FLAG_WAITERS;
}

-static inline void lock_wait_lock(struct mutex *lock)
+static inline void lock_wait_lock(struct mutex *lock, unsigned long *flags)
{
- raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock_irqsave(&lock->wait_lock, *flags);
}

-static inline void unlock_wait_lock(struct mutex *lock)
+static inline void unlock_wait_lock(struct mutex *lock, unsigned long flags)
{
- raw_spin_unlock(&lock->wait_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
}

static inline void lockdep_assert_wait_lock_held(struct mutex *lock)
@@ -144,14 +144,14 @@ __ww_mutex_has_waiters(struct rt_mutex *lock)
return rt_mutex_has_waiters(&lock->rtmutex);
}

-static inline void lock_wait_lock(struct rt_mutex *lock)
+static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
{
- raw_spin_lock(&lock->rtmutex.wait_lock);
+ raw_spin_lock_irqsave(&lock->rtmutex.wait_lock, *flags);
}

-static inline void unlock_wait_lock(struct rt_mutex *lock)
+static inline void unlock_wait_lock(struct rt_mutex *lock, unsigned long flags)
{
- raw_spin_unlock(&lock->rtmutex.wait_lock);
+ raw_spin_unlock_irqrestore(&lock->rtmutex.wait_lock, flags);
}

static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)
@@ -383,6 +383,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
static __always_inline void
ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
{
+ unsigned long flags;
+
ww_mutex_lock_acquired(lock, ctx);

/*
@@ -410,9 +412,9 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* Uh oh, we raced in fastpath, check if any of the waiters need to
* die or wound us.
*/
- lock_wait_lock(&lock->base);
+ lock_wait_lock(&lock->base, &flags);
__ww_mutex_check_waiters(&lock->base, ctx);
- unlock_wait_lock(&lock->base);
+ unlock_wait_lock(&lock->base, flags);
}

static __always_inline int
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:25:22

by John Stultz

[permalink] [raw]
Subject: [PATCH v4 12/13] sched: Attempt to fix rt/dl load balancing via chain level balance

From: Connor O'Brien <[email protected]>

RT/DL balancing is supposed to guarantee that with N cpus available &
CPU affinity permitting, the top N RT/DL tasks will get spread across
the CPUs and all get to run. Proxy exec greatly complicates this as
blocked tasks remain on the rq but cannot be usefully migrated away
from their lock owning tasks. This has two major consequences:
1. In order to get the desired properties we need to migrate a blocked
task, its would-be proxy, and everything in between, all together -
i.e., we need to push/pull "blocked chains" rather than individual
tasks.
2. Tasks that are part of rq->curr's "blocked tree" therefore should
not be pushed or pulled. Options for enforcing this seem to include
a) create some sort of complex data structure for tracking
pushability, updating it whenever the blocked tree for rq->curr
changes (e.g. on mutex handoffs, migrations, etc.) as well as on
context switches.
b) give up on O(1) pushability checks, and search through the pushable
list every push/pull until we find a pushable "chain"
c) Extend option "b" with some sort of caching to avoid repeated work.

For the sake of simplicity & separating the "chain level balancing"
concerns from complicated optimizations, this patch focuses on trying
to implement option "b" correctly. This can then hopefully provide a
baseline for "correct load balancing behavior" that optimizations can
try to implement more efficiently.
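
For illustration, the per-waiter pushability question roughly reduces
to the following walk. This is a simplified sketch (it ignores CPU
affinity and the "pushable, but not to this CPU" case); the real code
is find_exec_ctx() and pushable_chain() in the diff below:

static bool chain_is_pushable(struct rq *rq, struct task_struct *p)
{
	struct task_struct *t = p;

	/* Walk blocked_on links until we reach a task that could run. */
	while (task_is_blocked(t) && !task_on_cpu(rq, t)) {
		struct task_struct *owner = __mutex_owner(t->blocked_on);

		if (!task_queued_on_rq(rq, owner))
			return true;	/* chain leads off this rq: push freely */
		t = owner;
	}
	/* If the chain ends in the running task, it can't be migrated. */
	return !task_on_cpu(rq, t);
}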

Note:
The inability to atomically check "is task enqueued on a specific rq"
creates 2 possible races when following a blocked chain:
- If we check task_rq() first on a task that is dequeued from its rq,
it can be woken and enqueued on another rq before the call to
task_on_rq_queued()
- If we call task_on_rq_queued() first on a task that is on another
rq, it can be dequeued (since we don't hold its rq's lock) and then
be set to the current rq before we check task_rq().

Maybe there's a more elegant solution that would work, but for now,
just sandwich the task_rq() check between two task_on_rq_queued()
checks, all separated by smp_rmb() calls. Since we hold rq's lock,
the task can't be enqueued on or dequeued from rq, so neither race should
be possible.

Extensive comments on various pitfalls, races, etc. are included inline.

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: rebased & sorted minor conflicts, folded down numerous
fixes from Connor, fixed number of checkpatch issues]
Signed-off-by: John Stultz <[email protected]>
---
v3:
* Fix crash by checking find_exec_ctx return for NULL before using it
v4:
* Remove verbose comments/questions to avoid review
distractions, as suggested by Dietmar
* Moved most added functions from sched.h into core.c to be able
to access __mutex_owner()
---
kernel/sched/core.c | 108 +++++++++++++++++++++-
kernel/sched/cpudeadline.c | 12 +--
kernel/sched/cpudeadline.h | 3 +-
kernel/sched/cpupri.c | 28 +++---
kernel/sched/cpupri.h | 6 +-
kernel/sched/deadline.c | 139 +++++++++++++++++-----------
kernel/sched/rt.c | 179 +++++++++++++++++++++++++------------
kernel/sched/sched.h | 8 +-
8 files changed, 352 insertions(+), 131 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c56921dc427e..e0e6c2feefd0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2540,9 +2540,7 @@ int push_cpu_stop(void *arg)

// XXX validate p is still the highest prio task
if (task_rq(p) == rq) {
- deactivate_task(rq, p, 0);
- set_task_cpu(p, lowest_rq->cpu);
- activate_task(lowest_rq, p, 0);
+ push_task_chain(rq, lowest_rq, p);
resched_curr(lowest_rq);
}

@@ -3824,6 +3822,110 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
#endif
}

+static inline bool task_queued_on_rq(struct rq *rq, struct task_struct *task)
+{
+ if (!task_on_rq_queued(task))
+ return false;
+ smp_rmb();
+ if (task_rq(task) != rq)
+ return false;
+ smp_rmb();
+ if (!task_on_rq_queued(task))
+ return false;
+ return true;
+}
+
+void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
+{
+ struct task_struct *owner;
+
+ lockdep_assert_rq_held(rq);
+ lockdep_assert_rq_held(dst_rq);
+
+ BUG_ON(!task_queued_on_rq(rq, task));
+ BUG_ON(task_current_selected(rq, task));
+
+ while (task) {
+ if (!task_queued_on_rq(rq, task) || task_current_selected(rq, task))
+ break;
+
+ if (task_is_blocked(task))
+ owner = __mutex_owner(task->blocked_on);
+ else
+ owner = NULL;
+
+ deactivate_task(rq, task, 0);
+ set_task_cpu(task, dst_rq->cpu);
+ activate_task(dst_rq, task, 0);
+ if (task == owner)
+ break;
+ task = owner;
+ }
+}
+
+/*
+ * Returns the unblocked task at the end of the blocked chain starting with p
+ * if that chain is composed entirely of tasks enqueued on rq, or NULL otherwise.
+ */
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
+{
+ struct task_struct *exec_ctx, *owner;
+ struct mutex *mutex;
+
+ lockdep_assert_rq_held(rq);
+
+ for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
+ exec_ctx = owner) {
+ mutex = exec_ctx->blocked_on;
+ owner = __mutex_owner(mutex);
+ if (owner == exec_ctx)
+ break;
+
+ if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) {
+ exec_ctx = NULL;
+ break;
+ }
+ }
+ return exec_ctx;
+}
+
+/*
+ * Returns:
+ * 1 if chain is pushable and affinity does not prevent pushing to cpu
+ * 0 if chain is unpushable
+ * -1 if chain is pushable but affinity blocks running on cpu.
+ */
+int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
+{
+ struct task_struct *exec_ctx;
+
+ lockdep_assert_rq_held(rq);
+
+ if (task_rq(p) != rq || !task_on_rq_queued(p))
+ return 0;
+
+ exec_ctx = find_exec_ctx(rq, p);
+ /*
+ * Chain leads off the rq, we're free to push it anywhere.
+ *
+ * One wrinkle with relying on find_exec_ctx is that when the chain
+ * leads to a task currently migrating to rq, we see the chain as
+ * pushable & push everything prior to the migrating task. Even if
+ * we checked explicitly for this case, we could still race with a
+ * migration after the check.
+ * This shouldn't permanently produce a bad state though, as proxy()
+ * will send the chain back to rq and by that point the migration
+ * should be complete & a proper push can occur.
+ */
+ if (!exec_ctx)
+ return 1;
+
+ if (task_on_cpu(rq, exec_ctx) || exec_ctx->nr_cpus_allowed <= 1)
+ return 0;
+
+ return cpumask_test_cpu(cpu, &exec_ctx->cpus_mask) ? 1 : -1;
+}
+
#ifdef CONFIG_PROXY_EXEC
bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struct *p)
{
diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c
index 57c92d751bcd..efd6d716a3f2 100644
--- a/kernel/sched/cpudeadline.c
+++ b/kernel/sched/cpudeadline.c
@@ -113,13 +113,13 @@ static inline int cpudl_maximum(struct cpudl *cp)
*
* Returns: int - CPUs were found
*/
-int cpudl_find(struct cpudl *cp, struct task_struct *p,
+int cpudl_find(struct cpudl *cp, struct task_struct *sched_ctx, struct task_struct *exec_ctx,
struct cpumask *later_mask)
{
- const struct sched_dl_entity *dl_se = &p->dl;
+ const struct sched_dl_entity *dl_se = &sched_ctx->dl;

if (later_mask &&
- cpumask_and(later_mask, cp->free_cpus, &p->cpus_mask)) {
+ cpumask_and(later_mask, cp->free_cpus, &exec_ctx->cpus_mask)) {
unsigned long cap, max_cap = 0;
int cpu, max_cpu = -1;

@@ -128,13 +128,13 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,

/* Ensure the capacity of the CPUs fits the task. */
for_each_cpu(cpu, later_mask) {
- if (!dl_task_fits_capacity(p, cpu)) {
+ if (!dl_task_fits_capacity(sched_ctx, cpu)) {
cpumask_clear_cpu(cpu, later_mask);

cap = capacity_orig_of(cpu);

if (cap > max_cap ||
- (cpu == task_cpu(p) && cap == max_cap)) {
+ (cpu == task_cpu(exec_ctx) && cap == max_cap)) {
max_cap = cap;
max_cpu = cpu;
}
@@ -150,7 +150,7 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,

WARN_ON(best_cpu != -1 && !cpu_present(best_cpu));

- if (cpumask_test_cpu(best_cpu, &p->cpus_mask) &&
+ if (cpumask_test_cpu(best_cpu, &exec_ctx->cpus_mask) &&
dl_time_before(dl_se->deadline, cp->elements[0].dl)) {
if (later_mask)
cpumask_set_cpu(best_cpu, later_mask);
diff --git a/kernel/sched/cpudeadline.h b/kernel/sched/cpudeadline.h
index 0adeda93b5fb..6bb27f70e9d2 100644
--- a/kernel/sched/cpudeadline.h
+++ b/kernel/sched/cpudeadline.h
@@ -16,7 +16,8 @@ struct cpudl {
};

#ifdef CONFIG_SMP
-int cpudl_find(struct cpudl *cp, struct task_struct *p, struct cpumask *later_mask);
+int cpudl_find(struct cpudl *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx, struct cpumask *later_mask);
void cpudl_set(struct cpudl *cp, int cpu, u64 dl);
void cpudl_clear(struct cpudl *cp, int cpu);
int cpudl_init(struct cpudl *cp);
diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index a286e726eb4b..fb4ddfde221e 100644
--- a/kernel/sched/cpupri.c
+++ b/kernel/sched/cpupri.c
@@ -96,11 +96,15 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
if (skip)
return 0;

- if (cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids)
+ if ((p && cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids) ||
+ (!p && cpumask_any(vec->mask) >= nr_cpu_ids))
return 0;

if (lowest_mask) {
- cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
+ if (p)
+ cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
+ else
+ cpumask_copy(lowest_mask, vec->mask);

/*
* We have to ensure that we have at least one bit
@@ -117,10 +121,11 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
return 1;
}

-int cpupri_find(struct cpupri *cp, struct task_struct *p,
+int cpupri_find(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
struct cpumask *lowest_mask)
{
- return cpupri_find_fitness(cp, p, lowest_mask, NULL);
+ return cpupri_find_fitness(cp, sched_ctx, exec_ctx, lowest_mask, NULL);
}

/**
@@ -140,18 +145,19 @@ int cpupri_find(struct cpupri *cp, struct task_struct *p,
*
* Return: (int)bool - CPUs were found
*/
-int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
- struct cpumask *lowest_mask,
- bool (*fitness_fn)(struct task_struct *p, int cpu))
+int cpupri_find_fitness(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
+ struct cpumask *lowest_mask,
+ bool (*fitness_fn)(struct task_struct *p, int cpu))
{
- int task_pri = convert_prio(p->prio);
+ int task_pri = convert_prio(sched_ctx->prio);
int idx, cpu;

WARN_ON_ONCE(task_pri >= CPUPRI_NR_PRIORITIES);

for (idx = 0; idx < task_pri; idx++) {

- if (!__cpupri_find(cp, p, lowest_mask, idx))
+ if (!__cpupri_find(cp, exec_ctx, lowest_mask, idx))
continue;

if (!lowest_mask || !fitness_fn)
@@ -159,7 +165,7 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,

/* Ensure the capacity of the CPUs fit the task */
for_each_cpu(cpu, lowest_mask) {
- if (!fitness_fn(p, cpu))
+ if (!fitness_fn(sched_ctx, cpu))
cpumask_clear_cpu(cpu, lowest_mask);
}

@@ -191,7 +197,7 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
* really care.
*/
if (fitness_fn)
- return cpupri_find(cp, p, lowest_mask);
+ return cpupri_find(cp, sched_ctx, exec_ctx, lowest_mask);

return 0;
}
diff --git a/kernel/sched/cpupri.h b/kernel/sched/cpupri.h
index d6cba0020064..bde7243cec2e 100644
--- a/kernel/sched/cpupri.h
+++ b/kernel/sched/cpupri.h
@@ -18,9 +18,11 @@ struct cpupri {
};

#ifdef CONFIG_SMP
-int cpupri_find(struct cpupri *cp, struct task_struct *p,
+int cpupri_find(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
struct cpumask *lowest_mask);
-int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
+int cpupri_find_fitness(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
struct cpumask *lowest_mask,
bool (*fitness_fn)(struct task_struct *p, int cpu));
void cpupri_set(struct cpupri *cp, int cpu, int pri);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1d2711aee448..3cc8f96480e8 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1814,7 +1814,7 @@ static inline bool dl_task_is_earliest_deadline(struct task_struct *p,
rq->dl.earliest_dl.curr));
}

-static int find_later_rq(struct task_struct *task);
+static int find_later_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx);

static int
select_task_rq_dl(struct task_struct *p, int cpu, int flags)
@@ -1854,7 +1854,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
select_rq |= !dl_task_fits_capacity(p, cpu);

if (select_rq) {
- int target = find_later_rq(p);
+ int target = find_later_rq(p, p);

if (target != -1 &&
dl_task_is_earliest_deadline(p, cpu_rq(target)))
@@ -1901,12 +1901,18 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused

static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
{
+ struct task_struct *exec_ctx;
+
/*
* Current can't be migrated, useless to reschedule,
* let's hope p can move out.
*/
if (rq->curr->nr_cpus_allowed == 1 ||
- !cpudl_find(&rq->rd->cpudl, rq_selected(rq), NULL))
+ !cpudl_find(&rq->rd->cpudl, rq_selected(rq), rq->curr, NULL))
+ return;
+
+ exec_ctx = find_exec_ctx(rq, p);
+ if (task_current(rq, exec_ctx))
return;

/*
@@ -1914,7 +1920,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
* see if it is pushed or pulled somewhere else.
*/
if (p->nr_cpus_allowed != 1 &&
- cpudl_find(&rq->rd->cpudl, p, NULL))
+ cpudl_find(&rq->rd->cpudl, p, exec_ctx, NULL))
return;

resched_curr(rq);
@@ -2084,14 +2090,6 @@ static void task_fork_dl(struct task_struct *p)
/* Only try algorithms three times */
#define DL_MAX_TRIES 3

-static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
-{
- if (!task_on_cpu(rq, p) &&
- cpumask_test_cpu(cpu, &p->cpus_mask))
- return 1;
- return 0;
-}
-
/*
* Return the earliest pushable rq's task, which is suitable to be executed
* on the CPU, NULL otherwise:
@@ -2110,7 +2108,7 @@ static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu
if (next_node) {
p = __node_2_pdl(next_node);

- if (pick_dl_task(rq, p, cpu))
+ if (pushable_chain(rq, p, cpu) == 1)
return p;

next_node = rb_next(next_node);
@@ -2122,25 +2120,25 @@ static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu

static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);

-static int find_later_rq(struct task_struct *task)
+static int find_later_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx)
{
struct sched_domain *sd;
struct cpumask *later_mask = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
int this_cpu = smp_processor_id();
- int cpu = task_cpu(task);
+ int cpu = task_cpu(sched_ctx);

/* Make sure the mask is initialized first */
if (unlikely(!later_mask))
return -1;

- if (task->nr_cpus_allowed == 1)
+ if (exec_ctx && exec_ctx->nr_cpus_allowed == 1)
return -1;

/*
* We have to consider system topology and task affinity
* first, then we can look for a suitable CPU.
*/
- if (!cpudl_find(&task_rq(task)->rd->cpudl, task, later_mask))
+ if (!cpudl_find(&task_rq(exec_ctx)->rd->cpudl, sched_ctx, exec_ctx, later_mask))
return -1;

/*
@@ -2209,15 +2207,62 @@ static int find_later_rq(struct task_struct *task)
return -1;
}

+static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
+{
+ struct task_struct *p = NULL;
+ struct rb_node *next_node;
+
+ if (!has_pushable_dl_tasks(rq))
+ return NULL;
+
+ next_node = rb_first_cached(&rq->dl.pushable_dl_tasks_root);
+
+next_node:
+ if (next_node) {
+ p = __node_2_pdl(next_node);
+
+ /*
+ * cpu argument doesn't matter because we treat a -1 result
+ * (pushable but can't go to cpu0) the same as a 1 result
+ * (pushable to cpu0). All we care about here is general
+ * pushability.
+ */
+ if (pushable_chain(rq, p, 0))
+ return p;
+
+ next_node = rb_next(next_node);
+ goto next_node;
+ }
+
+ if (!p)
+ return NULL;
+
+ WARN_ON_ONCE(rq->cpu != task_cpu(p));
+ WARN_ON_ONCE(task_current(rq, p));
+ WARN_ON_ONCE(p->nr_cpus_allowed <= 1);
+
+ WARN_ON_ONCE(!task_on_rq_queued(p));
+ WARN_ON_ONCE(!dl_task(p));
+
+ return p;
+}
+
/* Locks the rq it finds */
static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
{
+ struct task_struct *exec_ctx;
struct rq *later_rq = NULL;
+ bool retry;
int tries;
int cpu;

for (tries = 0; tries < DL_MAX_TRIES; tries++) {
- cpu = find_later_rq(task);
+ retry = false;
+ exec_ctx = find_exec_ctx(rq, task);
+ if (!exec_ctx)
+ break;
+
+ cpu = find_later_rq(task, exec_ctx);

if ((cpu == -1) || (cpu == rq->cpu))
break;
@@ -2236,12 +2281,29 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)

/* Retry if something changed. */
if (double_lock_balance(rq, later_rq)) {
- if (unlikely(task_rq(task) != rq ||
- !cpumask_test_cpu(later_rq->cpu, &task->cpus_mask) ||
- task_on_cpu(rq, task) ||
- !dl_task(task) ||
- is_migration_disabled(task) ||
- !task_on_rq_queued(task))) {
+ bool fail = false;
+
+ if (!dl_task(task) || is_migration_disabled(task)) {
+ fail = true;
+ } else if (rq != this_rq()) {
+ struct task_struct *next_task = pick_next_pushable_dl_task(rq);
+
+ if (next_task != task) {
+ fail = true;
+ } else {
+ exec_ctx = find_exec_ctx(rq, next_task);
+ retry = (exec_ctx &&
+ !cpumask_test_cpu(later_rq->cpu,
+ &exec_ctx->cpus_mask));
+ }
+ } else {
+ int pushable = pushable_chain(rq, task, later_rq->cpu);
+
+ fail = !pushable;
+ retry = pushable == -1;
+ }
+
+ if (unlikely(fail)) {
double_unlock_balance(rq, later_rq);
later_rq = NULL;
break;
@@ -2253,7 +2315,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
* its earliest one has a later deadline than our
* task, the rq is a good one.
*/
- if (dl_task_is_earliest_deadline(task, later_rq))
+ if (!retry && dl_task_is_earliest_deadline(task, later_rq))
break;

/* Otherwise we try again. */
@@ -2264,25 +2326,6 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
return later_rq;
}

-static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
-{
- struct task_struct *p;
-
- if (!has_pushable_dl_tasks(rq))
- return NULL;
-
- p = __node_2_pdl(rb_first_cached(&rq->dl.pushable_dl_tasks_root));
-
- WARN_ON_ONCE(rq->cpu != task_cpu(p));
- WARN_ON_ONCE(task_current(rq, p));
- WARN_ON_ONCE(p->nr_cpus_allowed <= 1);
-
- WARN_ON_ONCE(!task_on_rq_queued(p));
- WARN_ON_ONCE(!dl_task(p));
-
- return p;
-}
-
/*
* See if the non running -deadline tasks on this rq
* can be sent to some other CPU where they can preempt
@@ -2351,9 +2394,7 @@ static int push_dl_task(struct rq *rq)
goto retry;
}

- deactivate_task(rq, next_task, 0);
- set_task_cpu(next_task, later_rq->cpu);
- activate_task(later_rq, next_task, 0);
+ push_task_chain(rq, later_rq, next_task);
ret = 1;

resched_curr(later_rq);
@@ -2439,9 +2480,7 @@ static void pull_dl_task(struct rq *this_rq)
if (is_migration_disabled(p)) {
push_task = get_push_task(src_rq);
} else {
- deactivate_task(src_rq, p, 0);
- set_task_cpu(p, this_cpu);
- activate_task(this_rq, p, 0);
+ push_task_chain(src_rq, this_rq, p);
dmin = p->dl.deadline;
resched = true;
}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index d6bffcf31de0..a1780c2c7101 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1599,7 +1599,7 @@ static void yield_task_rt(struct rq *rq)
}

#ifdef CONFIG_SMP
-static int find_lowest_rq(struct task_struct *task);
+static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx);

static int
select_task_rq_rt(struct task_struct *p, int cpu, int flags)
@@ -1649,7 +1649,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
(curr->nr_cpus_allowed < 2 || selected->prio <= p->prio);

if (test || !rt_task_fits_capacity(p, cpu)) {
- int target = find_lowest_rq(p);
+ int target = find_lowest_rq(p, p);

/*
* Bail out if we were forcing a migration to find a better
@@ -1676,8 +1676,18 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
{
+ struct task_struct *exec_ctx = p;
+ /*
+ * Current can't be migrated, useless to reschedule,
+ * let's hope p can move out.
+ */
if (rq->curr->nr_cpus_allowed == 1 ||
- !cpupri_find(&rq->rd->cpupri, rq_selected(rq), NULL))
+ !cpupri_find(&rq->rd->cpupri, rq_selected(rq), rq->curr, NULL))
+ return;
+
+ /* No reason to preempt since rq->curr wouldn't change anyway */
+ exec_ctx = find_exec_ctx(rq, p);
+ if (task_current(rq, exec_ctx))
return;

/*
@@ -1685,7 +1695,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
* see if it is pushed or pulled somewhere else.
*/
if (p->nr_cpus_allowed != 1 &&
- cpupri_find(&rq->rd->cpupri, p, NULL))
+ cpupri_find(&rq->rd->cpupri, p, exec_ctx, NULL))
return;

/*
@@ -1851,15 +1861,6 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
/* Only try algorithms three times */
#define RT_MAX_TRIES 3

-static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
-{
- if (!task_on_cpu(rq, p) &&
- cpumask_test_cpu(cpu, &p->cpus_mask))
- return 1;
-
- return 0;
-}
-
/*
* Return the highest pushable rq's task, which is suitable to be executed
* on the CPU, NULL otherwise
@@ -1873,7 +1874,7 @@ static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
return NULL;

plist_for_each_entry(p, head, pushable_tasks) {
- if (pick_rt_task(rq, p, cpu))
+ if (pushable_chain(rq, p, cpu) == 1)
return p;
}

@@ -1882,19 +1883,19 @@ static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)

static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);

-static int find_lowest_rq(struct task_struct *task)
+static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx)
{
struct sched_domain *sd;
struct cpumask *lowest_mask = this_cpu_cpumask_var_ptr(local_cpu_mask);
int this_cpu = smp_processor_id();
- int cpu = task_cpu(task);
+ int cpu = task_cpu(sched_ctx);
int ret;

/* Make sure the mask is initialized first */
if (unlikely(!lowest_mask))
return -1;

- if (task->nr_cpus_allowed == 1)
+ if (exec_ctx && exec_ctx->nr_cpus_allowed == 1)
return -1; /* No other targets possible */

/*
@@ -1903,13 +1904,13 @@ static int find_lowest_rq(struct task_struct *task)
*/
if (sched_asym_cpucap_active()) {

- ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
- task, lowest_mask,
+ ret = cpupri_find_fitness(&task_rq(sched_ctx)->rd->cpupri,
+ sched_ctx, exec_ctx, lowest_mask,
rt_task_fits_capacity);
} else {

- ret = cpupri_find(&task_rq(task)->rd->cpupri,
- task, lowest_mask);
+ ret = cpupri_find(&task_rq(sched_ctx)->rd->cpupri,
+ sched_ctx, exec_ctx, lowest_mask);
}

if (!ret)
@@ -1973,15 +1974,45 @@ static int find_lowest_rq(struct task_struct *task)
return -1;
}

+static struct task_struct *pick_next_pushable_task(struct rq *rq)
+{
+ struct plist_head *head = &rq->rt.pushable_tasks;
+ struct task_struct *p, *push_task = NULL;
+
+ if (!has_pushable_tasks(rq))
+ return NULL;
+
+ plist_for_each_entry(p, head, pushable_tasks) {
+ if (pushable_chain(rq, p, 0)) {
+ push_task = p;
+ break;
+ }
+ }
+
+ if (!push_task)
+ return NULL;
+
+ BUG_ON(rq->cpu != task_cpu(push_task));
+ BUG_ON(task_current(rq, push_task) || task_current_selected(rq, push_task));
+ BUG_ON(!task_on_rq_queued(push_task));
+ BUG_ON(!rt_task(push_task));
+
+ return p;
+}
+
/* Will lock the rq it finds */
static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
{
+ struct task_struct *exec_ctx;
struct rq *lowest_rq = NULL;
+ bool retry;
int tries;
int cpu;

for (tries = 0; tries < RT_MAX_TRIES; tries++) {
- cpu = find_lowest_rq(task);
+ retry = false;
+ exec_ctx = find_exec_ctx(rq, task);
+ cpu = find_lowest_rq(task, exec_ctx);

if ((cpu == -1) || (cpu == rq->cpu))
break;
@@ -2000,6 +2031,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)

/* if the prio of this runqueue changed, try again */
if (double_lock_balance(rq, lowest_rq)) {
+ bool fail = false;
/*
* We had to unlock the run queue. In
* the mean time, task could have
@@ -2008,14 +2040,71 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
* It is possible the task was scheduled, set
* "migrate_disabled" and then got preempted, so we must
* check the task migration disable flag here too.
+ *
+ * Releasing the rq lock means we need to re-check pushability.
+ * Some scenarios:
+ * 1) If a migration from another CPU sent a task/chain to rq
+ * that made task newly unpushable by completing a chain
+ * from task to rq->curr, then we need to bail out and push something
+ * else.
+ * 2) If our chain led off this CPU or to a dequeued task, the last waiter
+ * on this CPU might have acquired the lock and woken (or even migrated
+ * & run, handed off the lock it held, etc...). This can invalidate the
+ * result of find_lowest_rq() if our chain previously ended in a blocked
+ * task whose affinity we could ignore, but now ends in an unblocked
+ * task that can't run on lowest_rq.
+ * 3) Race described at https://lore.kernel.org/all/[email protected]/
+ *
+ * Notes on these:
+ * - Scenario #2 is properly handled by rerunning find_lowest_rq
+ * - Scenario #1 requires that we fail
+ * - Scenario #3 can AFAICT only occur when rq is not this_rq(). And the
+ * suggested fix is not universally correct now that push_cpu_stop() can
+ * call this function.
*/
- if (unlikely(task_rq(task) != rq ||
- !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
- task_on_cpu(rq, task) ||
- !rt_task(task) ||
- is_migration_disabled(task) ||
- !task_on_rq_queued(task))) {
+ if (!rt_task(task) || is_migration_disabled(task)) {
+ fail = true;
+ } else if (rq != this_rq()) {
+ /*
+ * If we are dealing with a remote rq, then all bets are off
+ * because task might have run & then been dequeued since we
+ * released the lock, at which point our normal checks can race
+ * with migration, as described in
+ * https://lore.kernel.org/all/[email protected]/
+ * Need to repick to ensure we avoid a race.
+ * But re-picking would be unnecessary & incorrect in the
+ * push_cpu_stop() path.
+ */
+ struct task_struct *next_task = pick_next_pushable_task(rq);
+
+ if (next_task != task) {
+ fail = true;
+ } else {
+ exec_ctx = find_exec_ctx(rq, next_task);
+ retry = (exec_ctx &&
+ !cpumask_test_cpu(lowest_rq->cpu,
+ &exec_ctx->cpus_mask));
+ }
+ } else {
+ /*
+ * Chain level balancing introduces new ways for our choice of
+ * task & rq to become invalid when we release the rq lock, e.g.:
+ * 1) Migration to rq from another CPU makes task newly unpushable
+ * by completing a "blocked chain" from task to rq->curr.
+ * Fail so a different task can be chosen for push.
+ * 2) In cases where task's blocked chain led to a dequeued task
+ * or one on another rq, the last waiter in the chain on this
+ * rq might have acquired the lock and woken, meaning we must
+ * pick a different rq if its affinity prevents running on
+ * lowest_rq.
+ */
+ int pushable = pushable_chain(rq, task, lowest_rq->cpu);

+ fail = !pushable;
+ retry = pushable == -1;
+ }
+
+ if (unlikely(fail)) {
double_unlock_balance(rq, lowest_rq);
lowest_rq = NULL;
break;
@@ -2023,7 +2112,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
}

/* If this rq is still suitable use it. */
- if (lowest_rq->rt.highest_prio.curr > task->prio)
+ if (lowest_rq->rt.highest_prio.curr > task->prio && !retry)
break;

/* try again */
@@ -2034,26 +2123,6 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
return lowest_rq;
}

-static struct task_struct *pick_next_pushable_task(struct rq *rq)
-{
- struct task_struct *p;
-
- if (!has_pushable_tasks(rq))
- return NULL;
-
- p = plist_first_entry(&rq->rt.pushable_tasks,
- struct task_struct, pushable_tasks);
-
- BUG_ON(rq->cpu != task_cpu(p));
- BUG_ON(task_current(rq, p) || task_current_selected(rq, p));
- BUG_ON(p->nr_cpus_allowed <= 1);
-
- BUG_ON(!task_on_rq_queued(p));
- BUG_ON(!rt_task(p));
-
- return p;
-}
-
/*
* If the current CPU has more than one RT task, see if the non
* running task can migrate over to a CPU that is running a task
@@ -2099,10 +2168,10 @@ static int push_rt_task(struct rq *rq, bool pull)
* Note that the stoppers are masqueraded as SCHED_FIFO
* (cf. sched_set_stop_task()), so we can't rely on rt_task().
*/
- if (rq->curr->sched_class != &rt_sched_class)
+ if (rq_selected(rq)->sched_class != &rt_sched_class)
return 0;

- cpu = find_lowest_rq(rq->curr);
+ cpu = find_lowest_rq(rq_selected(rq), rq->curr);
if (cpu == -1 || cpu == rq->cpu)
return 0;

@@ -2164,9 +2233,7 @@ static int push_rt_task(struct rq *rq, bool pull)
goto retry;
}

- deactivate_task(rq, next_task, 0);
- set_task_cpu(next_task, lowest_rq->cpu);
- activate_task(lowest_rq, next_task, 0);
+ push_task_chain(rq, lowest_rq, next_task);
resched_curr(lowest_rq);
ret = 1;

@@ -2437,9 +2504,7 @@ static void pull_rt_task(struct rq *this_rq)
if (is_migration_disabled(p)) {
push_task = get_push_task(src_rq);
} else {
- deactivate_task(src_rq, p, 0);
- set_task_cpu(p, this_cpu);
- activate_task(this_rq, p, 0);
+ push_task_chain(src_rq, this_rq, p);
resched = true;
}
/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1c832516b7e8..3a80cc4278ca 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2366,7 +2366,7 @@ extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_conte

static inline struct task_struct *get_push_task(struct rq *rq)
{
- struct task_struct *p = rq->curr;
+ struct task_struct *p = rq_selected(rq);

lockdep_assert_rq_held(rq);

@@ -3530,4 +3530,10 @@ static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
static inline void init_sched_mm_cid(struct task_struct *t) { }
#endif

+#ifdef CONFIG_SMP
+void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task);
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p);
+int pushable_chain(struct rq *rq, struct task_struct *p, int cpu);
+#endif
+
#endif /* _KERNEL_SCHED_SCHED_H */
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:25:47

by John Stultz

[permalink] [raw]
Subject: [PATCH v4 05/13] locking/mutex: Add task_struct::blocked_lock to serialize changes to the blocked_on state

From: Peter Zijlstra <[email protected]>

This patch was split out from the later "sched: Add proxy
execution" patch.

Adds blocked_lock to the task_struct so we can safely keep track
of which tasks are blocked on us.

This will be used for tracking blocked-task/mutex chains with
the proxy-execution patch in a similar fashion to how priority
inheritance is done with rt_mutexes.
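
The intended nesting order is mutex::wait_lock outermost, then
task_struct::blocked_lock. An illustrative sketch (not lifted verbatim
from the diff) of how a writer of p->blocked_on is expected to look:

	raw_spin_lock_irqsave(&lock->wait_lock, flags);
	raw_spin_lock(&current->blocked_lock);

	/* blocked_on only changes while blocked_lock is held */
	current->blocked_on = lock;

	raw_spin_unlock(&current->blocked_lock);
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);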

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[rebased, added comments and changelog]
Signed-off-by: Juri Lelli <[email protected]>
[Fixed rebase conflicts]
[squashed sched: Ensure blocked_on is always guarded by blocked_lock]
Signed-off-by: Valentin Schneider <[email protected]>
[fix rebase conflicts, various fixes & tweaks commented inline]
[squashed sched: Use rq->curr vs rq->proxy checks]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Split out from bigger patch]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Split out into its own patch
v4:
* Remove verbose comments/questions to avoid review
distractions, as suggested by Dietmar
* Fixed nested block_on locking for ww_mutex access
---
include/linux/sched.h | 1 +
init/init_task.c | 1 +
kernel/fork.c | 1 +
kernel/locking/mutex.c | 22 ++++++++++++++++++----
kernel/locking/ww_mutex.h | 6 ++++++
5 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index a312a2ff47bf..6b0d4b398b31 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1142,6 +1142,7 @@ struct task_struct {
#endif

struct mutex *blocked_on; /* lock we're blocked on */
+ raw_spinlock_t blocked_lock;

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
int non_block_count;
diff --git a/init/init_task.c b/init/init_task.c
index ff6c4b9bfe6b..189ce67e9704 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -130,6 +130,7 @@ struct task_struct init_task
.journal_info = NULL,
INIT_CPU_TIMERS(init_task)
.pi_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock),
+ .blocked_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.blocked_lock),
.timer_slack_ns = 50000, /* 50 usec default slack */
.thread_pid = &init_struct_pid,
.thread_group = LIST_HEAD_INIT(init_task.thread_group),
diff --git a/kernel/fork.c b/kernel/fork.c
index 9244c540bb13..1ea1b2d527bb 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2359,6 +2359,7 @@ __latent_entropy struct task_struct *copy_process(
ftrace_graph_init_task(p);

rt_mutex_init_task(p);
+ raw_spin_lock_init(&p->blocked_lock);

lockdep_assert_irqs_enabled();
#ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d7a202c35ebe..ac3d2e350fac 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -616,6 +616,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
}

raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(&current->blocked_lock);
/*
* After waiting to acquire the wait_lock, try again.
*/
@@ -677,6 +678,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err;
}

+ raw_spin_unlock(&current->blocked_lock);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
if (ww_ctx)
ww_ctx_wake(ww_ctx);
@@ -684,6 +686,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas

first = __mutex_waiter_is_first(lock, &waiter);

+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(&current->blocked_lock);
/*
* Gets reset by ttwu_runnable().
*/
@@ -698,15 +702,23 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
break;

if (first) {
+ bool acquired;
+
+ /*
+ * mutex_optimistic_spin() can schedule, so we need to
+ * release these locks before calling it.
+ */
+ raw_spin_unlock(&current->blocked_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
- if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
+ acquired = mutex_optimistic_spin(lock, ww_ctx, &waiter);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(&current->blocked_lock);
+ if (acquired)
break;
trace_contention_begin(lock, LCB_F_MUTEX);
}
-
- raw_spin_lock_irqsave(&lock->wait_lock, flags);
}
- raw_spin_lock_irqsave(&lock->wait_lock, flags);
acquired:
__set_current_state(TASK_RUNNING);

@@ -733,6 +745,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
if (ww_ctx)
ww_mutex_lock_acquired(ww, ww_ctx);

+ raw_spin_unlock(&current->blocked_lock);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
if (ww_ctx)
ww_ctx_wake(ww_ctx);
@@ -745,6 +758,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
__mutex_remove_waiter(lock, &waiter);
err_early_kill:
trace_contention_end(lock, ret);
+ raw_spin_unlock(&current->blocked_lock);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, ip);
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 7d623417b496..8378b533bb1e 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -287,6 +287,8 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
return false;

if (waiter->ww_ctx->acquired > 0 && __ww_ctx_less(waiter->ww_ctx, ww_ctx)) {
+ /* nested as we should hold current->blocked_lock already */
+ raw_spin_lock_nested(&waiter->task->blocked_lock, SINGLE_DEPTH_NESTING);
#ifndef WW_RT
debug_mutex_wake_waiter(lock, waiter);
#endif
@@ -297,6 +299,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
* blocked_on relationships that can't resolve.
*/
waiter->task->blocked_on = NULL;
+ raw_spin_unlock(&waiter->task->blocked_lock);
}

return true;
@@ -343,6 +346,8 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
* wakeup pending to re-read the wounded state.
*/
if (owner != current) {
+ /* nested as we should hold current->blocked_lock already */
+ raw_spin_lock_nested(&owner->blocked_lock, SINGLE_DEPTH_NESTING);
wake_q_add(&ww_ctx->wake_q, owner);
/*
* When waking up the task to wound, be sure to clear the
@@ -350,6 +355,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
* blocked_on relationships that can't resolve.
*/
owner->blocked_on = NULL;
+ raw_spin_unlock(&owner->blocked_lock);
}
return true;
}
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:26:02

by John Stultz

[permalink] [raw]
Subject: [PATCH v4 13/13] sched: Fixups to find_exec_ctx

find_exec_ctx() would sometimes cause the RT task pushing logic
to try to push tasks in the chain that ends in rq->curr.

This caused lots of migration noise and effectively a livelock
where tasks would get pushed off to other CPUs, then
proxy-migrated back to the lock owner's CPU, over and over.

This kept other CPUs constantly proxy-migrating away and
never actually selecting a task to run - effectively
hanging the system.

So this patch reworks some of the find_exec_ctx logic
so we stop when we hit rq->curr, and changes the logic
that was returning NULL when we came across
rq_selected(), as I'm not sure why we'd stop there.
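
After this change, the walk looks roughly like the following
(paraphrased view of the resulting loop; the actual delta is in the
hunk below):

	for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
	     exec_ctx = owner) {
		owner = __mutex_owner(exec_ctx->blocked_on);
		if (owner == exec_ctx)
			break;
		/* If we get to current, that's the exec ctx! */
		if (task_current(rq, owner))
			return owner;
		if (!task_queued_on_rq(rq, owner))
			return NULL;	/* chain leads off this rq */
	}
	return exec_ctx;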

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: John Stultz <[email protected]>
---
kernel/sched/core.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e0e6c2feefd0..9cdabb79d450 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3881,7 +3881,15 @@ struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
if (owner == exec_ctx)
break;

- if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) {
+ /* If we get to current, that's the exec ctx! */
+ if (task_current(rq, owner))
+ return owner;
+
+ /*
+ * XXX This previously was checking task_current_selected()
+ * but that doesn't make much sense to me. -jstultz
+ */
+ if (!task_queued_on_rq(rq, owner)) {
exec_ctx = NULL;
break;
}
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:27:57

by John Stultz

[permalink] [raw]
Subject: [PATCH v4 09/13] sched: Add proxy execution

From: Peter Zijlstra <[email protected]>

The classic solution to priority-inversion is priority-
inheritance, which the kernel supports via the rt_mutex.

However, priority-inheritance only really works for interactions
between RT tasks and lower-priority tasks (RT or OTHER), as it
utilizes RT's strict priority ordering. With CFS and DEADLINE
classes, the next task chosen by the scheduler does not use a
linear priority ordering. So a more general solution is needed.

Proxy Execution provides just that: It allows a mutex owner to be
run using the entire scheduling-context of tasks that are blocked
waiting on that mutex.

The basic mechanism is implemented by this patch, the core of
which resides in the proxy() function. Tasks blocked on mutexes
are not dequeued, so, if one of them is selected by schedule()
as the next task to be run on a CPU, proxy() is used to walk the
blocked_on relation and find a proxy task (a lock owner) to run
on the lock-waiters behalf (utilizing the lock-waiters
scheduling context).

This can be thought of as similar to rt_mutex priority-
inheritance, but utilizes the scheduler's pick_next_task()
function to determine the most important task to run next (from
the set of runnable *and* mutex-blocked tasks) rather than an
integer priority value. Then the proxy() function finds a
dependent lock owner to run, effectively boosting it by running
it with the selected task's scheduler context.

Here come the tricky bits. In fact, the owner task might be in
all sorts of states when a proxy is found (blocked, executing on
a different CPU, etc.). Details on how to handle different
situations are to be found in proxy() code comments.
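
As a rough sketch of the flow (heavily simplified: it ignores owners
that are sleeping, migrating, or running on another CPU, all of which
the real proxy() has to handle):

	struct task_struct *next = pick_next_task(rq, prev, &rf);

	if (task_is_blocked(next)) {
		struct task_struct *owner = next;

		/* Walk the blocked_on chain to a task that can actually run. */
		while (task_is_blocked(owner))
			owner = __mutex_owner(owner->blocked_on);

		rq_set_selected(rq, next);	/* scheduling context: the picked waiter */
		next = owner;			/* execution context: the lock owner */
	}
	/* context_switch() then runs next using rq_selected(rq)'s parameters */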

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[rebased, added comments and changelog]
Signed-off-by: Juri Lelli <[email protected]>
[Fixed rebase conflicts]
[squashed sched: Ensure blocked_on is always guarded by blocked_lock]
Signed-off-by: Valentin Schneider <[email protected]>
[fix rebase conflicts, various fixes & tweaks commented inline]
[squashed sched: Use rq->curr vs rq->proxy checks]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Rebased, split up, and folded in changes from Juri
Lelli and Connor O'Brien, added additional locking on
get_task_blocked_on(next) logic, pretty major rework to better
conditionalize logic on CONFIG_PROXY_EXEC and split up the very
large proxy() function - hopefully without changes to logic /
behavior]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Numerous changes folded in
* Split out some of the logic into separate patches
* Break up the proxy() function so its a bit easier to read
and is better conditionalized on CONFIG_PROXY_EXEC
v3:
* Improve comments
* Added fix to call __balance_callbacks before we call
pick_next_task() again, as a callback may have been set
causing rq_pin_lock to generate warnings.
* Added fix to call __balance_callbacks before we drop
the rq lock in proxy_migrate_task, to avoid rq_pin_lock
from generating warnings if a callback was set
v4:
* Rename blocked_proxy -> blocked_donor to clarify relationship
* Fix null ptr deref at end of proxy()
* Fix null ptr deref in ttwu_proxy_skip_wakeup() path
* Remove verbose comments/questions to avoid review
distractions, as suggested by Dietmar
* Reword and expand commit message to provide more detailed
context on how the idea works.
* Minor rebase for moving *_task_blocked_on() wrappers to be
a later add on to the main patch series.

TODO: Finish conditionalization edge cases
---
include/linux/sched.h | 2 +
init/Kconfig | 7 +
kernel/Kconfig.locks | 2 +-
kernel/fork.c | 2 +
kernel/locking/mutex.c | 37 ++-
kernel/sched/core.c | 525 +++++++++++++++++++++++++++++++++++++++-
kernel/sched/deadline.c | 2 +-
kernel/sched/fair.c | 9 +-
kernel/sched/rt.c | 3 +-
kernel/sched/sched.h | 20 +-
10 files changed, 594 insertions(+), 15 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6b0d4b398b31..8ac9db6ca747 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1141,7 +1141,9 @@ struct task_struct {
struct rt_mutex_waiter *pi_blocked_on;
#endif

+ struct task_struct *blocked_donor; /* task that is boosting us */
struct mutex *blocked_on; /* lock we're blocked on */
+ struct list_head blocked_entry; /* tasks blocked on us */
raw_spinlock_t blocked_lock;

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
diff --git a/init/Kconfig b/init/Kconfig
index 32c24950c4ce..43abaffc7dfa 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -907,6 +907,13 @@ config NUMA_BALANCING_DEFAULT_ENABLED
If set, automatic NUMA balancing will be enabled if running on a NUMA
machine.

+config PROXY_EXEC
+ bool "Proxy Execution"
+ default n
+ help
+ This option enables proxy execution, a mechanism for mutex owning
+ tasks to inherit the scheduling context of higher priority waiters.
+
menuconfig CGROUPS
bool "Control Group support"
select KERNFS
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 4198f0273ecd..791c98f1d329 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -226,7 +226,7 @@ config ARCH_SUPPORTS_ATOMIC_RMW

config MUTEX_SPIN_ON_OWNER
def_bool y
- depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW
+ depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW && !PROXY_EXEC

config RWSEM_SPIN_ON_OWNER
def_bool y
diff --git a/kernel/fork.c b/kernel/fork.c
index 1ea1b2d527bb..2451eb8bcfe7 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2462,7 +2462,9 @@ __latent_entropy struct task_struct *copy_process(
lockdep_init_task(p);
#endif

+ p->blocked_donor = NULL; /* nobody is boosting us yet */
p->blocked_on = NULL; /* not blocked yet */
+ INIT_LIST_HEAD(&p->blocked_entry);

#ifdef CONFIG_BCACHE
p->sequential_io = 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 8c9f9dffe473..eabfd66ce224 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -905,11 +905,13 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
{
struct task_struct *next = NULL;
DEFINE_WAKE_Q(wake_q);
- unsigned long owner;
+ /* Always force HANDOFF for Proxy Exec for now. Revisit. */
+ unsigned long owner = MUTEX_FLAG_HANDOFF;
unsigned long flags;

mutex_release(&lock->dep_map, ip);

+#ifndef CONFIG_PROXY_EXEC
/*
* Release the lock before (potentially) taking the spinlock such that
* other contenders can get on with things ASAP.
@@ -932,10 +934,38 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
return;
}
}
+#endif

raw_spin_lock_irqsave(&lock->wait_lock, flags);
debug_mutex_unlock(lock);
- if (!list_empty(&lock->wait_list)) {
+
+#ifdef CONFIG_PROXY_EXEC
+ raw_spin_lock(&current->blocked_lock);
+ /*
+ * If we have a task boosting us, and that task was boosting us through
+ * this lock, hand the lock to that task, as that is the highest
+ * waiter, as selected by the scheduling function.
+ */
+ next = current->blocked_donor;
+ if (next) {
+ struct mutex *next_lock;
+
+ raw_spin_lock_nested(&next->blocked_lock, SINGLE_DEPTH_NESTING);
+ next_lock = next->blocked_on;
+ raw_spin_unlock(&next->blocked_lock);
+ if (next_lock != lock) {
+ next = NULL;
+ } else {
+ wake_q_add(&wake_q, next);
+ current->blocked_donor = NULL;
+ }
+ }
+#endif
+
+ /*
+ * Failing that, pick any on the wait list.
+ */
+ if (!next && !list_empty(&lock->wait_list)) {
/* get the first entry from the wait-list: */
struct mutex_waiter *waiter =
list_first_entry(&lock->wait_list,
@@ -951,6 +981,9 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
__mutex_handoff(lock, next);

preempt_disable();
+#ifdef CONFIG_PROXY_EXEC
+ raw_spin_unlock(&current->blocked_lock);
+#endif
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

wake_up_q(&wake_q);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3dce69feb934..328776421c7a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -95,6 +95,7 @@
#include "../workqueue_internal.h"
#include "../../io_uring/io-wq.h"
#include "../smpboot.h"
+#include "../locking/mutex.h"

EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
@@ -2799,8 +2800,15 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
struct set_affinity_pending my_pending = { }, *pending = NULL;
bool stop_pending, complete = false;

- /* Can the task run on the task's current CPU? If so, we're done */
- if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
+ /*
+ * Can the task run on the task's current CPU? If so, we're done
+ *
+ * We are also done if the task is selected, boosting a lock-
+ * holding proxy, (and potentially has been migrated outside its
+ * current or previous affinity mask)
+ */
+ if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask) ||
+ (task_current_selected(rq, p) && !task_current(rq, p))) {
struct task_struct *push_task = NULL;

if ((flags & SCA_MIGRATE_ENABLE) &&
@@ -3713,6 +3721,54 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
trace_sched_wakeup(p);
}

+#ifdef CONFIG_PROXY_EXEC
+static void activate_task_and_blocked_ent(struct rq *rq, struct task_struct *p, int en_flags)
+{
+ /*
+ * By calling activate_task with blocked_lock held, we order against
+ * the proxy() blocked_task case such that no more blocked tasks will
+ * be enqueued on p once we release p->blocked_lock.
+ */
+ raw_spin_lock(&p->blocked_lock);
+ activate_task(rq, p, en_flags);
+ raw_spin_unlock(&p->blocked_lock);
+
+ /*
+ * A whole bunch of 'proxy' tasks back this blocked task, wake
+ * them all up to give this task its 'fair' share.
+ */
+ while (!list_empty(&p->blocked_entry)) {
+ struct task_struct *pp =
+ list_first_entry(&p->blocked_entry,
+ struct task_struct,
+ blocked_entry);
+ raw_spin_lock(&pp->blocked_lock);
+ BUG_ON(pp->blocked_entry.prev != &p->blocked_entry);
+
+ list_del_init(&pp->blocked_entry);
+ if (READ_ONCE(pp->on_rq)) {
+ /*
+ * We raced with a non mutex handoff activation of pp.
+ * That activation will also take care of activating
+ * all of the tasks after pp in the blocked_entry list,
+ * so we're done here.
+ */
+ raw_spin_unlock(&pp->blocked_lock);
+ break;
+ }
+ __set_task_cpu(pp, cpu_of(rq));
+ activate_task(rq, pp, en_flags);
+ resched_curr(rq);
+ raw_spin_unlock(&pp->blocked_lock);
+ }
+}
+#else
+static inline void activate_task_and_blocked_ent(struct rq *rq, struct task_struct *p, int en_flags)
+{
+ activate_task(rq, p, en_flags);
+}
+#endif
+
static void
ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
struct rq_flags *rf)
@@ -3734,7 +3790,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
atomic_dec(&task_rq(p)->nr_iowait);
}

- activate_task(rq, p, en_flags);
+ activate_task_and_blocked_ent(rq, p, en_flags);
+
check_preempt_curr(rq, p, wake_flags);

ttwu_do_wakeup(p);
@@ -3767,6 +3824,75 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
#endif
}

+#ifdef CONFIG_PROXY_EXEC
+bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struct *p)
+{
+ if (task_current(rq, p)) {
+ bool ret = true;
+
+ raw_spin_lock(&p->blocked_lock);
+ if (task_is_blocked(p) && __mutex_owner(p->blocked_on) == p)
+ p->blocked_on = NULL;
+ if (!task_is_blocked(p))
+ ret = false;
+ raw_spin_unlock(&p->blocked_lock);
+ return ret;
+ }
+
+ /*
+ * Since we don't dequeue for blocked-on relations, we'll always
+ * trigger the on_rq_queued() clause for them.
+ */
+ if (task_is_blocked(p)) {
+ raw_spin_lock(&p->blocked_lock);
+
+ if (!p->blocked_on || __mutex_owner(p->blocked_on) != p) {
+ /*
+ * p already woke, ran and blocked on another mutex.
+ * Since a successful wakeup already happened, we're
+ * done.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ return true;
+ }
+
+ p->blocked_on = NULL;
+ if (!cpumask_test_cpu(cpu_of(rq), p->cpus_ptr)) {
+ /*
+ * proxy stuff moved us outside of the affinity mask;
+ * 'sleep' now and fail the direct wakeup so that the
+ * normal wakeup path will fix things.
+ */
+ deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
+ if (task_current_selected(rq, p)) {
+ /*
+ * If p is the proxy, then remove lingering
+ * references to it from rq and sched_class structs after
+ * dequeueing.
+ */
+ put_prev_task(rq, p);
+ rq_set_selected(rq, rq->idle);
+ }
+ resched_curr(rq);
+ raw_spin_unlock(&p->blocked_lock);
+ return true;
+ }
+ /*
+ * Must resched after killing a blocked_on relation. The currently
+ * executing context might not be the most eligible anymore.
+ */
+ resched_curr(rq);
+ raw_spin_unlock(&p->blocked_lock);
+ }
+ return false;
+}
+#else
+static inline bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struct *p)
+{
+ return false;
+}
+#endif
+
/*
* Consider @p being inside a wait loop:
*
@@ -3799,9 +3925,15 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
int ret = 0;

rq = __task_rq_lock(p, &rf);
- if (!task_on_rq_queued(p))
+ if (!task_on_rq_queued(p)) {
+ BUG_ON(task_is_running(p));
goto out_unlock;
+ }

+ /*
+ * ttwu_do_wakeup()->
+ * check_preempt_curr() may use rq clock
+ */
if (!task_on_cpu(rq, p)) {
/*
* When on_rq && !on_cpu the task is preempted, see if
@@ -3810,8 +3942,13 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
update_rq_clock(rq);
check_preempt_curr(rq, p, wake_flags);
}
+
+ if (ttwu_proxy_skip_wakeup(rq, p))
+ goto out_unlock;
+
ttwu_do_wakeup(p);
ret = 1;
+
out_unlock:
__task_rq_unlock(rq, &rf);

@@ -4225,6 +4362,11 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
goto unlock;

+ if (task_is_blocked(p)) {
+ success = 0;
+ goto unlock;
+ }
+
#ifdef CONFIG_SMP
/*
* Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
@@ -5620,6 +5762,15 @@ void scheduler_tick(void)

rq_lock(rq, &rf);

+#ifdef CONFIG_PROXY_EXEC
+ if (task_cpu(curr) != cpu) {
+ BUG_ON(!test_preempt_need_resched() &&
+ !tif_need_resched());
+ rq_unlock(rq, &rf);
+ return;
+ }
+#endif
+
update_rq_clock(rq);
thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);
@@ -6520,6 +6671,332 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
# define SM_MASK_PREEMPT SM_PREEMPT
#endif

+#ifdef CONFIG_PROXY_EXEC
+
+static struct task_struct *
+proxy_migrate_task(struct rq *rq, struct task_struct *next,
+ struct rq_flags *rf, struct task_struct *p,
+ int that_cpu, bool curr_in_chain)
+{
+ struct rq *that_rq;
+ LIST_HEAD(migrate_list);
+
+ /*
+ * If the blocked-on relationship crosses CPUs, migrate @p to the
+ * @owner's CPU.
+ *
+ * This is because we must respect the CPU affinity of execution
+ * contexts (@owner) but we can ignore affinity for scheduling
+ * contexts (@p). So we have to move scheduling contexts towards
+ * potential execution contexts.
+ */
+ that_rq = cpu_rq(that_cpu);
+
+ /*
+ * @owner can disappear; simply migrate to @that_cpu and leave that CPU
+ * to sort things out.
+ */
+
+ /*
+ * Since we're going to drop @rq, we have to put(@next) first,
+ * otherwise we have a reference that no longer belongs to us. Use
+ * rq->idle to fill the void and make the next pick_next_task()
+ * invocation happy.
+ *
+ * CPU0 CPU1
+ *
+ * B mutex_lock(X)
+ *
+ * A mutex_lock(X) <- B
+ * A __schedule()
+ * A pick->A
+ * A proxy->B
+ * A migrate A to CPU1
+ * B mutex_unlock(X) -> A
+ * B __schedule()
+ * B pick->A
+ * B switch_to (A)
+ * A ... does stuff
+ * A ... is still running here
+ *
+ * * BOOM *
+ */
+ put_prev_task(rq, next);
+ if (curr_in_chain) {
+ rq_set_selected(rq, rq->idle);
+ set_tsk_need_resched(rq->idle);
+ return rq->idle;
+ }
+ rq_set_selected(rq, rq->idle);
+
+ for (; p; p = p->blocked_donor) {
+ int wake_cpu = p->wake_cpu;
+
+ WARN_ON(p == rq->curr);
+
+ deactivate_task(rq, p, 0);
+ set_task_cpu(p, that_cpu);
+ /*
+ * We can abuse blocked_entry to migrate the thing,
+ * because @p is still on the rq.
+ */
+ list_add(&p->blocked_entry, &migrate_list);
+
+ /*
+ * Preserve p->wake_cpu, such that we can tell where it
+ * used to run later.
+ */
+ p->wake_cpu = wake_cpu;
+ }
+
+ if (rq->balance_callback)
+ __balance_callbacks(rq);
+
+ rq_unpin_lock(rq, rf);
+ raw_spin_rq_unlock(rq);
+ raw_spin_rq_lock(that_rq);
+
+ while (!list_empty(&migrate_list)) {
+ p = list_first_entry(&migrate_list, struct task_struct, blocked_entry);
+ list_del_init(&p->blocked_entry);
+
+ enqueue_task(that_rq, p, 0);
+ check_preempt_curr(that_rq, p, 0);
+ p->on_rq = TASK_ON_RQ_QUEUED;
+ }
+
+ raw_spin_rq_unlock(that_rq);
+ raw_spin_rq_lock(rq);
+ rq_repin_lock(rq, rf);
+
+ return NULL; /* Retry task selection on _this_ CPU. */
+}
+
+static inline struct task_struct *
+proxy_resched_idle(struct rq *rq, struct task_struct *next)
+{
+ put_prev_task(rq, next);
+ rq_set_selected(rq, rq->idle);
+ set_tsk_need_resched(rq->idle);
+ return rq->idle;
+}
+
+static void proxy_enqueue_on_owner(struct rq *rq, struct task_struct *p,
+ struct task_struct *owner,
+ struct task_struct *next)
+{
+ /*
+ * Walk back up the blocked_donor relation and enqueue them all on @owner
+ *
+ * ttwu_activate() will pick them up and place them on whatever rq
+ * @owner will run next.
+ */
+ if (!owner->on_rq) {
+ for (; p; p = p->blocked_donor) {
+ if (p == owner)
+ continue;
+ BUG_ON(!p->on_rq);
+ deactivate_task(rq, p, DEQUEUE_SLEEP);
+ if (task_current_selected(rq, p)) {
+ put_prev_task(rq, next);
+ rq_set_selected(rq, rq->idle);
+ }
+ /*
+ * ttwu_do_activate must not have a chance to activate p
+ * elsewhere before it's fully extricated from its old rq.
+ */
+ smp_mb();
+ list_add(&p->blocked_entry, &owner->blocked_entry);
+ }
+ }
+}
+
+/*
+ * Find who @next (currently blocked on a mutex) can proxy for.
+ *
+ * Follow the blocked-on relation:
+ *
+ * ,-> task
+ * | | blocked-on
+ * | v
+ * blocked_donor | mutex
+ * | | owner
+ * | v
+ * `-- task
+ *
+ * and set the blocked_donor relation; the latter is used by the mutex
+ * code to find which (blocked) task to hand the lock off to.
+ *
+ * Lock order:
+ *
+ * p->pi_lock
+ * rq->lock
+ * mutex->wait_lock
+ * p->blocked_lock
+ *
+ * Returns the task that is going to be used as execution context (the one
+ * that is actually going to be put to run on cpu_of(rq)).
+ */
+static struct task_struct *
+proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
+{
+ struct task_struct *p = next;
+ struct task_struct *owner = NULL;
+ bool curr_in_chain = false;
+ int this_cpu, that_cpu;
+ struct mutex *mutex;
+
+ this_cpu = cpu_of(rq);
+
+ /*
+ * Follow blocked_on chain.
+ *
+ * TODO: deadlock detection
+ */
+ for (p = next; p->blocked_on; p = owner) {
+ mutex = p->blocked_on;
+ /* Something changed in the chain, pick_again */
+ if (!mutex)
+ return NULL;
+
+ /*
+ * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
+ * and ensure @owner sticks around.
+ */
+ raw_spin_lock(&mutex->wait_lock);
+ raw_spin_lock(&p->blocked_lock);
+
+ /* Check again that p is blocked with blocked_lock held */
+ if (!task_is_blocked(p) || mutex != p->blocked_on) {
+ /*
+ * Something changed in the blocked_on chain and
+ * we don't know if only at this level. So, let's
+ * just bail out completely and let __schedule
+ * figure things out (pick_again loop).
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return NULL;
+ }
+
+ if (task_current(rq, p))
+ curr_in_chain = true;
+
+ owner = __mutex_owner(mutex);
+ if (task_cpu(owner) != this_cpu) {
+ that_cpu = task_cpu(owner);
+ /*
+ * @owner can disappear; simply migrate to @that_cpu and leave that CPU
+ * to sort things out.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+
+ return proxy_migrate_task(rq, next, rf, p, that_cpu, curr_in_chain);
+ }
+
+ if (task_on_rq_migrating(owner)) {
+ /*
+ * One of the chain of mutex owners is currently migrating to this
+ * CPU, but has not yet been enqueued because we are holding the
+ * rq lock. As a simple solution, just schedule rq->idle to give
+ * the migration a chance to complete. Much like the migrate_task
+ * case we should end up back in proxy(), this time hopefully with
+ * all relevant tasks already enqueued.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq, next);
+ }
+
+ if (!owner->on_rq) {
+ /*
+ * rq->curr must not be added to the blocked_entry list or else
+ * ttwu_do_activate could enqueue it elsewhere before it switches
+ * out here. The approach to avoiding this is the same as in the
+ * migrate_task case.
+ */
+ if (curr_in_chain) {
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq, next);
+ }
+
+ /*
+ * If !@owner->on_rq, holding @rq->lock will not pin the task,
+ * so we cannot drop @mutex->wait_lock until we're sure it's a blocked
+ * task on this rq.
+ *
+ * We use @owner->blocked_lock to serialize against ttwu_activate().
+ * Either we see its new owner->on_rq or it will see our list_add().
+ */
+ if (owner != p) {
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_lock(&owner->blocked_lock);
+ }
+
+ proxy_enqueue_on_owner(rq, p, owner, next);
+
+ if (task_current_selected(rq, next)) {
+ put_prev_task(rq, next);
+ rq_set_selected(rq, rq->idle);
+ }
+ raw_spin_unlock(&owner->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+
+ return NULL; /* retry task selection */
+ }
+
+ if (owner == p) {
+ /*
+ * It's possible we interleave with mutex_unlock like:
+ *
+ * lock(&rq->lock);
+ * proxy()
+ * mutex_unlock()
+ * lock(&wait_lock);
+ * next(owner) = current->blocked_donor;
+ * unlock(&wait_lock);
+ *
+ * wake_up_q();
+ * ...
+ * ttwu_runnable()
+ * __task_rq_lock()
+ * lock(&wait_lock);
+ * owner == p
+ *
+ * Which leaves us to finish the ttwu_runnable() and make it go.
+ *
+ * So schedule rq->idle so that ttwu_runnable can get the rq lock
+ * and mark owner as running.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq, next);
+ }
+
+ /*
+ * OK, now we're absolutely sure @owner is not blocked _and_
+ * on this rq, therefore holding @rq->lock is sufficient to
+ * guarantee its existence, as per ttwu_remote().
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+
+ owner->blocked_donor = p;
+ }
+
+ WARN_ON_ONCE(owner && !owner->on_rq);
+ return owner;
+}
+#else /* PROXY_EXEC */
+static struct task_struct *
+proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
+{
+ return next;
+}
+#endif /* PROXY_EXEC */
+
/*
* __schedule() is the main scheduler function.
*
@@ -6567,6 +7044,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
struct rq_flags rf;
struct rq *rq;
int cpu;
+ bool preserve_need_resched = false;

cpu = smp_processor_id();
rq = cpu_rq(cpu);
@@ -6612,7 +7090,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
if (signal_pending_state(prev_state, prev)) {
WRITE_ONCE(prev->__state, TASK_RUNNING);
- } else {
+ } else if (!task_is_blocked(prev)) {
prev->sched_contributes_to_load =
(prev_state & TASK_UNINTERRUPTIBLE) &&
!(prev_state & TASK_NOLOAD) &&
@@ -6638,13 +7116,43 @@ static void __sched notrace __schedule(unsigned int sched_mode)
atomic_inc(&rq->nr_iowait);
delayacct_blkio_start();
}
+ } else {
+ /*
+ * Let's make this task, which is blocked on
+ * a mutex, (push/pull)able (RT/DL).
+ * Unfortunately we can only deal with that by
+ * means of a dequeue/enqueue cycle. :-/
+ */
+ dequeue_task(rq, prev, 0);
+ enqueue_task(rq, prev, 0);
}
switch_count = &prev->nvcsw;
}

- next = pick_next_task(rq, prev, &rf);
+pick_again:
+ /*
+ * If the picked task is actually blocked, it means it can act as a
+ * proxy for the task that is holding the mutex the picked task is
+ * blocked on. Get a reference to the blocked (going to be proxy) task here.
+ * Note that if next isn't actually blocked we will have rq->proxy ==
+ * rq->curr == next in the end, which is intended and means that proxy
+ * execution is currently "not in use".
+ */
+ next = pick_next_task(rq, rq_selected(rq), &rf);
rq_set_selected(rq, next);
- clear_tsk_need_resched(prev);
+ next->blocked_donor = NULL;
+ if (unlikely(task_is_blocked(next))) {
+ next = proxy(rq, next, &rf);
+ if (!next) {
+ __balance_callbacks(rq);
+ goto pick_again;
+ }
+ if (next == rq->idle && prev == rq->idle)
+ preserve_need_resched = true;
+ }
+
+ if (!preserve_need_resched)
+ clear_tsk_need_resched(prev);
clear_preempt_need_resched();
#ifdef CONFIG_SCHED_DEBUG
rq->last_seen_need_resched_ns = 0;
@@ -6731,6 +7239,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
*/
SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);

+ if (task_is_blocked(tsk))
+ return;
+
/*
* If we are going to sleep and we have plugged IO queued,
* make sure to submit it to avoid deadlocks.
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d41d562df078..1d2711aee448 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1740,7 +1740,7 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)

enqueue_dl_entity(&p->dl, flags);

- if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
+ if (!task_current(rq, p) && p->nr_cpus_allowed > 1 && !task_is_blocked(p))
enqueue_pushable_dl_task(rq, p);
}

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 62c3c1762004..43efc576d2c6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8023,7 +8023,9 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
goto idle;

#ifdef CONFIG_FAIR_GROUP_SCHED
- if (!prev || prev->sched_class != &fair_sched_class)
+ if (!prev ||
+ prev->sched_class != &fair_sched_class ||
+ rq->curr != rq_selected(rq))
goto simple;

/*
@@ -8541,6 +8543,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)

lockdep_assert_rq_held(env->src_rq);

+ if (task_is_blocked(p))
+ return 0;
+
/*
* We do not migrate tasks that are:
* 1) throttled_lb_pair, or
@@ -8591,7 +8596,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
/* Record that we found at least one task that could run on dst_cpu */
env->flags &= ~LBF_ALL_PINNED;

- if (task_on_cpu(env->src_rq, p)) {
+ if (task_on_cpu(env->src_rq, p) || task_current_selected(env->src_rq, p)) {
schedstat_inc(p->stats.nr_failed_migrations_running);
return 0;
}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 3ba24c3fce20..f5b1075e8170 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1537,7 +1537,8 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)

enqueue_rt_entity(rt_se, flags);

- if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
+ if (!task_current(rq, p) && p->nr_cpus_allowed > 1 &&
+ !task_is_blocked(p))
enqueue_pushable_task(rq, p);
}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 29597a6fd65b..1c832516b7e8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2106,6 +2106,19 @@ static inline int task_current_selected(struct rq *rq, struct task_struct *p)
return rq_selected(rq) == p;
}

+#ifdef CONFIG_PROXY_EXEC
+static inline bool task_is_blocked(struct task_struct *p)
+{
+ return !!p->blocked_on;
+}
+#else /* !PROXY_EXEC */
+static inline bool task_is_blocked(struct task_struct *p)
+{
+ return false;
+}
+
+#endif /* PROXY_EXEC */
+
static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
{
#ifdef CONFIG_SMP
@@ -2267,12 +2280,17 @@ struct sched_class {

static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
{
- WARN_ON_ONCE(rq_selected(rq) != prev);
+ WARN_ON_ONCE(rq->curr != prev && prev != rq_selected(rq));
+
+ if (prev == rq_selected(rq) && task_cpu(prev) != cpu_of(rq))
+ return;
+
prev->sched_class->put_prev_task(rq, prev);
}

static inline void set_next_task(struct rq *rq, struct task_struct *next)
{
+ WARN_ON_ONCE(!task_current_selected(rq, next));
next->sched_class->set_next_task(rq, next, false);
}

--
2.41.0.rc0.172.g3f132b7071-goog
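
To make the blocked_on/blocked_donor walk in proxy() above easier to
follow, here is a minimal standalone sketch of the idea. The simple_task/
simple_mutex types and find_exec_context() are hypothetical simplifications
for illustration only; the real code additionally takes wait_lock and
blocked_lock, handles owners that are sleeping or on other CPUs, and (per
the TODO above) does not yet do deadlock detection:

struct simple_mutex;

struct simple_task {
	struct simple_mutex *blocked_on;	/* mutex this task waits on */
	struct simple_task *blocked_donor;	/* task donating its sched context */
};

struct simple_mutex {
	struct simple_task *owner;
};

/* Walk from the picked (possibly blocked) task down to a runnable owner. */
static struct simple_task *find_exec_context(struct simple_task *selected)
{
	struct simple_task *p = selected;

	while (p->blocked_on) {
		struct simple_task *owner = p->blocked_on->owner;

		owner->blocked_donor = p;	/* remember who is boosting us */
		p = owner;			/* keep following the chain */
	}
	return p;	/* execution context run on behalf of @selected */
}

mutex_unlock() later consults blocked_donor (see the __mutex_unlock_slowpath()
hunk earlier in this mail) to hand the lock back to the donor, i.e. the
highest waiter as selected by the scheduler.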


2023-06-01 06:28:42

by John Stultz

[permalink] [raw]
Subject: [PATCH v4 11/13] sched: Fix runtime accounting w/ proxy-execution

The idea here is that we want to charge vruntime to the selected task,
but charge sum_exec_runtime to the task actually being executed.

This way cputime accounting goes against the task actually running
but vruntime accounting goes against the selected task so we get
proper fairness.
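
As a rough standalone illustration of that split (a sketch with simplified,
hypothetical types; the weight argument stands in for the nice-weight
scaling that calc_delta_fair() performs in the real code, and this is not
the exact hunk below):

struct entity {
	unsigned long long exec_start;
	unsigned long long sum_exec_runtime;
	unsigned long long vruntime;
};

static void account_delta(struct entity *selected, struct entity *running,
			  unsigned long long now, unsigned long long weight)
{
	long long delta_exec = now - selected->exec_start;

	if (delta_exec <= 0)
		return;

	selected->exec_start = now;
	/* fairness (vruntime) follows the scheduler context... */
	selected->vruntime += delta_exec * weight;
	/* ...while raw CPU time is billed to whoever actually ran */
	running->sum_exec_runtime += delta_exec;
}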

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: John Stultz <[email protected]>
---
kernel/sched/fair.c | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 43efc576d2c6..c2e17bfa6b31 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -891,22 +891,36 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq)
}
#endif /* CONFIG_SMP */

-static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
+static s64 update_curr_se(struct rq *rq, struct sched_entity *se)
{
u64 now = rq_clock_task(rq);
s64 delta_exec;

- delta_exec = now - curr->exec_start;
+ /* Calculate the delta from selected se */
+ delta_exec = now - se->exec_start;
if (unlikely(delta_exec <= 0))
return delta_exec;

- curr->exec_start = now;
- curr->sum_exec_runtime += delta_exec;
+ /* Update selected se's exec_start */
+ se->exec_start = now;
+ if (entity_is_task(se)) {
+ struct task_struct *running = rq->curr;
+ /*
+ * If se is a task, we account the time
+ * against the running task, as w/ proxy-exec
+ * they may not be the same.
+ */
+ running->se.exec_start = now;
+ running->se.sum_exec_runtime += delta_exec;
+ } else {
+ /* If not task, account the time against se */
+ se->sum_exec_runtime += delta_exec;
+ }

if (schedstat_enabled()) {
struct sched_statistics *stats;

- stats = __schedstats_from_se(curr);
+ stats = __schedstats_from_se(se);
__schedstat_set(stats->exec_max,
max(delta_exec, stats->exec_max));
}
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:30:09

by John Stultz

[permalink] [raw]
Subject: [PATCH v4 02/13] locking/ww_mutex: Remove wakeups from under mutex::wait_lock

From: Peter Zijlstra <[email protected]>

In preparation to nest mutex::wait_lock under rq::lock we need to remove
wakeups from under it.
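
The pattern used to do this is the usual wake_q deferral: record which tasks
need waking while mutex::wait_lock is held, and only call wake_up_q() after
the lock has been dropped. A minimal sketch of the shape (struct foo and
foo_release() are hypothetical and not part of this patch; DEFINE_WAKE_Q(),
wake_q_add() and wake_up_q() are the existing kernel APIs):

#include <linux/sched.h>
#include <linux/sched/wake_q.h>
#include <linux/spinlock.h>

/* Hypothetical object with at most one pending waiter. */
struct foo {
	raw_spinlock_t wait_lock;
	struct task_struct *waiter;
};

static void foo_release(struct foo *f)
{
	DEFINE_WAKE_Q(wake_q);

	raw_spin_lock(&f->wait_lock);
	if (f->waiter) {
		wake_q_add(&wake_q, f->waiter);	/* defer, don't wake here */
		f->waiter = NULL;
	}
	raw_spin_unlock(&f->wait_lock);

	wake_up_q(&wake_q);	/* safe: wait_lock is no longer held */
}

For ww_mutex the wakeups are collected in the acquire context's wake_q and
flushed via ww_ctx_wake() once wait_lock has been released, as the diff
below does.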

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Move wake_q_init() as suggested by Waiman Long
---
include/linux/ww_mutex.h | 3 +++
kernel/locking/mutex.c | 8 ++++++++
kernel/locking/ww_mutex.h | 10 ++++++++--
3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index bb763085479a..9335b2202017 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -19,6 +19,7 @@

#include <linux/mutex.h>
#include <linux/rtmutex.h>
+#include <linux/sched/wake_q.h>

#if defined(CONFIG_DEBUG_MUTEXES) || \
(defined(CONFIG_PREEMPT_RT) && defined(CONFIG_DEBUG_RT_MUTEXES))
@@ -58,6 +59,7 @@ struct ww_acquire_ctx {
unsigned int acquired;
unsigned short wounded;
unsigned short is_wait_die;
+ struct wake_q_head wake_q;
#ifdef DEBUG_WW_MUTEXES
unsigned int done_acquire;
struct ww_class *ww_class;
@@ -137,6 +139,7 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
ctx->acquired = 0;
ctx->wounded = false;
ctx->is_wait_die = ww_class->is_wait_die;
+ wake_q_init(&ctx->wake_q);
#ifdef DEBUG_WW_MUTEXES
ctx->ww_class = ww_class;
ctx->done_acquire = 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d973fe6041bf..1582756914df 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -676,6 +676,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
}

raw_spin_unlock(&lock->wait_lock);
+ if (ww_ctx)
+ ww_ctx_wake(ww_ctx);
schedule_preempt_disabled();

first = __mutex_waiter_is_first(lock, &waiter);
@@ -725,6 +727,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
ww_mutex_lock_acquired(ww, ww_ctx);

raw_spin_unlock(&lock->wait_lock);
+ if (ww_ctx)
+ ww_ctx_wake(ww_ctx);
preempt_enable();
return 0;

@@ -736,6 +740,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
raw_spin_unlock(&lock->wait_lock);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, ip);
+ if (ww_ctx)
+ ww_ctx_wake(ww_ctx);
preempt_enable();
return ret;
}
@@ -946,9 +952,11 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
if (owner & MUTEX_FLAG_HANDOFF)
__mutex_handoff(lock, next);

+ preempt_disable();
raw_spin_unlock(&lock->wait_lock);

wake_up_q(&wake_q);
+ preempt_enable();
}

#ifndef CONFIG_DEBUG_LOCK_ALLOC
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 56f139201f24..e49ea5336473 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -161,6 +161,12 @@ static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)

#endif /* WW_RT */

+void ww_ctx_wake(struct ww_acquire_ctx *ww_ctx)
+{
+ wake_up_q(&ww_ctx->wake_q);
+ wake_q_init(&ww_ctx->wake_q);
+}
+
/*
* Wait-Die:
* The newer transactions are killed when:
@@ -284,7 +290,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
#ifndef WW_RT
debug_mutex_wake_waiter(lock, waiter);
#endif
- wake_up_process(waiter->task);
+ wake_q_add(&ww_ctx->wake_q, waiter->task);
}

return true;
@@ -331,7 +337,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
* wakeup pending to re-read the wounded state.
*/
if (owner != current)
- wake_up_process(owner);
+ wake_q_add(&ww_ctx->wake_q, owner);

return true;
}
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 06:30:21

by John Stultz

[permalink] [raw]
Subject: [PATCH v4 08/13] sched: Unnest ttwu_runnable in prep for proxy-execution

Slightly rework ttwu_runnable to minimize the nesting to help
make the proxy-execution changes easier to read.

Should be no logical change here.

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: John Stultz <[email protected]>
---
kernel/sched/core.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ace75aadb90b..3dce69feb934 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3799,18 +3799,20 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
int ret = 0;

rq = __task_rq_lock(p, &rf);
- if (task_on_rq_queued(p)) {
- if (!task_on_cpu(rq, p)) {
- /*
- * When on_rq && !on_cpu the task is preempted, see if
- * it should preempt the task that is current now.
- */
- update_rq_clock(rq);
- check_preempt_curr(rq, p, wake_flags);
- }
- ttwu_do_wakeup(p);
- ret = 1;
+ if (!task_on_rq_queued(p))
+ goto out_unlock;
+
+ if (!task_on_cpu(rq, p)) {
+ /*
+ * When on_rq && !on_cpu the task is preempted, see if
+ * it should preempt the task that is current now.
+ */
+ update_rq_clock(rq);
+ check_preempt_curr(rq, p, wake_flags);
}
+ ttwu_do_wakeup(p);
+ ret = 1;
+out_unlock:
__task_rq_unlock(rq, &rf);

return ret;
--
2.41.0.rc0.172.g3f132b7071-goog


2023-06-01 14:23:34

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v4 12/13] sched: Attempt to fix rt/dl load balancing via chain level balance

Hi John,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/sched/core]
[also build test WARNING on tip/locking/core tip/master tip/auto-latest linus/master v6.4-rc4 next-20230601]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/John-Stultz/sched-Unify-runtime-accounting-across-classes/20230601-140200
base: tip/sched/core
patch link: https://lore.kernel.org/r/20230601055846.2349566-13-jstultz%40google.com
patch subject: [PATCH v4 12/13] sched: Attempt to fix rt/dl load balancing via chain level balance
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20230601/[email protected]/config)
compiler: m68k-linux-gcc (GCC) 12.3.0
reproduce (this is a W=1 build):
mkdir -p ~/bin
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/82593266398c6feaed3ed5ec458986d8e16b6b74
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review John-Stultz/sched-Unify-runtime-accounting-across-classes/20230601-140200
git checkout 82593266398c6feaed3ed5ec458986d8e16b6b74
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=m68k olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash kernel/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> kernel/sched/core.c:3838:6: warning: no previous prototype for 'push_task_chain' [-Wmissing-prototypes]
3838 | void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
| ^~~~~~~~~~~~~~~
kernel/sched/core.c: In function 'push_task_chain':
kernel/sched/core.c:3858:42: error: 'struct rq' has no member named 'cpu'
3858 | set_task_cpu(task, dst_rq->cpu);
| ^~
kernel/sched/core.c: At top level:
>> kernel/sched/core.c:3870:21: warning: no previous prototype for 'find_exec_ctx' [-Wmissing-prototypes]
3870 | struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
| ^~~~~~~~~~~~~
>> kernel/sched/core.c:3898:5: warning: no previous prototype for 'pushable_chain' [-Wmissing-prototypes]
3898 | int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
| ^~~~~~~~~~~~~~
kernel/sched/core.c:3930:6: warning: no previous prototype for 'ttwu_proxy_skip_wakeup' [-Wmissing-prototypes]
3930 | bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struct *p)
| ^~~~~~~~~~~~~~~~~~~~~~
kernel/sched/core.c: In function 'proxy_migrate_task':
kernel/sched/core.c:6835:35: error: 'struct task_struct' has no member named 'wake_cpu'; did you mean 'wake_q'?
6835 | int wake_cpu = p->wake_cpu;
| ^~~~~~~~
| wake_q
kernel/sched/core.c:6851:20: error: 'struct task_struct' has no member named 'wake_cpu'; did you mean 'wake_q'?
6851 | p->wake_cpu = wake_cpu;
| ^~~~~~~~
| wake_q
kernel/sched/core.c:6854:15: error: 'struct rq' has no member named 'balance_callback'
6854 | if (rq->balance_callback)
| ^~


vim +/push_task_chain +3838 kernel/sched/core.c

3837
> 3838 void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
3839 {
3840 struct task_struct *owner;
3841
3842 lockdep_assert_rq_held(rq);
3843 lockdep_assert_rq_held(dst_rq);
3844
3845 BUG_ON(!task_queued_on_rq(rq, task));
3846 BUG_ON(task_current_selected(rq, task));
3847
3848 while (task) {
3849 if (!task_queued_on_rq(rq, task) || task_current_selected(rq, task))
3850 break;
3851
3852 if (task_is_blocked(task))
3853 owner = __mutex_owner(task->blocked_on);
3854 else
3855 owner = NULL;
3856
3857 deactivate_task(rq, task, 0);
> 3858 set_task_cpu(task, dst_rq->cpu);
3859 activate_task(dst_rq, task, 0);
3860 if (task == owner)
3861 break;
3862 task = owner;
3863 }
3864 }
3865
3866 /*
3867 * Returns the unblocked task at the end of the blocked chain starting with p
3868 * if that chain is composed entirely of tasks enqueued on rq, or NULL otherwise.
3869 */
> 3870 struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
3871 {
3872 struct task_struct *exec_ctx, *owner;
3873 struct mutex *mutex;
3874
3875 lockdep_assert_rq_held(rq);
3876
3877 for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
3878 exec_ctx = owner) {
3879 mutex = exec_ctx->blocked_on;
3880 owner = __mutex_owner(mutex);
3881 if (owner == exec_ctx)
3882 break;
3883
3884 if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) {
3885 exec_ctx = NULL;
3886 break;
3887 }
3888 }
3889 return exec_ctx;
3890 }
3891
3892 /*
3893 * Returns:
3894 * 1 if chain is pushable and affinity does not prevent pushing to cpu
3895 * 0 if chain is unpushable
3896 * -1 if chain is pushable but affinity blocks running on cpu.
3897 */
> 3898 int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
3899 {
3900 struct task_struct *exec_ctx;
3901
3902 lockdep_assert_rq_held(rq);
3903
3904 if (task_rq(p) != rq || !task_on_rq_queued(p))
3905 return 0;
3906
3907 exec_ctx = find_exec_ctx(rq, p);
3908 /*
3909 * Chain leads off the rq, we're free to push it anywhere.
3910 *
3911 * One wrinkle with relying on find_exec_ctx is that when the chain
3912 * leads to a task currently migrating to rq, we see the chain as
3913 * pushable & push everything prior to the migrating task. Even if
3914 * we checked explicitly for this case, we could still race with a
3915 * migration after the check.
3916 * This shouldn't permanently produce a bad state though, as proxy()
3917 * will send the chain back to rq and by that point the migration
3918 * should be complete & a proper push can occur.
3919 */
3920 if (!exec_ctx)
3921 return 1;
3922
3923 if (task_on_cpu(rq, exec_ctx) || exec_ctx->nr_cpus_allowed <= 1)
3924 return 0;
3925
3926 return cpumask_test_cpu(cpu, &exec_ctx->cpus_mask) ? 1 : -1;
3927 }
3928

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
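
A possible direction for the issues above (a sketch, not a tested fix): the
-Wmissing-prototypes warnings could be resolved either by declaring the new
helpers in kernel/sched/sched.h (assuming rt.c/dl.c are the intended users)
or by marking them static in core.c, and the 'struct rq' has no member
named 'cpu' error comes from rq->cpu only existing under CONFIG_SMP, so the
migration step could go through cpu_of() instead:

/* kernel/sched/sched.h (if the helpers are meant to be shared): */
void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task);
struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p);
int pushable_chain(struct rq *rq, struct task_struct *p, int cpu);

/* kernel/sched/core.c, in push_task_chain(): cpu_of() also builds on !SMP */
		set_task_cpu(task, cpu_of(dst_rq));

The errors around p->wake_cpu and rq->balance_callback in proxy_migrate_task()
reported above would additionally need CONFIG_SMP guards, or for that
migration path to be compiled out on !SMP builds.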

2023-06-01 14:37:55

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v4 12/13] sched: Attempt to fix rt/dl load balancing via chain level balance

Hi John,

kernel test robot noticed the following build errors:

[auto build test ERROR on tip/sched/core]
[also build test ERROR on tip/locking/core tip/master tip/auto-latest linus/master v6.4-rc4 next-20230601]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/John-Stultz/sched-Unify-runtime-accounting-across-classes/20230601-140200
base: tip/sched/core
patch link: https://lore.kernel.org/r/20230601055846.2349566-13-jstultz%40google.com
patch subject: [PATCH v4 12/13] sched: Attempt to fix rt/dl load balancing via chain level balance
config: csky-randconfig-r016-20230531 (https://download.01.org/0day-ci/archive/20230601/[email protected]/config)
compiler: csky-linux-gcc (GCC) 12.3.0
reproduce (this is a W=1 build):
mkdir -p ~/bin
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/82593266398c6feaed3ed5ec458986d8e16b6b74
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review John-Stultz/sched-Unify-runtime-accounting-across-classes/20230601-140200
git checkout 82593266398c6feaed3ed5ec458986d8e16b6b74
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=csky olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=csky SHELL=/bin/bash kernel/sched/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All error/warnings (new ones prefixed by >>):

>> kernel/sched/core.c:3838:6: warning: no previous prototype for 'push_task_chain' [-Wmissing-prototypes]
3838 | void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
| ^~~~~~~~~~~~~~~
kernel/sched/core.c: In function 'push_task_chain':
>> kernel/sched/core.c:3858:42: error: 'struct rq' has no member named 'cpu'
3858 | set_task_cpu(task, dst_rq->cpu);
| ^~
kernel/sched/core.c: At top level:
>> kernel/sched/core.c:3870:21: warning: no previous prototype for 'find_exec_ctx' [-Wmissing-prototypes]
3870 | struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
| ^~~~~~~~~~~~~
>> kernel/sched/core.c:3898:5: warning: no previous prototype for 'pushable_chain' [-Wmissing-prototypes]
3898 | int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
| ^~~~~~~~~~~~~~


vim +3858 kernel/sched/core.c

3837
> 3838 void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
3839 {
3840 struct task_struct *owner;
3841
3842 lockdep_assert_rq_held(rq);
3843 lockdep_assert_rq_held(dst_rq);
3844
3845 BUG_ON(!task_queued_on_rq(rq, task));
3846 BUG_ON(task_current_selected(rq, task));
3847
3848 while (task) {
3849 if (!task_queued_on_rq(rq, task) || task_current_selected(rq, task))
3850 break;
3851
3852 if (task_is_blocked(task))
3853 owner = __mutex_owner(task->blocked_on);
3854 else
3855 owner = NULL;
3856
3857 deactivate_task(rq, task, 0);
> 3858 set_task_cpu(task, dst_rq->cpu);
3859 activate_task(dst_rq, task, 0);
3860 if (task == owner)
3861 break;
3862 task = owner;
3863 }
3864 }
3865
3866 /*
3867 * Returns the unblocked task at the end of the blocked chain starting with p
3868 * if that chain is composed entirely of tasks enqueued on rq, or NULL otherwise.
3869 */
> 3870 struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
3871 {
3872 struct task_struct *exec_ctx, *owner;
3873 struct mutex *mutex;
3874
3875 lockdep_assert_rq_held(rq);
3876
3877 for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
3878 exec_ctx = owner) {
3879 mutex = exec_ctx->blocked_on;
3880 owner = __mutex_owner(mutex);
3881 if (owner == exec_ctx)
3882 break;
3883
3884 if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) {
3885 exec_ctx = NULL;
3886 break;
3887 }
3888 }
3889 return exec_ctx;
3890 }
3891
3892 /*
3893 * Returns:
3894 * 1 if chain is pushable and affinity does not prevent pushing to cpu
3895 * 0 if chain is unpushable
3896 * -1 if chain is pushable but affinity blocks running on cpu.
3897 */
> 3898 int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
3899 {
3900 struct task_struct *exec_ctx;
3901
3902 lockdep_assert_rq_held(rq);
3903
3904 if (task_rq(p) != rq || !task_on_rq_queued(p))
3905 return 0;
3906
3907 exec_ctx = find_exec_ctx(rq, p);
3908 /*
3909 * Chain leads off the rq, we're free to push it anywhere.
3910 *
3911 * One wrinkle with relying on find_exec_ctx is that when the chain
3912 * leads to a task currently migrating to rq, we see the chain as
3913 * pushable & push everything prior to the migrating task. Even if
3914 * we checked explicitly for this case, we could still race with a
3915 * migration after the check.
3916 * This shouldn't permanently produce a bad state though, as proxy()
3917 * will send the chain back to rq and by that point the migration
3918 * should be complete & a proper push can occur.
3919 */
3920 if (!exec_ctx)
3921 return 1;
3922
3923 if (task_on_cpu(rq, exec_ctx) || exec_ctx->nr_cpus_allowed <= 1)
3924 return 0;
3925
3926 return cpumask_test_cpu(cpu, &exec_ctx->cpus_mask) ? 1 : -1;
3927 }
3928

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-06-01 14:38:03

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v4 12/13] sched: Attempt to fix rt/dl load balancing via chain level balance

Hi John,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/sched/core]
[also build test WARNING on tip/locking/core tip/master tip/auto-latest linus/master v6.4-rc4 next-20230601]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/John-Stultz/sched-Unify-runtime-accounting-across-classes/20230601-140200
base: tip/sched/core
patch link: https://lore.kernel.org/r/20230601055846.2349566-13-jstultz%40google.com
patch subject: [PATCH v4 12/13] sched: Attempt to fix rt/dl load balancing via chain level balance
config: hexagon-randconfig-r013-20230531 (https://download.01.org/0day-ci/archive/20230601/[email protected]/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project 4faf3aaf28226a4e950c103a14f6fc1d1fdabb1b)
reproduce (this is a W=1 build):
mkdir -p ~/bin
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/82593266398c6feaed3ed5ec458986d8e16b6b74
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review John-Stultz/sched-Unify-runtime-accounting-across-classes/20230601-140200
git checkout 82593266398c6feaed3ed5ec458986d8e16b6b74
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=hexagon olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash kernel/sched/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

In file included from kernel/sched/core.c:9:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/hexagon/include/asm/io.h:334:
include/asm-generic/io.h:547:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
val = __raw_readb(PCI_IOBASE + addr);
~~~~~~~~~~ ^
include/asm-generic/io.h:560:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
#define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
^
In file included from kernel/sched/core.c:9:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/hexagon/include/asm/io.h:334:
include/asm-generic/io.h:573:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
#define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
^
In file included from kernel/sched/core.c:9:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/hexagon/include/asm/io.h:334:
include/asm-generic/io.h:584:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
__raw_writeb(value, PCI_IOBASE + addr);
~~~~~~~~~~ ^
include/asm-generic/io.h:594:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
__raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
~~~~~~~~~~ ^
include/asm-generic/io.h:604:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
__raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
~~~~~~~~~~ ^
kernel/sched/core.c:3858:30: error: no member named 'cpu' in 'struct rq'
set_task_cpu(task, dst_rq->cpu);
~~~~~~ ^
>> kernel/sched/core.c:3838:6: warning: no previous prototype for function 'push_task_chain' [-Wmissing-prototypes]
void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
^
kernel/sched/core.c:3838:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
^
static
>> kernel/sched/core.c:3870:21: warning: no previous prototype for function 'find_exec_ctx' [-Wmissing-prototypes]
struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
^
kernel/sched/core.c:3870:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
^
static
>> kernel/sched/core.c:3898:5: warning: no previous prototype for function 'pushable_chain' [-Wmissing-prototypes]
int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
^
kernel/sched/core.c:3898:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
^
static
9 warnings and 1 error generated.


vim +/push_task_chain +3838 kernel/sched/core.c

3837
> 3838 void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
3839 {
3840 struct task_struct *owner;
3841
3842 lockdep_assert_rq_held(rq);
3843 lockdep_assert_rq_held(dst_rq);
3844
3845 BUG_ON(!task_queued_on_rq(rq, task));
3846 BUG_ON(task_current_selected(rq, task));
3847
3848 while (task) {
3849 if (!task_queued_on_rq(rq, task) || task_current_selected(rq, task))
3850 break;
3851
3852 if (task_is_blocked(task))
3853 owner = __mutex_owner(task->blocked_on);
3854 else
3855 owner = NULL;
3856
3857 deactivate_task(rq, task, 0);
3858 set_task_cpu(task, dst_rq->cpu);
3859 activate_task(dst_rq, task, 0);
3860 if (task == owner)
3861 break;
3862 task = owner;
3863 }
3864 }
3865
3866 /*
3867 * Returns the unblocked task at the end of the blocked chain starting with p
3868 * if that chain is composed entirely of tasks enqueued on rq, or NULL otherwise.
3869 */
> 3870 struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
3871 {
3872 struct task_struct *exec_ctx, *owner;
3873 struct mutex *mutex;
3874
3875 lockdep_assert_rq_held(rq);
3876
3877 for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
3878 exec_ctx = owner) {
3879 mutex = exec_ctx->blocked_on;
3880 owner = __mutex_owner(mutex);
3881 if (owner == exec_ctx)
3882 break;
3883
3884 if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) {
3885 exec_ctx = NULL;
3886 break;
3887 }
3888 }
3889 return exec_ctx;
3890 }
3891
3892 /*
3893 * Returns:
3894 * 1 if chain is pushable and affinity does not prevent pushing to cpu
3895 * 0 if chain is unpushable
3896 * -1 if chain is pushable but affinity blocks running on cpu.
3897 */
> 3898 int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
3899 {
3900 struct task_struct *exec_ctx;
3901
3902 lockdep_assert_rq_held(rq);
3903
3904 if (task_rq(p) != rq || !task_on_rq_queued(p))
3905 return 0;
3906
3907 exec_ctx = find_exec_ctx(rq, p);
3908 /*
3909 * Chain leads off the rq, we're free to push it anywhere.
3910 *
3911 * One wrinkle with relying on find_exec_ctx is that when the chain
3912 * leads to a task currently migrating to rq, we see the chain as
3913 * pushable & push everything prior to the migrating task. Even if
3914 * we checked explicitly for this case, we could still race with a
3915 * migration after the check.
3916 * This shouldn't permanently produce a bad state though, as proxy()
3917 * will send the chain back to rq and by that point the migration
3918 * should be complete & a proper push can occur.
3919 */
3920 if (!exec_ctx)
3921 return 1;
3922
3923 if (task_on_cpu(rq, exec_ctx) || exec_ctx->nr_cpus_allowed <= 1)
3924 return 0;
3925
3926 return cpumask_test_cpu(cpu, &exec_ctx->cpus_mask) ? 1 : -1;
3927 }
3928

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-06-12 17:30:34

by K Prateek Nayak

[permalink] [raw]
Subject: Re: [PATCH v4 00/13] Generalized Priority Inheritance via Proxy Execution v3

Hello John,

On 6/1/2023 11:28 AM, John Stultz wrote:
> After having to catch up on other work after OSPM[1], I've finally
> gotten back to focusing on Proxy Execution and wanted to send out this
> next iteration of the patch series for review, testing, and feedback.
> (Many thanks to folks who provided feedback on the last revision!)
>
> As mentioned previously, this Proxy Execution series has a long history:
> First described in a paper[2] by Watkins, Straub, Niehaus, then from
> patches from Peter Zijlstra, extended with lots of work by Juri Lelli,
> Valentin Schneider, and Connor O'Brien. (and thank you to Steven Rostedt
> for providing additional details here!)
>
> So again, many thanks to those above, as all the credit for this series
> really is due to them - while the mistakes are likely mine.
>
> Overview:
> —----------
> Proxy Execution is a generalized form of priority inheritance. Classic
> priority inheritance works well for real-time tasks where there is a
> straight forward priority order to how things are run. But it breaks
> down when used between CFS or DEADLINE tasks, as there are lots
> of parameters involved outside of just the task’s nice value when
> selecting the next task to run (via pick_next_task()). So ideally we
> want to imbue the mutex holder with all the scheduler attributes of
> the blocked waiting task.
>
> Proxy Execution does this via a few changes:
> * Keeping tasks that are blocked on a mutex *on* the runqueue
> * Keeping additional tracking of which mutex a task is blocked on, and
> which task holds a specific mutex.
> * Special handling for when we select a blocked task to run, so that we
> instead run the mutex holder.
>
> The first of these is the most difficult to grasp (I do get the mental
> friction here: blocked tasks on the *run*queue sounds like nonsense!
> Personally I like to think of the runqueue in this model more like a
> “task-selection queue”).
>
> By leaving blocked tasks on the runqueue, we allow pick_next_task() to
> choose the task that should run next (even if it’s blocked waiting on a
> mutex). If we do select a blocked task, we look at the task’s blocked_on
> mutex and from there look at the mutex’s owner task. And in the simple
> case, the task which owns the mutex is what we then choose to run,
> allowing it to release the mutex.
>
> This means that instead of just tracking “curr”, the scheduler needs to
> track both the scheduler context (what was picked and all the state used
> for scheduling decisions), and the execution context (what we’re
> running)
>
> In this way, the mutex owner is run “on behalf” of the blocked task
> that was picked to run, essentially inheriting the scheduler context of
> the blocked task.
>
> As Connor outlined in a previous submission of this patch series, this
> raises a number of complicated situations: The mutex owner might itself
> be blocked on another mutex, or it could be sleeping, running on a
> different CPU, in the process of migrating between CPUs, etc.
>
> But the functionality provided by Proxy Execution is useful, as in
> Android we have a number of cases where we are seeing priority inversion
> (not unbounded, but longer than we’d like) between “foreground” and
> “background” SCHED_NORMAL applications, so having a generalized solution
> would be very useful.
>
> New in v4:
> —------
> * Fixed deadlock that was caused by wait/wound mutexes having circular
> blocked_on references by clearing the blocked_on pointer on the task
> we are waking to wound/die.
>
> * Tried to resolve an issue Dietmar raised with RT balancing where the
> proxy migration and push_rt_task() were fighting ping-ponging tasks
> back and forth, caused by push_rt_task() migrating tasks even if they
> were in the chain that ends with the current running task. Though this
> likely needs more work, as this change resulted in different migration
> quirks (see below).
>
> * Fixed a number of null-pointer traversals that the changes were
> occasionally tripping on
>
> * Reworked patch that exposes __mutex_owner() to the scheduler to ensure
> it doesn’t expose it any more than necessary, as suggested by Peter.
>
> * To address some of Peter’s complaints, backed out the rq_curr()
> wrapper, and reworked rq_selected() to be a macro to avoid needing
> multiple accessors for READ_ONCE/rcu_dereference() type accesses.
>
> * Removed verbose legacy comments from previous developers of the series
> as Dietmar was finding them distracting when reviewing the diffs
> (Though, to ensure I heed the warnings from previous experienced
> travelers, I’ve preserved the comments/questions in a separate patch
> in my own development tree).
>
> * Dropped patch that added *_task_blocked_on() wrappers to check locking
> correctness. Mostly as Peter didn’t seem happy with the wrappers in
> other patches, but I still think it's useful for testing (and the
> blocked_on locking still needs some work), so I’m carrying it in my
> personal development tree.
>
> Issues still to address:
> —----------
> * Occasional getting null scheduler entities from pick_next_entity() in
> CFS. I’m a little stumped as to where this is going awry just yet, and
> delayed sending this out, but figured it was worth getting it out for
> review on the other issues while I chase this down.

I'm consistently hitting the above issue early during boot when trying to
test the series on a dual-socket 3rd Generation EPYC platform
(2 x 64C/128T). Sharing the trace below:

[ 20.821808] ------------[ cut here ]------------
[ 20.826432] kernel BUG at kernel/sched/core.c:7462!
[ 20.831322] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
[ 20.836545] CPU: 250 PID: 0 Comm: swapper/250 Not tainted 6.4.0-rc4-proxy-execution-v4+ #474
[ 20.844976] Hardware name: Dell Inc. PowerEdge R6525/024PW1, BIOS 2.7.3 03/30/2022
[ 20.852544] RIP: 0010:__schedule+0x18b6/0x20a0
[ 20.856998] Code: 0f 85 51 04 00 00 83 ad 50 ff ff ff 01 0f 85 05 e9 ff ff f3 0f 1e fa 48 8b 35 0e 0c fe 00 48 c7 c7 33 a1 c1 85 e8 ca 37 23 ff <0f> 0b 4c 89 ff 4c 8b 6d 98 e8 1c 82 00 00 4c 89 f7 e8 14 82 00 00
[ 20.875744] RSP: 0018:ffffbd1e4d1d7dd0 EFLAGS: 00010082
[ 20.880970] RAX: 0000000000000001 RBX: 0000000000000000 RCX: 0000000000000005
[ 20.888104] RDX: ffff9d4d0006b000 RSI: 0000000000000200 RDI: ffff9d4d0004d400
[ 20.895235] RBP: ffffbd1e4d1d7e98 R08: 0000000000000024 R09: ffffffffff7edbd0
[ 20.902369] R10: 0000000000000000 R11: 0000000000000000 R12: ffff9d4d12e25a20
[ 20.909501] R13: ffff9dcbffab3840 R14: ffffbd1e4d1d7e50 R15: ffff9dcbff2b3840
[ 20.916632] FS: 0000000000000000(0000) GS:ffff9dcbffa80000(0000) knlGS:0000000000000000
[ 20.924709] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 20.930449] CR2: 00007f92120c4800 CR3: 000000011477a002 CR4: 0000000000770ee0
[ 20.937581] PKRU: 55555554
[ 20.940292] Call Trace:
[ 20.942741] <TASK>
[ 20.944845] ? show_regs+0x6e/0x80
[ 20.948259] ? die+0x3c/0xa0
[ 20.951146] ? do_trap+0xd4/0xf0
[ 20.954377] ? do_error_trap+0x75/0xa0
[ 20.958129] ? __schedule+0x18b6/0x20a0
[ 20.961971] ? exc_invalid_op+0x57/0x80
[ 20.965808] ? __schedule+0x18b6/0x20a0
[ 20.969648] ? asm_exc_invalid_op+0x1f/0x30
[ 20.973835] ? __schedule+0x18b6/0x20a0
[ 20.977672] ? cpuidle_enter_state+0xde/0x710
[ 20.982033] schedule_idle+0x2e/0x50
[ 20.985614] do_idle+0x15d/0x240
[ 20.988845] cpu_startup_entry+0x24/0x30
[ 20.992772] start_secondary+0x126/0x160
[ 20.996695] secondary_startup_64_no_verify+0x10b/0x10b
[ 21.001924] </TASK>
[ 21.004117] Modules linked in: sch_fq_codel dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua ipmi_devintf ipmi_msghandler msr ramoops reed_solomon pstore_blk pstore_zone efi_pstore
ip_tables x_tables autofs4 btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear mgag200
i2c_algo_bit drm_shmem_helper drm_kms_helper ghash_clmulni_intel syscopyarea sysfillrect aesni_intel sysimgblt crypto_simd crc32_pclmul cryptd crct10dif_pclmul sha512_ssse3 xhci_pci tg3 drm
xhci_pci_renesas megaraid_sas wmi
[ 21.055707] Dumping ftrace buffer:
[ 21.059291] ---------------------------------
[ 21.063697] <idle>-0 250dn.2. 21175635us : __schedule: JDB: BUG!!! pick next retry_count > 50
[ 21.072915] ---------------------------------
[ 21.077282] ---[ end trace 0000000000000000 ]---

$ sed -n 7460,7462p kernel/sched/core.c
if (retry_count++ > 50) {
trace_printk("JDB: BUG!!! pick next retry_count > 50\n");
BUG();

Hope it helps during the debug. If you have a fix in mind that you
would like me to test, please do let me know.

>
> * Better deadlock handling in proxy(): With the ww_mutex issues
> resolved, we shouldn’t see circular blocked_on references, but a
> number of the bugs I’ve been chasing recently come from getting stuck
> with proxy() returning null forcing a reselection over and over. These
> are still bugs to address, but my current thinking is that if we get
> stuck like this, we can start to remove the selected mutex blocked
> tasks from the rq, and let them be woken from the mutex waiters list
> as is done currently? Thoughts here would be appreciated.
>
> * More work on migration correctness (RT/DL load balancing, etc.). I’m
> still seeing occasional trouble as cpu counts go up which seems to be
> due to a bunch of tasks being proxy migrated to a cpu, then having to
> migrate them all away at once (seeing lots of pick again iterations).
> This may actually be correct, due to chain migration, but it ends up
> looking similar to a deadlock.
>
> * “rq_selected()” naming. Peter doesn’t like it, but I’ve not thought of
> a better name. Open to suggestions.
>
> * As discussed at OSPM, I want to split pick_next_task() up into two
> phases: selecting and setting the next task, as currently
> pick_next_task() assumes the returned task will be run, which results
> in various side-effects in sched class logic when it’s run. This
> causes trouble should proxy() require us to re-select a task due to
> migration or other edge cases.
>
> * CFS load balancing. Blocked tasks may carry forward load (PELT) to the
> lock owner's CPU, so the CPU may look like it is overloaded.
>
> * I still want to push down the split scheduler and execution context
> awareness further through the scheduling code, as lots of logic still
> assumes there’s only a single “rq->curr” task.
>
> * Optimization to avoid migrating blocked tasks (allowing for optimistic
> spinning) if the runnable lock-owner at the end of the blocked_on chain
> is already running.
>
>
> Performance:
> —----------
> This patch series switches mutexes to use handoff mode rather than
> optimistic spinning. This is a potential concern where locks are under
> high contention. However, so far in our initial performance analysis (on
> both x86 and mobile devices) we’ve not seen major regressions. That
> said, Chenyu did report a regression[3], which we’ll need to look
> further into. As mentioned above, there may be some optimizations that
> can help here, but my focus is on getting the code working well before I
> spend time optimizing.
>
> Review and feedback would be greatly appreciated!
>
> If folks find it easier to test/tinker with, this patch series can also
> be found here (along with some debug patches):
> https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v4-6.4-rc3

I'm using the same tree with the HEAD at commit 821c8a48233f ("HACK:
sched: Add BUG_ONs for proxy related loops")

>
> Thanks so much!
> -john
>
> [1] https://youtu.be/QEWqRhVS3lI (video of my OSPM talk)
> [2] https://static.lwn.net/images/conf/rtlws11/papers/proc/p38.pdf
> [3] https://lore.kernel.org/lkml/Y7vVqE0M%2FAoDoVbj@chenyu5-mobl1/
>
> Cc: Joel Fernandes <[email protected]>
> Cc: Qais Yousef <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Juri Lelli <[email protected]>
> Cc: Vincent Guittot <[email protected]>
> Cc: Dietmar Eggemann <[email protected]>
> Cc: Valentin Schneider <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Ben Segall <[email protected]>
> Cc: Zimuzo Ezeozue <[email protected]>
> Cc: Youssef Esmat <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Daniel Bristot de Oliveira <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Waiman Long <[email protected]>
> Cc: Boqun Feng <[email protected]>
> Cc: "Paul E . McKenney" <[email protected]>
> Cc: [email protected]
>
> Connor O'Brien (1):
> sched: Attempt to fix rt/dl load balancing via chain level balance
>
> John Stultz (3):
> sched: Unnest ttwu_runnable in prep for proxy-execution
> sched: Fix runtime accounting w/ proxy-execution
> sched: Fixups to find_exec_ctx
>
> Juri Lelli (2):
> locking/mutex: make mutex::wait_lock irq safe
> locking/mutex: Expose __mutex_owner()
>
> Peter Zijlstra (6):
> sched: Unify runtime accounting across classes
> locking/ww_mutex: Remove wakeups from under mutex::wait_lock
> locking/mutex: Rework task_struct::blocked_on
> locking/mutex: Add task_struct::blocked_lock to serialize changes to
> the blocked_on state
> sched: Split scheduler execution context
> sched: Add proxy execution
>
> Valentin Schneider (1):
> sched/rt: Fix proxy/current (push,pull)ability
>
> include/linux/sched.h | 10 +-
> include/linux/ww_mutex.h | 3 +
> init/Kconfig | 7 +
> init/init_task.c | 1 +
> kernel/Kconfig.locks | 2 +-
> kernel/fork.c | 6 +-
> kernel/locking/mutex-debug.c | 9 +-
> kernel/locking/mutex.c | 113 ++++--
> kernel/locking/mutex.h | 25 ++
> kernel/locking/ww_mutex.h | 54 ++-
> kernel/sched/core.c | 719 +++++++++++++++++++++++++++++++++--
> kernel/sched/cpudeadline.c | 12 +-
> kernel/sched/cpudeadline.h | 3 +-
> kernel/sched/cpupri.c | 28 +-
> kernel/sched/cpupri.h | 6 +-
> kernel/sched/deadline.c | 187 +++++----
> kernel/sched/fair.c | 99 +++--
> kernel/sched/rt.c | 242 +++++++-----
> kernel/sched/sched.h | 75 +++-
> kernel/sched/stop_task.c | 13 +-
> 20 files changed, 1284 insertions(+), 330 deletions(-)
>

--
Thanks and Regards,
Prateek

2023-06-12 19:15:31

by John Stultz

[permalink] [raw]
Subject: Re: [PATCH v4 00/13] Generalized Priority Inheritance via Proxy Execution v3

On Mon, Jun 12, 2023 at 10:21 AM K Prateek Nayak <[email protected]> wrote:
> On 6/1/2023 11:28 AM, John Stultz wrote:
> > [..snip..]
> >
> > Issues still to address:
> > —----------
> > * Occasional getting null scheduler entities from pick_next_entity() in
> > CFS. I’m a little stumped as to where this is going awry just yet, and
> > delayed sending this out, but figured it was worth getting it out for
> > review on the other issues while I chase this down.
>
> I'm consistently hitting the above issue early during the boot when I was
> trying to test the series on a dual socket 3rd Generation EPYC platform
> (2 x 64C/128T). Sharing the trace below:
>
> [..snip..]
>
> Hope it helps during the debug. If you have a fix in mind that you
> would like me to test, please do let me know.

Thank you for the testing and feedback here! I really appreciate it!
And my apologies that you're hitting trouble here!

> > * Better deadlock handling in proxy(): With the ww_mutex issues
> > resolved, we shouldn’t see circular blocked_on references, but a
> > number of the bugs I’ve been chasing recently come from getting stuck
> > with proxy() returning null forcing a reselection over and over. These
> > are still bugs to address, but my current thinking is that if we get
> > stuck like this, we can start to remove the selected mutex blocked
> > tasks from the rq, and let them be woken from the mutex waiters list
> > as is done currently? Thoughts here would be appreciated.
> >
> > * More work on migration correctness (RT/DL load balancing,etc). I’m
> > still seeing occasional trouble as cpu counts go up which seems to be
> > due to a bunch of tasks being proxy migrated to a cpu, then having to
> > migrate them all away at once (seeing lots of pick again iterations).
> > This may actually be correct, due to chain migration, but it ends up
> > looking similar to a deadlock.

So I suspect what you're seeing is a combination of the two items
above. With 128 threads, my deadlock detection BUG() at 50 is probably
far too low, as migration chains can get pretty long.
Clearly BUG'ing at a fixed count is the wrong approach (but was
helpful for quickly catching problems and debugging in my
environment).
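(Purely to illustrate the kind of scale-aware check I mean - this is not
something in the series, and the threshold below is an arbitrary guess:)

/*
 * Illustrative only: scale the pick-again sanity limit with the machine
 * size and warn instead of BUG()ing, so large systems with long
 * migration chains don't trip it spuriously.
 */
static inline void check_pick_retries(struct rq *rq, unsigned int retry_count)
{
	unsigned int limit = 50 + 4 * num_online_cpus();

	WARN_ONCE(retry_count > limit,
		  "proxy: pick_next_task retried %u times (limit %u) on cpu %d\n",
		  retry_count, limit, cpu_of(rq));
}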

My main priority is trying to get the null se crashes resolved (I almost
have my hands around it, but took a little break from it at the end of
last week). Once I have something there, I'll update and share my WIP
tree:
https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-WIP

with some extra trace logging added, and I'll reach out to see if you
can capture the issue again.

Thanks so much again for your interest and help in testing!
-john

2023-06-13 03:37:09

by K Prateek Nayak

[permalink] [raw]
Subject: Re: [PATCH v4 00/13] Generalized Priority Inheritance via Proxy Execution v3

Hello John,

On 6/13/2023 12:22 AM, John Stultz wrote:
> On Mon, Jun 12, 2023 at 10:21 AM K Prateek Nayak <[email protected]> wrote:
>> On 6/1/2023 11:28 AM, John Stultz wrote:
>>> [..snip..]
>>>
>>> Issues still to address:
>>> —----------
>>> * Occasional getting null scheduler entities from pick_next_entity() in
>>> CFS. I’m a little stumped as to where this is going awry just yet, and
>>> delayed sending this out, but figured it was worth getting it out for
>>> review on the other issues while I chase this down.
>>
>> I'm consistently hitting the above issue early during the boot when I was
>> trying to test the series on a dual socket 3rd Generation EPYC platform
>> (2 x 64C/128T). Sharing the trace below:
>>
>> [..snip..]
>>
>> Hope it helps during the debug. If you have a fix in mind that you
>> would like me to test, please do let me know.
>
> Thank you for the testing and feedback here! I really appreciate it!
> And my apologies that you're hitting trouble here!

No worries! Better to hit the snags now than later :)

>
>>> * Better deadlock handling in proxy(): With the ww_mutex issues
>>> resolved, we shouldn’t see circular blocked_on references, but a
>>> number of the bugs I’ve been chasing recently come from getting stuck
>>> with proxy() returning null forcing a reselection over and over. These
>>> are still bugs to address, but my current thinking is that if we get
>>> stuck like this, we can start to remove the selected mutex blocked
>>> tasks from the rq, and let them be woken from the mutex waiters list
>>> as is done currently? Thoughts here would be appreciated.
>>>
>>> * More work on migration correctness (RT/DL load balancing,etc). I’m
>>> still seeing occasional trouble as cpu counts go up which seems to be
>>> due to a bunch of tasks being proxy migrated to a cpu, then having to
>>> migrate them all away at once (seeing lots of pick again iterations).
>>> This may actually be correct, due to chain migration, but it ends up
>>> looking similar to a deadlock.
>
> So I suspect what you're seeing is a combination of the two items
> above. With 128 threads, my deadlock detection BUG() at 50 is probably
> far too low, as migration chains can get pretty long.
> Clearly BUG'ing at a fixed count is the wrong approach (but was
> helpful for quickly catching problems and debugging in my
> environment).

Ah! I see. Thank you for clarifying. Let me check if going to commit
259a8134aa27 ("sched: Potential fixup, not sure why rq_selected() is used
here") helps.

>
> My main priority is trying to get the null se crashes resolved (almost
> have my hands around it, but took a little break from it end of last
> week), and once I have something there I'll update and share my WIP
> tree:
> https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-WIP

I'll keep an eye out for any updates on the branch.

>
> To include some extra trace logging and I'll reach out to see if you
> can capture the issue again.
>
> Thanks so much again for your interest and help in testing!
> -john


--
Thanks and Regards,
Prateek

2023-06-13 17:41:52

by Dietmar Eggemann

[permalink] [raw]
Subject: Re: [PATCH v4 07/13] sched: Split scheduler execution context

On 01/06/2023 07:58, John Stultz wrote:
> From: Peter Zijlstra <[email protected]>
>
> Lets define the scheduling context as all the scheduler state in
> task_struct and the execution context as all state required to run
> the task.
>
> Currently both are intertwined in task_struct. We want to logically
> split these such that we can run the execution context of one task
> with the scheduling context of another.
>
> To this purpose introduce rq_selected() macro to point to the
> task_struct used for scheduler state and preserve rq->curr to
> denote the execution context.
>
> NOTE: Peter previously mentioned he didn't like the name
> "rq_selected()", but I've not come up with a better alternative.
> I'm very open to other name proposals.
>
> Question for Peter: Dietmar suggested you'd prefer I drop the
> conditionalization of the scheduler context pointer on the rq
> (so rq_selected() would be open coded as rq->curr_sched or
> whatever we agree on for a name), but I'd think in the
> !CONFIG_PROXY_EXEC case we'd want to avoid the wasted pointer
> and its use (since it curr_sched would always be == curr)?
> If I'm wrong I'm fine switching this, but would appreciate
> clarification.

IMHO, keeping both rq->curr and rq->proxy (the latter in place of
rq->curr_sched) would make it easier to navigate through the different
versions of this patch set while reviewing.
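(For reference, a minimal sketch of the conditional accessor under
discussion - assuming the rq->curr_sched naming from the question above;
this is illustrative, not the actual patch:)

/* Illustrative only: rq_selected() as a macro over a conditional field. */
#ifdef CONFIG_PROXY_EXEC
#define rq_selected(rq)		((rq)->curr_sched)	/* scheduler context */
#else
#define rq_selected(rq)		((rq)->curr)		/* always == curr here */
#endif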

I do understand that you have issues with the function name proxy() not
returning the proxy (task blocked on a mutex) but the mutex owner instead.

The header of v3 'sched: Add proxy execution'
https://lkml.kernel.org/r/[email protected]
mentions:

" ... Potential proxies (i.e., tasks blocked on a mutex) are not
dequeued, so, if one of them is actually selected by schedule() as the
next task to be put to run on a CPU, proxy() is used to walk the
blocked_on relation and find which task (mutex owner) might be able to
use the proxy's scheduling context. ..."

But as I see now, you changed this patch header in v4 to explain the
PE model slightly differently.

[...]

2023-06-13 18:10:24

by Dietmar Eggemann

[permalink] [raw]
Subject: Re: [PATCH v4 00/13] Generalized Priority Inheritance via Proxy Execution v3

On 01/06/2023 07:58, John Stultz wrote:
> > [..snip..]
>
> New in v4:
> —------
> * Fixed deadlock that was caused by wait/wound mutexes having circular
> blocked_on references by clearing the blocked_on pointer on the task
> we are waking to wound/die.

I always get the splat below when running `insmod ./test-ww_mutex.ko` with
the default SCHED_FEAT(TTWU_QUEUE, true) and with this fix applied. I don't
fully understand the issue yet:

qemu-system-x86_64 ... -smp cores=64 -enable-kvm ...

[ 21.109134] Beginning ww mutex selftests
[ 26.397545] ------------[ cut here ]------------
[ 26.397951] WARNING: CPU: 41 PID: 0 at kernel/sched/core.c:4126 sched_ttwu_pending+0xc5/0x120
[ 26.398590] Modules linked in: test_ww_mutex(+)
[ 26.398916] CPU: 41 PID: 0 Comm: swapper/41 Not tainted 6.4.0-rc1-00054-gb4baf2e792df-dirty #9
[ 26.399506] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 26.400193] RIP: 0010:sched_ttwu_pending+0xc5/0x120
[ 26.400515] Code: c8 75 ba 41 c7 46 48 00 00 00 00 4c 89 f7 e8 32 b5 d4 00 41 f7 c4 00 02 00 00 74 01 fb 5b 5d 41 5c 41 5d 41 5e c3 cc cc cc cc <0f> 0b 44 8b 45 14 8b 8d 20 05 00 00 48 8d 95 18 07 00 00 48 c7 c6
[ 26.401840] RSP: 0018:ffffa31940990fc0 EFLAGS: 00010006
[ 26.402178] RAX: 0000000000000012 RBX: ffffffffffffffc8 RCX: 00000006256a6d58
[ 26.402631] RDX: 000000000001c9f4 RSI: ffff9dc5012fe180 RDI: ffffffff97320a40
[ 26.403096] RBP: ffff9dc50552d140 R08: 00000006256a6d58 R09: 0000000000000029
[ 26.403607] R10: 0000000000000000 R11: ffffa31940990ff8 R12: 0000000000000086
[ 26.404117] R13: ffffffffffffffc8 R14: ffff9dc57d86b3c0 R15: 0000000000000000
[ 26.404691] FS: 0000000000000000(0000) GS:ffff9dc57d840000(0000) knlGS:0000000000000000
[ 26.405236] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 26.405663] CR2: 00007ffeda3d7b00 CR3: 0000000013e2e003 CR4: 0000000000370ee0
[ 26.406236] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 26.406715] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 26.407219] Call Trace:
[ 26.407390] <IRQ>
[ 26.407571] __sysvec_call_function_single+0x28/0xc0
[ 26.407988] sysvec_call_function_single+0x69/0x90
[ 26.408312] </IRQ>
[ 26.408467] <TASK>
[ 26.408612] asm_sysvec_call_function_single+0x1a/0x20
[ 26.408992] RIP: 0010:default_idle+0xf/0x20
[ 26.409267] Code: 4c 01 c7 4c 29 c2 e9 72 ff ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d d3 00 40 00 fb f4 <fa> c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90
[ 26.410629] RSP: 0018:ffffa319401cbed8 EFLAGS: 00000252
[ 26.411073] RAX: ffff9dc57d867f80 RBX: ffff9dc5012fe180 RCX: 4000000000000000
[ 26.411625] RDX: 0000000000000001 RSI: 0000000000000087 RDI: 00000000000ed25c
[ 26.411788] ------------[ cut here ]------------

extra debug:

sched_ttwu_pending [kworker/u128:87 738] task_cpu(p)=29 cpu_of(rq)=41

[...]

2023-06-13 20:45:26

by John Stultz

[permalink] [raw]
Subject: Re: [PATCH v4 07/13] sched: Split scheduler execution context

On Tue, Jun 13, 2023 at 10:12 AM Dietmar Eggemann
<[email protected]> wrote:
> On 01/06/2023 07:58, John Stultz wrote:
> > NOTE: Peter previously mentioned he didn't like the name
> > "rq_selected()", but I've not come up with a better alternative.
> > I'm very open to other name proposals.
> >
> > Question for Peter: Dietmar suggested you'd prefer I drop the
> > conditionalization of the scheduler context pointer on the rq
> > (so rq_selected() would be open coded as rq->curr_sched or
> > whatever we agree on for a name), but I'd think in the
> > !CONFIG_PROXY_EXEC case we'd want to avoid the wasted pointer
> > and its use (since it curr_sched would always be == curr)?
> > If I'm wrong I'm fine switching this, but would appreciate
> > clarification.
>
> IMHO, keeping both, rq->curr and rq->proxy (latter for rq->curr_sched)
> would make it easier to navigate through the different versions of this
> patch-set while reviewing.
>
> I do understand that you have issues with the function name proxy() not
> returning the proxy (task blocked on a mutex) but the mutex owner instead.
>
> The header of v3 'sched: Add proxy execution'
> https://lkml.kernel.org/r/[email protected]
> mentions:
>
> " ... Potential proxies (i.e., tasks blocked on a mutex) are not
> dequeued, so, if one of them is actually selected by schedule() as the
> next task to be put to run on a CPU, proxy() is used to walk the
> blocked_on relation and find which task (mutex owner) might be able to
> use the proxy's scheduling context. ..."
>
> But as I can see now, you changed this patch header in v4 to explain the
> PE model slightly differently.

Yeah. (As you know from offline discussions :) I do feel a bit
strongly that using the term proxy for the scheduler context is
unnecessarily confusing and requires some mental contortions to try to
make it fit the metaphor being used.

In my mind, the task chosen by pick_next_task() is what we want to
run, but if it is blocked on a mutex, we let the mutex owner run on
its behalf, with the "authority" (ie: scheduling context) of the
originally chosen task.

This is a direct parallel to proxy voting where a person who needs to
vote cannot attend, so someone else is sent to vote on their behalf,
and does so with the authority of the person who cannot attend.

So, much like the person who votes on behalf of another is the proxy,
with proxy-execution it makes most sense that the task that runs on
the selected task's behalf is the proxy.

Calling the selected scheduler context the proxy makes it very
difficult to use the metaphor to help in understanding what is being
done. I'll grant you can try to twist it around and view it so that
the blocked tasks are sort of proxy-voters left on the runqueue and
sent to the pick_next_task() function to vote on behalf of a mutex
owner, but that would be more like "proxy-scheduling". And it breaks
down further as the blocked tasks actually don't know who they are
voting for until after they are selected and we run proxy() to walk
the blocked_on chain. It just doesn't fit the metaphor very well
(maybe "puppet candidates " would be better in this model?) and I
think it only adds confusion.

This logic is already subtle and complex enough - we don't need to add
Stroop effects[1] to keep it interesting. :)

But I agree the historic usage of the term proxy in the patch series
makes it hard to simply switch it around, which is why I tried instead
to reduce the use of the term proxy where it didn't seem appropriate,
replacing it with "selected" or "donor".

Again, I'm happy to switch to other terms that make sense.

thanks
-john

[1] https://en.wikipedia.org/wiki/Stroop_effect

2023-06-13 21:57:07

by John Stultz

[permalink] [raw]
Subject: Re: [PATCH v4 00/13] Generalized Priority Inheritance via Proxy Execution v3

On Tue, Jun 13, 2023 at 10:36 AM Dietmar Eggemann
<[email protected]> wrote:
>
> On 01/06/2023 07:58, John Stultz wrote:
> > [..snip..]
> >
> > New in v4:
> > —------
> > * Fixed deadlock that was caused by wait/wound mutexes having circular
> > blocked_on references by clearing the blocked_on pointer on the task
> > we are waking to wound/die.
>
> I always get this when running `insmod ./test-ww_mutex.ko` with default
> SCHED_FEAT(TTWU_QUEUE, true) with this fix. Don't understand the issue
> fully yet:
>
> qemu-system-x86_64 ... -smp cores=64 -enable-kvm ...
>
> [ 21.109134] Beginning ww mutex selftests
> [ 26.397545] ------------[ cut here ]------------
> [ 26.397951] WARNING: CPU: 41 PID: 0 at kernel/sched/core.c:4126 sched_ttwu_pending+0xc5/0x120

Thanks for testing it out and sharing this!

Yeah. I've seen this as well, and I suspect this is tied to the
runqueue accounting issues that are causing the null sched entity
issue I've been chasing.

One issue seems to be the blocked_on manipulation done in the
try_to_wake_up() path. For mutex-blocked tasks, try_to_wake_up() needs
to clear blocked_on so the task can actually run and try to acquire
the mutex. This is why __mutex_lock_slowpath_common() sets blocked_on
twice. I'm seeing some trouble where try_to_wake_up() ends up
migrating the still-technically-blocked task (it hasn't yet acquired
the mutex, it is just waking to try to take it) back to p->wake_cpu -
but the problem is that the task has not been deactivated from the
current runqueue.

The "fix" for ww_mutex circular dependencies resolution in
__ww_mutex_die() is similar, as it clears the blocked_on value and
tries to wake the task, but then ttwu often migrates the task without
deactivating it from the current rq.

Unfortunately, once the rq accounting has gone wrong the failures are
multi-modal, so getting enough tracing in place to understand what's
going wrong in any one case is hard, because run-to-run it fails
differently.

In my current tree I've extended this so we have a separate
task->blocked_on_waking flag, so we don't mess with the
task->blocked_on state in ttwu and __ww_mutex_die(), and can
distinguish between waking a blocked task and waking an unblocked
task. However, that in itself isn't working yet, so I'm currently
deconstructing the patch series a bit to better understand that logic
and to see if we can avoid as much special casing in the ttwu path
(I'd like to be able to take mutex-blocked tasks and simply drop them
from the runqueue - as we do now - and have things work, giving us
fallback options if proxy() migrations take too long to resolve).
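(To make the direction concrete, a rough sketch - the helper names and
the blocked_on_waking handling below are illustrative assumptions, not
the actual tree:)

/*
 * Hypothetical sketch only. Assumed fields on task_struct:
 *	struct mutex   *blocked_on;	   - mutex the task waits on (in the series)
 *	raw_spinlock_t	blocked_lock;	   - serializes blocked_on state (in the series)
 *	bool		blocked_on_waking; - new: woken to attempt the acquire
 */

/* Wakeup side: mark the task as "waking", leaving blocked_on untouched. */
static void mark_blocked_on_waking(struct task_struct *p)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&p->blocked_lock, flags);
	p->blocked_on_waking = true;
	raw_spin_unlock_irqrestore(&p->blocked_lock, flags);
}

/*
 * Scheduler side: only treat the task as blocked (a candidate for
 * proxying) if it has not already been woken to go acquire the mutex.
 */
static bool task_is_blocked_sketch(struct task_struct *p)
{
	return p->blocked_on && !p->blocked_on_waking;
}

That way ttwu and __ww_mutex_die() would only set the "waking" flag
rather than clearing blocked_on.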

thanks
-john

2023-06-23 12:41:55

by Dietmar Eggemann

[permalink] [raw]
Subject: Re: [PATCH v4 02/13] locking/ww_mutex: Remove wakeups from under mutex::wait_lock

Hi John,

On 01/06/2023 07:58, John Stultz wrote:
> From: Peter Zijlstra <[email protected]>
>
> In preparation to nest mutex::wait_lock under rq::lock we need to remove
> wakeups from under it.

[...]

> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> Signed-off-by: Connor O'Brien <[email protected]>
> Signed-off-by: John Stultz <[email protected]>
> ---
> v2:
> * Move wake_q_init() as suggested by Waiman Long
> ---
> include/linux/ww_mutex.h | 3 +++
> kernel/locking/mutex.c | 8 ++++++++
> kernel/locking/ww_mutex.h | 10 ++++++++--
> 3 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
> index bb763085479a..9335b2202017 100644
> --- a/include/linux/ww_mutex.h
> +++ b/include/linux/ww_mutex.h
> @@ -19,6 +19,7 @@
>
> #include <linux/mutex.h>
> #include <linux/rtmutex.h>
> +#include <linux/sched/wake_q.h>
>
> #if defined(CONFIG_DEBUG_MUTEXES) || \
> (defined(CONFIG_PREEMPT_RT) && defined(CONFIG_DEBUG_RT_MUTEXES))
> @@ -58,6 +59,7 @@ struct ww_acquire_ctx {
> unsigned int acquired;
> unsigned short wounded;
> unsigned short is_wait_die;
> + struct wake_q_head wake_q;

you told me that there is already an issue in this patch even w/o PE
when running `insmod /lib/modules/test-ww_mutex.ko`.

The issue is related to Connor's version (1):

https://lkml.kernel.org/r/[email protected]

struct ww_acquire_ctx {

	struct wake_q_head wake_q;


__mutex_lock_common()

	if (ww_ctx)
		ww_ctx_wake(ww_ctx)

			wake_up_q(&ww_ctx->wake_q);
			wake_q_init(&ww_ctx->wake_q);


Juri's version (2):

https://lkml.kernel.org/r/[email protected]

__mutex_lock_common()

	DEFINE_WAKE_Q(wake_q)		<-- !!!

	__ww_mutex_check_waiters(..., wake_q)

		__ww_mutex_die(..., wake_q)

			wake_q_add(wake_q, waiter->task)

	wake_up_q(&wake_q)


`insmod /lib/modules/test-ww_mutex.ko` runs fine with (2) but not with
(1) (both w/o the remaining PE patches).

So to test the PE issues we already talked about, which come with
`[PATCH v4 09/13] sched: Add proxy execution` and CONFIG_PROXY_EXEC=y,
we need to fix (1) or go back to (2).

I haven't found any clues why (2) was changed to (1) so far.

[...]

2023-06-24 00:51:56

by John Stultz

[permalink] [raw]
Subject: Re: [PATCH v4 02/13] locking/ww_mutex: Remove wakeups from under mutex::wait_lock

On Fri, Jun 23, 2023 at 5:21 AM Dietmar Eggemann
<[email protected]> wrote:
>
> Hi John,
>
> On 01/06/2023 07:58, John Stultz wrote:
> > From: Peter Zijlstra <[email protected]>
> >
> > In preparation to nest mutex::wait_lock under rq::lock we need to remove
> > wakeups from under it.
>
> [...]
>
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > Signed-off-by: Connor O'Brien <[email protected]>
> > Signed-off-by: John Stultz <[email protected]>
> > ---
> > v2:
> > * Move wake_q_init() as suggested by Waiman Long
> > ---
> > include/linux/ww_mutex.h | 3 +++
> > kernel/locking/mutex.c | 8 ++++++++
> > kernel/locking/ww_mutex.h | 10 ++++++++--
> > 3 files changed, 19 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
> > index bb763085479a..9335b2202017 100644
> > --- a/include/linux/ww_mutex.h
> > +++ b/include/linux/ww_mutex.h
> > @@ -19,6 +19,7 @@
> >
> > #include <linux/mutex.h>
> > #include <linux/rtmutex.h>
> > +#include <linux/sched/wake_q.h>
> >
> > #if defined(CONFIG_DEBUG_MUTEXES) || \
> > (defined(CONFIG_PREEMPT_RT) && defined(CONFIG_DEBUG_RT_MUTEXES))
> > @@ -58,6 +59,7 @@ struct ww_acquire_ctx {
> > unsigned int acquired;
> > unsigned short wounded;
> > unsigned short is_wait_die;
> > + struct wake_q_head wake_q;
>
> you told me that there is already an issue in this patch even w/o PE
> when running `insmod /lib/modules/test-ww_mutex.ko`.
>
> The issue is related to Connor's version (1):
>
> https://lkml.kernel.org/r/[email protected]
>
> [..snip..]
>
> Juri's version (2):
>
> https://lkml.kernel.org/r/[email protected]
>
> [..snip..]
>
>
> `insmod /lib/modules/test-ww_mutex.ko` runs fine with (2) but not with
> (1) (both w/o the remaining PE patches).
>
> So to test the PE issues we talked about already which come with `[PATCH
> v4 09/13] sched: Add proxy execution` and CONFIG_PROXY_EXEC=y we need to
> fix (1) or go back to (2).
>
> I haven't found any clues why (2) was changed to (1) so far.

Right. I don't have context for why, but moving the wake_q to the
ww_ctx does seem to break the wake_q assumptions, and results in lost
wakeups.

In my current tree, I've switched back to Juri's older version of the
patch, adding one fix from Connor's version, and with that the patch
doesn't run into this trouble.
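(For context, a minimal sketch of the on-stack wake_q shape in Juri's
version (2) - the helpers and details below are illustrative, not the
actual patch:)

#include <linux/mutex.h>
#include <linux/sched/wake_q.h>

/* Defer the wakeup: just queue the task while still holding wait_lock. */
static void ww_die_sketch(struct mutex_waiter *waiter,
			  struct wake_q_head *wake_q)
{
	wake_q_add(wake_q, waiter->task);
}

static void lock_common_sketch(struct mutex *lock)
{
	DEFINE_WAKE_Q(wake_q);			/* on-stack, per invocation */

	raw_spin_lock(&lock->wait_lock);
	/* ... wound/die processing would call ww_die_sketch(..., &wake_q) ... */
	raw_spin_unlock(&lock->wait_lock);

	wake_up_q(&wake_q);			/* wakeups issued outside wait_lock */
}

The point being that the wake_q lifetime is tied to the lock/unlock
critical section rather than to the ww_acquire_ctx.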

That said, I still am catching and debugging problems with later
patches in the series, which has required breaking up the core proxy
change into much finer grained changes so I can debug what's going
wrong. My v5 patch set will reflect this.

thanks
-john