2023-03-20 23:37:36

by John Stultz

Subject: [PATCH v2 00/12] Reviving the Proxy Execution Series v2

Hey All,

I’ve recently picked up the proxy execution effort, so I wanted to send
this out to share the current state of the patch series.

So first of all, this Proxy Execution series has a long history: it
started with initial patches from Peter Zijlstra and was then extended
with lots of work by Juri Lelli, Valentin Schneider, and Connor O'Brien.

So all the credit for this series really is due to the developers
above, while the mistakes are likely mine. :) I particularly want to
thank Connor for his recent efforts: I worked closely with him, and it’s
only due to his knowledge, patience, and clear explanations of the
changes and issues that I was able to come to my current understanding
of the series so quickly.

Overview:
---------
I like to think of Proxy Execution as a generalized form of priority
inheritance. Classic priority inheritance works well for real-time tasks
where there is a straightforward priority order to how things are run.
But it breaks down when used between CFS tasks, as there are lots of
parameters involved outside of just the task’s nice value when selecting
the next task to run (via pick_next_task()). So ideally we want to imbue
the mutex holder with all the scheduler attributes of the waiting task.

Proxy Execution tries to do this via a few changes:
* Keeping tasks that are blocked on a mutex *on* the runqueue
* Keeping additional tracking of which mutex a task is blocked on, and
which task holds a specific mutex.
* Special handling for when we select a blocked task to run, so that we
instead run the mutex holder.

The first of these is the most difficult to grasp (I do get the mental
friction here: blocked tasks on the *run*queue sounds like nonsense!
Personally I like to think of the runqueue in this model more like a
“task-selection queue”).

By leaving blocked tasks on the runqueue, we allow pick_next_task() to
choose the task that should run next (even if it’s blocked waiting on a
mutex). If we do select a blocked task, we look at the task’s blocked_on
mutex and from there look at the mutex’s owner task. And in the simple
case, the task which owns the mutex is what we then run, allowing it to
release the mutex.
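
In that simple case, the pick boils down to roughly the following (a
standalone sketch with made-up names, not the actual kernel code; the
corner cases described further below are what make the real
implementation hard):

/*
 * Simplified model: if the task the scheduler picked is blocked on a
 * mutex, hand the CPU to that mutex's owner instead. The names here
 * (task, mutex_model, proxy_resolve) are illustrative only.
 */
struct task;

struct mutex_model {
	struct task *owner;		/* current lock holder */
};

struct task {
	struct mutex_model *blocked_on;	/* lock we're blocked on, or NULL */
};

/* 'selected' is what pick_next_task() chose (the scheduler context);
 * the return value is what actually gets the CPU (the execution
 * context).
 */
static struct task *proxy_resolve(struct task *selected)
{
	if (selected->blocked_on)
		return selected->blocked_on->owner;

	return selected;
}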

This means that instead of just tracking “curr”, the scheduler needs to
track both the scheduler context (what was picked and all the state used
for scheduling decisions) and the execution context (what we’re
running).

In this way, the mutex owner is run “on behalf” of the blocked task
that was picked to run, essentially inheriting the scheduler context of
the blocked task.
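
Concretely, you can picture the runqueue carrying two task pointers
instead of one (an illustrative sketch only, using the rq->curr /
rq->proxy naming discussed below, not the actual struct rq):

/*
 * Illustrative only: the runqueue tracks both the task that was picked
 * (scheduler context) and the task actually executing on the CPU
 * (execution context). With proxy execution disabled the two always
 * point at the same task, so the accessors can collapse back to a
 * single field.
 */
struct task_struct;

struct rq_sketch {
	struct task_struct *proxy;	/* scheduler context: what pick_next_task() selected */
	struct task_struct *curr;	/* execution context: what is running on the CPU */
};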

As Connor outlined in a previous submission[1] of this patch series,
this raises a number of complicated situations: The mutex owner might
itself be blocked on another mutex, or it could be sleeping, running on
a different CPU, in the process of migrating between CPUs, etc.

But the functionality provided is useful: in Android we have a number
of cases where we see priority inversion (not unbounded, but longer
than we’d like) between “foreground” and “background” SCHED_NORMAL
applications, so having a generalized solution would be great.


Issues still to address:
------------------------
The last time this patch series was submitted, a number of issues were
identified that need to be addressed:
* cputime accounting is a little unintuitive, as time used by the proxy
was being charged to the blocked task. Connor began work to address
this but it was not complete, so it is not included in this version of
the series.
* RT/DL load balancing. There is a scheduling invariant that we always
need to run the top N highest-priority RT tasks across the N CPUs.
However keeping blocked tasks on the runqueue greatly complicates the
load balancing for this. Connor took an initial stab at this with
“chain level balancing” included in this series. Feedback on this
would be appreciated!
* CFS load balancing. Blocked tasks may carry forward load (PELT) to the
lock owner's CPU, so the CPU may look like it is overloaded.
* Terminology/BikeShedding: I think much of the terminology makes these
changes harder to reason about.
- As noted above, blocked-tasks on the run-queue breaks some mental
models
- The “proxy” relationship seems a bit inverted. Peter’s “Split
scheduler execution context” describes the proxy as the scheduler
context, and curr being the execution context (the blocked task
being the “scheduler proxy” for the lock-holder). Personally, I
think of proxies as those who do-on-others-behalf, so to me it would
make more sense to have rq_curr() be the selected task (scheduler
context) and rq_proxy() be the running task (execution context), as it’s
running on behalf of the selected but blocked task. Unless folks
object, I’ll likely change this in a future submission.
- Also, the rq_curr() and rq_proxy() distinction is easily confused. I
think it’s sane to try to keep close to the existing curr-based
logic, and swapping rq_proxy in for some cases does make the change
smaller, but it leaves the resulting code less clear. I worry folks
focused on the !CONFIG_PROXY_EXEC case might easily mix up which to
use when creating new logic.
* Resolving open questions in comments: I’ve left these in for now, but
I hope to review them and make some choices where questions remain. If
folks have specific feedback or suggestions here, it would be great!

Performance:
------------
With the current patch series, mutexes are handed off instead of
using optimistic spinning. This is a potential concern where locks are
under high contention. However, so far in our performance analysis (on
both x86 and mobile devices) we’ve not seen any major regressions.

Changes since Connor’s last submission:
---------------------------------------
* Connor took a swing at addressing the RT/DL load balancing issue
* Connor fixed a number of issues found in testing
* I added rq_curr() and rq_proxy() accessors to abstract rq->curr and
rq->proxy references (so it collapses in the !CONFIG_PROXY_EXEC
case)
* I’ve gone through the series trying to split up the larger functions
and better conditionalize the logic on CONFIG_PROXY_EXEC
* I’ve broken out some of the logic in larger patches into separate
patches, as well as split out small prep changes to the logic so the
patches are easier to review.
* I’ve also found and addressed a few edge cases in locking and mutex
owner handling.
* I dropped the patch changing mutex::wait_lock to always save/restore irq
flags (as Joel raised a concern that the patch wasn’t actually
necessary).
* I’ve folded a number of fixes down into the relevant patches.

For the most part, my changes have been mechanical reworking of the
logic, avoiding any changes in behavior. Though after this, I’ll likely
start trying to rework some of the logic further.

So while there is still more to do, I wanted to send this series out so
folks could see that work is continuing, and so I could get some
additional review on the recent changes. Review and feedback would be
greatly appreciated!

If folks find it easier to test/tinker with, this patch series can also
be found here:
https://github.com/johnstultz-work/linux-dev.git proxy-exec-v2-6.3-rc

Thanks so much!
-john

[1] https://lore.kernel.org/lkml/[email protected]/

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]

Connor O'Brien (1):
sched: Attempt to fix rt/dl load balancing via chain level balance

John Stultz (2):
sched: Replace rq->curr access w/ rq_curr(rq)
sched: Unnest ttwu_runnable in prep for proxy-execution

Juri Lelli (1):
locking/mutex: Expose mutex_owner()

Peter Zijlstra (6):
locking/ww_mutex: Remove wakeups from under mutex::wait_lock
locking/mutex: Rework task_struct::blocked_on
locking/mutex: Add task_struct::blocked_lock to serialize changes to
the blocked_on state
sched: Unify runtime accounting across classes
sched: Split scheduler execution context
sched: Add proxy execution

Valentin Schneider (2):
locking/mutex: Add p->blocked_on wrappers
sched/rt: Fix proxy/current (push,pull)ability

include/linux/mutex.h | 2 +
include/linux/sched.h | 24 +-
include/linux/ww_mutex.h | 3 +
init/Kconfig | 7 +
init/init_task.c | 1 +
kernel/Kconfig.locks | 2 +-
kernel/fork.c | 6 +-
kernel/locking/mutex-debug.c | 9 +-
kernel/locking/mutex.c | 103 ++++-
kernel/locking/ww_mutex.h | 10 +-
kernel/sched/core.c | 787 ++++++++++++++++++++++++++++++++---
kernel/sched/core_sched.c | 2 +-
kernel/sched/cpudeadline.c | 12 +-
kernel/sched/cpudeadline.h | 3 +-
kernel/sched/cpupri.c | 29 +-
kernel/sched/cpupri.h | 6 +-
kernel/sched/deadline.c | 213 ++++++----
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 109 +++--
kernel/sched/membarrier.c | 8 +-
kernel/sched/pelt.h | 2 +-
kernel/sched/rt.c | 301 +++++++++-----
kernel/sched/sched.h | 286 ++++++++++++-
kernel/sched/stop_task.c | 13 +-
24 files changed, 1599 insertions(+), 341 deletions(-)

--
2.40.0.rc1.284.g88254d51c5-goog



2023-03-20 23:37:42

by John Stultz

Subject: [PATCH v2 01/12] locking/ww_mutex: Remove wakeups from under mutex::wait_lock

From: Peter Zijlstra <[email protected]>

In preparation for nesting mutex::wait_lock under rq::lock, we need to
remove wakeups from under it.
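
The pattern used throughout is to queue the to-be-woken task on a
wake_q while holding wait_lock, and only issue the wakeup once the lock
has been dropped; roughly like this (an illustrative sketch, not code
from this patch, and the function name is made up):

#include <linux/sched.h>
#include <linux/sched/wake_q.h>
#include <linux/spinlock.h>

/* Illustrative only: defer the wakeup until after wait_lock is
 * released, so no wakeup is ever issued with the lock held once it
 * nests under rq::lock.
 */
static void unlock_and_wake(raw_spinlock_t *wait_lock,
			    struct task_struct *waiter_task)
{
	DEFINE_WAKE_Q(wake_q);

	raw_spin_lock(wait_lock);
	/* ... decide who needs waking while holding the lock ... */
	wake_q_add(&wake_q, waiter_task);	/* queue it, don't wake yet */
	raw_spin_unlock(wait_lock);

	wake_up_q(&wake_q);			/* safe: lock no longer held */
}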

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Move wake_q_init() as suggested by Waiman Long
---
include/linux/ww_mutex.h | 3 +++
kernel/locking/mutex.c | 8 ++++++++
kernel/locking/ww_mutex.h | 10 ++++++++--
3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index bb763085479a..9335b2202017 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -19,6 +19,7 @@

#include <linux/mutex.h>
#include <linux/rtmutex.h>
+#include <linux/sched/wake_q.h>

#if defined(CONFIG_DEBUG_MUTEXES) || \
(defined(CONFIG_PREEMPT_RT) && defined(CONFIG_DEBUG_RT_MUTEXES))
@@ -58,6 +59,7 @@ struct ww_acquire_ctx {
unsigned int acquired;
unsigned short wounded;
unsigned short is_wait_die;
+ struct wake_q_head wake_q;
#ifdef DEBUG_WW_MUTEXES
unsigned int done_acquire;
struct ww_class *ww_class;
@@ -137,6 +139,7 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
ctx->acquired = 0;
ctx->wounded = false;
ctx->is_wait_die = ww_class->is_wait_die;
+ wake_q_init(&ctx->wake_q);
#ifdef DEBUG_WW_MUTEXES
ctx->ww_class = ww_class;
ctx->done_acquire = 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d973fe6041bf..1582756914df 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -676,6 +676,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
}

raw_spin_unlock(&lock->wait_lock);
+ if (ww_ctx)
+ ww_ctx_wake(ww_ctx);
schedule_preempt_disabled();

first = __mutex_waiter_is_first(lock, &waiter);
@@ -725,6 +727,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
ww_mutex_lock_acquired(ww, ww_ctx);

raw_spin_unlock(&lock->wait_lock);
+ if (ww_ctx)
+ ww_ctx_wake(ww_ctx);
preempt_enable();
return 0;

@@ -736,6 +740,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
raw_spin_unlock(&lock->wait_lock);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, ip);
+ if (ww_ctx)
+ ww_ctx_wake(ww_ctx);
preempt_enable();
return ret;
}
@@ -946,9 +952,11 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
if (owner & MUTEX_FLAG_HANDOFF)
__mutex_handoff(lock, next);

+ preempt_disable();
raw_spin_unlock(&lock->wait_lock);

wake_up_q(&wake_q);
+ preempt_enable();
}

#ifndef CONFIG_DEBUG_LOCK_ALLOC
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 56f139201f24..e49ea5336473 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -161,6 +161,12 @@ static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)

#endif /* WW_RT */

+void ww_ctx_wake(struct ww_acquire_ctx *ww_ctx)
+{
+ wake_up_q(&ww_ctx->wake_q);
+ wake_q_init(&ww_ctx->wake_q);
+}
+
/*
* Wait-Die:
* The newer transactions are killed when:
@@ -284,7 +290,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
#ifndef WW_RT
debug_mutex_wake_waiter(lock, waiter);
#endif
- wake_up_process(waiter->task);
+ wake_q_add(&ww_ctx->wake_q, waiter->task);
}

return true;
@@ -331,7 +337,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
* wakeup pending to re-read the wounded state.
*/
if (owner != current)
- wake_up_process(owner);
+ wake_q_add(&ww_ctx->wake_q, owner);

return true;
}
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:37:47

by John Stultz

Subject: [PATCH v2 02/12] locking/mutex: Rework task_struct::blocked_on

From: Peter Zijlstra <[email protected]>

Track the blocked-on relation for mutexes; this allows following the
relation at schedule time.

task
| blocked-on
v
mutex
| owner
v
task
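
At schedule time that relation can then be walked, roughly like the
following (an illustrative sketch only: it ignores locking and the many
corner cases handled later in the series, the helper name is made up,
and mutex_owner() is only exposed in patch 05):

#include <linux/mutex.h>
#include <linux/sched.h>

/* Illustrative only: follow blocked_on -> owner until we reach a task
 * that isn't blocked. A real walk also has to worry about cycles,
 * sleeping owners, tasks migrating between CPUs, etc.
 */
static struct task_struct *walk_blocked_on(struct task_struct *p)
{
	while (p->blocked_on)
		p = mutex_owner(p->blocked_on);

	return p;
}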

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[minor changes while rebasing]
Signed-off-by: Juri Lelli <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Fix blocked_on tracking in __mutex_lock_common in error paths]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Fixed blocked_on tracking in error paths that was causing crashes
---
include/linux/sched.h | 6 ++----
kernel/fork.c | 4 ++--
kernel/locking/mutex-debug.c | 9 +++++----
kernel/locking/mutex.c | 7 +++++++
4 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 63d242164b1a..9924d1926bc3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1139,10 +1139,8 @@ struct task_struct {
struct rt_mutex_waiter *pi_blocked_on;
#endif

-#ifdef CONFIG_DEBUG_MUTEXES
- /* Mutex deadlock detection: */
- struct mutex_waiter *blocked_on;
-#endif
+ struct task_struct *blocked_proxy; /* task that is boosting us */
+ struct mutex *blocked_on; /* lock we're blocked on */

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
int non_block_count;
diff --git a/kernel/fork.c b/kernel/fork.c
index d8cda4c6de6c..ad27fa09fe70 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2223,9 +2223,9 @@ static __latent_entropy struct task_struct *copy_process(
lockdep_init_task(p);
#endif

-#ifdef CONFIG_DEBUG_MUTEXES
+ p->blocked_proxy = NULL; /* nobody is boosting us yet */
p->blocked_on = NULL; /* not blocked yet */
-#endif
+
#ifdef CONFIG_BCACHE
p->sequential_io = 0;
p->sequential_io_avg = 0;
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index bc8abb8549d2..7228909c3e62 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -52,17 +52,18 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
{
lockdep_assert_held(&lock->wait_lock);

- /* Mark the current thread as blocked on the lock: */
- task->blocked_on = waiter;
+ /* Current thread can't be already blocked (since it's executing!) */
+ DEBUG_LOCKS_WARN_ON(task->blocked_on);
}

void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct task_struct *task)
{
+ struct mutex *blocked_on = READ_ONCE(task->blocked_on);
+
DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
DEBUG_LOCKS_WARN_ON(waiter->task != task);
- DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
- task->blocked_on = NULL;
+ DEBUG_LOCKS_WARN_ON(blocked_on && blocked_on != lock);

INIT_LIST_HEAD(&waiter->list);
waiter->task = NULL;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 1582756914df..f5296aa82255 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -645,6 +645,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err_early_kill;
}

+ current->blocked_on = lock;
set_current_state(state);
trace_contention_begin(lock, LCB_F_MUTEX);
for (;;) {
@@ -682,6 +683,10 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas

first = __mutex_waiter_is_first(lock, &waiter);

+ /*
+ * Gets reset by ttwu_runnable().
+ */
+ current->blocked_on = lock;
set_current_state(state);
/*
* Here we order against unlock; we must either see it change
@@ -719,6 +724,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
debug_mutex_free_waiter(&waiter);

skip_wait:
+ current->blocked_on = NULL;
/* got the lock - cleanup and rejoice! */
lock_acquired(&lock->dep_map, ip);
trace_contention_end(lock, 0);
@@ -733,6 +739,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
return 0;

err:
+ current->blocked_on = NULL;
__set_current_state(TASK_RUNNING);
__mutex_remove_waiter(lock, &waiter);
err_early_kill:
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:37:51

by John Stultz

Subject: [PATCH v2 03/12] locking/mutex: Add task_struct::blocked_lock to serialize changes to the blocked_on state

From: Peter Zijlstra <[email protected]>

This patch was split out from the later "sched: Add proxy
execution" patch.

Adds blocked_lock to the task_struct so we can safely keep track
of which tasks are blocked on us.

This will be used for tracking blocked-task/mutex chains with
the proxy-execution patch in a similar fashion to how priority
inheritance is done with rt_mutexes.
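
The locking this establishes in __mutex_lock_common() then looks
roughly like the following (an illustration of the ordering only, with
a made-up function name; see the diff below for the real code):

#include <linux/mutex.h>
#include <linux/sched.h>

/* Illustrative only: blocked_on is modified with the task's
 * blocked_lock held, and blocked_lock nests inside mutex::wait_lock.
 */
static void note_blocked_on(struct mutex *lock)
{
	raw_spin_lock(&lock->wait_lock);
	raw_spin_lock(&current->blocked_lock);

	current->blocked_on = lock;

	raw_spin_unlock(&current->blocked_lock);
	raw_spin_unlock(&lock->wait_lock);
}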

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[rebased, added comments and changelog]
Signed-off-by: Juri Lelli <[email protected]>
[Fixed rebase conflicts]
[squashed sched: Ensure blocked_on is always guarded by blocked_lock]
Signed-off-by: Valentin Schneider <[email protected]>
[fix rebase conflicts, various fixes & tweaks commented inline]
[squashed sched: Use rq->curr vs rq->proxy checks]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Split out from bigger patch]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Split out into its own patch

TODO: Still need to clarify some of the locking changes here
---
include/linux/sched.h | 1 +
init/init_task.c | 1 +
kernel/fork.c | 1 +
kernel/locking/mutex.c | 27 +++++++++++++++++++++++----
4 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9924d1926bc3..031615b5dc2a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1141,6 +1141,7 @@ struct task_struct {

struct task_struct *blocked_proxy; /* task that is boosting us */
struct mutex *blocked_on; /* lock we're blocked on */
+ raw_spinlock_t blocked_lock;

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
int non_block_count;
diff --git a/init/init_task.c b/init/init_task.c
index ff6c4b9bfe6b..189ce67e9704 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -130,6 +130,7 @@ struct task_struct init_task
.journal_info = NULL,
INIT_CPU_TIMERS(init_task)
.pi_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock),
+ .blocked_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.blocked_lock),
.timer_slack_ns = 50000, /* 50 usec default slack */
.thread_pid = &init_struct_pid,
.thread_group = LIST_HEAD_INIT(init_task.thread_group),
diff --git a/kernel/fork.c b/kernel/fork.c
index ad27fa09fe70..95410333332f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2121,6 +2121,7 @@ static __latent_entropy struct task_struct *copy_process(
ftrace_graph_init_task(p);

rt_mutex_init_task(p);
+ raw_spin_lock_init(&p->blocked_lock);

lockdep_assert_irqs_enabled();
#ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index f5296aa82255..2f31ebb08b4a 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -615,6 +615,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
}

raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock(&current->blocked_lock);
/*
* After waiting to acquire the wait_lock, try again.
*/
@@ -676,6 +677,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err;
}

+ raw_spin_unlock(&current->blocked_lock);
raw_spin_unlock(&lock->wait_lock);
if (ww_ctx)
ww_ctx_wake(ww_ctx);
@@ -683,6 +685,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas

first = __mutex_waiter_is_first(lock, &waiter);

+ raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock(&current->blocked_lock);
/*
* Gets reset by ttwu_runnable().
*/
@@ -697,15 +701,28 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
break;

if (first) {
+ bool acquired;
+
+ /*
+ * XXX connoro: mutex_optimistic_spin() can schedule, so
+ * we need to release these locks before calling it.
+ * This needs refactoring though b/c currently we take
+ * the locks earlier than necessary when proxy exec is
+ * disabled and release them unnecessarily when it's
+ * enabled. At a minimum, need to verify that releasing
+ * blocked_lock here doesn't create any races.
+ */
+ raw_spin_unlock(&current->blocked_lock);
+ raw_spin_unlock(&lock->wait_lock);
trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
- if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
+ acquired = mutex_optimistic_spin(lock, ww_ctx, &waiter);
+ raw_spin_lock(&lock->wait_lock);
+ raw_spin_lock(&current->blocked_lock);
+ if (acquired)
break;
trace_contention_begin(lock, LCB_F_MUTEX);
}
-
- raw_spin_lock(&lock->wait_lock);
}
- raw_spin_lock(&lock->wait_lock);
acquired:
__set_current_state(TASK_RUNNING);

@@ -732,6 +749,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
if (ww_ctx)
ww_mutex_lock_acquired(ww, ww_ctx);

+ raw_spin_unlock(&current->blocked_lock);
raw_spin_unlock(&lock->wait_lock);
if (ww_ctx)
ww_ctx_wake(ww_ctx);
@@ -744,6 +762,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
__mutex_remove_waiter(lock, &waiter);
err_early_kill:
trace_contention_end(lock, ret);
+ raw_spin_unlock(&current->blocked_lock);
raw_spin_unlock(&lock->wait_lock);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, ip);
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:37:56

by John Stultz

Subject: [PATCH v2 04/12] locking/mutex: Add p->blocked_on wrappers

From: Valentin Schneider <[email protected]>

This lets us assert p->blocked_lock is held whenever we access
p->blocked_on.

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Valentin Schneider <[email protected]>
[fix conflicts, call in more places]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: tweaked commit subject, added get_task_blocked_on() as well]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Added get_task_blocked_on() accessor
---
include/linux/sched.h | 14 ++++++++++++++
kernel/locking/mutex-debug.c | 4 ++--
kernel/locking/mutex.c | 8 ++++----
3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 031615b5dc2a..a1606d0bd3fe 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2223,6 +2223,20 @@ static inline int rwlock_needbreak(rwlock_t *lock)
#endif
}

+static inline void set_task_blocked_on(struct task_struct *p, struct mutex *m)
+{
+ lockdep_assert_held(&p->blocked_lock);
+
+ p->blocked_on = m;
+}
+
+static inline struct mutex *get_task_blocked_on(struct task_struct *p)
+{
+ lockdep_assert_held(&p->blocked_lock);
+
+ return p->blocked_on;
+}
+
static __always_inline bool need_resched(void)
{
return unlikely(tif_need_resched());
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 7228909c3e62..e3cd64ae6ea4 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -53,13 +53,13 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
lockdep_assert_held(&lock->wait_lock);

/* Current thread can't be already blocked (since it's executing!) */
- DEBUG_LOCKS_WARN_ON(task->blocked_on);
+ DEBUG_LOCKS_WARN_ON(get_task_blocked_on(task));
}

void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct task_struct *task)
{
- struct mutex *blocked_on = READ_ONCE(task->blocked_on);
+ struct mutex *blocked_on = get_task_blocked_on(task); /*XXX jstultz: dropped READ_ONCE here, revisit.*/

DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
DEBUG_LOCKS_WARN_ON(waiter->task != task);
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2f31ebb08b4a..d322f7c1c8fa 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -646,7 +646,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err_early_kill;
}

- current->blocked_on = lock;
+ set_task_blocked_on(current, lock);
set_current_state(state);
trace_contention_begin(lock, LCB_F_MUTEX);
for (;;) {
@@ -690,7 +690,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
/*
* Gets reset by ttwu_runnable().
*/
- current->blocked_on = lock;
+ set_task_blocked_on(current, lock);
set_current_state(state);
/*
* Here we order against unlock; we must either see it change
@@ -741,7 +741,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
debug_mutex_free_waiter(&waiter);

skip_wait:
- current->blocked_on = NULL;
+ set_task_blocked_on(current, NULL);
/* got the lock - cleanup and rejoice! */
lock_acquired(&lock->dep_map, ip);
trace_contention_end(lock, 0);
@@ -757,7 +757,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
return 0;

err:
- current->blocked_on = NULL;
+ set_task_blocked_on(current, NULL);
__set_current_state(TASK_RUNNING);
__mutex_remove_waiter(lock, &waiter);
err_early_kill:
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:37:59

by John Stultz

Subject: [PATCH v2 05/12] locking/mutex: Expose mutex_owner()

From: Juri Lelli <[email protected]>

Implementing proxy execution requires that scheduler code be able to
identify the current owner of a mutex. Expose a new helper
mutex_owner() for this purpose.

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Juri Lelli <[email protected]>
[Removed the EXPORT_SYMBOL]
Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Tweaked subject line]
Signed-off-by: John Stultz <[email protected]>
---
include/linux/mutex.h | 2 ++
kernel/locking/mutex.c | 5 +++++
2 files changed, 7 insertions(+)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 8f226d460f51..ebdc59cb0bf6 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -118,6 +118,8 @@ do { \
extern void __mutex_init(struct mutex *lock, const char *name,
struct lock_class_key *key);

+extern struct task_struct *mutex_owner(struct mutex *lock);
+
/**
* mutex_is_locked - is the mutex locked
* @lock: the mutex to be queried
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d322f7c1c8fa..ead4213232cc 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -81,6 +81,11 @@ static inline struct task_struct *__mutex_owner(struct mutex *lock)
return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLAGS);
}

+struct task_struct *mutex_owner(struct mutex *lock)
+{
+ return __mutex_owner(lock);
+}
+
static inline struct task_struct *__owner_task(unsigned long owner)
{
return (struct task_struct *)(owner & ~MUTEX_FLAGS);
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:38:03

by John Stultz

Subject: [PATCH v2 06/12] sched: Unify runtime accounting across classes

From: Peter Zijlstra <[email protected]>

All classes use sched_entity::exec_start to track runtime and have
copies of the exact same code around to compute runtime.

Collapse all that.

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[fix conflicts, fold in update_current_exec_runtime]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: rebased, resolving minor conflicts]
Signed-off-by: John Stultz <[email protected]>
---
include/linux/sched.h | 2 +-
kernel/sched/deadline.c | 13 +++-------
kernel/sched/fair.c | 56 ++++++++++++++++++++++++++++++----------
kernel/sched/rt.c | 13 +++-------
kernel/sched/sched.h | 12 ++-------
kernel/sched/stop_task.c | 13 +---------
6 files changed, 52 insertions(+), 57 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index a1606d0bd3fe..fc75fcc696db 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -519,7 +519,7 @@ struct sched_statistics {
u64 block_max;
s64 sum_block_runtime;

- u64 exec_max;
+ s64 exec_max;
u64 slice_max;

u64 nr_migrations_cold;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 71b24371a6f7..5a7c4edd5b13 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1308,9 +1308,8 @@ static void update_curr_dl(struct rq *rq)
{
struct task_struct *curr = rq->curr;
struct sched_dl_entity *dl_se = &curr->dl;
- u64 delta_exec, scaled_delta_exec;
+ s64 delta_exec, scaled_delta_exec;
int cpu = cpu_of(rq);
- u64 now;

if (!dl_task(curr) || !on_dl_rq(dl_se))
return;
@@ -1323,21 +1322,15 @@ static void update_curr_dl(struct rq *rq)
* natural solution, but the full ramifications of this
* approach need further study.
*/
- now = rq_clock_task(rq);
- delta_exec = now - curr->se.exec_start;
- if (unlikely((s64)delta_exec <= 0)) {
+ delta_exec = update_curr_common(rq);
+ if (unlikely(delta_exec <= 0)) {
if (unlikely(dl_se->dl_yielded))
goto throttle;
return;
}

- schedstat_set(curr->stats.exec_max,
- max(curr->stats.exec_max, delta_exec));
-
trace_sched_stat_runtime(curr, delta_exec, 0);

- update_current_exec_runtime(curr, now, delta_exec);
-
if (dl_entity_is_special(dl_se))
return;

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7a1b1f855b96..03e61be5c94f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -891,23 +891,17 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq)
}
#endif /* CONFIG_SMP */

-/*
- * Update the current task's runtime statistics.
- */
-static void update_curr(struct cfs_rq *cfs_rq)
+static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
{
- struct sched_entity *curr = cfs_rq->curr;
- u64 now = rq_clock_task(rq_of(cfs_rq));
- u64 delta_exec;
-
- if (unlikely(!curr))
- return;
+ u64 now = rq_clock_task(rq);
+ s64 delta_exec;

delta_exec = now - curr->exec_start;
- if (unlikely((s64)delta_exec <= 0))
- return;
+ if (unlikely(delta_exec <= 0))
+ return delta_exec;

curr->exec_start = now;
+ curr->sum_exec_runtime += delta_exec;

if (schedstat_enabled()) {
struct sched_statistics *stats;
@@ -917,9 +911,43 @@ static void update_curr(struct cfs_rq *cfs_rq)
max(delta_exec, stats->exec_max));
}

- curr->sum_exec_runtime += delta_exec;
- schedstat_add(cfs_rq->exec_clock, delta_exec);
+ return delta_exec;
+}
+
+/*
+ * Used by other classes to account runtime.
+ */
+s64 update_curr_common(struct rq *rq)
+{
+ struct task_struct *curr = rq->curr;
+ s64 delta_exec;

+ delta_exec = update_curr_se(rq, &curr->se);
+ if (unlikely(delta_exec <= 0))
+ return delta_exec;
+
+ account_group_exec_runtime(curr, delta_exec);
+ cgroup_account_cputime(curr, delta_exec);
+
+ return delta_exec;
+}
+
+/*
+ * Update the current task's runtime statistics.
+ */
+static void update_curr(struct cfs_rq *cfs_rq)
+{
+ struct sched_entity *curr = cfs_rq->curr;
+ s64 delta_exec;
+
+ if (unlikely(!curr))
+ return;
+
+ delta_exec = update_curr_se(rq_of(cfs_rq), curr);
+ if (unlikely(delta_exec <= 0))
+ return;
+
+ schedstat_add(cfs_rq->exec_clock, delta_exec);
curr->vruntime += calc_delta_fair(delta_exec, curr);
update_min_vruntime(cfs_rq);

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 0a11f44adee5..18eb6ce60c5c 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1046,24 +1046,17 @@ static void update_curr_rt(struct rq *rq)
{
struct task_struct *curr = rq->curr;
struct sched_rt_entity *rt_se = &curr->rt;
- u64 delta_exec;
- u64 now;
+ s64 delta_exec;

if (curr->sched_class != &rt_sched_class)
return;

- now = rq_clock_task(rq);
- delta_exec = now - curr->se.exec_start;
- if (unlikely((s64)delta_exec <= 0))
+ delta_exec = update_curr_common(rq);
+ if (unlikely(delta_exec < 0))
return;

- schedstat_set(curr->stats.exec_max,
- max(curr->stats.exec_max, delta_exec));
-
trace_sched_stat_runtime(curr, delta_exec, 0);

- update_current_exec_runtime(curr, now, delta_exec);
-
if (!rt_bandwidth_enabled())
return;

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3e8df6d31c1e..d18e3c3a3f40 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2166,6 +2166,8 @@ struct affinity_context {
unsigned int flags;
};

+extern s64 update_curr_common(struct rq *rq);
+
struct sched_class {

#ifdef CONFIG_UCLAMP_TASK
@@ -3238,16 +3240,6 @@ extern int sched_dynamic_mode(const char *str);
extern void sched_dynamic_update(int mode);
#endif

-static inline void update_current_exec_runtime(struct task_struct *curr,
- u64 now, u64 delta_exec)
-{
- curr->se.sum_exec_runtime += delta_exec;
- account_group_exec_runtime(curr, delta_exec);
-
- curr->se.exec_start = now;
- cgroup_account_cputime(curr, delta_exec);
-}
-
#ifdef CONFIG_SCHED_MM_CID
static inline int __mm_cid_get(struct mm_struct *mm)
{
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 85590599b4d6..7595494ceb6d 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -70,18 +70,7 @@ static void yield_task_stop(struct rq *rq)

static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
{
- struct task_struct *curr = rq->curr;
- u64 now, delta_exec;
-
- now = rq_clock_task(rq);
- delta_exec = now - curr->se.exec_start;
- if (unlikely((s64)delta_exec < 0))
- delta_exec = 0;
-
- schedstat_set(curr->stats.exec_max,
- max(curr->stats.exec_max, delta_exec));
-
- update_current_exec_runtime(curr, now, delta_exec);
+ update_curr_common(rq);
}

/*
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:38:15

by John Stultz

Subject: [PATCH v2 07/12] sched: Replace rq->curr access w/ rq_curr(rq)

In preparation for the proxy-execution changes, add a bit of
indirection for reading and writing rq->curr.
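
The accessors are roughly of the following shape (a sketch of intent
only, inferred from how they are used below; the actual definitions
live in the kernel/sched/sched.h hunk of this patch):

/* Sketch only: read/write helpers wrapping rq->curr, so the
 * !CONFIG_PROXY_EXEC case can later collapse to plain rq->curr.
 */
static inline struct task_struct *rq_curr(struct rq *rq)
{
	return rq->curr;
}

static inline void rq_set_curr(struct rq *rq, struct task_struct *task)
{
	rcu_assign_pointer(rq->curr, task);
}

/* Used in __schedule(), which documents its own RCU ordering: */
static inline void rq_set_curr_rcu_init(struct rq *rq, struct task_struct *task)
{
	RCU_INIT_POINTER(rq->curr, task);
}

/* Unlocked (RCU) read of rq->curr: */
static inline struct task_struct *rq_curr_unlocked(struct rq *rq)
{
	return rcu_dereference(rq->curr);
}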

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: John Stultz <[email protected]>
---
kernel/sched/core.c | 56 ++++++++++++++++++++-------------------
kernel/sched/core_sched.c | 2 +-
kernel/sched/deadline.c | 50 +++++++++++++++++-----------------
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 25 ++++++++---------
kernel/sched/membarrier.c | 8 +++---
kernel/sched/pelt.h | 2 +-
kernel/sched/rt.c | 44 +++++++++++++++---------------
kernel/sched/sched.h | 42 ++++++++++++++++++++++++-----
9 files changed, 132 insertions(+), 99 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 488655f2319f..faaad249f8f7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -257,7 +257,7 @@ void sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags)
* and re-examine whether the core is still in forced idle state.
*/
if (!(flags & DEQUEUE_SAVE) && rq->nr_running == 1 &&
- rq->core->core_forceidle_count && rq->curr == rq->idle)
+ rq->core->core_forceidle_count && rq_curr(rq) == rq->idle)
resched_curr(rq);
}

@@ -703,7 +703,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)

rq->prev_irq_time += irq_delta;
delta -= irq_delta;
- psi_account_irqtime(rq->curr, irq_delta);
+ psi_account_irqtime(rq_curr(rq), irq_delta);
#endif
#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
if (static_key_false((&paravirt_steal_rq_enabled))) {
@@ -773,7 +773,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)

rq_lock(rq, &rf);
update_rq_clock(rq);
- rq->curr->sched_class->task_tick(rq, rq->curr, 1);
+ rq_curr(rq)->sched_class->task_tick(rq, rq_curr(rq), 1);
rq_unlock(rq, &rf);

return HRTIMER_NORESTART;
@@ -1020,7 +1020,7 @@ void wake_up_q(struct wake_q_head *head)
*/
void resched_curr(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);
int cpu;

lockdep_assert_rq_held(rq);
@@ -2175,16 +2175,18 @@ static inline void check_class_changed(struct rq *rq, struct task_struct *p,

void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
{
- if (p->sched_class == rq->curr->sched_class)
- rq->curr->sched_class->check_preempt_curr(rq, p, flags);
- else if (sched_class_above(p->sched_class, rq->curr->sched_class))
+ struct task_struct *curr = rq_curr(rq);
+
+ if (p->sched_class == curr->sched_class)
+ curr->sched_class->check_preempt_curr(rq, p, flags);
+ else if (sched_class_above(p->sched_class, curr->sched_class))
resched_curr(rq);

/*
* A queue event has occurred, and we're going to schedule. In
* this case, we can save a useless back to back clock update.
*/
- if (task_on_rq_queued(rq->curr) && test_tsk_need_resched(rq->curr))
+ if (task_on_rq_queued(curr) && test_tsk_need_resched(curr))
rq_clock_skip_update(rq);
}

@@ -3859,11 +3861,11 @@ void wake_up_if_idle(int cpu)

rcu_read_lock();

- if (!is_idle_task(rcu_dereference(rq->curr)))
+ if (!is_idle_task(rq_curr_unlocked(rq)))
goto out;

rq_lock_irqsave(rq, &rf);
- if (is_idle_task(rq->curr))
+ if (is_idle_task(rq_curr(rq)))
resched_curr(rq);
/* Else CPU is not idle, do nothing here: */
rq_unlock_irqrestore(rq, &rf);
@@ -4388,7 +4390,7 @@ struct task_struct *cpu_curr_snapshot(int cpu)
struct task_struct *t;

smp_mb(); /* Pairing determined by caller's synchronization design. */
- t = rcu_dereference(cpu_curr(cpu));
+ t = cpu_curr_unlocked(cpu);
smp_mb(); /* Pairing determined by caller's synchronization design. */
return t;
}
@@ -5197,7 +5199,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
* kernel thread and not issued an IPI. It is therefore possible to
* schedule between user->kernel->user threads without passing though
* switch_mm(). Membarrier requires a barrier after storing to
- * rq->curr, before returning to userspace, so provide them here:
+ * rq_curr(rq), before returning to userspace, so provide them here:
*
* - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
* provided by mmdrop(),
@@ -5280,7 +5282,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
membarrier_switch_mm(rq, prev->active_mm, next->mm);
/*
* sys_membarrier() requires an smp_mb() between setting
- * rq->curr / membarrier_switch_mm() and returning to userspace.
+ * rq_curr(rq) / membarrier_switch_mm() and returning to userspace.
*
* The below provides this either through switch_mm(), or in
* case 'prev->active_mm == next->mm' through
@@ -5564,7 +5566,7 @@ void scheduler_tick(void)
{
int cpu = smp_processor_id();
struct rq *rq = cpu_rq(cpu);
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);
struct rq_flags rf;
unsigned long thermal_pressure;
u64 resched_latency;
@@ -5657,7 +5659,7 @@ static void sched_tick_remote(struct work_struct *work)
goto out_requeue;

rq_lock_irq(rq, &rf);
- curr = rq->curr;
+ curr = rq_curr(rq);
if (cpu_is_offline(cpu))
goto out_unlock;

@@ -6201,7 +6203,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
/* Did we break L1TF mitigation requirements? */
WARN_ON_ONCE(!cookie_match(next, rq_i->core_pick));

- if (rq_i->curr == rq_i->core_pick) {
+ if (rq_curr(rq_i) == rq_i->core_pick) {
rq_i->core_pick = NULL;
continue;
}
@@ -6232,7 +6234,7 @@ static bool try_steal_cookie(int this, int that)
if (!cookie)
goto unlock;

- if (dst->curr != dst->idle)
+ if (rq_curr(dst) != dst->idle)
goto unlock;

p = sched_core_find(src, cookie);
@@ -6240,7 +6242,7 @@ static bool try_steal_cookie(int this, int that)
goto unlock;

do {
- if (p == src->core_pick || p == src->curr)
+ if (p == src->core_pick || p == rq_curr(src))
goto next;

if (!is_cpu_allowed(p, this))
@@ -6511,7 +6513,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)

cpu = smp_processor_id();
rq = cpu_rq(cpu);
- prev = rq->curr;
+ prev = rq_curr(rq);

schedule_debug(prev, !!sched_mode);

@@ -6534,7 +6536,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
* if (signal_pending_state()) if (p->state & @state)
*
* Also, the membarrier system call requires a full memory barrier
- * after coming from user-space, before storing to rq->curr.
+ * after coming from user-space, before storing to rq_curr().
*/
rq_lock(rq, &rf);
smp_mb__after_spinlock();
@@ -6593,14 +6595,14 @@ static void __sched notrace __schedule(unsigned int sched_mode)
if (likely(prev != next)) {
rq->nr_switches++;
/*
- * RCU users of rcu_dereference(rq->curr) may not see
+ * RCU users of rcu_dereference(rq_curr(rq)) may not see
* changes to task_struct made by pick_next_task().
*/
- RCU_INIT_POINTER(rq->curr, next);
+ rq_set_curr_rcu_init(rq, next);
/*
* The membarrier system call requires each architecture
* to have a full memory barrier after updating
- * rq->curr, before returning to user-space.
+ * rq_curr(rq), before returning to user-space.
*
* Here are the schemes providing that barrier on the
* various architectures:
@@ -7037,7 +7039,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
* real need to boost.
*/
if (unlikely(p == rq->idle)) {
- WARN_ON(p != rq->curr);
+ WARN_ON(p != rq_curr(rq));
WARN_ON(p->pi_blocked_on);
goto out_unlock;
}
@@ -7253,7 +7255,7 @@ int idle_cpu(int cpu)
{
struct rq *rq = cpu_rq(cpu);

- if (rq->curr != rq->idle)
+ if (rq_curr(rq) != rq->idle)
return 0;

if (rq->nr_running)
@@ -9154,7 +9156,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
rcu_read_unlock();

rq->idle = idle;
- rcu_assign_pointer(rq->curr, idle);
+ rq_set_curr(rq, idle);
idle->on_rq = TASK_ON_RQ_QUEUED;
#ifdef CONFIG_SMP
idle->on_cpu = 1;
@@ -9328,7 +9330,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
*/
static void balance_push(struct rq *rq)
{
- struct task_struct *push_task = rq->curr;
+ struct task_struct *push_task = rq_curr(rq);

lockdep_assert_rq_held(rq);

diff --git a/kernel/sched/core_sched.c b/kernel/sched/core_sched.c
index a57fd8f27498..ece2157a265d 100644
--- a/kernel/sched/core_sched.c
+++ b/kernel/sched/core_sched.c
@@ -273,7 +273,7 @@ void __sched_core_account_forceidle(struct rq *rq)

for_each_cpu(i, smt_mask) {
rq_i = cpu_rq(i);
- p = rq_i->core_pick ?: rq_i->curr;
+ p = rq_i->core_pick ?: rq_curr(rq_i);

if (p == rq_i->idle)
continue;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5a7c4edd5b13..4e3acc76708f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1179,7 +1179,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
#endif

enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
- if (dl_task(rq->curr))
+ if (dl_task(rq_curr(rq)))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
@@ -1306,7 +1306,7 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
*/
static void update_curr_dl(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);
struct sched_dl_entity *dl_se = &curr->dl;
s64 delta_exec, scaled_delta_exec;
int cpu = cpu_of(rq);
@@ -1792,7 +1792,7 @@ static void yield_task_dl(struct rq *rq)
* it and the bandwidth timer will wake it up and will give it
* new scheduling parameters (thanks to dl_yielded=1).
*/
- rq->curr->dl.dl_yielded = 1;
+ rq_curr(rq)->dl.dl_yielded = 1;

update_rq_clock(rq);
update_curr_dl(rq);
@@ -1829,7 +1829,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
rq = cpu_rq(cpu);

rcu_read_lock();
- curr = READ_ONCE(rq->curr); /* unlocked access */
+ curr = rq_curr_unlocked(rq); /* XXX jstultz: using rcu_dereference instead of READ_ONCE */

/*
* If we are dealing with a -deadline task, we must
@@ -1904,8 +1904,8 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
* Current can't be migrated, useless to reschedule,
* let's hope p can move out.
*/
- if (rq->curr->nr_cpus_allowed == 1 ||
- !cpudl_find(&rq->rd->cpudl, rq->curr, NULL))
+ if (rq_curr(rq)->nr_cpus_allowed == 1 ||
+ !cpudl_find(&rq->rd->cpudl, rq_curr(rq), NULL))
return;

/*
@@ -1944,7 +1944,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
int flags)
{
- if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
+ if (dl_entity_preempt(&p->dl, &rq_curr(rq)->dl)) {
resched_curr(rq);
return;
}
@@ -1954,8 +1954,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
* In the unlikely case current and p have the same deadline
* let us try to decide what's the best thing to do...
*/
- if ((p->dl.deadline == rq->curr->dl.deadline) &&
- !test_tsk_need_resched(rq->curr))
+ if ((p->dl.deadline == rq_curr(rq)->dl.deadline) &&
+ !test_tsk_need_resched(rq_curr(rq)))
check_preempt_equal_dl(rq, p);
#endif /* CONFIG_SMP */
}
@@ -1989,7 +1989,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
if (hrtick_enabled_dl(rq))
start_hrtick_dl(rq, p);

- if (rq->curr->sched_class != &dl_sched_class)
+ if (rq_curr(rq)->sched_class != &dl_sched_class)
update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);

deadline_queue_push_tasks(rq);
@@ -2301,13 +2301,13 @@ static int push_dl_task(struct rq *rq)

retry:
/*
- * If next_task preempts rq->curr, and rq->curr
+ * If next_task preempts rq_curr(rq), and rq_curr(rq)
* can move away, it makes sense to just reschedule
* without going further in pushing next_task.
*/
- if (dl_task(rq->curr) &&
- dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
- rq->curr->nr_cpus_allowed > 1) {
+ if (dl_task(rq_curr(rq)) &&
+ dl_time_before(next_task->dl.deadline, rq_curr(rq)->dl.deadline) &&
+ rq_curr(rq)->nr_cpus_allowed > 1) {
resched_curr(rq);
return 0;
}
@@ -2315,7 +2315,7 @@ static int push_dl_task(struct rq *rq)
if (is_migration_disabled(next_task))
return 0;

- if (WARN_ON(next_task == rq->curr))
+ if (WARN_ON(next_task == rq_curr(rq)))
return 0;

/* We might release rq lock */
@@ -2423,7 +2423,7 @@ static void pull_dl_task(struct rq *this_rq)
*/
if (p && dl_time_before(p->dl.deadline, dmin) &&
dl_task_is_earliest_deadline(p, this_rq)) {
- WARN_ON(p == src_rq->curr);
+ WARN_ON(p == rq_curr(src_rq));
WARN_ON(!task_on_rq_queued(p));

/*
@@ -2431,7 +2431,7 @@ static void pull_dl_task(struct rq *this_rq)
* deadline than the current task of its runqueue.
*/
if (dl_time_before(p->dl.deadline,
- src_rq->curr->dl.deadline))
+ rq_curr(src_rq)->dl.deadline))
goto skip;

if (is_migration_disabled(p)) {
@@ -2468,11 +2468,11 @@ static void pull_dl_task(struct rq *this_rq)
static void task_woken_dl(struct rq *rq, struct task_struct *p)
{
if (!task_on_cpu(rq, p) &&
- !test_tsk_need_resched(rq->curr) &&
+ !test_tsk_need_resched(rq_curr(rq)) &&
p->nr_cpus_allowed > 1 &&
- dl_task(rq->curr) &&
- (rq->curr->nr_cpus_allowed < 2 ||
- !dl_entity_preempt(&p->dl, &rq->curr->dl))) {
+ dl_task(rq_curr(rq)) &&
+ (rq_curr(rq)->nr_cpus_allowed < 2 ||
+ !dl_entity_preempt(&p->dl, &rq_curr(rq)->dl))) {
push_dl_tasks(rq);
}
}
@@ -2635,12 +2635,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
return;
}

- if (rq->curr != p) {
+ if (rq_curr(rq) != p) {
#ifdef CONFIG_SMP
if (p->nr_cpus_allowed > 1 && rq->dl.overloaded)
deadline_queue_push_tasks(rq);
#endif
- if (dl_task(rq->curr))
+ if (dl_task(rq_curr(rq)))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
@@ -2684,8 +2684,8 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
*
* Otherwise, if p was given an earlier deadline, reschedule.
*/
- if (!dl_task(rq->curr) ||
- dl_time_before(p->dl.deadline, rq->curr->dl.deadline))
+ if (!dl_task(rq_curr(rq)) ||
+ dl_time_before(p->dl.deadline, rq_curr(rq)->dl.deadline))
resched_curr(rq);
}
#else
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 1637b65ba07a..55f57156502d 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -743,7 +743,7 @@ do { \
P(nr_switches);
P(nr_uninterruptible);
PN(next_balance);
- SEQ_printf(m, " .%-30s: %ld\n", "curr->pid", (long)(task_pid_nr(rq->curr)));
+ SEQ_printf(m, " .%-30s: %ld\n", "curr->pid", (long)(task_pid_nr(rq_curr(rq))));
PN(clock);
PN(clock_task);
#undef P
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 03e61be5c94f..8b35dfc0c442 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -919,7 +919,7 @@ static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
*/
s64 update_curr_common(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);
s64 delta_exec;

delta_exec = update_curr_se(rq, &curr->se);
@@ -964,7 +964,7 @@ static void update_curr(struct cfs_rq *cfs_rq)

static void update_curr_fair(struct rq *rq)
{
- update_curr(cfs_rq_of(&rq->curr->se));
+ update_curr(cfs_rq_of(&rq_curr(rq)->se));
}

static inline void
@@ -1958,7 +1958,7 @@ static bool task_numa_compare(struct task_numa_env *env,
return false;

rcu_read_lock();
- cur = rcu_dereference(dst_rq->curr);
+ cur = rcu_dereference(rq_curr(dst_rq));
if (cur && ((cur->flags & PF_EXITING) || is_idle_task(cur)))
cur = NULL;

@@ -2747,7 +2747,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags,
}

rcu_read_lock();
- tsk = READ_ONCE(cpu_rq(cpu)->curr);
+ tsk = READ_ONCE(cpu_curr(cpu));

if (!cpupid_match_pid(tsk, cpupid))
goto no_join;
@@ -3969,7 +3969,7 @@ static inline void migrate_se_pelt_lag(struct sched_entity *se)
rq = rq_of(cfs_rq);

rcu_read_lock();
- is_idle = is_idle_task(rcu_dereference(rq->curr));
+ is_idle = is_idle_task(rq_curr_unlocked(rq));
rcu_read_unlock();

/*
@@ -5498,7 +5498,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
assert_list_leaf_cfs_rq(rq);

/* Determine whether we need to wake up potentially idle CPU: */
- if (rq->curr == rq->idle && rq->cfs.nr_running)
+ if (rq_curr(rq) == rq->idle && rq->cfs.nr_running)
resched_curr(rq);
}

@@ -6148,7 +6148,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
*/
static void hrtick_update(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);

if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class)
return;
@@ -7788,7 +7788,7 @@ static void set_skip_buddy(struct sched_entity *se)
*/
static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);
struct sched_entity *se = &curr->se, *pse = &p->se;
struct cfs_rq *cfs_rq = task_cfs_rq(curr);
int scale = cfs_rq->nr_running >= sched_nr_latency;
@@ -8086,7 +8086,7 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
*/
static void yield_task_fair(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);
struct cfs_rq *cfs_rq = task_cfs_rq(curr);
struct sched_entity *se = &curr->se;

@@ -8821,7 +8821,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
* update_load_avg() can call cpufreq_update_util(). Make sure that RT,
* DL and IRQ signals have been updated before updating CFS.
*/
- curr_class = rq->curr->sched_class;
+ curr_class = rq_curr(rq)->sched_class;

thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));

@@ -9640,8 +9640,9 @@ static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
static int idle_cpu_without(int cpu, struct task_struct *p)
{
struct rq *rq = cpu_rq(cpu);
+ struct task_struct *curr = rq_curr(rq);

- if (rq->curr != rq->idle && rq->curr != p)
+ if (curr != rq->idle && curr != p)
return 0;

/*
@@ -10839,7 +10840,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
* if the curr task on busiest CPU can't be
* moved to this_cpu:
*/
- if (!cpumask_test_cpu(this_cpu, busiest->curr->cpus_ptr)) {
+ if (!cpumask_test_cpu(this_cpu, rq_curr(busiest)->cpus_ptr)) {
raw_spin_rq_unlock_irqrestore(busiest, flags);
goto out_one_pinned;
}
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 2ad881d07752..e55e39f74ea4 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -283,7 +283,7 @@ static int membarrier_global_expedited(void)
* Skip the CPU if it runs a kernel thread which is not using
* a task mm.
*/
- p = rcu_dereference(cpu_rq(cpu)->curr);
+ p = cpu_curr_unlocked(cpu);
if (!p->mm)
continue;

@@ -355,7 +355,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
if (cpu_id >= nr_cpu_ids || !cpu_online(cpu_id))
goto out;
rcu_read_lock();
- p = rcu_dereference(cpu_rq(cpu_id)->curr);
+ p = cpu_curr_unlocked(cpu_id);
if (!p || p->mm != mm) {
rcu_read_unlock();
goto out;
@@ -368,7 +368,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
for_each_online_cpu(cpu) {
struct task_struct *p;

- p = rcu_dereference(cpu_rq(cpu)->curr);
+ p = cpu_curr_unlocked(cpu);
if (p && p->mm == mm)
__cpumask_set_cpu(cpu, tmpmask);
}
@@ -466,7 +466,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
struct rq *rq = cpu_rq(cpu);
struct task_struct *p;

- p = rcu_dereference(rq->curr);
+ p = rq_curr_unlocked(rq);
if (p && p->mm == mm)
__cpumask_set_cpu(cpu, tmpmask);
}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 3a0e0dc28721..bf3276f8df78 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -94,7 +94,7 @@ static inline void _update_idle_rq_clock_pelt(struct rq *rq)
*/
static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
{
- if (unlikely(is_idle_task(rq->curr))) {
+ if (unlikely(is_idle_task(rq_curr(rq)))) {
_update_idle_rq_clock_pelt(rq);
return;
}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 18eb6ce60c5c..91c992230aba 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -574,7 +574,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)

static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
{
- struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
+ struct task_struct *curr = rq_curr(rq_of_rt_rq(rt_rq));
struct rq *rq = rq_of_rt_rq(rt_rq);
struct sched_rt_entity *rt_se;

@@ -958,7 +958,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
* and this unthrottle will get accounted as
* 'runtime'.
*/
- if (rt_rq->rt_nr_running && rq->curr == rq->idle)
+ if (rt_rq->rt_nr_running && rq_curr(rq) == rq->idle)
rq_clock_cancel_skipupdate(rq);
}
if (rt_rq->rt_time || rt_rq->rt_nr_running)
@@ -1044,7 +1044,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
*/
static void update_curr_rt(struct rq *rq)
{
- struct task_struct *curr = rq->curr;
+ struct task_struct *curr = rq_curr(rq);
struct sched_rt_entity *rt_se = &curr->rt;
s64 delta_exec;

@@ -1582,7 +1582,7 @@ static void requeue_task_rt(struct rq *rq, struct task_struct *p, int head)

static void yield_task_rt(struct rq *rq)
{
- requeue_task_rt(rq, rq->curr, 0);
+ requeue_task_rt(rq, rq_curr(rq), 0);
}

#ifdef CONFIG_SMP
@@ -1602,7 +1602,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
rq = cpu_rq(cpu);

rcu_read_lock();
- curr = READ_ONCE(rq->curr); /* unlocked access */
+ curr = rq_curr_unlocked(rq); /* XXX jstultz: using rcu_dereference instead of READ_ONCE */

/*
* If the current task on @p's runqueue is an RT task, then
@@ -1666,8 +1666,8 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
* Current can't be migrated, useless to reschedule,
* let's hope p can move out.
*/
- if (rq->curr->nr_cpus_allowed == 1 ||
- !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+ if (rq_curr(rq)->nr_cpus_allowed == 1 ||
+ !cpupri_find(&rq->rd->cpupri, rq_curr(rq), NULL))
return;

/*
@@ -1710,7 +1710,7 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
*/
static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
{
- if (p->prio < rq->curr->prio) {
+ if (p->prio < rq_curr(rq)->prio) {
resched_curr(rq);
return;
}
@@ -1728,7 +1728,7 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flag
* to move current somewhere else, making room for our non-migratable
* task.
*/
- if (p->prio == rq->curr->prio && !test_tsk_need_resched(rq->curr))
+ if (p->prio == rq_curr(rq)->prio && !test_tsk_need_resched(rq_curr(rq)))
check_preempt_equal_prio(rq, p);
#endif
}
@@ -1753,7 +1753,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
* utilization. We only care of the case where we start to schedule a
* rt task
*/
- if (rq->curr->sched_class != &rt_sched_class)
+ if (rq_curr(rq)->sched_class != &rt_sched_class)
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);

rt_queue_push_tasks(rq);
@@ -2062,7 +2062,7 @@ static int push_rt_task(struct rq *rq, bool pull)
* higher priority than current. If that's the case
* just reschedule current.
*/
- if (unlikely(next_task->prio < rq->curr->prio)) {
+ if (unlikely(next_task->prio < rq_curr(rq)->prio)) {
resched_curr(rq);
return 0;
}
@@ -2083,10 +2083,10 @@ static int push_rt_task(struct rq *rq, bool pull)
* Note that the stoppers are masqueraded as SCHED_FIFO
* (cf. sched_set_stop_task()), so we can't rely on rt_task().
*/
- if (rq->curr->sched_class != &rt_sched_class)
+ if (rq_curr(rq)->sched_class != &rt_sched_class)
return 0;

- cpu = find_lowest_rq(rq->curr);
+ cpu = find_lowest_rq(rq_curr(rq));
if (cpu == -1 || cpu == rq->cpu)
return 0;

@@ -2107,7 +2107,7 @@ static int push_rt_task(struct rq *rq, bool pull)
return 0;
}

- if (WARN_ON(next_task == rq->curr))
+ if (WARN_ON(next_task == rq_curr(rq)))
return 0;

/* We might release rq lock */
@@ -2404,7 +2404,7 @@ static void pull_rt_task(struct rq *this_rq)
* the to-be-scheduled task?
*/
if (p && (p->prio < this_rq->rt.highest_prio.curr)) {
- WARN_ON(p == src_rq->curr);
+ WARN_ON(p == rq_curr(src_rq));
WARN_ON(!task_on_rq_queued(p));

/*
@@ -2415,7 +2415,7 @@ static void pull_rt_task(struct rq *this_rq)
* p if it is lower in priority than the
* current task on the run queue
*/
- if (p->prio < src_rq->curr->prio)
+ if (p->prio < rq_curr(src_rq)->prio)
goto skip;

if (is_migration_disabled(p)) {
@@ -2455,11 +2455,11 @@ static void pull_rt_task(struct rq *this_rq)
static void task_woken_rt(struct rq *rq, struct task_struct *p)
{
bool need_to_push = !task_on_cpu(rq, p) &&
- !test_tsk_need_resched(rq->curr) &&
+ !test_tsk_need_resched(rq_curr(rq)) &&
p->nr_cpus_allowed > 1 &&
- (dl_task(rq->curr) || rt_task(rq->curr)) &&
- (rq->curr->nr_cpus_allowed < 2 ||
- rq->curr->prio <= p->prio);
+ (dl_task(rq_curr(rq)) || rt_task(rq_curr(rq))) &&
+ (rq_curr(rq)->nr_cpus_allowed < 2 ||
+ rq_curr(rq)->prio <= p->prio);

if (need_to_push)
push_rt_tasks(rq);
@@ -2543,7 +2543,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
rt_queue_push_tasks(rq);
#endif /* CONFIG_SMP */
- if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
+ if (p->prio < rq_curr(rq)->prio && cpu_online(cpu_of(rq)))
resched_curr(rq);
}
}
@@ -2584,7 +2584,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
* greater than the current running task
* then reschedule.
*/
- if (p->prio < rq->curr->prio)
+ if (p->prio < rq_curr(rq)->prio)
resched_curr(rq);
}
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d18e3c3a3f40..00f73ca4c618 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1008,7 +1008,7 @@ struct rq {
*/
unsigned int nr_uninterruptible;

- struct task_struct __rcu *curr;
+ struct task_struct __rcu *curr_exec;
struct task_struct *idle;
struct task_struct *stop;
unsigned long next_balance;
@@ -1201,12 +1201,42 @@ static inline bool is_migration_disabled(struct task_struct *p)

DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);

+static inline struct task_struct *rq_curr(struct rq *rq)
+{
+ return rq->curr_exec;
+}
+
+/* XXX jstultz: Would rq_curr_rcu() be a better name? */
+static inline struct task_struct *rq_curr_unlocked(struct rq *rq)
+{
+ return rcu_dereference(rq->curr_exec);
+}
+
+static inline void rq_set_curr(struct rq *rq, struct task_struct *task)
+{
+ rcu_assign_pointer(rq->curr_exec, task);
+}
+
+/*
+ * XXX jstultz: seems like rcu_assign_pointer above would also
+ * work for this, but trying to match usage.
+ */
+static inline void rq_set_curr_rcu_init(struct rq *rq, struct task_struct *task)
+{
+ RCU_INIT_POINTER(rq->curr_exec, task);
+}
+
#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
#define this_rq() this_cpu_ptr(&runqueues)
#define task_rq(p) cpu_rq(task_cpu(p))
-#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
+#define cpu_curr(cpu) (rq_curr(cpu_rq(cpu)))
#define raw_rq() raw_cpu_ptr(&runqueues)

+static inline struct task_struct *cpu_curr_unlocked(int cpu)
+{
+ return rq_curr_unlocked(cpu_rq(cpu));
+}
+
struct sched_group;
#ifdef CONFIG_SCHED_CORE
static inline struct cpumask *sched_group_span(struct sched_group *sg);
@@ -2070,7 +2100,7 @@ static inline u64 global_rt_runtime(void)

static inline int task_current(struct rq *rq, struct task_struct *p)
{
- return rq->curr == p;
+ return rq_curr(rq) == p;
}

static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
@@ -2230,7 +2260,7 @@ struct sched_class {

static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
{
- WARN_ON_ONCE(rq->curr != prev);
+ WARN_ON_ONCE(rq_curr(rq) != prev);
prev->sched_class->put_prev_task(rq, prev);
}

@@ -2311,7 +2341,7 @@ extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_conte

static inline struct task_struct *get_push_task(struct rq *rq)
{
- struct task_struct *p = rq->curr;
+ struct task_struct *p = rq_curr(rq);

lockdep_assert_rq_held(rq);

@@ -3193,7 +3223,7 @@ static inline bool sched_energy_enabled(void) { return false; }
* The scheduler provides memory barriers required by membarrier between:
* - prior user-space memory accesses and store to rq->membarrier_state,
* - store to rq->membarrier_state and following user-space memory accesses.
- * In the same way it provides those guarantees around store to rq->curr.
+ * In the same way it provides those guarantees around store to rq_curr(rq).
*/
static inline void membarrier_switch_mm(struct rq *rq,
struct mm_struct *prev_mm,
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:38:19

by John Stultz

[permalink] [raw]
Subject: [PATCH v2 09/12] sched: Unnest ttwu_runnable in prep for proxy-execution

Slightly rework ttwu_runnable() to reduce the nesting, which will make
the upcoming proxy-execution changes easier to read.

There should be no logical change here.

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: John Stultz <[email protected]>
---
kernel/sched/core.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ab6b1917bc70..82f5b29ae675 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3773,18 +3773,20 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
int ret = 0;

rq = __task_rq_lock(p, &rf);
- if (task_on_rq_queued(p)) {
- if (!task_on_cpu(rq, p)) {
- /*
- * When on_rq && !on_cpu the task is preempted, see if
- * it should preempt the task that is current now.
- */
- update_rq_clock(rq);
- check_preempt_curr(rq, p, wake_flags);
- }
- ttwu_do_wakeup(p);
- ret = 1;
+ if (!task_on_rq_queued(p))
+ goto out_unlock;
+
+ if (!task_on_cpu(rq, p)) {
+ /*
+ * When on_rq && !on_cpu the task is preempted, see if
+ * it should preempt the task that is current now.
+ */
+ update_rq_clock(rq);
+ check_preempt_curr(rq, p, wake_flags);
}
+ ttwu_do_wakeup(p);
+ ret = 1;
+out_unlock:
__task_rq_unlock(rq, &rf);

return ret;
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:38:27

by John Stultz

[permalink] [raw]
Subject: [PATCH v2 08/12] sched: Split scheduler execution context

From: Peter Zijlstra <[email protected]>

Let's define the scheduling context as all the scheduler state in
task_struct, and the execution context as all the state required to run
the task.

Currently both are intertwined in task_struct. We want to logically
split these such that we can run the execution context of one task
with the scheduling context of another.

To this end, introduce rq::proxy to point to the task_struct used for
scheduler state, and preserve rq::curr to denote the execution
context.

XXX connoro: some further work may be needed in RT/DL load balancing
paths to properly handle split context; see comments in code for
specifics
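
To make the split more concrete, here is a minimal user-space sketch
(not kernel code: the structures and helpers below are simplified
stand-ins for the rq_curr()/rq_proxy() accessors this patch adds). It
shows the execution context running while tick time is charged to the
scheduling context, which mirrors what update_curr_common() and
scheduler_tick() do after this change:

  #include <stdio.h>

  struct task {
          const char *name;
          long long runtime_ns;          /* accounted runtime */
  };

  struct rq {
          struct task *curr_exec;        /* execution context */
          struct task *curr_sched;       /* scheduling context */
  };

  static struct task *rq_curr(struct rq *rq)  { return rq->curr_exec; }
  static struct task *rq_proxy(struct rq *rq) { return rq->curr_sched; }

  /* A tick charges time to the scheduling context, not the running task */
  static void tick(struct rq *rq, long long delta_ns)
  {
          rq_proxy(rq)->runtime_ns += delta_ns;
  }

  int main(void)
  {
          struct task owner  = { "mutex-owner",    0 };
          struct task waiter = { "blocked-waiter", 0 };
          /* the waiter was picked to run, but the owner actually runs */
          struct rq rq = { .curr_exec = &owner, .curr_sched = &waiter };

          tick(&rq, 1000000);
          printf("running %s, charged %s: %lld ns\n",
                 rq_curr(&rq)->name, rq_proxy(&rq)->name,
                 rq_proxy(&rq)->runtime_ns);
          return 0;
  }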

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
[added lot of comments/questions - identifiable by XXX]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Juri Lelli <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
[add additional comments and update more sched_class code to use
rq::proxy]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Rebased and resolved minor collisions, reworked to use
accessors, tweaked update_curr_common to use rq_proxy fixing rt
scheduling issues]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Reworked to use accessors
* Fixed update_curr_common to use proxy instead of curr
---
kernel/sched/core.c | 41 +++++++++++++++++-------
kernel/sched/deadline.c | 36 +++++++++++----------
kernel/sched/fair.c | 22 +++++++------
kernel/sched/rt.c | 59 +++++++++++++++++++++++------------
kernel/sched/sched.h | 69 +++++++++++++++++++++++++++++++++++++++--
5 files changed, 168 insertions(+), 59 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index faaad249f8f7..ab6b1917bc70 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -773,7 +773,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)

rq_lock(rq, &rf);
update_rq_clock(rq);
- rq_curr(rq)->sched_class->task_tick(rq, rq_curr(rq), 1);
+ rq_proxy(rq)->sched_class->task_tick(rq, rq_proxy(rq), 1);
rq_unlock(rq, &rf);

return HRTIMER_NORESTART;
@@ -2175,7 +2175,7 @@ static inline void check_class_changed(struct rq *rq, struct task_struct *p,

void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
{
- struct task_struct *curr = rq_curr(rq);
+ struct task_struct *curr = rq_proxy(rq);

if (p->sched_class == curr->sched_class)
curr->sched_class->check_preempt_curr(rq, p, flags);
@@ -2186,7 +2186,7 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
* A queue event has occurred, and we're going to schedule. In
* this case, we can save a useless back to back clock update.
*/
- if (task_on_rq_queued(curr) && test_tsk_need_resched(curr))
+ if (task_on_rq_queued(curr) && test_tsk_need_resched(rq_curr(rq)))
rq_clock_skip_update(rq);
}

@@ -2576,7 +2576,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
lockdep_assert_held(&p->pi_lock);

queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ running = task_current_proxy(rq, p);

if (queued) {
/*
@@ -5498,7 +5498,7 @@ unsigned long long task_sched_runtime(struct task_struct *p)
* project cycles that may never be accounted to this
* thread, breaking clock_gettime().
*/
- if (task_current(rq, p) && task_on_rq_queued(p)) {
+ if (task_current_proxy(rq, p) && task_on_rq_queued(p)) {
prefetch_curr_exec_start(p);
update_rq_clock(rq);
p->sched_class->update_curr(rq);
@@ -5566,7 +5566,8 @@ void scheduler_tick(void)
{
int cpu = smp_processor_id();
struct rq *rq = cpu_rq(cpu);
- struct task_struct *curr = rq_curr(rq);
+ /* accounting goes to the proxy task */
+ struct task_struct *curr = rq_proxy(rq);
struct rq_flags rf;
unsigned long thermal_pressure;
u64 resched_latency;
@@ -5663,6 +5664,13 @@ static void sched_tick_remote(struct work_struct *work)
if (cpu_is_offline(cpu))
goto out_unlock;

+ /*
+ * XXX don't we need to account to rq->proxy?
+ * Maybe, since this is a remote tick for full dynticks mode, we are
+ * always sure that there is no proxy (only a single task is running).
+ */
+ SCHED_WARN_ON(rq_curr(rq) != rq_proxy(rq));
+
update_rq_clock(rq);

if (!is_idle_task(curr)) {
@@ -6586,6 +6594,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
}

next = pick_next_task(rq, prev, &rf);
+ rq_set_proxy(rq, next);
clear_tsk_need_resched(prev);
clear_preempt_need_resched();
#ifdef CONFIG_SCHED_DEBUG
@@ -7052,7 +7061,10 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)

prev_class = p->sched_class;
queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ /*
+ * XXX how do (proxy exec) mutexes and RT_mutexes work together?!
+ */
+ running = task_current_proxy(rq, p);
if (queued)
dequeue_task(rq, p, queue_flag);
if (running)
@@ -7140,7 +7152,10 @@ void set_user_nice(struct task_struct *p, long nice)
goto out_unlock;
}
queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ /*
+ * XXX see concerns about do_set_cpus_allowed, rt_mutex_prio & Co.
+ */
+ running = task_current_proxy(rq, p);
if (queued)
dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_NOCLOCK);
if (running)
@@ -7704,7 +7719,10 @@ static int __sched_setscheduler(struct task_struct *p,
}

queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ /*
+ * XXX and again, how is this safe w.r.t. proxy exec?
+ */
+ running = task_current_proxy(rq, p);
if (queued)
dequeue_task(rq, p, queue_flags);
if (running)
@@ -9156,6 +9174,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
rcu_read_unlock();

rq->idle = idle;
+ rq_set_proxy(rq, idle);
rq_set_curr(rq, idle);
idle->on_rq = TASK_ON_RQ_QUEUED;
#ifdef CONFIG_SMP
@@ -9258,7 +9277,7 @@ void sched_setnuma(struct task_struct *p, int nid)

rq = task_rq_lock(p, &rf);
queued = task_on_rq_queued(p);
- running = task_current(rq, p);
+ running = task_current_proxy(rq, p);

if (queued)
dequeue_task(rq, p, DEQUEUE_SAVE);
@@ -10370,7 +10389,7 @@ void sched_move_task(struct task_struct *tsk)
rq = task_rq_lock(tsk, &rf);
update_rq_clock(rq);

- running = task_current(rq, tsk);
+ running = task_current_proxy(rq, tsk);
queued = task_on_rq_queued(tsk);

if (queued)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 4e3acc76708f..6ec40f90317c 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1179,7 +1179,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
#endif

enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
- if (dl_task(rq_curr(rq)))
+ if (dl_task(rq_proxy(rq)))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
@@ -1306,7 +1306,7 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
*/
static void update_curr_dl(struct rq *rq)
{
- struct task_struct *curr = rq_curr(rq);
+ struct task_struct *curr = rq_proxy(rq);
struct sched_dl_entity *dl_se = &curr->dl;
s64 delta_exec, scaled_delta_exec;
int cpu = cpu_of(rq);
@@ -1819,7 +1819,7 @@ static int find_later_rq(struct task_struct *task);
static int
select_task_rq_dl(struct task_struct *p, int cpu, int flags)
{
- struct task_struct *curr;
+ struct task_struct *curr, *proxy;
bool select_rq;
struct rq *rq;

@@ -1830,6 +1830,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)

rcu_read_lock();
curr = rq_curr_unlocked(rq); /* XXX jstultz: using rcu_dereference instead of READ_ONCE */
+ proxy = rq_proxy_unlocked(rq);

/*
* If we are dealing with a -deadline task, we must
@@ -1840,9 +1841,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
* other hand, if it has a shorter deadline, we
* try to make it stay here, it might be important.
*/
- select_rq = unlikely(dl_task(curr)) &&
+ select_rq = unlikely(dl_task(proxy)) &&
(curr->nr_cpus_allowed < 2 ||
- !dl_entity_preempt(&p->dl, &curr->dl)) &&
+ !dl_entity_preempt(&p->dl, &proxy->dl)) &&
p->nr_cpus_allowed > 1;

/*
@@ -1905,7 +1906,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
* let's hope p can move out.
*/
if (rq_curr(rq)->nr_cpus_allowed == 1 ||
- !cpudl_find(&rq->rd->cpudl, rq_curr(rq), NULL))
+ !cpudl_find(&rq->rd->cpudl, rq_proxy(rq), NULL))
return;

/*
@@ -1944,7 +1945,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
int flags)
{
- if (dl_entity_preempt(&p->dl, &rq_curr(rq)->dl)) {
+ if (dl_entity_preempt(&p->dl, &rq_proxy(rq)->dl)) {
resched_curr(rq);
return;
}
@@ -1954,7 +1955,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
* In the unlikely case current and p have the same deadline
* let us try to decide what's the best thing to do...
*/
- if ((p->dl.deadline == rq_curr(rq)->dl.deadline) &&
+ if ((p->dl.deadline == rq_proxy(rq)->dl.deadline) &&
!test_tsk_need_resched(rq_curr(rq)))
check_preempt_equal_dl(rq, p);
#endif /* CONFIG_SMP */
@@ -1989,7 +1990,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
if (hrtick_enabled_dl(rq))
start_hrtick_dl(rq, p);

- if (rq_curr(rq)->sched_class != &dl_sched_class)
+ if (rq_proxy(rq)->sched_class != &dl_sched_class)
update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);

deadline_queue_push_tasks(rq);
@@ -2305,8 +2306,8 @@ static int push_dl_task(struct rq *rq)
* can move away, it makes sense to just reschedule
* without going further in pushing next_task.
*/
- if (dl_task(rq_curr(rq)) &&
- dl_time_before(next_task->dl.deadline, rq_curr(rq)->dl.deadline) &&
+ if (dl_task(rq_proxy(rq)) &&
+ dl_time_before(next_task->dl.deadline, rq_proxy(rq)->dl.deadline) &&
rq_curr(rq)->nr_cpus_allowed > 1) {
resched_curr(rq);
return 0;
@@ -2322,6 +2323,7 @@ static int push_dl_task(struct rq *rq)
get_task_struct(next_task);

/* Will lock the rq it'll find */
+ /* XXX connoro: update find_lock_later_rq() for split context? */
later_rq = find_lock_later_rq(next_task, rq);
if (!later_rq) {
struct task_struct *task;
@@ -2431,7 +2433,7 @@ static void pull_dl_task(struct rq *this_rq)
* deadline than the current task of its runqueue.
*/
if (dl_time_before(p->dl.deadline,
- rq_curr(src_rq)->dl.deadline))
+ rq_proxy(src_rq)->dl.deadline))
goto skip;

if (is_migration_disabled(p)) {
@@ -2470,9 +2472,9 @@ static void task_woken_dl(struct rq *rq, struct task_struct *p)
if (!task_on_cpu(rq, p) &&
!test_tsk_need_resched(rq_curr(rq)) &&
p->nr_cpus_allowed > 1 &&
- dl_task(rq_curr(rq)) &&
+ dl_task(rq_proxy(rq)) &&
(rq_curr(rq)->nr_cpus_allowed < 2 ||
- !dl_entity_preempt(&p->dl, &rq_curr(rq)->dl))) {
+ !dl_entity_preempt(&p->dl, &rq_proxy(rq)->dl))) {
push_dl_tasks(rq);
}
}
@@ -2635,12 +2637,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
return;
}

- if (rq_curr(rq) != p) {
+ if (rq_proxy(rq) != p) {
#ifdef CONFIG_SMP
if (p->nr_cpus_allowed > 1 && rq->dl.overloaded)
deadline_queue_push_tasks(rq);
#endif
- if (dl_task(rq_curr(rq)))
+ if (dl_task(rq_proxy(rq)))
check_preempt_curr_dl(rq, p, 0);
else
resched_curr(rq);
@@ -2669,7 +2671,7 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
if (!rq->dl.overloaded)
deadline_queue_pull_task(rq);

- if (task_current(rq, p)) {
+ if (task_current_proxy(rq, p)) {
/*
* If we now have a earlier deadline task than p,
* then reschedule, provided p is still on this
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8b35dfc0c442..1471519b95c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -919,7 +919,7 @@ static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
*/
s64 update_curr_common(struct rq *rq)
{
- struct task_struct *curr = rq_curr(rq);
+ struct task_struct *curr = rq_proxy(rq);
s64 delta_exec;

delta_exec = update_curr_se(rq, &curr->se);
@@ -964,7 +964,7 @@ static void update_curr(struct cfs_rq *cfs_rq)

static void update_curr_fair(struct rq *rq)
{
- update_curr(cfs_rq_of(&rq_curr(rq)->se));
+ update_curr(cfs_rq_of(&rq_proxy(rq)->se));
}

static inline void
@@ -6133,7 +6133,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
s64 delta = slice - ran;

if (delta < 0) {
- if (task_current(rq, p))
+ if (task_current_proxy(rq, p))
resched_curr(rq);
return;
}
@@ -6148,7 +6148,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
*/
static void hrtick_update(struct rq *rq)
{
- struct task_struct *curr = rq_curr(rq);
+ struct task_struct *curr = rq_proxy(rq);

if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class)
return;
@@ -7788,7 +7788,7 @@ static void set_skip_buddy(struct sched_entity *se)
*/
static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
{
- struct task_struct *curr = rq_curr(rq);
+ struct task_struct *curr = rq_proxy(rq);
struct sched_entity *se = &curr->se, *pse = &p->se;
struct cfs_rq *cfs_rq = task_cfs_rq(curr);
int scale = cfs_rq->nr_running >= sched_nr_latency;
@@ -7822,7 +7822,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
* prevents us from potentially nominating it as a false LAST_BUDDY
* below.
*/
- if (test_tsk_need_resched(curr))
+ if (test_tsk_need_resched(rq_curr(rq)))
return;

/* Idle tasks are by definition preempted by non-idle tasks. */
@@ -8821,7 +8821,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
* update_load_avg() can call cpufreq_update_util(). Make sure that RT,
* DL and IRQ signals have been updated before updating CFS.
*/
- curr_class = rq_curr(rq)->sched_class;
+ curr_class = rq_proxy(rq)->sched_class;

thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));

@@ -11984,6 +11984,10 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
entity_tick(cfs_rq, se, queued);
}

+ /*
+ * XXX need to use execution context (rq->curr) for task_tick_numa and
+ * update_misfit_status?
+ */
if (static_branch_unlikely(&sched_numa_balancing))
task_tick_numa(rq, curr);

@@ -12047,7 +12051,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
* our priority decreased, or if we are not currently running on
* this runqueue and our priority is higher than the current's
*/
- if (task_current(rq, p)) {
+ if (task_current_proxy(rq, p)) {
if (p->prio > oldprio)
resched_curr(rq);
} else
@@ -12192,7 +12196,7 @@ static void switched_to_fair(struct rq *rq, struct task_struct *p)
* kick off the schedule if running, otherwise just see
* if we can still preempt the current task.
*/
- if (task_current(rq, p))
+ if (task_current_proxy(rq, p))
resched_curr(rq);
else
check_preempt_curr(rq, p, 0);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 91c992230aba..03e5d8fa67aa 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -574,7 +574,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)

static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
{
- struct task_struct *curr = rq_curr(rq_of_rt_rq(rt_rq));
+ struct task_struct *curr = rq_proxy(rq_of_rt_rq(rt_rq));
struct rq *rq = rq_of_rt_rq(rt_rq);
struct sched_rt_entity *rt_se;

@@ -1044,7 +1044,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
*/
static void update_curr_rt(struct rq *rq)
{
- struct task_struct *curr = rq_curr(rq);
+ struct task_struct *curr = rq_proxy(rq);
struct sched_rt_entity *rt_se = &curr->rt;
s64 delta_exec;

@@ -1591,7 +1591,7 @@ static int find_lowest_rq(struct task_struct *task);
static int
select_task_rq_rt(struct task_struct *p, int cpu, int flags)
{
- struct task_struct *curr;
+ struct task_struct *curr, *proxy;
struct rq *rq;
bool test;

@@ -1603,6 +1603,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

rcu_read_lock();
curr = rq_curr_unlocked(rq); /* XXX jstultz: using rcu_dereference instead of READ_ONCE */
+ proxy = rq_proxy_unlocked(rq);

/*
* If the current task on @p's runqueue is an RT task, then
@@ -1631,8 +1632,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
* systems like big.LITTLE.
*/
test = curr &&
- unlikely(rt_task(curr)) &&
- (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
+ unlikely(rt_task(proxy)) &&
+ (curr->nr_cpus_allowed < 2 || proxy->prio <= p->prio);

if (test || !rt_task_fits_capacity(p, cpu)) {
int target = find_lowest_rq(p);
@@ -1662,12 +1663,12 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
{
- /*
- * Current can't be migrated, useless to reschedule,
- * let's hope p can move out.
+ /* XXX connoro: need to revise cpupri_find() to reflect the split
+ * context since it should look at rq->proxy for priority but rq->curr
+ * for affinity.
*/
if (rq_curr(rq)->nr_cpus_allowed == 1 ||
- !cpupri_find(&rq->rd->cpupri, rq_curr(rq), NULL))
+ !cpupri_find(&rq->rd->cpupri, rq_proxy(rq), NULL))
return;

/*
@@ -1710,7 +1711,9 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
*/
static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
{
- if (p->prio < rq_curr(rq)->prio) {
+ struct task_struct *curr = rq_proxy(rq);
+
+ if (p->prio < curr->prio) {
resched_curr(rq);
return;
}
@@ -1728,7 +1731,7 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flag
* to move current somewhere else, making room for our non-migratable
* task.
*/
- if (p->prio == rq_curr(rq)->prio && !test_tsk_need_resched(rq_curr(rq)))
+ if (p->prio == curr->prio && !test_tsk_need_resched(rq_curr(rq)))
check_preempt_equal_prio(rq, p);
#endif
}
@@ -1753,7 +1756,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
* utilization. We only care of the case where we start to schedule a
* rt task
*/
- if (rq_curr(rq)->sched_class != &rt_sched_class)
+ if (rq_proxy(rq)->sched_class != &rt_sched_class)
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);

rt_queue_push_tasks(rq);
@@ -2029,7 +2032,7 @@ static struct task_struct *pick_next_pushable_task(struct rq *rq)
struct task_struct, pushable_tasks);

BUG_ON(rq->cpu != task_cpu(p));
- BUG_ON(task_current(rq, p));
+ BUG_ON(task_current(rq, p) || task_current_proxy(rq, p));
BUG_ON(p->nr_cpus_allowed <= 1);

BUG_ON(!task_on_rq_queued(p));
@@ -2062,7 +2065,7 @@ static int push_rt_task(struct rq *rq, bool pull)
* higher priority than current. If that's the case
* just reschedule current.
*/
- if (unlikely(next_task->prio < rq_curr(rq)->prio)) {
+ if (unlikely(next_task->prio < rq_proxy(rq)->prio)) {
resched_curr(rq);
return 0;
}
@@ -2083,6 +2086,16 @@ static int push_rt_task(struct rq *rq, bool pull)
* Note that the stoppers are masqueraded as SCHED_FIFO
* (cf. sched_set_stop_task()), so we can't rely on rt_task().
*/
+ /*
+ * XXX connoro: seems like what we actually want here might be:
+ * 1) Enforce that rq->proxy must be RT
+ * 2) Revise find_lowest_rq() to handle split context, searching
+ * for an rq that can accommodate rq->proxy's prio and
+ * rq->curr's affinity
+ * 3) Send the whole chain to the new rq in push_cpu_stop()?
+ * If #3 is needed, might be best to make a separate patch with
+ * all the "chain-level load balancing" changes.
+ */
if (rq_curr(rq)->sched_class != &rt_sched_class)
return 0;

@@ -2114,6 +2127,12 @@ static int push_rt_task(struct rq *rq, bool pull)
get_task_struct(next_task);

/* find_lock_lowest_rq locks the rq if found */
+ /*
+ * XXX connoro: find_lock_lowest_rq() likely also needs split context
+ * support. This also needs to include something like an exec_ctx=NULL
+ * case for when we push a blocked task whose lock owner is not on
+ * this rq.
+ */
lowest_rq = find_lock_lowest_rq(next_task, rq);
if (!lowest_rq) {
struct task_struct *task;
@@ -2415,7 +2434,7 @@ static void pull_rt_task(struct rq *this_rq)
* p if it is lower in priority than the
* current task on the run queue
*/
- if (p->prio < rq_curr(src_rq)->prio)
+ if (p->prio < rq_proxy(src_rq)->prio)
goto skip;

if (is_migration_disabled(p)) {
@@ -2457,9 +2476,9 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
bool need_to_push = !task_on_cpu(rq, p) &&
!test_tsk_need_resched(rq_curr(rq)) &&
p->nr_cpus_allowed > 1 &&
- (dl_task(rq_curr(rq)) || rt_task(rq_curr(rq))) &&
+ (dl_task(rq_proxy(rq)) || rt_task(rq_proxy(rq))) &&
(rq_curr(rq)->nr_cpus_allowed < 2 ||
- rq_curr(rq)->prio <= p->prio);
+ rq_proxy(rq)->prio <= p->prio);

if (need_to_push)
push_rt_tasks(rq);
@@ -2543,7 +2562,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
rt_queue_push_tasks(rq);
#endif /* CONFIG_SMP */
- if (p->prio < rq_curr(rq)->prio && cpu_online(cpu_of(rq)))
+ if (p->prio < rq_proxy(rq)->prio && cpu_online(cpu_of(rq)))
resched_curr(rq);
}
}
@@ -2558,7 +2577,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
if (!task_on_rq_queued(p))
return;

- if (task_current(rq, p)) {
+ if (task_current_proxy(rq, p)) {
#ifdef CONFIG_SMP
/*
* If our priority decreases while running, we
@@ -2584,7 +2603,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
* greater than the current running task
* then reschedule.
*/
- if (p->prio < rq_curr(rq)->prio)
+ if (p->prio < rq_proxy(rq)->prio)
resched_curr(rq);
}
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 00f73ca4c618..84e49c2530b0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1008,7 +1008,17 @@ struct rq {
*/
unsigned int nr_uninterruptible;

- struct task_struct __rcu *curr_exec;
+ /*
+ * XXX jstultz: Personally, I feel like these should be switched,
+ * so that rq_curr() returns the selected scheduler context and
+ * rq_proxy() returns the execution context we're going to run
+ * on behalf of the selected task. But for now keeping it the
+ * same.
+ */
+ struct task_struct __rcu *curr_exec; /* Execution context */
+#ifdef CONFIG_PROXY_EXEC
+ struct task_struct __rcu *curr_sched; /* Scheduling context (policy) */
+#endif
struct task_struct *idle;
struct task_struct *stop;
unsigned long next_balance;
@@ -1226,6 +1236,38 @@ static inline void rq_set_curr_rcu_init(struct rq *rq, struct task_struct *task)
RCU_INIT_POINTER(rq->curr_exec, task);
}

+#ifdef CONFIG_PROXY_EXEC
+static inline struct task_struct *rq_proxy(struct rq *rq)
+{
+ return rq->curr_sched;
+}
+
+static inline struct task_struct *rq_proxy_unlocked(struct rq *rq)
+{
+ return rcu_dereference(rq->curr_sched);
+}
+
+static inline void rq_set_proxy(struct rq *rq, struct task_struct *t)
+{
+ rcu_assign_pointer(rq->curr_sched, t);
+}
+#else
+static inline struct task_struct *rq_proxy(struct rq *rq)
+{
+ return rq->curr_exec;
+}
+
+static inline struct task_struct *rq_proxy_unlocked(struct rq *rq)
+{
+ return rcu_dereference(rq->curr_exec);
+}
+
+static inline void rq_set_proxy(struct rq *rq, struct task_struct *t)
+{
+ /* Do nothing */
+}
+#endif
+
#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
#define this_rq() this_cpu_ptr(&runqueues)
#define task_rq(p) cpu_rq(task_cpu(p))
@@ -2098,11 +2140,25 @@ static inline u64 global_rt_runtime(void)
return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
}

+/*
+ * Is p the current execution context?
+ */
static inline int task_current(struct rq *rq, struct task_struct *p)
{
return rq_curr(rq) == p;
}

+/*
+ * Is p the current scheduling context?
+ *
+ * Note that it might be the current execution context at the same time if
+ * rq->curr == rq->proxy == p.
+ */
+static inline int task_current_proxy(struct rq *rq, struct task_struct *p)
+{
+ return rq_proxy(rq) == p;
+}
+
static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
{
#ifdef CONFIG_SMP
@@ -2260,7 +2316,7 @@ struct sched_class {

static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
{
- WARN_ON_ONCE(rq_curr(rq) != prev);
+ WARN_ON_ONCE(rq_proxy(rq) != prev);
prev->sched_class->put_prev_task(rq, prev);
}

@@ -2341,6 +2397,15 @@ extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_conte

static inline struct task_struct *get_push_task(struct rq *rq)
{
+ /*
+ * XXX connoro: should this be rq->proxy? When rq->curr != rq->proxy,
+ * pushing rq->curr alone means it stops inheriting. Perhaps returning
+ * rq->proxy and pushing the entire chain would be correct? OTOH if we
+ * are guaranteed that rq->proxy is the highest prio task on the rq when
+ * get_push_task() is called, then proxy() will migrate the rest of the
+ * chain during the __schedule() call immediately after rq->curr is
+ * pushed.
+ */
struct task_struct *p = rq_curr(rq);

lockdep_assert_rq_held(rq);
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:38:30

by John Stultz

[permalink] [raw]
Subject: [PATCH v2 11/12] sched/rt: Fix proxy/current (push,pull)ability

From: Valentin Schneider <[email protected]>

Proxy execution forms atomic pairs of tasks: a proxy (scheduling context)
and an owner (execution context). The proxy, along with the rest of the
blocked chain, follows the owner wrt CPU placement.

They can be the same task, in which case push/pull doesn't need any
modification. When they are different, however,
FIFO1 & FIFO42:

              ,->  RT42
              |     | blocked-on
              |     v
   proxied-by |   mutex
              |     | owner
              |     v
              `--  RT1

   RT1
   RT42

     CPU0            CPU1
      ^                ^
      |                |
  overloaded    !overloaded
  rq prio = 42  rq prio = 0

RT1 is eligible to be pushed to CPU1, but should that happen it will
"carry" RT42 along. Clearly, in this case neither RT1 nor RT42 can be
considered push/pullable.

Furthermore, tasks becoming blocked on a mutex don't need an explicit
dequeue/enqueue cycle to be made (push/pull)able: they have to be running
to block on a mutex, thus they will eventually hit put_prev_task().

XXX: pinned tasks becoming unblocked should be removed from the push/pull
lists, but those don't get to see __schedule() straight away.
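
To restate the rule that the rt.c hunk below encodes, here is a minimal
user-space sketch (simplified stand-in types, not the kernel code):
neither the execution context nor its proxy may be pushed, a pinned
runnable task may not be pushed, and the affinity of a blocked task is
ignored because it only follows its owner:

  #include <stdbool.h>
  #include <stdio.h>

  struct task {
          const char *name;
          int nr_cpus_allowed;
          bool blocked_on_mutex;          /* models task_is_blocked() */
  };

  struct rq {
          struct task *curr_exec;         /* execution context (owner) */
          struct task *curr_sched;        /* scheduling context (proxy) */
  };

  static bool rt_task_pushable(struct rq *rq, struct task *p)
  {
          /* current can't be pushed away; the proxy is tied to current */
          if (p == rq->curr_exec || p == rq->curr_sched)
                  return false;
          /* pinned tasks can't be pushed; blocked tasks' affinity is moot */
          if (!p->blocked_on_mutex && p->nr_cpus_allowed == 1)
                  return false;
          return true;
  }

  int main(void)
  {
          struct task rt1  = { "RT1 (owner)",  4, false };
          struct task rt42 = { "RT42 (proxy)", 4, true  };
          struct task rt7  = { "RT7 (other)",  4, false };
          struct rq rq = { .curr_exec = &rt1, .curr_sched = &rt42 };

          printf("%s pushable: %d\n", rt1.name,  rt_task_pushable(&rq, &rt1));
          printf("%s pushable: %d\n", rt42.name, rt_task_pushable(&rq, &rt42));
          printf("%s pushable: %d\n", rt7.name,  rt_task_pushable(&rq, &rt7));
          return 0;
  }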

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Connor O'Brien <[email protected]>
Signed-off-by: John Stultz <[email protected]>
---
kernel/sched/core.c | 36 ++++++++++++++++++++++++++----------
kernel/sched/rt.c | 22 +++++++++++++++++-----
2 files changed, 43 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d0f86670bdf8..11138277c7c8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7062,12 +7062,28 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
WARN_ON_ONCE(!owner->on_rq);
return owner;
}
+
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)
+{
+ /*
+ * pick_next_task() calls set_next_task() on the proxy at some
+ * point, which ensures it is not push/pullable. However, the
+ * proxy *and* the owner form an atomic pair wrt push/pull.
+ *
+ * Make sure owner is not pushable. Unfortunately we can only
+ * deal with that by means of a dequeue/enqueue cycle. :-/
+ */
+ dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
+ enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
+}
#else /* PROXY_EXEC */
static struct task_struct *
proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
{
return next;
}
+
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next) { }
#endif /* PROXY_EXEC */

/*
@@ -7116,6 +7132,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
unsigned long prev_state;
struct rq_flags rf;
struct rq *rq;
+ bool proxied;
int cpu;
bool preserve_need_resched = false;

@@ -7189,20 +7206,11 @@ static void __sched notrace __schedule(unsigned int sched_mode)
atomic_inc(&rq->nr_iowait);
delayacct_blkio_start();
}
- } else {
- /*
- * XXX
- * Let's make this task, which is blocked on
- * a mutex, (push/pull)able (RT/DL).
- * Unfortunately we can only deal with that by
- * means of a dequeue/enqueue cycle. :-/
- */
- dequeue_task(rq, prev, 0);
- enqueue_task(rq, prev, 0);
}
switch_count = &prev->nvcsw;
}

+ proxied = !!prev->blocked_proxy;
pick_again:
/*
* If picked task is actually blocked it means that it can act as a
@@ -7244,6 +7252,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
* changes to task_struct made by pick_next_task().
*/
rq_set_curr_rcu_init(rq, next);
+
+ if (unlikely(!task_current_proxy(rq, next)))
+ proxy_tag_curr(rq, next);
+
/*
* The membarrier system call requires each architecture
* to have a full memory barrier after updating
@@ -7268,6 +7280,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
/* Also unlocks the rq: */
rq = context_switch(rq, prev, next, &rf);
} else {
+ /* In case next was already curr but just got blocked_proxy */
+ if (unlikely(!proxied && next->blocked_proxy))
+ proxy_tag_curr(rq, next);
+
rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);

rq_unpin_lock(rq, &rf);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index d1c5a022eae4..419270b0918e 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1537,9 +1537,21 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)

enqueue_rt_entity(rt_se, flags);

- if (!task_current(rq, p) && p->nr_cpus_allowed > 1 &&
- !task_is_blocked(p))
- enqueue_pushable_task(rq, p);
+ /*
+ * Current can't be pushed away. Proxy is tied to current, so don't
+ * push it either.
+ */
+ if (task_current(rq, p) || task_current_proxy(rq, p))
+ return;
+
+ /*
+ * Pinned tasks can't be pushed.
+ * Affinity of blocked tasks doesn't matter.
+ */
+ if (!task_is_blocked(p) && p->nr_cpus_allowed == 1)
+ return;
+
+ enqueue_pushable_task(rq, p);
}

static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
@@ -1832,9 +1844,9 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)

/*
* The previous task needs to be made eligible for pushing
- * if it is still active
+ * if it is still active. Affinity of blocked task doesn't matter.
*/
- if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
+ if (on_rt_rq(&p->rt) && (p->nr_cpus_allowed > 1 || task_is_blocked(p)))
enqueue_pushable_task(rq, p);
}

--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:38:35

by John Stultz

[permalink] [raw]
Subject: [PATCH v2 10/12] sched: Add proxy execution

From: Peter Zijlstra <[email protected]>

A task currently holding a mutex (executing a critical section) might
benefit from using the scheduling contexts of other tasks blocked on
the same mutex, if they happen to have a higher priority than the
current owner (e.g., to prevent priority inversions).

Proxy execution lets a task do exactly that: if a mutex owner
has waiters, it can use waiters' scheduling context to
potentially continue running if preempted.

The basic mechanism is implemented by this patch, the core of
which resides in the proxy() function. Potential proxies (i.e.,
tasks blocked on a mutex) are not dequeued, so, if one of them
is actually selected by schedule() as the next task to be put to
run on a CPU, proxy() is used to walk the blocked_on relation
and find which task (mutex owner) might be able to use the
proxy's scheduling context.

Here come the tricky bits. In fact, the owner task might be in all
sorts of states when a proxy is found (blocked, executing on a
different CPU, etc.). Details on how to handle different
situations are to be found in proxy() code comments.
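
For the simple case only, here is a highly simplified user-space sketch
of the blocked_on walk described above (the types and the
find_proxy_owner() helper are made up for illustration; this is not the
kernel's proxy(), and it ignores all of the tricky states mentioned
above): starting from the picked task, follow blocked_on -> owner until
a task that is not blocked is found, and run that task on behalf of the
one that was picked:

  #include <stdio.h>

  struct task;

  struct mutex {
          struct task *owner;
  };

  struct task {
          const char *name;
          struct mutex *blocked_on;       /* mutex we're blocked on, or NULL */
  };

  /* Return the task to actually run when @next was picked by the scheduler */
  static struct task *find_proxy_owner(struct task *next)
  {
          struct task *owner = next;

          while (owner->blocked_on)
                  owner = owner->blocked_on->owner;   /* simple case only */
          return owner;
  }

  int main(void)
  {
          struct task c   = { "C (owner, runnable)", NULL };
          struct mutex m2 = { .owner = &c };
          struct task b   = { "B (blocked on m2)", &m2 };
          struct mutex m1 = { .owner = &b };
          struct task a   = { "A (picked, blocked on m1)", &m1 };

          printf("picked %s, running %s on its behalf\n",
                 a.name, find_proxy_owner(&a)->name);
          return 0;
  }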

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[rebased, added comments and changelog]
Signed-off-by: Juri Lelli <[email protected]>
[Fixed rebase conflicts]
[squashed sched: Ensure blocked_on is always guarded by blocked_lock]
Signed-off-by: Valentin Schneider <[email protected]>
[fix rebase conflicts, various fixes & tweaks commented inline]
[squashed sched: Use rq->curr vs rq->proxy checks]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Rebased, split up, and folded in changes from Juri
Lelli and Connor O'Brien, added additional locking on
get_task_blocked_on(next) logic, pretty major rework to better
conditionalize logic on CONFIG_PROXY_EXEC and split up the very
large proxy() function - hopefully without changes to logic /
behavior]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Numerous changes folded in
* Split out some of the logic into separate patches
* Break up the proxy() function so it's a bit easier to read
and is better conditionalized on CONFIG_PROXY_EXEC

TODO: Finish conditionalization edge cases
---
include/linux/sched.h | 1 +
init/Kconfig | 7 +
kernel/Kconfig.locks | 2 +-
kernel/fork.c | 1 +
kernel/locking/mutex.c | 58 +++-
kernel/sched/core.c | 652 +++++++++++++++++++++++++++++++++++++++-
kernel/sched/deadline.c | 2 +-
kernel/sched/fair.c | 13 +-
kernel/sched/rt.c | 3 +-
kernel/sched/sched.h | 21 +-
10 files changed, 744 insertions(+), 16 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index fc75fcc696db..b88303ceacaf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1141,6 +1141,7 @@ struct task_struct {

struct task_struct *blocked_proxy; /* task that is boosting us */
struct mutex *blocked_on; /* lock we're blocked on */
+ struct list_head blocked_entry; /* tasks blocked on us */
raw_spinlock_t blocked_lock;

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
diff --git a/init/Kconfig b/init/Kconfig
index 1fb5f313d18f..38cdd2ccc538 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -935,6 +935,13 @@ config NUMA_BALANCING_DEFAULT_ENABLED
If set, automatic NUMA balancing will be enabled if running on a NUMA
machine.

+config PROXY_EXEC
+ bool "Proxy Execution"
+ default n
+ help
+ This option enables proxy execution, a mechanism for mutex-owning
+ tasks to inherit the scheduling context of higher-priority waiters.
+
menuconfig CGROUPS
bool "Control Group support"
select KERNFS
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 4198f0273ecd..791c98f1d329 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -226,7 +226,7 @@ config ARCH_SUPPORTS_ATOMIC_RMW

config MUTEX_SPIN_ON_OWNER
def_bool y
- depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW
+ depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW && !PROXY_EXEC

config RWSEM_SPIN_ON_OWNER
def_bool y
diff --git a/kernel/fork.c b/kernel/fork.c
index 95410333332f..a56c59c7a4cf 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2226,6 +2226,7 @@ static __latent_entropy struct task_struct *copy_process(

p->blocked_proxy = NULL; /* nobody is boosting us yet */
p->blocked_on = NULL; /* not blocked yet */
+ INIT_LIST_HEAD(&p->blocked_entry);

#ifdef CONFIG_BCACHE
p->sequential_io = 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ead4213232cc..370687cf79f8 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -939,10 +939,21 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
{
struct task_struct *next = NULL;
DEFINE_WAKE_Q(wake_q);
- unsigned long owner;
+ /*
+ * XXX [juril] Proxy Exec always forces a HANDOFF (so that the owner is
+ * never empty while there are waiters?). Should we make this
+ * conditional on having proxy exec configured in?
+ */
+ unsigned long owner = MUTEX_FLAG_HANDOFF;

mutex_release(&lock->dep_map, ip);

+ /*
+ * XXX must always hand off the mutex to avoid !owner in proxy().
+ * Scheduler delay is minimal since we hand off to the task that
+ * is to be scheduled next.
+ */
+#ifndef CONFIG_PROXY_EXEC
/*
* Release the lock before (potentially) taking the spinlock such that
* other contenders can get on with things ASAP.
@@ -965,10 +976,48 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
return;
}
}
+#endif

raw_spin_lock(&lock->wait_lock);
debug_mutex_unlock(lock);
- if (!list_empty(&lock->wait_list)) {
+
+#ifdef CONFIG_PROXY_EXEC
+ raw_spin_lock(&current->blocked_lock);
+ /*
+ * If we have a task boosting us, and that task was boosting us through
+ * this lock, hand the lock to that task, as that is the highest
+ * waiter, as selected by the scheduling function.
+ */
+ next = current->blocked_proxy;
+ if (next) {
+ struct mutex *next_lock;
+
+ /*
+ * jstultz: get_task_blocked_on(next) seemed to be missing locking
+ * so I've added it here (which required nesting the locks).
+ */
+ raw_spin_lock_nested(&next->blocked_lock, SINGLE_DEPTH_NESTING);
+ next_lock = get_task_blocked_on(next);
+ raw_spin_unlock(&next->blocked_lock);
+ if (next_lock != lock) {
+ next = NULL;
+ } else {
+ wake_q_add(&wake_q, next);
+ current->blocked_proxy = NULL;
+ }
+ }
+
+ /*
+ * XXX if there was no higher prio proxy, ->blocked_proxy will not have
+ * been set. Therefore lower prio contending tasks are serviced in
+ * FIFO order.
+ */
+#endif
+
+ /*
+ * Failing that, pick any on the wait list.
+ */
+ if (!next && !list_empty(&lock->wait_list)) {
/* get the first entry from the wait-list: */
struct mutex_waiter *waiter =
list_first_entry(&lock->wait_list,
@@ -983,7 +1032,10 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
if (owner & MUTEX_FLAG_HANDOFF)
__mutex_handoff(lock, next);

- preempt_disable();
+ preempt_disable(); /* XXX connoro: why disable preemption here? */
+#ifdef CONFIG_PROXY_EXEC
+ raw_spin_unlock(&current->blocked_lock);
+#endif
raw_spin_unlock(&lock->wait_lock);

wake_up_q(&wake_q);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 82f5b29ae675..d0f86670bdf8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,6 +505,8 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags) { }
*
* task_cpu(p): is changed by set_task_cpu(), the rules are:
*
+ * XXX connoro: does it matter that ttwu_do_activate now calls __set_task_cpu
+ * on blocked tasks?
* - Don't call set_task_cpu() on a blocked task:
*
* We don't care what CPU we're not running on, this simplifies hotplug,
@@ -2774,8 +2776,15 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
struct set_affinity_pending my_pending = { }, *pending = NULL;
bool stop_pending, complete = false;

- /* Can the task run on the task's current CPU? If so, we're done */
- if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
+ /*
+ * Can the task run on the task's current CPU? If so, we're done
+ *
+ * We are also done if the task is currently acting as proxy (and
+ * potentially has been migrated outside its current or previous
+ * affinity mask)
+ */
+ if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask) ||
+ (task_current_proxy(rq, p) && !task_current(rq, p))) {
struct task_struct *push_task = NULL;

if ((flags & SCA_MIGRATE_ENABLE) &&
@@ -3687,6 +3696,72 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
trace_sched_wakeup(p);
}

+#ifdef CONFIG_PROXY_EXEC
+static void activate_task_and_blocked_ent(struct rq *rq, struct task_struct *p, int en_flags)
+{
+ /*
+ * XXX connoro: By calling activate_task with blocked_lock held, we order against
+ * the proxy() blocked_task case such that no more blocked tasks will
+ * be enqueued on p once we release p->blocked_lock.
+ */
+ raw_spin_lock(&p->blocked_lock);
+ /*
+ * XXX connoro: do we need to check p->on_rq here like we do for pp below?
+ * or does holding p->pi_lock ensure nobody else activates p first?
+ */
+ activate_task(rq, p, en_flags);
+ raw_spin_unlock(&p->blocked_lock);
+
+ /*
+ * A whole bunch of 'proxy' tasks back this blocked task, wake
+ * them all up to give this task its 'fair' share.
+ */
+ while (!list_empty(&p->blocked_entry)) {
+ struct task_struct *pp =
+ list_first_entry(&p->blocked_entry,
+ struct task_struct,
+ blocked_entry);
+ /*
+ * XXX connoro: proxy blocked_task case might be enqueuing more blocked tasks
+ * on pp. If those continue past when we delete pp from the list, we'll get an
+ * active with a non-empty blocked_entry list, which is no good. Locking
+ * pp->blocked_lock ensures either the blocked_task path gets the lock first and
+ * enqueues everything before we ever get the lock, or we get the lock first, the
+ * other path sees pp->on_rq != 0 and enqueues nothing.
+ */
+ raw_spin_lock(&pp->blocked_lock);
+ BUG_ON(pp->blocked_entry.prev != &p->blocked_entry);
+
+ list_del_init(&pp->blocked_entry);
+ if (READ_ONCE(pp->on_rq)) {
+ /*
+ * XXX connoro: We raced with a non mutex handoff activation of pp. That
+ * activation will also take care of activating all of the tasks after pp in
+ * the blocked_entry list, so we're done here.
+ */
+ raw_spin_unlock(&pp->blocked_lock);
+ break;
+ }
+ /* XXX can't call set_task_cpu() because we are holding
+ * neither pp->pi_lock nor the task's rq lock. This should however
+ * be fine as this task can't be woken up as it is blocked on
+ * this mutex atm.
+ * A problem however might be that __set_task_cpu() calls
+ * set_task_rq() which deals with groups and such...
+ */
+ __set_task_cpu(pp, cpu_of(rq));
+ activate_task(rq, pp, en_flags);
+ resched_curr(rq);
+ raw_spin_unlock(&pp->blocked_lock);
+ }
+}
+#else
+static inline void activate_task_and_blocked_ent(struct rq *rq, struct task_struct *p, int en_flags)
+{
+ activate_task(rq, p, en_flags);
+}
+#endif
+
static void
ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
struct rq_flags *rf)
@@ -3708,7 +3783,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
atomic_dec(&task_rq(p)->nr_iowait);
}

- activate_task(rq, p, en_flags);
+ activate_task_and_blocked_ent(rq, p, en_flags);
+
check_preempt_curr(rq, p, wake_flags);

ttwu_do_wakeup(p);
@@ -3741,6 +3817,95 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
#endif
}

+#ifdef CONFIG_PROXY_EXEC
+/* XXX jstultz: This needs a better name! */
+bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struct *p)
+{
+ /*
+ * XXX connoro: wrap this case with #ifdef CONFIG_PROXY_EXEC?
+ */
+ if (task_current(rq, p)) {
+ bool ret = true;
+ /*
+ * XXX connoro: p is currently running. 3 cases are possible:
+ * 1) p is blocked on a lock it owns, but we got the rq lock before
+ * it could schedule out. Kill blocked_on relation and call
+ * ttwu_do_wakeup
+ * 2) p is blocked on a lock it does not own. Leave blocked_on
+ * unchanged, don't call ttwu_do_wakeup, and return 0.
+ * 3) p is unblocked, but unless we hold onto blocked_lock while
+ * calling ttwu_do_wakeup, we could race with it becoming
+ * blocked and overwrite the correct p->__state with TASK_RUNNING.
+ */
+ raw_spin_lock(&p->blocked_lock);
+ if (task_is_blocked(p) && mutex_owner(p->blocked_on) == p)
+ set_task_blocked_on(p, NULL);
+ if (!task_is_blocked(p))
+ ret = false;
+ raw_spin_unlock(&p->blocked_lock);
+ return ret;
+ }
+
+ /*
+ * Since we don't dequeue for blocked-on relations, we'll always
+ * trigger the on_rq_queued() clause for them.
+ */
+ if (task_is_blocked(p)) {
+ raw_spin_lock(&p->blocked_lock);
+
+ if (mutex_owner(p->blocked_on) != p) {
+ /*
+ * XXX connoro: p already woke, ran and blocked on
+ * another mutex. Since a successful wakeup already
+ * happened, we're done.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ return true;
+ }
+
+ set_task_blocked_on(p, NULL);
+ if (!cpumask_test_cpu(cpu_of(rq), p->cpus_ptr)) {
+ /*
+ * proxy stuff moved us outside of the affinity mask
+ * 'sleep' now and fail the direct wakeup so that the
+ * normal wakeup path will fix things.
+ */
+ deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
+ if (task_current_proxy(rq, p)) {
+ /*
+ * XXX connoro: If p is the proxy, then remove lingering
+ * references to it from rq and sched_class structs after
+ * dequeueing.
+ * can we get here while rq is inside __schedule?
+ * do any assumptions break if so?
+ */
+ put_prev_task(rq, p);
+ rq_set_proxy(rq, rq->idle);
+ }
+ resched_curr(rq);
+ raw_spin_unlock(&p->blocked_lock);
+ return true;
+ }
+ /*
+ * connoro: perhaps deq/enq here to get our task into the pushable task
+ * list again now that it's unblocked? Does that break if we're the proxy or
+ * does holding the rq lock make that OK?
+ */
+ /*
+ * Must resched after killing a blocked_on relation. The currently
+ * executing context might not be the most eligible anymore.
+ */
+ resched_curr(rq);
+ raw_spin_unlock(&p->blocked_lock);
+ }
+ return false;
+}
+#else
+static inline bool ttwu_proxy_skip_wakeup(struct rq *rq, struct task_struct *p)
+{
+ return false;
+}
+#endif
+
/*
* Consider @p being inside a wait loop:
*
@@ -3773,9 +3938,15 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
int ret = 0;

rq = __task_rq_lock(p, &rf);
- if (!task_on_rq_queued(p))
+ if (!task_on_rq_queued(p)) {
+ BUG_ON(task_is_running(p));
goto out_unlock;
+ }

+ /*
+ * ttwu_do_wakeup() -> check_preempt_curr() may use the rq clock.
+ */
if (!task_on_cpu(rq, p)) {
/*
* When on_rq && !on_cpu the task is preempted, see if
@@ -3784,8 +3955,14 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
update_rq_clock(rq);
check_preempt_curr(rq, p, wake_flags);
}
+
+ /* XXX jstultz: This needs a better name! */
+ if (ttwu_proxy_skip_wakeup(rq, p))
+ goto out_unlock;
+
ttwu_do_wakeup(p);
ret = 1;
+
out_unlock:
__task_rq_unlock(rq, &rf);

@@ -4193,6 +4370,23 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
goto unlock;

+ if (task_is_blocked(p)) {
+ /*
+ * XXX connoro: we are in one of 2 cases:
+ * 1) p is blocked on a mutex it doesn't own but is still
+ * enqueued on a rq. We definitely don't want to keep going
+ * (and potentially activate it elsewhere without ever
+ * dequeueing) but maybe this is more properly handled by
+ * having ttwu_runnable() return 1 in this case?
+ * 2) p was removed from its rq and added to a blocked_entry
+ * list by proxy(). It should not be woken until the task at
+ * the head of the list gets a mutex handoff wakeup.
+ * Should try_to_wake_up() return 1 in either of these cases?
+ */
+ success = 0;
+ goto unlock;
+ }
+
#ifdef CONFIG_SMP
/*
* Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
@@ -5581,6 +5775,18 @@ void scheduler_tick(void)

rq_lock(rq, &rf);

+#ifdef CONFIG_PROXY_EXEC
+ /*
+ * XXX connoro: is this check needed? Why?
+ */
+ if (task_cpu(curr) != cpu) {
+ BUG_ON(!test_preempt_need_resched() &&
+ !tif_need_resched());
+ rq_unlock(rq, &rf);
+ return;
+ }
+#endif
+
update_rq_clock(rq);
thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);
@@ -6473,6 +6679,397 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
# define SM_MASK_PREEMPT SM_PREEMPT
#endif

+#ifdef CONFIG_PROXY_EXEC
+
+static struct task_struct *
+proxy_migrate_task(struct rq *rq, struct task_struct *next,
+ struct rq_flags *rf, struct task_struct *p,
+ int that_cpu, bool curr_in_chain)
+{
+ struct rq *that_rq;
+ LIST_HEAD(migrate_list);
+
+ /*
+ * The blocked-on relation must not cross CPUs, if this happens
+ * migrate @p to the @owner's CPU.
+ *
+ * This is because we must respect the CPU affinity of execution
+ * contexts (@owner) but we can ignore affinity for scheduling
+ * contexts (@p). So we have to move scheduling contexts towards
+ * potential execution contexts.
+ *
+ * XXX [juril] what if @p is not the highest prio task once migrated
+ * to @owner's CPU?
+ *
+ * XXX [juril] also, after @p is migrated it is not migrated back once
+ * @owner releases the lock? Isn't this a potential problem w.r.t.
+ * @owner affinity settings?
+ * [juril] OK. It is migrated back into its affinity mask in
+ * ttwu_remote(), or by using wake_cpu via select_task_rq, guess we
+ * might want to add a comment about that here. :-)
+ *
+ * TODO: could optimize by finding the CPU of the final owner
+ * and migrating things there. Given:
+ *
+ * CPU0 CPU1 CPU2
+ *
+ * a ----> b ----> c
+ *
+ * the current scheme would result in migrating 'a' to CPU1,
+ * then CPU1 would migrate b and a to CPU2. Only then would
+ * CPU2 run c.
+ */
+ that_rq = cpu_rq(that_cpu);
+
+ /*
+ * @owner can disappear, simply migrate to @that_cpu and leave that CPU
+ * to sort things out.
+ */
+
+ /*
+ * Since we're going to drop @rq, we have to put(@next) first,
+ * otherwise we have a reference that no longer belongs to us. Use
+ * rq->idle to fill the void and make the next pick_next_task()
+ * invocation happy.
+ *
+ * XXX double, triple think about this.
+ * XXX put doesn't work with ON_RQ_MIGRATE
+ *
+ * CPU0 CPU1
+ *
+ * B mutex_lock(X)
+ *
+ * A mutex_lock(X) <- B
+ * A __schedule()
+ * A pick->A
+ * A proxy->B
+ * A migrate A to CPU1
+ * B mutex_unlock(X) -> A
+ * B __schedule()
+ * B pick->A
+ * B switch_to (A)
+ * A ... does stuff
+ * A ... is still running here
+ *
+ * * BOOM *
+ */
+ put_prev_task(rq, next);
+ if (curr_in_chain) {
+ rq_set_proxy(rq, rq->idle);
+ set_tsk_need_resched(rq->idle);
+ /*
+ * XXX [juril] don't we still need to migrate @next to
+ * @owner's CPU?
+ */
+ return rq->idle;
+ }
+ rq_set_proxy(rq, rq->idle);
+
+ for (; p; p = p->blocked_proxy) {
+ int wake_cpu = p->wake_cpu;
+
+ WARN_ON(p == rq_curr(rq));
+
+ deactivate_task(rq, p, 0);
+ set_task_cpu(p, that_cpu);
+ /*
+ * We can abuse blocked_entry to migrate the thing, because @p is
+ * still on the rq.
+ */
+ list_add(&p->blocked_entry, &migrate_list);
+
+ /*
+ * Preserve p->wake_cpu, such that we can tell where it
+ * used to run later.
+ */
+ p->wake_cpu = wake_cpu;
+ }
+
+ rq_unpin_lock(rq, rf);
+ raw_spin_rq_unlock(rq);
+ raw_spin_rq_lock(that_rq);
+
+ while (!list_empty(&migrate_list)) {
+ p = list_first_entry(&migrate_list, struct task_struct, blocked_entry);
+ list_del_init(&p->blocked_entry);
+
+ enqueue_task(that_rq, p, 0);
+ check_preempt_curr(that_rq, p, 0);
+ p->on_rq = TASK_ON_RQ_QUEUED;
+ /*
+ * check_preempt_curr has already called
+ * resched_curr(that_rq) in case it is
+ * needed.
+ */
+ }
+
+ raw_spin_rq_unlock(that_rq);
+ raw_spin_rq_lock(rq);
+ rq_repin_lock(rq, rf);
+
+ return NULL; /* Retry task selection on _this_ CPU. */
+}
+
+static inline struct task_struct *
+proxy_resched_idle(struct rq *rq, struct task_struct *next)
+{
+ put_prev_task(rq, next);
+ rq_set_proxy(rq, rq->idle);
+ set_tsk_need_resched(rq->idle);
+ return rq->idle;
+}
+
+static void proxy_enqueue_on_owner(struct rq *rq, struct task_struct *p,
+ struct task_struct *owner,
+ struct task_struct *next)
+{
+ /*
+ * Walk back up the blocked_proxy relation and enqueue them all on @owner
+ *
+ * ttwu_activate() will pick them up and place them on whatever rq
+ * @owner will run next.
+ * XXX connoro: originally we would jump back into the main proxy() loop
+ * owner->on_rq != 0 path, but if we then end up taking the owned_task path
+ * we can overwrite p->on_rq after ttwu_do_activate sets it to 1, which breaks
+ * the assumptions made in ttwu_do_activate.
+ *
+ * Perhaps revisit whether retry is now possible given the changes to the
+ * owned_task path since I wrote the prior comment...
+ */
+ if (!owner->on_rq) {
+ /* jstultz: Err, do we need to hold a lock on p? (we gave it up for owner) */
+ for (; p; p = p->blocked_proxy) {
+ if (p == owner)
+ continue;
+ BUG_ON(!p->on_rq);
+ deactivate_task(rq, p, DEQUEUE_SLEEP);
+ if (task_current_proxy(rq, p)) {
+ put_prev_task(rq, next);
+ rq_set_proxy(rq, rq->idle);
+ }
+ /*
+ * XXX connoro: need to verify this is necessary. The rationale is that
+ * ttwu_do_activate must not have a chance to activate p elsewhere before
+ * it's fully extricated from its old rq.
+ */
+ smp_mb();
+ list_add(&p->blocked_entry, &owner->blocked_entry);
+ }
+ }
+}
+
+/*
+ * Find who @next (currently blocked on a mutex) can proxy for.
+ *
+ * Follow the blocked-on relation:
+ *
+ * ,-> task
+ * | | blocked-on
+ * | v
+ * proxied-by | mutex
+ * | | owner
+ * | v
+ * `-- task
+ *
+ * and set the proxied-by relation, this latter is used by the mutex code
+ * to find which (blocked) task to hand-off to.
+ *
+ * Lock order:
+ *
+ * p->pi_lock
+ * rq->lock
+ * mutex->wait_lock
+ * p->blocked_lock
+ *
+ * Returns the task that is going to be used as execution context (the one
+ * that is actually going to be put to run on cpu_of(rq)).
+ */
+static struct task_struct *
+proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
+{
+ struct task_struct *p = next;
+ struct task_struct *owner = NULL;
+ bool curr_in_chain = false;
+ int this_cpu, that_cpu;
+ struct mutex *mutex;
+
+ this_cpu = cpu_of(rq);
+
+ /*
+ * Follow blocked_on chain.
+ *
+ * TODO: deadlock detection
+ */
+ for (p = next; p->blocked_on; p = owner) {
+ mutex = p->blocked_on;
+ /* Something changed in the chain, pick_again */
+ if (!mutex)
+ return NULL;
+
+ /*
+ * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
+ * and ensure @owner sticks around.
+ */
+ raw_spin_lock(&mutex->wait_lock);
+ raw_spin_lock(&p->blocked_lock);
+
+ /* Check again that p is blocked with blocked_lock held */
+ if (!task_is_blocked(p) || mutex != p->blocked_on) {
+ /*
+ * Something changed in the blocked_on chain and
+ * we don't know if only at this level. So, let's
+ * just bail out completely and let __schedule
+ * figure things out (pick_again loop).
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return NULL;
+ }
+
+ if (task_current(rq, p))
+ curr_in_chain = true;
+
+ owner = mutex_owner(mutex);
+ if (task_cpu(owner) != this_cpu) {
+ that_cpu = task_cpu(owner);
+ /*
+ * @owner can disappear, simply migrate to @that_cpu and leave that CPU
+ * to sort things out.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+
+ return proxy_migrate_task(rq, next, rf, p, that_cpu, curr_in_chain);
+ }
+
+ if (task_on_rq_migrating(owner)) {
+ /*
+ * XXX connoro: one of the chain of mutex owners is currently
+ * migrating to this CPU, but has not yet been enqueued because
+ * we are holding the rq lock. As a simple solution, just schedule
+ * rq->idle to give the migration a chance to complete. Much like
+ * the migrate_task case we should end up back in proxy(), this
+ * time hopefully with all relevant tasks already enqueued.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq, next);
+ }
+
+ if (!owner->on_rq) {
+ /*
+ * XXX connoro: rq->curr must not be added to the blocked_entry list
+ * or else ttwu_do_activate could enqueue it elsewhere before it
+ * switches out here. The approach to avoiding this is the same as in
+ * the migrate_task case.
+ */
+ if (curr_in_chain) {
+ /*
+ * This is identical to the owned_task handling, probably should
+ * fold them together...
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq, next);
+ }
+
+ /*
+ * If !@owner->on_rq, holding @rq->lock will not pin the task,
+ * so we cannot drop @mutex->wait_lock until we're sure it's a blocked
+ * task on this rq.
+ *
+ * We use @owner->blocked_lock to serialize against ttwu_activate().
+ * Either we see its new owner->on_rq or it will see our list_add().
+ */
+ if (owner != p) {
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_lock(&owner->blocked_lock);
+ }
+
+ proxy_enqueue_on_owner(rq, p, owner, next);
+
+ if (task_current_proxy(rq, next)) {
+ put_prev_task(rq, next);
+ rq_set_proxy(rq, rq->idle);
+ }
+ raw_spin_unlock(&owner->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+
+ return NULL; /* retry task selection */
+ }
+
+ if (owner == p) {
+ /*
+ * It's possible we interleave with mutex_unlock like:
+ *
+ * lock(&rq->lock);
+ * proxy()
+ * mutex_unlock()
+ * lock(&wait_lock);
+ * next(owner) = current->blocked_proxy;
+ * unlock(&wait_lock);
+ *
+ * wake_up_q();
+ * ...
+ * ttwu_runnable()
+ * __task_rq_lock()
+ * lock(&wait_lock);
+ * owner == p
+ *
+ * Which leaves us to finish the ttwu_runnable() and make it go.
+ *
+ * XXX is this happening in case of a HANDOFF to p?
+ * In any case, reading of the owner in __mutex_unlock_slowpath is
+ * done atomically outside wait_lock (only adding waiters to wake_q is
+ * done inside the critical section).
+ * Does this mean we can get to proxy _w/o an owner_ if that was
+ * cleared before grabbing wait_lock? Do we account for this case?
+ * OK we actually do (see PROXY_EXEC ifdeffery in unlock function).
+ */
+
+ /*
+ * XXX connoro: prior versions would clear p->blocked_on here, but I think
+ * that can race with the handoff wakeup path. If a wakeup reaches the
+ * call to ttwu_runnable after this point and finds that p is enqueued
+ * and marked as unblocked, it will happily do a ttwu_do_wakeup() call
+ * with zero regard for whether the task's affinity actually allows
+ * running it on this CPU.
+ */
+
+ /*
+ * XXX connoro: previous versions would immediately run owner here if
+ * it's allowed to run on this CPU, but this creates potential races
+ * with the wakeup logic. Instead we can just take the blocked_task path
+ * when owner is already !on_rq, or else schedule rq->idle so that
+ * ttwu_runnable can get the rq lock and mark owner as running.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq, next);
+ }
+
+ /*
+ * OK, now we're absolutely sure @owner is not blocked _and_
+ * on this rq, therefore holding @rq->lock is sufficient to
+ * guarantee its existence, as per ttwu_remote().
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+
+ owner->blocked_proxy = p;
+ }
+
+ WARN_ON_ONCE(!owner->on_rq);
+ return owner;
+}
+#else /* PROXY_EXEC */
+static struct task_struct *
+proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
+{
+ return next;
+}
+#endif /* PROXY_EXEC */
+
/*
* __schedule() is the main scheduler function.
*
@@ -6520,6 +7117,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
struct rq_flags rf;
struct rq *rq;
int cpu;
+ bool preserve_need_resched = false;

cpu = smp_processor_id();
rq = cpu_rq(cpu);
@@ -6565,7 +7163,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
if (signal_pending_state(prev_state, prev)) {
WRITE_ONCE(prev->__state, TASK_RUNNING);
- } else {
+ } else if (!task_is_blocked(prev)) {
prev->sched_contributes_to_load =
(prev_state & TASK_UNINTERRUPTIBLE) &&
!(prev_state & TASK_NOLOAD) &&
@@ -6591,13 +7189,49 @@ static void __sched notrace __schedule(unsigned int sched_mode)
atomic_inc(&rq->nr_iowait);
delayacct_blkio_start();
}
+ } else {
+ /*
+ * XXX
+ * Let's make this task, which is blocked on
+ * a mutex, (push/pull)able (RT/DL).
+ * Unfortunately we can only deal with that by
+ * means of a dequeue/enqueue cycle. :-/
+ */
+ dequeue_task(rq, prev, 0);
+ enqueue_task(rq, prev, 0);
}
switch_count = &prev->nvcsw;
}

- next = pick_next_task(rq, prev, &rf);
+pick_again:
+ /*
+ * If the picked task is actually blocked, it can act as a proxy
+ * for the task that is holding the mutex the picked task is blocked
+ * on. Get a reference to the blocked (going to be proxy) task here.
+ * Note that if next isn't actually blocked we will have rq->proxy ==
+ * rq->curr == next in the end, which is intended and means that proxy
+ * execution is currently "not in use".
+ */
+ next = pick_next_task(rq, rq_proxy(rq), &rf);
rq_set_proxy(rq, next);
- clear_tsk_need_resched(prev);
+ next->blocked_proxy = NULL;
+ if (unlikely(task_is_blocked(next))) {
+ next = proxy(rq, next, &rf);
+ if (!next)
+ goto pick_again;
+ /*
+ * XXX connoro: when proxy() returns rq->idle it sets the
+ * TIF_NEED_RESCHED flag, but in the case where
+ * prev == rq->idle, the flag would be cleared immediately,
+ * defeating the desired behavior. So, check explicitly for
+ * this case.
+ */
+ if (next == rq->idle && prev == rq->idle)
+ preserve_need_resched = true;
+ }
+
+ if (!preserve_need_resched)
+ clear_tsk_need_resched(prev);
clear_preempt_need_resched();
#ifdef CONFIG_SCHED_DEBUG
rq->last_seen_need_resched_ns = 0;
@@ -6684,6 +7318,10 @@ static inline void sched_submit_work(struct task_struct *tsk)
*/
SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);

+ /* XXX still necessary? tsk_is_pi_blocked check here was deleted... */
+ if (task_is_blocked(tsk))
+ return;
+
/*
* If we are going to sleep and we have plugged IO queued,
* make sure to submit it to avoid deadlocks.
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 6ec40f90317c..c9b6a23a99b3 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1740,7 +1740,7 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)

enqueue_dl_entity(&p->dl, flags);

- if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
+ if (!task_current(rq, p) && p->nr_cpus_allowed > 1 && !task_is_blocked(p))
enqueue_pushable_dl_task(rq, p);
}

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1471519b95c5..aa8d772efadf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7929,7 +7929,9 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
goto idle;

#ifdef CONFIG_FAIR_GROUP_SCHED
- if (!prev || prev->sched_class != &fair_sched_class)
+ if (!prev ||
+ prev->sched_class != &fair_sched_class ||
+ rq_curr(rq) != rq_proxy(rq))
goto simple;

/*
@@ -8447,6 +8449,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)

lockdep_assert_rq_held(env->src_rq);

+ if (task_is_blocked(p))
+ return 0;
+
/*
* We do not migrate tasks that are:
* 1) throttled_lb_pair, or
@@ -8497,7 +8502,11 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
/* Record that we found at least one task that could run on dst_cpu */
env->flags &= ~LBF_ALL_PINNED;

- if (task_on_cpu(env->src_rq, p)) {
+ /*
+ * XXX mutex unlock path may have marked proxy as unblocked allowing us to
+ * reach this point, but we still shouldn't migrate it.
+ */
+ if (task_on_cpu(env->src_rq, p) || task_current_proxy(env->src_rq, p)) {
schedstat_inc(p->stats.nr_failed_migrations_running);
return 0;
}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 03e5d8fa67aa..d1c5a022eae4 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1537,7 +1537,8 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)

enqueue_rt_entity(rt_se, flags);

- if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
+ if (!task_current(rq, p) && p->nr_cpus_allowed > 1 &&
+ !task_is_blocked(p))
enqueue_pushable_task(rq, p);
}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 84e49c2530b0..01f82ace084a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2159,6 +2159,19 @@ static inline int task_current_proxy(struct rq *rq, struct task_struct *p)
return rq_proxy(rq) == p;
}

+#ifdef CONFIG_PROXY_EXEC
+static inline bool task_is_blocked(struct task_struct *p)
+{
+ return !!p->blocked_on;
+}
+#else /* !PROXY_EXEC */
+static inline bool task_is_blocked(struct task_struct *p)
+{
+ return false;
+}
+
+#endif /* PROXY_EXEC */
+
static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
{
#ifdef CONFIG_SMP
@@ -2316,12 +2329,18 @@ struct sched_class {

static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
{
- WARN_ON_ONCE(rq_proxy(rq) != prev);
+ WARN_ON_ONCE(rq_curr(rq) != prev && prev != rq_proxy(rq));
+
+ /* XXX connoro: is this check necessary? */
+ if (prev == rq_proxy(rq) && task_cpu(prev) != cpu_of(rq))
+ return;
+
prev->sched_class->put_prev_task(rq, prev);
}

static inline void set_next_task(struct rq *rq, struct task_struct *next)
{
+ WARN_ON_ONCE(!task_current_proxy(rq, next));
next->sched_class->set_next_task(rq, next, false);
}

--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-20 23:39:00

by John Stultz

[permalink] [raw]
Subject: [PATCH v2 12/12] sched: Attempt to fix rt/dl load balancing via chain level balance

From: Connor O'Brien <[email protected]>

RT/DL balancing is supposed to guarantee that with N cpus available &
CPU affinity permitting, the top N RT/DL tasks will get spread across
the CPUs and all get to run. Proxy exec greatly complicates this as
blocked tasks remain on the rq but cannot be usefully migrated away
from their lock owning tasks. This has two major consequences:
1. In order to get the desired properties we need to migrate a blocked
task, its would-be proxy, and everything in between, all together -
i.e., we need to push/pull "blocked chains" rather than individual
tasks.
2. Tasks that are part of rq->curr's "blocked tree" therefore should
not be pushed or pulled. Options for enforcing this seem to include
a) create some sort of complex data structure for tracking
pushability, updating it whenever the blocked tree for rq->curr
changes (e.g. on mutex handoffs, migrations, etc.) as well as on
context switches.
b) give up on O(1) pushability checks, and search through the pushable
list every push/pull until we find a pushable "chain"
c) Extend option "b" with some sort of caching to avoid repeated work.

For the sake of simplicity & separating the "chain level balancing"
concerns from complicated optimizations, this patch focuses on trying
to implement option "b" correctly. This can then hopefully provide a
baseline for "correct load balancing behavior" that optimizations can
try to implement more efficiently.
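
For illustration (this is not part of the diff below), pushing a
"blocked chain" boils down to walking the blocked_on relation from the
pushable task toward the lock owner and migrating each link in turn.
The hypothetical push_chain_sketch() below paraphrases the
push_task_chain() helper this patch adds to sched.h, with the locking
assertions and the races discussed in its comments left out:

	static void push_chain_sketch(struct rq *src_rq, struct rq *dst_rq,
				      struct task_struct *task)
	{
		struct task_struct *owner;

		for (; task; task = owner) {
			/*
			 * Next link: the owner of the mutex this task is
			 * blocked on, or NULL if the task is not blocked
			 * (i.e. it is the execution context and the chain
			 * ends here).
			 */
			owner = task_is_blocked(task) ?
				mutex_owner(task->blocked_on) : NULL;

			/* Move this link of the chain to the destination rq. */
			deactivate_task(src_rq, task, 0);
			set_task_cpu(task, dst_rq->cpu);
			activate_task(dst_rq, task, 0);

			/* A task blocked on a mutex it owns also ends the chain. */
			if (task == owner)
				break;
		}
	}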

Note:
The inability to atomically check "is task enqueued on a specific rq"
creates 2 possible races when following a blocked chain:
- If we check task_rq() first on a task that is dequeued from its rq,
it can be woken and enqueued on another rq before the call to
task_on_rq_queued()
- If we call task_on_rq_queued() first on a task that is on another
rq, it can be dequeued (since we don't hold its rq's lock) and then
be set to the current rq before we check task_rq().

Maybe there's a more elegant solution that would work, but for now,
just sandwich the task_rq() check between two task_on_rq_queued()
checks, all separated by smp_rmb() calls. Since we hold rq's lock,
task can't be enqueued or dequeued from rq, so neither race should be
possible.
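
In other words, the check ends up looking roughly like the
task_queued_on_rq() helper this patch adds to sched.h:

	static inline bool task_queued_on_rq(struct rq *rq, struct task_struct *task)
	{
		if (!task_on_rq_queued(task))	/* first queued check */
			return false;
		smp_rmb();
		if (task_rq(task) != rq)	/* queued on *this* rq? */
			return false;
		smp_rmb();
		return task_on_rq_queued(task);	/* re-check queued to close the race */
	}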

Extensive comments on various pitfalls, races, etc. are included inline.

TODO: Probably no good reason not to move the new helper
implementations from sched.h into core.c

Cc: Joel Fernandes <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Zimuzo Ezeozue <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Paul E . McKenney" <[email protected]>
Cc: [email protected]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: rebased & sorted minor conflicts, folded down numerous
fixes from Connor, fixed number of checkpatch issues]
Signed-off-by: John Stultz <[email protected]>
---
kernel/sched/core.c | 8 +-
kernel/sched/cpudeadline.c | 12 +--
kernel/sched/cpudeadline.h | 3 +-
kernel/sched/cpupri.c | 29 ++++--
kernel/sched/cpupri.h | 6 +-
kernel/sched/deadline.c | 140 ++++++++++++++++---------
kernel/sched/fair.c | 5 +
kernel/sched/rt.c | 202 +++++++++++++++++++++++++++----------
kernel/sched/sched.h | 150 ++++++++++++++++++++++++++-
9 files changed, 426 insertions(+), 129 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 11138277c7c8..4e7d24560f4c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2492,6 +2492,10 @@ static int migration_cpu_stop(void *data)

int push_cpu_stop(void *arg)
{
+ /* XXX connoro: how do we handle this case when the rq->curr we push away
+ * is part of a proxy chain!?
+ * we actually push the old rq->proxy and its blocker chain.
+ */
struct rq *lowest_rq = NULL, *rq = this_rq();
struct task_struct *p = arg;

@@ -2516,9 +2520,7 @@ int push_cpu_stop(void *arg)

// XXX validate p is still the highest prio task
if (task_rq(p) == rq) {
- deactivate_task(rq, p, 0);
- set_task_cpu(p, lowest_rq->cpu);
- activate_task(lowest_rq, p, 0);
+ push_task_chain(rq, lowest_rq, p);
resched_curr(lowest_rq);
}

diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c
index 57c92d751bcd..efd6d716a3f2 100644
--- a/kernel/sched/cpudeadline.c
+++ b/kernel/sched/cpudeadline.c
@@ -113,13 +113,13 @@ static inline int cpudl_maximum(struct cpudl *cp)
*
* Returns: int - CPUs were found
*/
-int cpudl_find(struct cpudl *cp, struct task_struct *p,
+int cpudl_find(struct cpudl *cp, struct task_struct *sched_ctx, struct task_struct *exec_ctx,
struct cpumask *later_mask)
{
- const struct sched_dl_entity *dl_se = &p->dl;
+ const struct sched_dl_entity *dl_se = &sched_ctx->dl;

if (later_mask &&
- cpumask_and(later_mask, cp->free_cpus, &p->cpus_mask)) {
+ cpumask_and(later_mask, cp->free_cpus, &exec_ctx->cpus_mask)) {
unsigned long cap, max_cap = 0;
int cpu, max_cpu = -1;

@@ -128,13 +128,13 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,

/* Ensure the capacity of the CPUs fits the task. */
for_each_cpu(cpu, later_mask) {
- if (!dl_task_fits_capacity(p, cpu)) {
+ if (!dl_task_fits_capacity(sched_ctx, cpu)) {
cpumask_clear_cpu(cpu, later_mask);

cap = capacity_orig_of(cpu);

if (cap > max_cap ||
- (cpu == task_cpu(p) && cap == max_cap)) {
+ (cpu == task_cpu(exec_ctx) && cap == max_cap)) {
max_cap = cap;
max_cpu = cpu;
}
@@ -150,7 +150,7 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,

WARN_ON(best_cpu != -1 && !cpu_present(best_cpu));

- if (cpumask_test_cpu(best_cpu, &p->cpus_mask) &&
+ if (cpumask_test_cpu(best_cpu, &exec_ctx->cpus_mask) &&
dl_time_before(dl_se->deadline, cp->elements[0].dl)) {
if (later_mask)
cpumask_set_cpu(best_cpu, later_mask);
diff --git a/kernel/sched/cpudeadline.h b/kernel/sched/cpudeadline.h
index 0adeda93b5fb..6bb27f70e9d2 100644
--- a/kernel/sched/cpudeadline.h
+++ b/kernel/sched/cpudeadline.h
@@ -16,7 +16,8 @@ struct cpudl {
};

#ifdef CONFIG_SMP
-int cpudl_find(struct cpudl *cp, struct task_struct *p, struct cpumask *later_mask);
+int cpudl_find(struct cpudl *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx, struct cpumask *later_mask);
void cpudl_set(struct cpudl *cp, int cpu, u64 dl);
void cpudl_clear(struct cpudl *cp, int cpu);
int cpudl_init(struct cpudl *cp);
diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index a286e726eb4b..285242b76597 100644
--- a/kernel/sched/cpupri.c
+++ b/kernel/sched/cpupri.c
@@ -64,6 +64,7 @@ static int convert_prio(int prio)
return cpupri;
}

+/* XXX connoro: the p passed in here should be exec ctx */
static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
struct cpumask *lowest_mask, int idx)
{
@@ -96,11 +97,15 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
if (skip)
return 0;

- if (cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids)
+ if ((p && cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids) ||
+ (!p && cpumask_any(vec->mask) >= nr_cpu_ids))
return 0;

if (lowest_mask) {
- cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
+ if (p)
+ cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
+ else
+ cpumask_copy(lowest_mask, vec->mask);

/*
* We have to ensure that we have at least one bit
@@ -117,10 +122,11 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
return 1;
}

-int cpupri_find(struct cpupri *cp, struct task_struct *p,
+int cpupri_find(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
struct cpumask *lowest_mask)
{
- return cpupri_find_fitness(cp, p, lowest_mask, NULL);
+ return cpupri_find_fitness(cp, sched_ctx, exec_ctx, lowest_mask, NULL);
}

/**
@@ -140,18 +146,19 @@ int cpupri_find(struct cpupri *cp, struct task_struct *p,
*
* Return: (int)bool - CPUs were found
*/
-int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
- struct cpumask *lowest_mask,
- bool (*fitness_fn)(struct task_struct *p, int cpu))
+int cpupri_find_fitness(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
+ struct cpumask *lowest_mask,
+ bool (*fitness_fn)(struct task_struct *p, int cpu))
{
- int task_pri = convert_prio(p->prio);
+ int task_pri = convert_prio(sched_ctx->prio);
int idx, cpu;

WARN_ON_ONCE(task_pri >= CPUPRI_NR_PRIORITIES);

for (idx = 0; idx < task_pri; idx++) {

- if (!__cpupri_find(cp, p, lowest_mask, idx))
+ if (!__cpupri_find(cp, exec_ctx, lowest_mask, idx))
continue;

if (!lowest_mask || !fitness_fn)
@@ -159,7 +166,7 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,

/* Ensure the capacity of the CPUs fit the task */
for_each_cpu(cpu, lowest_mask) {
- if (!fitness_fn(p, cpu))
+ if (!fitness_fn(sched_ctx, cpu))
cpumask_clear_cpu(cpu, lowest_mask);
}

@@ -191,7 +198,7 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
* really care.
*/
if (fitness_fn)
- return cpupri_find(cp, p, lowest_mask);
+ return cpupri_find(cp, sched_ctx, exec_ctx, lowest_mask);

return 0;
}
diff --git a/kernel/sched/cpupri.h b/kernel/sched/cpupri.h
index d6cba0020064..bde7243cec2e 100644
--- a/kernel/sched/cpupri.h
+++ b/kernel/sched/cpupri.h
@@ -18,9 +18,11 @@ struct cpupri {
};

#ifdef CONFIG_SMP
-int cpupri_find(struct cpupri *cp, struct task_struct *p,
+int cpupri_find(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
struct cpumask *lowest_mask);
-int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
+int cpupri_find_fitness(struct cpupri *cp, struct task_struct *sched_ctx,
+ struct task_struct *exec_ctx,
struct cpumask *lowest_mask,
bool (*fitness_fn)(struct task_struct *p, int cpu));
void cpupri_set(struct cpupri *cp, int cpu, int pri);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index c9b6a23a99b3..83908d51f354 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1814,7 +1814,7 @@ static inline bool dl_task_is_earliest_deadline(struct task_struct *p,
rq->dl.earliest_dl.curr));
}

-static int find_later_rq(struct task_struct *task);
+static int find_later_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx);

static int
select_task_rq_dl(struct task_struct *p, int cpu, int flags)
@@ -1854,7 +1854,11 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
select_rq |= !dl_task_fits_capacity(p, cpu);

if (select_rq) {
- int target = find_later_rq(p);
+ /*
+ * XXX connoro: verify this but in wakeup path we should
+ * always have unblocked p, so exec_ctx == sched_ctx == p.
+ */
+ int target = find_later_rq(p, p);

if (target != -1 &&
dl_task_is_earliest_deadline(p, cpu_rq(target)))
@@ -1901,12 +1905,18 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused

static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
{
+ struct task_struct *exec_ctx;
+
/*
* Current can't be migrated, useless to reschedule,
* let's hope p can move out.
*/
if (rq_curr(rq)->nr_cpus_allowed == 1 ||
- !cpudl_find(&rq->rd->cpudl, rq_proxy(rq), NULL))
+ !cpudl_find(&rq->rd->cpudl, rq_proxy(rq), rq_curr(rq), NULL))
+ return;
+
+ exec_ctx = find_exec_ctx(rq, p);
+ if (task_current(rq, exec_ctx))
return;

/*
@@ -1914,7 +1924,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
* see if it is pushed or pulled somewhere else.
*/
if (p->nr_cpus_allowed != 1 &&
- cpudl_find(&rq->rd->cpudl, p, NULL))
+ cpudl_find(&rq->rd->cpudl, p, exec_ctx, NULL))
return;

resched_curr(rq);
@@ -2084,14 +2094,6 @@ static void task_fork_dl(struct task_struct *p)
/* Only try algorithms three times */
#define DL_MAX_TRIES 3

-static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
-{
- if (!task_on_cpu(rq, p) &&
- cpumask_test_cpu(cpu, &p->cpus_mask))
- return 1;
- return 0;
-}
-
/*
* Return the earliest pushable rq's task, which is suitable to be executed
* on the CPU, NULL otherwise:
@@ -2110,7 +2112,7 @@ static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu
if (next_node) {
p = __node_2_pdl(next_node);

- if (pick_dl_task(rq, p, cpu))
+ if (pushable_chain(rq, p, cpu) == 1)
return p;

next_node = rb_next(next_node);
@@ -2122,25 +2124,25 @@ static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu

static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);

-static int find_later_rq(struct task_struct *task)
+static int find_later_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx)
{
struct sched_domain *sd;
struct cpumask *later_mask = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
int this_cpu = smp_processor_id();
- int cpu = task_cpu(task);
+ int cpu = task_cpu(sched_ctx);

/* Make sure the mask is initialized first */
if (unlikely(!later_mask))
return -1;

- if (task->nr_cpus_allowed == 1)
+ if (exec_ctx && exec_ctx->nr_cpus_allowed == 1)
return -1;

/*
* We have to consider system topology and task affinity
* first, then we can look for a suitable CPU.
*/
- if (!cpudl_find(&task_rq(task)->rd->cpudl, task, later_mask))
+ if (!cpudl_find(&task_rq(exec_ctx)->rd->cpudl, sched_ctx, exec_ctx, later_mask))
return -1;

/*
@@ -2209,15 +2211,59 @@ static int find_later_rq(struct task_struct *task)
return -1;
}

+static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
+{
+ struct task_struct *p = NULL;
+ struct rb_node *next_node;
+
+ if (!has_pushable_dl_tasks(rq))
+ return NULL;
+
+ next_node = rb_first_cached(&rq->dl.pushable_dl_tasks_root);
+
+next_node:
+ if (next_node) {
+ p = __node_2_pdl(next_node);
+
+ /*
+ * cpu argument doesn't matter because we treat a -1 result
+ * (pushable but can't go to cpu0) the same as a 1 result
+ * (pushable to cpu0). All we care about here is general
+ * pushability.
+ */
+ if (pushable_chain(rq, p, 0))
+ return p; /* XXX connoro TODO this is definitely wrong in combo with the later checks...*/
+
+ next_node = rb_next(next_node);
+ goto next_node;
+ }
+
+ if (!p)
+ return NULL;
+
+ WARN_ON_ONCE(rq->cpu != task_cpu(p));
+ WARN_ON_ONCE(task_current(rq, p));
+ WARN_ON_ONCE(p->nr_cpus_allowed <= 1);
+
+ WARN_ON_ONCE(!task_on_rq_queued(p));
+ WARN_ON_ONCE(!dl_task(p));
+
+ return p;
+}
+
/* Locks the rq it finds */
static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
{
+ struct task_struct *exec_ctx;
struct rq *later_rq = NULL;
+ bool retry;
int tries;
int cpu;

for (tries = 0; tries < DL_MAX_TRIES; tries++) {
- cpu = find_later_rq(task);
+ retry = false;
+ exec_ctx = find_exec_ctx(rq, task);
+ cpu = find_later_rq(task, exec_ctx);

if ((cpu == -1) || (cpu == rq->cpu))
break;
@@ -2236,11 +2282,30 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)

/* Retry if something changed. */
if (double_lock_balance(rq, later_rq)) {
- if (unlikely(task_rq(task) != rq ||
- !cpumask_test_cpu(later_rq->cpu, &task->cpus_mask) ||
- task_on_cpu(rq, task) ||
- !dl_task(task) ||
- !task_on_rq_queued(task))) {
+ bool fail = false;
+
+ /* XXX connoro: this is a mess. Surely there's a better way to express it...*/
+ if (!dl_task(task)) {
+ fail = true;
+ } else if (rq != this_rq()) {
+ struct task_struct *next_task = pick_next_pushable_dl_task(rq);
+
+ if (next_task != task) {
+ fail = true;
+ } else {
+ exec_ctx = find_exec_ctx(rq, next_task);
+ retry = (exec_ctx &&
+ !cpumask_test_cpu(later_rq->cpu,
+ &exec_ctx->cpus_mask));
+ }
+ } else {
+ int pushable = pushable_chain(rq, task, later_rq->cpu);
+
+ fail = !pushable;
+ retry = pushable == -1;
+ }
+
+ if (unlikely(fail)) {
double_unlock_balance(rq, later_rq);
later_rq = NULL;
break;
@@ -2252,7 +2317,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
* its earliest one has a later deadline than our
* task, the rq is a good one.
*/
- if (dl_task_is_earliest_deadline(task, later_rq))
+ if (!retry && dl_task_is_earliest_deadline(task, later_rq))
break;

/* Otherwise we try again. */
@@ -2263,25 +2328,6 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
return later_rq;
}

-static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
-{
- struct task_struct *p;
-
- if (!has_pushable_dl_tasks(rq))
- return NULL;
-
- p = __node_2_pdl(rb_first_cached(&rq->dl.pushable_dl_tasks_root));
-
- WARN_ON_ONCE(rq->cpu != task_cpu(p));
- WARN_ON_ONCE(task_current(rq, p));
- WARN_ON_ONCE(p->nr_cpus_allowed <= 1);
-
- WARN_ON_ONCE(!task_on_rq_queued(p));
- WARN_ON_ONCE(!dl_task(p));
-
- return p;
-}
-
/*
* See if the non running -deadline tasks on this rq
* can be sent to some other CPU where they can preempt
@@ -2351,9 +2397,7 @@ static int push_dl_task(struct rq *rq)
goto retry;
}

- deactivate_task(rq, next_task, 0);
- set_task_cpu(next_task, later_rq->cpu);
- activate_task(later_rq, next_task, 0);
+ push_task_chain(rq, later_rq, next_task);
ret = 1;

resched_curr(later_rq);
@@ -2439,9 +2483,7 @@ static void pull_dl_task(struct rq *this_rq)
if (is_migration_disabled(p)) {
push_task = get_push_task(src_rq);
} else {
- deactivate_task(src_rq, p, 0);
- set_task_cpu(p, this_cpu);
- activate_task(this_rq, p, 0);
+ push_task_chain(src_rq, this_rq, p);
dmin = p->dl.deadline;
resched = true;
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa8d772efadf..bb2d61cbb5b3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8449,6 +8449,11 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)

lockdep_assert_rq_held(env->src_rq);

+ /*
+ * XXX connoro: Is this correct, or should we be doing chain
+ * balancing for CFS tasks too? Balancing chains that aren't
+ * part of the running task's blocked "tree" seems reasonable?
+ */
if (task_is_blocked(p))
return 0;

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 419270b0918e..b28fc4ccc9d2 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1599,7 +1599,7 @@ static void yield_task_rt(struct rq *rq)
}

#ifdef CONFIG_SMP
-static int find_lowest_rq(struct task_struct *task);
+static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx);

static int
select_task_rq_rt(struct task_struct *p, int cpu, int flags)
@@ -1649,7 +1649,10 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
(curr->nr_cpus_allowed < 2 || proxy->prio <= p->prio);

if (test || !rt_task_fits_capacity(p, cpu)) {
- int target = find_lowest_rq(p);
+ /* XXX connoro: double check this, but if we're waking p then
+ * it is unblocked so exec_ctx == sched_ctx == p.
+ */
+ int target = find_lowest_rq(p, p);

/*
* Bail out if we were forcing a migration to find a better
@@ -1676,12 +1679,22 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
{
+ struct task_struct *exec_ctx = p;
+ /*
+ * Current can't be migrated, useless to reschedule,
+ * let's hope p can move out.
+ */
/* XXX connoro: need to revise cpupri_find() to reflect the split
* context since it should look at rq->proxy for priority but rq->curr
* for affinity.
*/
if (rq_curr(rq)->nr_cpus_allowed == 1 ||
- !cpupri_find(&rq->rd->cpupri, rq_proxy(rq), NULL))
+ !cpupri_find(&rq->rd->cpupri, rq_proxy(rq), rq_curr(rq), NULL))
+ return;
+
+ /* No reason to preempt since rq->curr wouldn't change anyway */
+ exec_ctx = find_exec_ctx(rq, p);
+ if (task_current(rq, exec_ctx))
return;

/*
@@ -1689,7 +1702,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
* see if it is pushed or pulled somewhere else.
*/
if (p->nr_cpus_allowed != 1 &&
- cpupri_find(&rq->rd->cpupri, p, NULL))
+ cpupri_find(&rq->rd->cpupri, p, exec_ctx, NULL))
return;

/*
@@ -1855,15 +1868,6 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
/* Only try algorithms three times */
#define RT_MAX_TRIES 3

-static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
-{
- if (!task_on_cpu(rq, p) &&
- cpumask_test_cpu(cpu, &p->cpus_mask))
- return 1;
-
- return 0;
-}
-
/*
* Return the highest pushable rq's task, which is suitable to be executed
* on the CPU, NULL otherwise
@@ -1877,7 +1881,7 @@ static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
return NULL;

plist_for_each_entry(p, head, pushable_tasks) {
- if (pick_rt_task(rq, p, cpu))
+ if (pushable_chain(rq, p, cpu) == 1)
return p;
}

@@ -1886,19 +1890,19 @@ static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)

static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);

-static int find_lowest_rq(struct task_struct *task)
+static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struct *exec_ctx)
{
struct sched_domain *sd;
struct cpumask *lowest_mask = this_cpu_cpumask_var_ptr(local_cpu_mask);
int this_cpu = smp_processor_id();
- int cpu = task_cpu(task);
+ int cpu = task_cpu(sched_ctx);
int ret;

/* Make sure the mask is initialized first */
if (unlikely(!lowest_mask))
return -1;

- if (task->nr_cpus_allowed == 1)
+ if (exec_ctx && exec_ctx->nr_cpus_allowed == 1)
return -1; /* No other targets possible */

/*
@@ -1907,13 +1911,13 @@ static int find_lowest_rq(struct task_struct *task)
*/
if (sched_asym_cpucap_active()) {

- ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
- task, lowest_mask,
+ ret = cpupri_find_fitness(&task_rq(sched_ctx)->rd->cpupri,
+ sched_ctx, exec_ctx, lowest_mask,
rt_task_fits_capacity);
} else {

- ret = cpupri_find(&task_rq(task)->rd->cpupri,
- task, lowest_mask);
+ ret = cpupri_find(&task_rq(sched_ctx)->rd->cpupri,
+ sched_ctx, exec_ctx, lowest_mask);
}

if (!ret)
@@ -1977,15 +1981,48 @@ static int find_lowest_rq(struct task_struct *task)
return -1;
}

+static struct task_struct *pick_next_pushable_task(struct rq *rq)
+{
+ struct plist_head *head = &rq->rt.pushable_tasks;
+ struct task_struct *p, *push_task = NULL;
+
+ if (!has_pushable_tasks(rq))
+ return NULL;
+
+ plist_for_each_entry(p, head, pushable_tasks) {
+ if (pushable_chain(rq, p, 0)) {
+ push_task = p;
+ break;
+ }
+ }
+
+ if (!push_task)
+ return NULL;
+
+ BUG_ON(rq->cpu != task_cpu(push_task));
+ BUG_ON(task_current(rq, push_task) || task_current_proxy(rq, push_task));
+ /* XXX connoro: this check is pointless for blocked push_task. */
+ /* BUG_ON(push_task->nr_cpus_allowed <= 1); */
+
+ BUG_ON(!task_on_rq_queued(push_task));
+ BUG_ON(!rt_task(push_task));
+
+ return push_task;
+}
+
/* Will lock the rq it finds */
static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
{
+ struct task_struct *exec_ctx;
struct rq *lowest_rq = NULL;
+ bool retry;
int tries;
int cpu;

for (tries = 0; tries < RT_MAX_TRIES; tries++) {
- cpu = find_lowest_rq(task);
+ retry = false;
+ exec_ctx = find_exec_ctx(rq, task);
+ cpu = find_lowest_rq(task, exec_ctx);

if ((cpu == -1) || (cpu == rq->cpu))
break;
@@ -2004,18 +2041,77 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)

/* if the prio of this runqueue changed, try again */
if (double_lock_balance(rq, lowest_rq)) {
+ bool fail = false;
/*
* We had to unlock the run queue. In
* the mean time, task could have
* migrated already or had its affinity changed.
* Also make sure that it wasn't scheduled on its rq.
+ *
+ * XXX connoro: releasing the rq lock means we need to re-check pushability.
+ * Some scenarios:
+ * 1) If a migration from another CPU sent a task/chain to rq
+ * that made task newly unpushable by completing a chain
+ * from task to rq->curr, then we need to bail out and push something
+ * else.
+ * 2) If our chain led off this CPU or to a dequeued task, the last waiter
+ * on this CPU might have acquired the lock and woken (or even migrated
+ * & run, handed off the lock it held, etc...). This can invalidate the
+ * result of find_lowest_rq() if our chain previously ended in a blocked
+ * task whose affinity we could ignore, but now ends in an unblocked
+ * task that can't run on lowest_rq.
+ * 3) race described at https://lore.kernel.org/all/[email protected]/
+ *
+ * Notes on these:
+ * - Scenario #2 is properly handled by rerunning find_lowest_rq
+ * - Scenario #1 requires that we fail
+ * - Scenario #3 can AFAICT only occur when rq is not this_rq(). And the
+ * suggested fix is not universally correct now that push_cpu_stop() can
+ * call this function.
*/
- if (unlikely(task_rq(task) != rq ||
- !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
- task_on_cpu(rq, task) ||
- !rt_task(task) ||
- !task_on_rq_queued(task))) {
+ if (!rt_task(task)) {
+ fail = true;
+ } else if (rq != this_rq()) {
+ /*
+ * If we are dealing with a remote rq, then all bets are off
+ * because task might have run & then been dequeued since we
+ * released the lock, at which point our normal checks can race
+ * with migration, as described in
+ * https://lore.kernel.org/all/[email protected]/
+ * Need to repick to ensure we avoid a race.
+ * But re-picking would be unnecessary & incorrect in the
+ * push_cpu_stop() path.
+ */
+ struct task_struct *next_task = pick_next_pushable_task(rq);
+
+ if (next_task != task) {
+ fail = true;
+ } else {
+ exec_ctx = find_exec_ctx(rq, next_task);
+ retry = (exec_ctx &&
+ !cpumask_test_cpu(lowest_rq->cpu,
+ &exec_ctx->cpus_mask));
+ }
+ } else {
+ /*
+ * Chain level balancing introduces new ways for our choice of
+ * task & rq to become invalid when we release the rq lock, e.g.:
+ * 1) Migration to rq from another CPU makes task newly unpushable
+ * by completing a "blocked chain" from task to rq->curr.
+ * Fail so a different task can be chosen for push.
+ * 2) In cases where task's blocked chain led to a dequeued task
+ * or one on another rq, the last waiter in the chain on this
+ * rq might have acquired the lock and woken, meaning we must
+ * pick a different rq if its affinity prevents running on
+ * lowest_rq.
+ */
+ int pushable = pushable_chain(rq, task, lowest_rq->cpu);
+
+ fail = !pushable;
+ retry = pushable == -1;
+ }

+ if (unlikely(fail)) {
double_unlock_balance(rq, lowest_rq);
lowest_rq = NULL;
break;
@@ -2023,7 +2119,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
}

/* If this rq is still suitable use it. */
- if (lowest_rq->rt.highest_prio.curr > task->prio)
+ if (lowest_rq->rt.highest_prio.curr > task->prio && !retry)
break;

/* try again */
@@ -2034,26 +2130,6 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
return lowest_rq;
}

-static struct task_struct *pick_next_pushable_task(struct rq *rq)
-{
- struct task_struct *p;
-
- if (!has_pushable_tasks(rq))
- return NULL;
-
- p = plist_first_entry(&rq->rt.pushable_tasks,
- struct task_struct, pushable_tasks);
-
- BUG_ON(rq->cpu != task_cpu(p));
- BUG_ON(task_current(rq, p) || task_current_proxy(rq, p));
- BUG_ON(p->nr_cpus_allowed <= 1);
-
- BUG_ON(!task_on_rq_queued(p));
- BUG_ON(!rt_task(p));
-
- return p;
-}
-
/*
* If the current CPU has more than one RT task, see if the non
* running task can migrate over to a CPU that is running a task
@@ -2109,10 +2185,10 @@ static int push_rt_task(struct rq *rq, bool pull)
* If #3 is needed, might be best to make a separate patch with
* all the "chain-level load balancing" changes.
*/
- if (rq_curr(rq)->sched_class != &rt_sched_class)
+ if (rq_proxy(rq)->sched_class != &rt_sched_class)
return 0;

- cpu = find_lowest_rq(rq_curr(rq));
+ cpu = find_lowest_rq(rq_proxy(rq), rq_curr(rq));
if (cpu == -1 || cpu == rq->cpu)
return 0;

@@ -2146,6 +2222,15 @@ static int push_rt_task(struct rq *rq, bool pull)
* case for when we push a blocked task whose lock owner is not on
* this rq.
*/
+ /* XXX connoro: we might unlock the rq here. But it might be the case that
+ * the unpushable set can only *grow* and not shrink? Hmmm
+ * - load balancing should not pull anything from the active blocked tree
+ * - rq->curr can't have made progress or released mutexes
+ * - we can't have scheduled, right? Is preemption disabled here?
+ * - however, suppose proxy() pushed a task or chain here that linked our chain
+ * into the active tree.
+ */
+ /* XXX connoro: we need to pass in */
lowest_rq = find_lock_lowest_rq(next_task, rq);
if (!lowest_rq) {
struct task_struct *task;
@@ -2180,9 +2265,7 @@ static int push_rt_task(struct rq *rq, bool pull)
goto retry;
}

- deactivate_task(rq, next_task, 0);
- set_task_cpu(next_task, lowest_rq->cpu);
- activate_task(lowest_rq, next_task, 0);
+ push_task_chain(rq, lowest_rq, next_task);
resched_curr(lowest_rq);
ret = 1;

@@ -2453,9 +2536,8 @@ static void pull_rt_task(struct rq *this_rq)
if (is_migration_disabled(p)) {
push_task = get_push_task(src_rq);
} else {
- deactivate_task(src_rq, p, 0);
- set_task_cpu(p, this_cpu);
- activate_task(this_rq, p, 0);
+ /* XXX connoro: need to do chain migration here. */
+ push_task_chain(src_rq, this_rq, p);
resched = true;
}
/*
@@ -2469,6 +2551,14 @@ static void pull_rt_task(struct rq *this_rq)
double_unlock_balance(this_rq, src_rq);

if (push_task) {
+ /*
+ * can push_cpu_stop get away with following blocked_proxy
+ * even though it's not following it from rq->curr?
+ * I can't figure out if that's correct.
+ * Ha! actually the trick is that get_push_task should return
+ * the proxy!
+ * So push_cpu_stop just follows blocked_on relations.
+ */
raw_spin_rq_unlock(this_rq);
stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
push_task, &src_rq->push_work);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 01f82ace084a..1c998ddc7cb8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2425,7 +2425,7 @@ static inline struct task_struct *get_push_task(struct rq *rq)
* chain during the __schedule() call immediately after rq->curr is
* pushed.
*/
- struct task_struct *p = rq_curr(rq);
+ struct task_struct *p = rq_proxy(rq);

lockdep_assert_rq_held(rq);

@@ -3412,4 +3412,152 @@ static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *n
static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *next) { }
#endif

+#ifdef CONFIG_SMP
+
+static inline bool task_queued_on_rq(struct rq *rq, struct task_struct *task)
+{
+ if (!task_on_rq_queued(task))
+ return false;
+ smp_rmb();
+ if (task_rq(task) != rq)
+ return false;
+ smp_rmb();
+ if (!task_on_rq_queued(task))
+ return false;
+ return true;
+}
+
+static inline void push_task_chain(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
+{
+ struct task_struct *owner;
+
+ lockdep_assert_rq_held(rq);
+ lockdep_assert_rq_held(dst_rq);
+
+ BUG_ON(!task_queued_on_rq(rq, task));
+ BUG_ON(task_current_proxy(rq, task));
+
+ for (; task != NULL; task = owner) {
+ /*
+ * XXX connoro: note that if task is currently in the process of migrating to
+ * rq (but not yet enqueued since we hold the rq lock) then we stop only after
+ * pushing all the preceding tasks. This isn't ideal (the pushed chain will
+ * probably get sent back as soon as it's picked on dst_rq) but short of holding
+ * all of the rq locks while balancing, I don't see how we can avoid this, and
+ * some extra migrations are clearly better than trying to dequeue task from rq
+ * before it's ever enqueued here.
+ *
+ * XXX connoro: catastrophic race when task is dequeued on rq to start and then
+ * wakes on another rq in between the two checks.
+ * There's probably a better way than the below though...
+ */
+ if (!task_queued_on_rq(rq, task) || task_current_proxy(rq, task))
+ break;
+
+ if (task_is_blocked(task)) {
+ owner = mutex_owner(task->blocked_on);
+ } else {
+ owner = NULL;
+ }
+ deactivate_task(rq, task, 0);
+ set_task_cpu(task, dst_rq->cpu);
+ activate_task(dst_rq, task, 0);
+ if (task == owner)
+ break;
+ }
+}
+
+/*
+ * Returns the unblocked task at the end of the blocked chain starting with p
+ * if that chain is composed entirely of tasks enqueued on rq, or NULL otherwise.
+ */
+static inline struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
+{
+ struct task_struct *exec_ctx, *owner;
+ struct mutex *mutex;
+
+ lockdep_assert_rq_held(rq);
+
+ /*
+ * XXX connoro: I *think* we have to return rq->curr if it occurs anywhere in the chain
+ * to avoid races in certain scenarios where rq->curr has just blocked but can't
+ * switch out until we release its rq lock.
+ * Should the check be task_on_cpu() instead? Does it matter? I don't think this
+ * gets called while context switch is actually ongoing which IIUC is where this would
+ * make a difference...
+ * correction: it can get called from finish_task_switch apparently. Unless that's wrong;
+ * double check.
+ */
+ for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
+ exec_ctx = owner) {
+ mutex = exec_ctx->blocked_on;
+ owner = mutex_owner(mutex);
+ if (owner == exec_ctx)
+ break;
+
+ /*
+ * XXX connoro: can we race here if owner is migrating to rq?
+ * owner has to be dequeued from its old rq before set_task_cpu
+ * is called, and we hold this rq's lock so it can't be
+ * enqueued here yet...right?
+ *
+ * Also if owner is dequeued we can race with its wakeup on another
+ * CPU...at which point all hell will break loose potentially...
+ */
+ if (!task_queued_on_rq(rq, owner) || task_current_proxy(rq, owner)) {
+ exec_ctx = NULL;
+ break;
+ }
+ }
+ return exec_ctx;
+}
+
+
+/*
+ * Returns:
+ * 1 if chain is pushable and affinity does not prevent pushing to cpu
+ * 0 if chain is unpushable
+ * -1 if chain is pushable but affinity blocks running on cpu.
+ * XXX connoro: maybe there's a cleaner way to do this...
+ */
+static inline int pushable_chain(struct rq *rq, struct task_struct *p, int cpu)
+{
+ struct task_struct *exec_ctx;
+
+ lockdep_assert_rq_held(rq);
+
+ /*
+ * XXX connoro: 2 issues combine here:
+ * 1) we apparently have some stuff on the pushable list after it's
+ * dequeued from the rq
+ * 2) This check can race with migration/wakeup if p was already dequeued
+ * when we got the rq lock...
+ */
+ if (task_rq(p) != rq || !task_on_rq_queued(p))
+ return 0;
+
+ exec_ctx = find_exec_ctx(rq, p);
+ /*
+ * Chain leads off the rq, we're free to push it anywhere.
+ *
+ * One wrinkle with relying on find_exec_ctx is that when the chain
+ * leads to a task currently migrating to rq, we see the chain as
+ * pushable & push everything prior to the migrating task. Even if
+ * we checked explicitly for this case, we could still race with a
+ * migration after the check.
+ * This shouldn't permanently produce a bad state though, as proxy()
+ * will send the chain back to rq and by that point the migration
+ * should be complete & a proper push can occur.
+ */
+ if (!exec_ctx)
+ return 1;
+
+ if (task_on_cpu(rq, exec_ctx) || exec_ctx->nr_cpus_allowed <= 1)
+ return 0;
+
+ return cpumask_test_cpu(cpu, &exec_ctx->cpus_mask) ? 1 : -1;
+}
+
+#endif
+
#endif /* _KERNEL_SCHED_SCHED_H */
--
2.40.0.rc1.284.g88254d51c5-goog


2023-03-21 02:51:19

by John Stultz

[permalink] [raw]
Subject: Re: [PATCH v2 00/12] Reviving the Proxy Execution Series v2

On Mon, Mar 20, 2023 at 4:37 PM John Stultz <[email protected]> wrote:
> Changes since Connor’s last submission:
> ---------------------------------------
> * I dropped the patch changing mutex::wait_lock to always save/restore irq
> flags (as Joel raised a concern that the patch wasn’t actually
> necessary).

Well, despite a bit of testing beforehand, of course immediately after
sending this out I managed to trip lockdep and get a warning on this
(though it tripped on the blocked_lock, not the wait_lock), so I'll be
re-adding that patch (or a variant of it) in the next series.

[ 1.351993] CPU0 CPU1
[ 1.351993] ---- ----
[ 1.351993] lock(&p->blocked_lock);
[ 1.351993] local_irq_disable();
[ 1.351993] lock(&rq->__lock);
[ 1.351993] lock(&p->blocked_lock);
[ 1.351993] <Interrupt>
[ 1.351993] lock(&rq->__lock);
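
A minimal sketch of the kind of change that patch (or a variant of it)
makes, assuming it simply converts the affected lock sites to the
irq-saving primitives:

	unsigned long flags;

	/* was: raw_spin_lock(&p->blocked_lock); */
	raw_spin_lock_irqsave(&p->blocked_lock, flags);
	/* ... touch p->blocked_on / p->blocked_entry ... */
	raw_spin_unlock_irqrestore(&p->blocked_lock, flags);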

thanks
-john

2023-03-21 11:12:49

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v2 07/12] sched: Replace rq->curr access w/ rq_curr(rq)

Hi John,

I love your patch! Yet something to improve:

[auto build test ERROR on tip/sched/core]
[also build test ERROR on tip/locking/core tip/master tip/auto-latest linus/master v6.3-rc3 next-20230321]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/John-Stultz/locking-ww_mutex-Remove-wakeups-from-under-mutex-wait_lock/20230321-074149
patch link: https://lore.kernel.org/r/20230320233720.3488453-8-jstultz%40google.com
patch subject: [PATCH v2 07/12] sched: Replace rq->curr access w/ rq_curr(rq)
config: x86_64-rhel-8.3 (https://download.01.org/0day-ci/archive/20230321/[email protected]/config)
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/77507c3a77e09cdc627a73364246f252d918de41
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review John-Stultz/locking-ww_mutex-Remove-wakeups-from-under-mutex-wait_lock/20230321-074149
git checkout 77507c3a77e09cdc627a73364246f252d918de41
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=x86_64 olddefconfig
make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

In file included from include/linux/rculist.h:11,
from include/linux/sched/signal.h:5,
from include/linux/sched/cputime.h:5,
from kernel/sched/build_policy.c:17:
kernel/sched/cputime.c: In function 'kcpustat_field':
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/rcupdate.h:453:17: note: in definition of macro '__rcu_dereference_check'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/rcupdate.h:453:38: note: in definition of macro '__rcu_dereference_check'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
In file included from <command-line>:
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:352:23: note: in definition of macro '__compiletime_assert'
352 | if (!(condition)) \
| ^~~~~~~~~
include/linux/compiler_types.h:372:9: note: in expansion of macro '_compiletime_assert'
372 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~
include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
49 | compiletime_assert_rwonce_type(x); \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:352:23: note: in definition of macro '__compiletime_assert'
352 | if (!(condition)) \
| ^~~~~~~~~
include/linux/compiler_types.h:372:9: note: in expansion of macro '_compiletime_assert'
372 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~
include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
49 | compiletime_assert_rwonce_type(x); \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:352:23: note: in definition of macro '__compiletime_assert'
352 | if (!(condition)) \
| ^~~~~~~~~
include/linux/compiler_types.h:372:9: note: in expansion of macro '_compiletime_assert'
372 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~
include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
49 | compiletime_assert_rwonce_type(x); \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:352:23: note: in definition of macro '__compiletime_assert'
352 | if (!(condition)) \
| ^~~~~~~~~
include/linux/compiler_types.h:372:9: note: in expansion of macro '_compiletime_assert'
372 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~
include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
49 | compiletime_assert_rwonce_type(x); \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:352:23: note: in definition of macro '__compiletime_assert'
352 | if (!(condition)) \
| ^~~~~~~~~
include/linux/compiler_types.h:372:9: note: in expansion of macro '_compiletime_assert'
372 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
49 | compiletime_assert_rwonce_type(x); \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:328:27: note: in definition of macro '__unqual_scalar_typeof'
328 | _Generic((x), \
| ^
include/asm-generic/rwonce.h:50:9: note: in expansion of macro '__READ_ONCE'
50 | __READ_ONCE(x); \
| ^~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
In file included from ./arch/x86/include/generated/asm/rwonce.h:1,
from include/linux/compiler.h:247,
from include/linux/build_bug.h:5,
from include/linux/container_of.h:5,
from include/linux/list.h:5,
from include/linux/smp.h:12,
from include/linux/sched/clock.h:5,
from kernel/sched/build_policy.c:16:
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/asm-generic/rwonce.h:44:73: note: in definition of macro '__READ_ONCE'
44 | #define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
| ^
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
In file included from include/linux/rculist.h:11,
from include/linux/sched/signal.h:5,
from include/linux/sched/cputime.h:5,
from kernel/sched/build_policy.c:17:
>> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
997 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/rcupdate.h:456:19: note: in definition of macro '__rcu_dereference_check'
456 | ((typeof(*p) __force __kernel *)(local)); \
| ^
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:997:24: note: in expansion of macro 'rcu_dereference'
997 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
kernel/sched/cputime.c: In function 'kcpustat_cpu_fetch':
kernel/sched/cputime.c:1084:42: error: 'struct rq' has no member named 'curr'
1084 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/rcupdate.h:453:17: note: in definition of macro '__rcu_dereference_check'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:1084:24: note: in expansion of macro 'rcu_dereference'
1084 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
kernel/sched/cputime.c:1084:42: error: 'struct rq' has no member named 'curr'
1084 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/rcupdate.h:453:38: note: in definition of macro '__rcu_dereference_check'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:1084:24: note: in expansion of macro 'rcu_dereference'
1084 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
In file included from <command-line>:
kernel/sched/cputime.c:1084:42: error: 'struct rq' has no member named 'curr'
1084 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:352:23: note: in definition of macro '__compiletime_assert'
352 | if (!(condition)) \
| ^~~~~~~~~
include/linux/compiler_types.h:372:9: note: in expansion of macro '_compiletime_assert'
372 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~
include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
49 | compiletime_assert_rwonce_type(x); \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:1084:24: note: in expansion of macro 'rcu_dereference'
1084 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
kernel/sched/cputime.c:1084:42: error: 'struct rq' has no member named 'curr'
1084 | curr = rcu_dereference(rq->curr);
| ^~
include/linux/compiler_types.h:352:23: note: in definition of macro '__compiletime_assert'
352 | if (!(condition)) \
| ^~~~~~~~~
include/linux/compiler_types.h:372:9: note: in expansion of macro '_compiletime_assert'
372 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~~~~~~
include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
36 | compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
| ^~~~~~~~~~~~~
include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
49 | compiletime_assert_rwonce_type(x); \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:453:50: note: in expansion of macro 'READ_ONCE'
453 | typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
| ^~~~~~~~~
include/linux/rcupdate.h:601:9: note: in expansion of macro '__rcu_dereference_check'
601 | __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:673:28: note: in expansion of macro 'rcu_dereference_check'
673 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
| ^~~~~~~~~~~~~~~~~~~~~
kernel/sched/cputime.c:1084:24: note: in expansion of macro 'rcu_dereference'
1084 | curr = rcu_dereference(rq->curr);
| ^~~~~~~~~~~~~~~
kernel/sched/cputime.c:1084:42: error: 'struct rq' has no member named 'curr'
1084 | curr = rcu_dereference(rq->curr);
| ^~


vim +997 kernel/sched/cputime.c

64eea63c19a2c3 Frederic Weisbecker 2019-10-25   979  
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   980  u64 kcpustat_field(struct kernel_cpustat *kcpustat,
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   981  		   enum cpu_usage_stat usage, int cpu)
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   982  {
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   983  	u64 *cpustat = kcpustat->cpustat;
e0d648f9d883ec Borislav Petkov     2020-03-27   984  	u64 val = cpustat[usage];
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   985  	struct rq *rq;
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   986  	int err;
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   987  
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   988  	if (!vtime_accounting_enabled_cpu(cpu))
e0d648f9d883ec Borislav Petkov     2020-03-27   989  		return val;
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   990  
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   991  	rq = cpu_rq(cpu);
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   992  
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   993  	for (;;) {
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   994  		struct task_struct *curr;
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   995  
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   996  		rcu_read_lock();
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  @997  		curr = rcu_dereference(rq->curr);
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   998  		if (WARN_ON_ONCE(!curr)) {
64eea63c19a2c3 Frederic Weisbecker 2019-10-25   999  			rcu_read_unlock();
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1000  			return cpustat[usage];
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1001  		}
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1002  
5a1c95580f1d89 Frederic Weisbecker 2019-11-21  1003  		err = kcpustat_field_vtime(cpustat, curr, usage, cpu, &val);
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1004  		rcu_read_unlock();
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1005  
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1006  		if (!err)
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1007  			return val;
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1008  
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1009  		cpu_relax();
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1010  	}
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1011  }
64eea63c19a2c3 Frederic Weisbecker 2019-10-25  1012  EXPORT_SYMBOL_GPL(kcpustat_field);
74722bb223d0f2 Frederic Weisbecker 2019-11-21  1013  

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

2023-03-22 04:00:57

by John Stultz

[permalink] [raw]
Subject: Re: [PATCH v2 07/12] sched: Replace rq->curr access w/ rq_curr(rq)

On Tue, Mar 21, 2023 at 4:12 AM kernel test robot <[email protected]> wrote:
>
> Hi John,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on tip/sched/core]
> [also build test ERROR on tip/locking/core tip/master tip/auto-latest linus/master v6.3-rc3 next-20230321]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/John-Stultz/locking-ww_mutex-Remove-wakeups-from-under-mutex-wait_lock/20230321-074149
> patch link: https://lore.kernel.org/r/20230320233720.3488453-8-jstultz%40google.com
> patch subject: [PATCH v2 07/12] sched: Replace rq->curr access w/ rq_curr(rq)
> config: x86_64-rhel-8.3 (https://download.01.org/0day-ci/archive/20230321/[email protected]/config)
> compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
> reproduce (this is a W=1 build):
> # https://github.com/intel-lab-lkp/linux/commit/77507c3a77e09cdc627a73364246f252d918de41
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review John-Stultz/locking-ww_mutex-Remove-wakeups-from-under-mutex-wait_lock/20230321-074149
> git checkout 77507c3a77e09cdc627a73364246f252d918de41
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> make W=1 O=build_dir ARCH=x86_64 olddefconfig
> make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash
>
> If you fix the issue, kindly add following tag where applicable
> | Reported-by: kernel test robot <[email protected]>
> | Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/
>
> All errors (new ones prefixed by >>):
>
> In file included from include/linux/rculist.h:11,
> from include/linux/sched/signal.h:5,
> from include/linux/sched/cputime.h:5,
> from kernel/sched/build_policy.c:17:
> kernel/sched/cputime.c: In function 'kcpustat_field':
> >> kernel/sched/cputime.c:997:42: error: 'struct rq' has no member named 'curr'
> 997 | curr = rcu_dereference(rq->curr);
> | ^~


Ah! Thanks for catching this. I've fixed it up for the next revision.
-john
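
For completeness: the failure is in kernel/sched/cputime.c (pulled in
via build_policy.c), where kcpustat_field() and kcpustat_cpu_fetch()
still dereference rq->curr directly once the field is hidden behind the
new accessor. One possible shape for the fixup is sketched below; the
curr_exec field name is hypothetical and the next revision may well do
this differently:

/* Sketch only: an RCU-aware companion to rq_curr() in kernel/sched/sched.h. */
static inline struct task_struct *rq_curr_rcu(struct rq *rq)
{
	/* Callers such as kcpustat_field() already hold rcu_read_lock(). */
	return rcu_dereference(rq->curr_exec);	/* hypothetical __rcu field */
}

With something like that in place, the two call sites in cputime.c become
one-line conversions from rcu_dereference(rq->curr) to rq_curr_rcu(rq).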