2017-03-23 15:05:48

by Peter Zijlstra

Subject: [PATCH -v3 1/8] rtmutex: Deboost before waking up the top waiter

From: Xunlei Pang <[email protected]>

We should deboost before waking the high-priority task, such that we
don't run two tasks with the same "state" (priority, deadline,
sched_class, etc.).

In order to make sure the boosting task doesn't start running between
unlock and deboost (due to a 'spurious' wakeup), we move the deboost
under the wait_lock; that way it is serialized against the wait loop in
__rt_mutex_slowlock().

Doing the deboost early can however lead to priority inversion if
current gets preempted after the deboost but before waking our
high-prio task, hence we disable preemption before doing the deboost
and enable it again once the wakeup is done.

This gets us the right semantic order, but more importantly, this
change ensures pointer stability for the next patch, where we have
rt_mutex_setprio() cache a pointer to the top-most waiter task. If we
did the wakeup first and only then deboosted, as before this change,
that pointer might point into thin air.
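
For reference, the resulting unlock ordering, condensed from the hunks
below (error handling and the fast path are elided, so this is a sketch
of the shape of the code rather than the literal functions):

static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
					struct wake_q_head *wake_q)
{
	/* ... */

	/*
	 * Dequeues the top waiter and, with this patch, also deboosts
	 * current, all while still holding lock->wait_lock.
	 */
	mark_wakeup_next_waiter(wake_q, lock);

	/*
	 * Keep preemption disabled across the unlock so the now
	 * deboosted current cannot be preempted before the top
	 * waiter has been woken.
	 */
	preempt_disable();

	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

	return true;	/* caller must run rt_mutex_postunlock() */
}

void rt_mutex_postunlock(struct wake_q_head *wake_q, bool deboost)
{
	wake_up_q(wake_q);

	/* Pairs with the preempt_disable() above */
	if (deboost)
		preempt_enable();
}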

[peterz: Changelog + patch munging]
Cc: Ingo Molnar <[email protected]>
Cc: Juri Lelli <[email protected]>
Acked-by: Steven Rostedt <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Xunlei Pang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---

kernel/futex.c | 5 +---
kernel/locking/rtmutex.c | 59 +++++++++++++++++++++-------------------
kernel/locking/rtmutex_common.h | 2 +-
3 files changed, 34 insertions(+), 32 deletions(-)

--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1465,10 +1465,7 @@ static int wake_futex_pi(u32 __user *uad
out_unlock:
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);

- if (deboost) {
- wake_up_q(&wake_q);
- rt_mutex_adjust_prio(current);
- }
+ rt_mutex_postunlock(&wake_q, deboost);

return ret;
}
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -307,24 +307,6 @@ static void __rt_mutex_adjust_prio(struc
}

/*
- * Adjust task priority (undo boosting). Called from the exit path of
- * rt_mutex_slowunlock() and rt_mutex_slowlock().
- *
- * (Note: We do this outside of the protection of lock->wait_lock to
- * allow the lock to be taken while or before we readjust the priority
- * of task. We do not use the spin_xx_mutex() variants here as we are
- * outside of the debug path.)
- */
-void rt_mutex_adjust_prio(struct task_struct *task)
-{
- unsigned long flags;
-
- raw_spin_lock_irqsave(&task->pi_lock, flags);
- __rt_mutex_adjust_prio(task);
- raw_spin_unlock_irqrestore(&task->pi_lock, flags);
-}
-
-/*
* Deadlock detection is conditional:
*
* If CONFIG_DEBUG_RT_MUTEXES=n, deadlock detection is only conducted
@@ -985,6 +967,7 @@ static void mark_wakeup_next_waiter(stru
* lock->wait_lock.
*/
rt_mutex_dequeue_pi(current, waiter);
+ __rt_mutex_adjust_prio(current);

/*
* As we are waking up the top waiter, and the waiter stays
@@ -1321,6 +1304,16 @@ static bool __sched rt_mutex_slowunlock(
*/
mark_wakeup_next_waiter(wake_q, lock);

+ /*
+ * We should deboost before waking the top waiter task such that
+ * we don't run two tasks with the 'same' priority. This however
+ * can lead to prio-inversion if we would get preempted after
+ * the deboost but before waking our high-prio task, hence the
+ * preempt_disable before unlock. Pairs with preempt_enable() in
+ * rt_mutex_postunlock();
+ */
+ preempt_disable();
+
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

/* check PI boosting */
@@ -1370,6 +1363,18 @@ rt_mutex_fasttrylock(struct rt_mutex *lo
return slowfn(lock);
}

+/*
+ * Undo pi boosting (if necessary) and wake top waiter.
+ */
+void rt_mutex_postunlock(struct wake_q_head *wake_q, bool deboost)
+{
+ wake_up_q(wake_q);
+
+ /* Pairs with preempt_disable() in rt_mutex_slowunlock() */
+ if (deboost)
+ preempt_enable();
+}
+
static inline void
rt_mutex_fastunlock(struct rt_mutex *lock,
bool (*slowfn)(struct rt_mutex *lock,
@@ -1383,11 +1388,7 @@ rt_mutex_fastunlock(struct rt_mutex *loc

deboost = slowfn(lock, &wake_q);

- wake_up_q(&wake_q);
-
- /* Undo pi boosting if necessary: */
- if (deboost)
- rt_mutex_adjust_prio(current);
+ rt_mutex_postunlock(&wake_q, deboost);
}

/**
@@ -1513,6 +1514,13 @@ bool __sched __rt_mutex_futex_unlock(str
}

mark_wakeup_next_waiter(wake_q, lock);
+ /*
+ * We've already deboosted, retain preempt_disabled when dropping
+ * the wait_lock to avoid inversion until the wakeup. Matched
+ * by rt_mutex_postunlock();
+ */
+ preempt_disable();
+
return true; /* deboost and wakeups */
}

@@ -1525,10 +1533,7 @@ void __sched rt_mutex_futex_unlock(struc
deboost = __rt_mutex_futex_unlock(lock, &wake_q);
raw_spin_unlock_irq(&lock->wait_lock);

- if (deboost) {
- wake_up_q(&wake_q);
- rt_mutex_adjust_prio(current);
- }
+ rt_mutex_postunlock(&wake_q, deboost);
}

/**
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -116,7 +116,7 @@ extern void rt_mutex_futex_unlock(struct
extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock,
struct wake_q_head *wqh);

-extern void rt_mutex_adjust_prio(struct task_struct *task);
+extern void rt_mutex_postunlock(struct wake_q_head *wake_q, bool deboost);

#ifdef CONFIG_DEBUG_RT_MUTEXES
# include "rtmutex-debug.h"



Subject: [tip:locking/core] rtmutex: Deboost before waking up the top waiter

Commit-ID: 2a1c6029940675abb2217b590512dbf691867ec4
Gitweb: http://git.kernel.org/tip/2a1c6029940675abb2217b590512dbf691867ec4
Author: Xunlei Pang <[email protected]>
AuthorDate: Thu, 23 Mar 2017 15:56:07 +0100
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 4 Apr 2017 11:44:05 +0200

rtmutex: Deboost before waking up the top waiter

We should deboost before waking the high-priority task, such that we
don't run two tasks with the same "state" (priority, deadline,
sched_class, etc.).

In order to make sure the boosting task doesn't start running between
unlock and deboost (due to a 'spurious' wakeup), we move the deboost
under the wait_lock; that way it is serialized against the wait loop in
__rt_mutex_slowlock().

Doing the deboost early can however lead to priority inversion if
current gets preempted after the deboost but before waking our
high-prio task, hence we disable preemption before doing the deboost
and enable it again once the wakeup is done.

This gets us the right semantic order, but more importantly, this
change ensures pointer stability for the next patch, where we have
rt_mutex_setprio() cache a pointer to the top-most waiter task. If we
did the wakeup first and only then deboosted, as before this change,
that pointer might point into thin air.

[peterz: Changelog + patch munging]
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Xunlei Pang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Steven Rostedt <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>

---
kernel/futex.c | 5 +---
kernel/locking/rtmutex.c | 59 ++++++++++++++++++++++-------------------
kernel/locking/rtmutex_common.h | 2 +-
3 files changed, 34 insertions(+), 32 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index 628be42..414a30d 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1460,10 +1460,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
out_unlock:
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);

- if (deboost) {
- wake_up_q(&wake_q);
- rt_mutex_adjust_prio(current);
- }
+ rt_mutex_postunlock(&wake_q, deboost);

return ret;
}
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index dd10312..71ecf06 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -373,24 +373,6 @@ static void __rt_mutex_adjust_prio(struct task_struct *task)
}

/*
- * Adjust task priority (undo boosting). Called from the exit path of
- * rt_mutex_slowunlock() and rt_mutex_slowlock().
- *
- * (Note: We do this outside of the protection of lock->wait_lock to
- * allow the lock to be taken while or before we readjust the priority
- * of task. We do not use the spin_xx_mutex() variants here as we are
- * outside of the debug path.)
- */
-void rt_mutex_adjust_prio(struct task_struct *task)
-{
- unsigned long flags;
-
- raw_spin_lock_irqsave(&task->pi_lock, flags);
- __rt_mutex_adjust_prio(task);
- raw_spin_unlock_irqrestore(&task->pi_lock, flags);
-}
-
-/*
* Deadlock detection is conditional:
*
* If CONFIG_DEBUG_RT_MUTEXES=n, deadlock detection is only conducted
@@ -1051,6 +1033,7 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
* lock->wait_lock.
*/
rt_mutex_dequeue_pi(current, waiter);
+ __rt_mutex_adjust_prio(current);

/*
* As we are waking up the top waiter, and the waiter stays
@@ -1393,6 +1376,16 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
*/
mark_wakeup_next_waiter(wake_q, lock);

+ /*
+ * We should deboost before waking the top waiter task such that
+ * we don't run two tasks with the 'same' priority. This however
+ * can lead to prio-inversion if we would get preempted after
+ * the deboost but before waking our high-prio task, hence the
+ * preempt_disable before unlock. Pairs with preempt_enable() in
+ * rt_mutex_postunlock();
+ */
+ preempt_disable();
+
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

/* check PI boosting */
@@ -1442,6 +1435,18 @@ rt_mutex_fasttrylock(struct rt_mutex *lock,
return slowfn(lock);
}

+/*
+ * Undo pi boosting (if necessary) and wake top waiter.
+ */
+void rt_mutex_postunlock(struct wake_q_head *wake_q, bool deboost)
+{
+ wake_up_q(wake_q);
+
+ /* Pairs with preempt_disable() in rt_mutex_slowunlock() */
+ if (deboost)
+ preempt_enable();
+}
+
static inline void
rt_mutex_fastunlock(struct rt_mutex *lock,
bool (*slowfn)(struct rt_mutex *lock,
@@ -1455,11 +1460,7 @@ rt_mutex_fastunlock(struct rt_mutex *lock,

deboost = slowfn(lock, &wake_q);

- wake_up_q(&wake_q);
-
- /* Undo pi boosting if necessary: */
- if (deboost)
- rt_mutex_adjust_prio(current);
+ rt_mutex_postunlock(&wake_q, deboost);
}

/**
@@ -1572,6 +1573,13 @@ bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
}

mark_wakeup_next_waiter(wake_q, lock);
+ /*
+ * We've already deboosted, retain preempt_disabled when dropping
+ * the wait_lock to avoid inversion until the wakeup. Matched
+ * by rt_mutex_postunlock();
+ */
+ preempt_disable();
+
return true; /* deboost and wakeups */
}

@@ -1584,10 +1592,7 @@ void __sched rt_mutex_futex_unlock(struct rt_mutex *lock)
deboost = __rt_mutex_futex_unlock(lock, &wake_q);
raw_spin_unlock_irq(&lock->wait_lock);

- if (deboost) {
- wake_up_q(&wake_q);
- rt_mutex_adjust_prio(current);
- }
+ rt_mutex_postunlock(&wake_q, deboost);
}

/**
diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
index b1ccfea..a09c029 100644
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -122,7 +122,7 @@ extern void rt_mutex_futex_unlock(struct rt_mutex *lock);
extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock,
struct wake_q_head *wqh);

-extern void rt_mutex_adjust_prio(struct task_struct *task);
+extern void rt_mutex_postunlock(struct wake_q_head *wake_q, bool deboost);

#ifdef CONFIG_DEBUG_RT_MUTEXES
# include "rtmutex-debug.h"

2017-04-05 08:10:22

by Mike Galbraith

Subject: Re: [tip:locking/core] rtmutex: Deboost before waking up the top waiter

locking/rtmutex: Fix preempt leak in __rt_mutex_futex_unlock()

mark_wakeup_next_waiter() already disables preemption; doing so
again leaves us with an unpaired preempt_disable().
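
Schematically, the futex unlock path then does the following (call
chain condensed from the patches above, assuming, as stated, that the
preempt_disable() has by now been folded into mark_wakeup_next_waiter();
simplified, not literal code):

rt_mutex_futex_unlock()
  __rt_mutex_futex_unlock()
    mark_wakeup_next_waiter()	/* preempt_disable(): count 0 -> 1   */
    preempt_disable()		/* count 1 -> 2, the extra, unpaired one */
  raw_spin_unlock_irq(&lock->wait_lock)
  rt_mutex_postunlock()
    wake_up_q(&wake_q)
    preempt_enable()		/* count 2 -> 1, never back to 0     */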

Signed-off-by: Mike Galbraith <[email protected]>
---
kernel/locking/rtmutex.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1581,13 +1581,13 @@ bool __sched __rt_mutex_futex_unlock(str
return false; /* done */
}

- mark_wakeup_next_waiter(wake_q, lock);
/*
- * We've already deboosted, retain preempt_disabled when dropping
- * the wait_lock to avoid inversion until the wakeup. Matched
- * by rt_mutex_postunlock();
+ * We've already deboosted, mark_wakeup_next_waiter() will
+ * retain preempt_disabled when we drop the wait_lock, to
+ * avoid inversion prior to the wakeup. preempt_disable()
+ * therein pairs with rt_mutex_postunlock().
*/
- preempt_disable();
+ mark_wakeup_next_waiter(wake_q, lock);

return true; /* call postunlock() */
}

Subject: [tip:locking/core] Retiplockingcore_rtmutex_Deboost_before_waking_up_the_top_waiter

Commit-ID: 94247f76e7361afd85ba03a3f923bf3d07ba3017
Gitweb: http://git.kernel.org/tip/94247f76e7361afd85ba03a3f923bf3d07ba3017
Author: Mike Galbraith <[email protected]>
AuthorDate: Wed, 5 Apr 2017 10:08:27 +0200
Committer: Thomas Gleixner <[email protected]>
CommitDate: Wed, 5 Apr 2017 16:52:10 +0200

Retiplockingcore_rtmutex_Deboost_before_waking_up_the_top_waiter

mark_wakeup_next_waiter() already disables preemption; doing so again
leaves us with an unpaired preempt_disable().

Fixes: 2a1c60299406 ("rtmutex: Deboost before waking up the top waiter")
Signed-off-by: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>

---
kernel/locking/rtmutex.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 0e641eb..b955094 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1581,13 +1581,13 @@ bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
return false; /* done */
}

- mark_wakeup_next_waiter(wake_q, lock);
/*
- * We've already deboosted, retain preempt_disabled when dropping
- * the wait_lock to avoid inversion until the wakeup. Matched
- * by rt_mutex_postunlock();
+ * We've already deboosted, mark_wakeup_next_waiter() will
+ * retain preempt_disabled when we drop the wait_lock, to
+ * avoid inversion prior to the wakeup. preempt_disable()
+ * therein pairs with rt_mutex_postunlock().
*/
- preempt_disable();
+ mark_wakeup_next_waiter(wake_q, lock);

return true; /* call postunlock() */
}

Subject: [tip:locking/core] rtmutex: Plug preempt count leak in rt_mutex_futex_unlock()

Commit-ID: def34eaae5ce04b324e48e1bfac873091d945213
Gitweb: http://git.kernel.org/tip/def34eaae5ce04b324e48e1bfac873091d945213
Author: Mike Galbraith <[email protected]>
AuthorDate: Wed, 5 Apr 2017 10:08:27 +0200
Committer: Thomas Gleixner <[email protected]>
CommitDate: Wed, 5 Apr 2017 16:59:37 +0200

rtmutex: Plug preempt count leak in rt_mutex_futex_unlock()

mark_wakeup_next_waiter() already disables preemption; doing so again
leaves us with an unpaired preempt_disable().

Fixes: 2a1c60299406 ("rtmutex: Deboost before waking up the top waiter")
Signed-off-by: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
---
kernel/locking/rtmutex.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 0e641eb..b955094 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1581,13 +1581,13 @@ bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
return false; /* done */
}

- mark_wakeup_next_waiter(wake_q, lock);
/*
- * We've already deboosted, retain preempt_disabled when dropping
- * the wait_lock to avoid inversion until the wakeup. Matched
- * by rt_mutex_postunlock();
+ * We've already deboosted, mark_wakeup_next_waiter() will
+ * retain preempt_disabled when we drop the wait_lock, to
+ * avoid inversion prior to the wakeup. preempt_disable()
+ * therein pairs with rt_mutex_postunlock().
*/
- preempt_disable();
+ mark_wakeup_next_waiter(wake_q, lock);

return true; /* call postunlock() */
}

2017-04-06 06:15:32

by Xunlei Pang

Subject: Re: [tip:locking/core] rtmutex: Deboost before waking up the top waiter

On 04/05/2017 at 04:08 PM, Mike Galbraith wrote:
> locking/rtmutex: Fix preempt leak in __rt_mutex_futex_unlock()
>
> mark_wakeup_next_waiter() already disables preemption; doing so
> again leaves us with an unpaired preempt_disable().

You could also fix the corresponding comment in rt_mutex_postunlock(), which still says:
/* Pairs with preempt_disable() in rt_mutex_slowunlock() */
preempt_enable();
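
Now that the matching preempt_disable() lives in
mark_wakeup_next_waiter(), it should presumably read something like
(suggested wording only, not a tested patch):

/* Pairs with preempt_disable() in mark_wakeup_next_waiter() */
preempt_enable();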

Thanks,
Xunlei

>
> Signed-off-by: Mike Galbraith <[email protected]>
> ---
> kernel/locking/rtmutex.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -1581,13 +1581,13 @@ bool __sched __rt_mutex_futex_unlock(str
> return false; /* done */
> }
>
> - mark_wakeup_next_waiter(wake_q, lock);
> /*
> - * We've already deboosted, retain preempt_disabled when dropping
> - * the wait_lock to avoid inversion until the wakeup. Matched
> - * by rt_mutex_postunlock();
> + * We've already deboosted, mark_wakeup_next_waiter() will
> + * retain preempt_disabled when we drop the wait_lock, to
> + * avoid inversion prior to the wakeup. preempt_disable()
> + * therein pairs with rt_mutex_postunlock().
> */
> - preempt_disable();
> + mark_wakeup_next_waiter(wake_q, lock);
>
> return true; /* call postunlock() */
> }