In case the deadlock detector is enabled we follow the lock chain to
the end in rt_mutex_adjust_prio_chain, even if we could stop earlier
due to the priority/waiter constellation.

But once we are no longer the top priority waiter in a certain step,
or the task holding the lock already has the same priority, there is
no point in dequeueing and enqueueing along the lock chain, as there
is no change at all.

So stop the requeueing at this point.
Signed-off-by: Thomas Gleixner <[email protected]>
---
kernel/locking/rtmutex.c | 61 +++++++++++++++++++++++++++++++++++++++++------
1 file changed, 54 insertions(+), 7 deletions(-)
Index: tip/kernel/locking/rtmutex.c
===================================================================
--- tip.orig/kernel/locking/rtmutex.c
+++ tip/kernel/locking/rtmutex.c
@@ -359,6 +359,7 @@ static int rt_mutex_adjust_prio_chain(st
 	struct rt_mutex *lock;
 	bool detect_deadlock;
 	unsigned long flags;
+	bool requeue = true;
 
 	detect_deadlock = rt_mutex_cond_detect_deadlock(orig_waiter, chwalk);
 
@@ -436,18 +437,31 @@ static int rt_mutex_adjust_prio_chain(st
 			goto out_unlock_pi;
 		/*
 		 * If deadlock detection is off, we stop here if we
-		 * are not the top pi waiter of the task.
+		 * are not the top pi waiter of the task. If deadlock
+		 * detection is enabled we continue, but stop the
+		 * requeueing in the chain walk.
 		 */
-		if (!detect_deadlock && top_waiter != task_top_pi_waiter(task))
-			goto out_unlock_pi;
+		if (top_waiter != task_top_pi_waiter(task)) {
+			if (!detect_deadlock)
+				goto out_unlock_pi;
+			else
+				requeue = false;
+		}
 	}
 
 	/*
-	 * When deadlock detection is off then we check, if further
-	 * priority adjustment is necessary.
+	 * If the waiter priority is the same as the task priority
+	 * then there is no further priority adjustment necessary. If
+	 * deadlock detection is off, we stop the chain walk. If its
+	 * enabled we continue, but stop the requeueing in the chain
+	 * walk.
 	 */
-	if (!detect_deadlock && waiter->prio == task->prio)
-		goto out_unlock_pi;
+	if (waiter->prio == task->prio) {
+		if (!detect_deadlock)
+			goto out_unlock_pi;
+		else
+			requeue = false;
+	}
 
 	/*
 	 * We need to trylock here as we are holding task->pi_lock,
@@ -475,6 +489,39 @@ static int rt_mutex_adjust_prio_chain(st
 	}
 
 	/*
+	 * If we just follow the lock chain for deadlock detection, no
+	 * need to do all the requeue operations. We avoid a truckload
+	 * of conditinals around the various places below and just do
+	 * the minimum chain walk checks here.
+	 */
+	if (!requeue) {
+		/* Release the task */
+		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+		put_task_struct(task);
+
+		/* If there is no owner of the lock, end of chain. */
+		if (!rt_mutex_owner(lock)) {
+			raw_spin_unlock(&lock->wait_lock);
+			return 0;
+		}
+
+		/* Grab the next task, i.e. owner of @lock */
+		task = rt_mutex_owner(lock);
+		get_task_struct(task);
+		raw_spin_lock_irqsave(&task->pi_lock, flags);
+
+		/* Store whether owner is blocked itself and drop locks */
+		next_lock = task_blocked_on(task);
+		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+		raw_spin_unlock(&lock->wait_lock);
+
+		/* If owner is not blocked, end of chain. */
+		if (!next_lock)
+			goto out_put_task;
+		goto again;
+	}
+
+	/*
 	 * Store the current top waiter before doing the requeue
 	 * operation on @lock. We need it for the boost/deboost
 	 * decision below.
On Mon, 09 Jun 2014 20:28:10 -0000
Thomas Gleixner <[email protected]> wrote:
> In case the dead lock detector is enabled we follow the lock chain to
> the end in rt_mutex_adjust_prio_chain, even if we could stop earlier
> due to the priority/waiter constellation.
>
> But once we are not longer the top priority waiter in a certain step
> or the task holding the lock has already the same priority then there
> is no point in dequeing and enqueing along the lock chain as there is
> no change at all.
>
> So stop the requeueing at this point.
>
> Signed-off-by: Thomas Gleixner <[email protected]>
> ---
> kernel/locking/rtmutex.c | 61 +++++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 54 insertions(+), 7 deletions(-)
>
> Index: tip/kernel/locking/rtmutex.c
> ===================================================================
> --- tip.orig/kernel/locking/rtmutex.c
> +++ tip/kernel/locking/rtmutex.c
> @@ -359,6 +359,7 @@ static int rt_mutex_adjust_prio_chain(st
> struct rt_mutex *lock;
> bool detect_deadlock;
> unsigned long flags;
> + bool requeue = true;
>
> detect_deadlock = rt_mutex_cond_detect_deadlock(orig_waiter, chwalk);
>
> @@ -436,18 +437,31 @@ static int rt_mutex_adjust_prio_chain(st
> goto out_unlock_pi;
> /*
> * If deadlock detection is off, we stop here if we
> - * are not the top pi waiter of the task.
> + * are not the top pi waiter of the task. If deadlock
> + * detection is enabled we continue, but stop the
> + * requeueing in the chain walk.
> */
> - if (!detect_deadlock && top_waiter != task_top_pi_waiter(task))
> - goto out_unlock_pi;
> + if (top_waiter != task_top_pi_waiter(task)) {
> + if (!detect_deadlock)
> + goto out_unlock_pi;
> + else
> + requeue = false;
> + }
> }
>
> /*
> - * When deadlock detection is off then we check, if further
> - * priority adjustment is necessary.
> + * If the waiter priority is the same as the task priority
> + * then there is no further priority adjustment necessary. If
> + * deadlock detection is off, we stop the chain walk. If its
> + * enabled we continue, but stop the requeueing in the chain
> + * walk.
> */
> - if (!detect_deadlock && waiter->prio == task->prio)
> - goto out_unlock_pi;
> + if (waiter->prio == task->prio) {
> + if (!detect_deadlock)
> + goto out_unlock_pi;
> + else
> + requeue = false;
> + }
>
> /*
> * We need to trylock here as we are holding task->pi_lock,
> @@ -475,6 +489,39 @@ static int rt_mutex_adjust_prio_chain(st
> }
>
> /*
> + * If we just follow the lock chain for deadlock detection, no
> + * need to do all the requeue operations. We avoid a truckload
s/We/To/
> + * of conditinals around the various places below and just do
s/ and/, /
> + * the minimum chain walk checks here.
> + */
> + if (!requeue) {
> + /* Release the task */
> + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> + put_task_struct(task);
> +
> + /* If there is no owner of the lock, end of chain. */
> + if (!rt_mutex_owner(lock)) {
> + raw_spin_unlock(&lock->wait_lock);
> + return 0;
> + }
> +
> + /* Grab the next task, i.e. owner of @lock */
> + task = rt_mutex_owner(lock);
> + get_task_struct(task);
> + raw_spin_lock_irqsave(&task->pi_lock, flags);
> +
> + /* Store whether owner is blocked itself and drop locks */
> + next_lock = task_blocked_on(task);
> + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> + raw_spin_unlock(&lock->wait_lock);
> +
> + /* If owner is not blocked, end of chain. */
> + if (!next_lock)
> + goto out_put_task;
On the loop back around, have something like:
	if (top_waiter) {
		if (!task_has_pi_waiters(task))
			goto out_unlock_pi;

		if (!requeue &&
		    top_waiter != task_top_pi_waiter(task)) {
			if (!detect_deadlock)
				goto out_unlock_pi;
			else
				requeue = false;
		}
	}
??
> + goto again;
> + }
> +
> + /*
> * Store the current top waiter before doing the requeue
> * operation on @lock. We need it for the boost/deboost
> * decision below.
>
On Mon, Jun 9, 2014 at 6:20 PM, Steven Rostedt <[email protected]> wrote:
> On Mon, 09 Jun 2014 20:28:10 -0000
> Thomas Gleixner <[email protected]> wrote:
>
>> In case the dead lock detector is enabled we follow the lock chain to
>> the end in rt_mutex_adjust_prio_chain, even if we could stop earlier
>> due to the priority/waiter constellation.
>>
>> But once we are not longer the top priority waiter in a certain step
>> or the task holding the lock has already the same priority then there
>> is no point in dequeing and enqueing along the lock chain as there is
>> no change at all.
>>
>> So stop the requeueing at this point.
>>
>> Signed-off-by: Thomas Gleixner <[email protected]>
>> ---
>> kernel/locking/rtmutex.c | 61 +++++++++++++++++++++++++++++++++++++++++------
>> 1 file changed, 54 insertions(+), 7 deletions(-)
>>
>> Index: tip/kernel/locking/rtmutex.c
>> ===================================================================
>> --- tip.orig/kernel/locking/rtmutex.c
>> +++ tip/kernel/locking/rtmutex.c
>> @@ -359,6 +359,7 @@ static int rt_mutex_adjust_prio_chain(st
>> struct rt_mutex *lock;
>> bool detect_deadlock;
>> unsigned long flags;
>> + bool requeue = true;
>>
>> detect_deadlock = rt_mutex_cond_detect_deadlock(orig_waiter, chwalk);
>>
>> @@ -436,18 +437,31 @@ static int rt_mutex_adjust_prio_chain(st
>> goto out_unlock_pi;
>> /*
>> * If deadlock detection is off, we stop here if we
>> - * are not the top pi waiter of the task.
>> + * are not the top pi waiter of the task. If deadlock
>> + * detection is enabled we continue, but stop the
>> + * requeueing in the chain walk.
>> */
>> - if (!detect_deadlock && top_waiter != task_top_pi_waiter(task))
>> - goto out_unlock_pi;
>> + if (top_waiter != task_top_pi_waiter(task)) {
>> + if (!detect_deadlock)
>> + goto out_unlock_pi;
>> + else
>> + requeue = false;
>> + }
>> }
>>
>> /*
>> - * When deadlock detection is off then we check, if further
>> - * priority adjustment is necessary.
>> + * If the waiter priority is the same as the task priority
>> + * then there is no further priority adjustment necessary. If
>> + * deadlock detection is off, we stop the chain walk. If its
>> + * enabled we continue, but stop the requeueing in the chain
>> + * walk.
>> */
>> - if (!detect_deadlock && waiter->prio == task->prio)
>> - goto out_unlock_pi;
>> + if (waiter->prio == task->prio) {
>> + if (!detect_deadlock)
>> + goto out_unlock_pi;
>> + else
>> + requeue = false;
>> + }
>>
>> /*
>> * We need to trylock here as we are holding task->pi_lock,
>> @@ -475,6 +489,39 @@ static int rt_mutex_adjust_prio_chain(st
>> }
>>
>> /*
>> + * If we just follow the lock chain for deadlock detection, no
>> + * need to do all the requeue operations. We avoid a truckload
>
> s/We/To/
>
>
>> + * of conditinals around the various places below and just do
>
> s/ and/, /
And s/conditinals/conditionals/
>> + * the minimum chain walk checks here.
>> + */
>> + if (!requeue) {
>> + /* Release the task */
>> + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
>> + put_task_struct(task);
>> +
>> + /* If there is no owner of the lock, end of chain. */
>> + if (!rt_mutex_owner(lock)) {
>> + raw_spin_unlock(&lock->wait_lock);
>> + return 0;
>> + }
>> +
>> + /* Grab the next task, i.e. owner of @lock */
>> + task = rt_mutex_owner(lock);
>> + get_task_struct(task);
>> + raw_spin_lock_irqsave(&task->pi_lock, flags);
>> +
>> + /* Store whether owner is blocked itself and drop locks */
>> + next_lock = task_blocked_on(task);
>> + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
>> + raw_spin_unlock(&lock->wait_lock);
>> +
>> + /* If owner is not blocked, end of chain. */
>> + if (!next_lock)
>> + goto out_put_task;
>
> On the loop back around, have something like:
>
> if (top_waiter) {
> if (!task_has_pi_waiters(task))
> goto out_unlock_pi;
>
> if (!requeue &&
> top_waiter != task_top_pi_waiter(task)) {
> if (!detect_deadlock)
> goto out_unlock_pi;
> else
> requeue = false;
> }
> }
>
> ??
>
>
>> + goto again;
>> + }
>> +
>> + /*
>> * Store the current top waiter before doing the requeue
>> * operation on @lock. We need it for the boost/deboost
>> * decision below.
>>
>
On Mon, Jun 09, 2014 at 08:28:10PM -0000, Thomas Gleixner wrote:
> In case the dead lock detector is enabled we follow the lock chain to
> the end in rt_mutex_adjust_prio_chain, even if we could stop earlier
> due to the priority/waiter constellation.
>
> But once we are not longer the top priority waiter in a certain step
> or the task holding the lock has already the same priority then there
> is no point in dequeing and enqueing along the lock chain as there is
> no change at all.
>
> So stop the requeueing at this point.
>
> Signed-off-by: Thomas Gleixner <[email protected]>
> ---
> kernel/locking/rtmutex.c | 61 +++++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 54 insertions(+), 7 deletions(-)
>
> Index: tip/kernel/locking/rtmutex.c
> ===================================================================
> --- tip.orig/kernel/locking/rtmutex.c
> +++ tip/kernel/locking/rtmutex.c
> @@ -359,6 +359,7 @@ static int rt_mutex_adjust_prio_chain(st
> struct rt_mutex *lock;
> bool detect_deadlock;
> unsigned long flags;
> + bool requeue = true;
>
> detect_deadlock = rt_mutex_cond_detect_deadlock(orig_waiter, chwalk);
>
> @@ -436,18 +437,31 @@ static int rt_mutex_adjust_prio_chain(st
> goto out_unlock_pi;
> /*
> * If deadlock detection is off, we stop here if we
> - * are not the top pi waiter of the task.
> + * are not the top pi waiter of the task. If deadlock
> + * detection is enabled we continue, but stop the
> + * requeueing in the chain walk.
> */
> - if (!detect_deadlock && top_waiter != task_top_pi_waiter(task))
> - goto out_unlock_pi;
> + if (top_waiter != task_top_pi_waiter(task)) {
> + if (!detect_deadlock)
> + goto out_unlock_pi;
> + else
> + requeue = false;
> + }
> }
>
> /*
> - * When deadlock detection is off then we check, if further
> - * priority adjustment is necessary.
> + * If the waiter priority is the same as the task priority
> + * then there is no further priority adjustment necessary. If
> + * deadlock detection is off, we stop the chain walk. If its
s/its/it's/
> + * enabled we continue, but stop the requeueing in the chain
> + * walk.
> */
> - if (!detect_deadlock && waiter->prio == task->prio)
> - goto out_unlock_pi;
> + if (waiter->prio == task->prio) {
> + if (!detect_deadlock)
> + goto out_unlock_pi;
> + else
> + requeue = false;
> + }
>
> /*
> * We need to trylock here as we are holding task->pi_lock,
> @@ -475,6 +489,39 @@ static int rt_mutex_adjust_prio_chain(st
> }
>
> /*
> + * If we just follow the lock chain for deadlock detection, no
> + * need to do all the requeue operations. We avoid a truckload
> + * of conditinals around the various places below and just do
> + * the minimum chain walk checks here.
> + */
> + if (!requeue) {
> + /* Release the task */
> + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> + put_task_struct(task);
> +
> + /* If there is no owner of the lock, end of chain. */
> + if (!rt_mutex_owner(lock)) {
> + raw_spin_unlock(&lock->wait_lock);
> + return 0;
> + }
> +
> + /* Grab the next task, i.e. owner of @lock */
> + task = rt_mutex_owner(lock);
> + get_task_struct(task);
> + raw_spin_lock_irqsave(&task->pi_lock, flags);
> +
> + /* Store whether owner is blocked itself and drop locks */
> + next_lock = task_blocked_on(task);
task_blocked_on(task) is not clear to me; the recipient of the
return value is the only clue that hints at what the function does.
> + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> + raw_spin_unlock(&lock->wait_lock);
> +
> + /* If owner is not blocked, end of chain. */
> + if (!next_lock)
> + goto out_put_task;
> + goto again;
> + }
> +
> + /*
> * Store the current top waiter before doing the requeue
> * operation on @lock. We need it for the boost/deboost
> * decision below.
>
>
On Tue, 10 Jun 2014 09:57:25 -0500
"Brad Mouring" <[email protected]> wrote:
> > + /* Store whether owner is blocked itself and drop locks */
> > + next_lock = task_blocked_on(task);
> task_blocked_on(task) is not clear to me, the recipient of the
> return is the only clue that hints at what the function does.
Well, this is more than confusing; it's the only user. All other callers
use task_blocked_on_lock(), and this causes the code not to compile.
-- Steve
> > + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> > + raw_spin_unlock(&lock->wait_lock);
> > +
> > + /* If owner is not blocked, end of chain. */
> > + if (!next_lock)
> > + goto out_put_task;
> > + goto again;
> > + }
> > +
> > + /*
> > * Store the current top waiter before doing the requeue
> > * operation on @lock. We need it for the boost/deboost
> > * decision below.
> >
> >
On Mon, 9 Jun 2014, Steven Rostedt wrote:
> On Mon, 09 Jun 2014 20:28:10 -0000
> Thomas Gleixner <[email protected]> wrote:
> > /*
> > + * If we just follow the lock chain for deadlock detection, no
> > + * need to do all the requeue operations. We avoid a truckload
>
> s/We/To/
>
>
> > + * of conditinals around the various places below and just do
>
> s/ and/, /
Ok.
> > + * the minimum chain walk checks here.
> > + */
> > + if (!requeue) {
> > + /* Release the task */
> > + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> > + put_task_struct(task);
> > +
> > + /* If there is no owner of the lock, end of chain. */
> > + if (!rt_mutex_owner(lock)) {
> > + raw_spin_unlock(&lock->wait_lock);
> > + return 0;
> > + }
> > +
> > + /* Grab the next task, i.e. owner of @lock */
> > + task = rt_mutex_owner(lock);
> > + get_task_struct(task);
> > + raw_spin_lock_irqsave(&task->pi_lock, flags);
> > +
> > + /* Store whether owner is blocked itself and drop locks */
> > + next_lock = task_blocked_on(task);
> > + raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> > + raw_spin_unlock(&lock->wait_lock);
> > +
> > + /* If owner is not blocked, end of chain. */
> > + if (!next_lock)
> > + goto out_put_task;
>
> On the loop back around, have something like:
>
> if (top_waiter) {
> if (!task_has_pi_waiters(task))
> goto out_unlock_pi;
The task has at least one pi waiter.
> if (!requeue &&
> top_waiter != task_top_pi_waiter(task)) {
> if (!detect_deadlock)
> goto out_unlock_pi;
> else
> requeue = false;
> }
Errm? If requeue is off, we are in deadlock-detection chainwalk
mode. So all we care about is whether the task is blocked on
next_lock or not.
What you actually missed is that we need to read out top_waiter for
the current lock. Fixed already.
Thanks,
tglx
On Tue, 10 Jun 2014, Steven Rostedt wrote:
> On Tue, 10 Jun 2014 09:57:25 -0500
> "Brad Mouring" <[email protected]> wrote:
>
>
> > > + /* Store whether owner is blocked itself and drop locks */
> > > + next_lock = task_blocked_on(task);
> > task_blocked_on(task) is not clear to me, the recipient of the
> > return is the only clue that hints at what the function does.
>
> Well, this is more than confusing, it's the only user, all other users
> are task_blocked_on_lock(), and this causes the code not to compile.
Grr, yes.
On Tue, 10 Jun 2014 19:41:39 +0200 (CEST)
Thomas Gleixner <[email protected]> wrote:
> > On the loop back around, have something like:
> >
> > if (top_waiter) {
> > if (!task_has_pi_waiters(task))
> > goto out_unlock_pi;
>
> The task has at least one pi waiter.
>
> > if (!requeue &&
> > top_waiter != task_top_pi_waiter(task)) {
> > if (!detect_deadlock)
> > goto out_unlock_pi;
> > else
> > requeue = false;
> > }
>
> Errm? if requeue is off we are in deadlock detection chainwalk
> mode. So all we care about is whether task is blocked on
> next_lock or not.
Actually that was a typo on my part. That should have been:
if (requeue &&
...
As we don't need to read the task_top_pi_waiter() again.
-- Steve
On Tue, 10 Jun 2014 19:43:16 +0200 (CEST)
Thomas Gleixner <[email protected]> wrote:
> On Tue, 10 Jun 2014, Steven Rostedt wrote:
>
> > On Tue, 10 Jun 2014 09:57:25 -0500
> > "Brad Mouring" <[email protected]> wrote:
> >
> >
> > > > + /* Store whether owner is blocked itself and drop locks */
> > > > + next_lock = task_blocked_on(task);
> > > task_blocked_on(task) is not clear to me, the recipient of the
> > > return is the only clue that hints at what the function does.
> >
> > Well, this is more than confusing, it's the only user, all other users
> > are task_blocked_on_lock(), and this causes the code not to compile.
>
> Grr, yes.
Luckily you are not posting this to that grumpy IRQ maintainer. He'd
shoot some frozen sharks your way if you sent him patches like this ;-)
-- Steve
On Tue, 10 Jun 2014, Steven Rostedt wrote:
> On Tue, 10 Jun 2014 19:41:39 +0200 (CEST)
> Thomas Gleixner <[email protected]> wrote:
>
>
> > > On the loop back around, have something like:
> > >
> > > if (top_waiter) {
> > > if (!task_has_pi_waiters(task))
> > > goto out_unlock_pi;
> >
> > The task has at least one pi waiter.
> >
> > > if (!requeue &&
> > > top_waiter != task_top_pi_waiter(task)) {
> > > if (!detect_deadlock)
> > > goto out_unlock_pi;
> > > else
> > > requeue = false;
> > > }
> >
> > Errm? if requeue is off we are in deadlock detection chainwalk
> > mode. So all we care about is whether task is blocked on
> > next_lock or not.
>
> Actually that was a typo on my part. That should have been:
>
> if (requeue &&
> ...
>
> As we don't need to read the task_top_pi_waiter() again.
if (requeue ...
is completely pointless as the code you were talking about is in the
if (!requeue) {
branch. So what?
Thanks,
tglx
On Tue, 10 Jun 2014, Steven Rostedt wrote:
> On Tue, 10 Jun 2014 19:43:16 +0200 (CEST)
> Thomas Gleixner <[email protected]> wrote:
>
> > On Tue, 10 Jun 2014, Steven Rostedt wrote:
> >
> > > On Tue, 10 Jun 2014 09:57:25 -0500
> > > "Brad Mouring" <[email protected]> wrote:
> > >
> > >
> > > > > + /* Store whether owner is blocked itself and drop locks */
> > > > > + next_lock = task_blocked_on(task);
> > > > task_blocked_on(task) is not clear to me, the recipient of the
> > > > return is the only clue that hints at what the function does.
> > >
> > > Well, this is more than confusing, it's the only user, all other users
> > > are task_blocked_on_lock(), and this causes the code not to compile.
> >
> > Grr, yes.
>
> Luckily you are not posting this to that grumpy IRQ maintainer. He'd
> shoot some frozen sharks your way if you sent him patches like this ;-)
Rightfully so. :)