From: Jan Blunck <[email protected]>
_atomic_dec_and_lock() can deadlock on UP with spinlock debugging
enabled. Currently, on UP we unconditionally spin_lock() first, which
calls __spin_lock_debug(), which takes the lock unconditionally even
on UP. This will deadlock in situations in which we call
atomic_dec_and_lock() knowing that the counter won't go to zero
(because we hold another reference) and that we already hold the lock.
Instead, we should use the SMP code path which only takes the lock if
necessary.
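To make the failure mode concrete, here is a minimal sketch of the sort
of caller that hits it; the object, lock, and helper names are
hypothetical, not taken from any in-tree user:

	/*
	 * We already hold obj_lock and also hold an extra reference,
	 * so the count cannot drop to zero inside this region.
	 */
	spin_lock(&obj_lock);
	/* ... work on the structure that obj_lock protects ... */

	/*
	 * Pre-patch UP code: the atomic_add_unless() fast path is
	 * compiled out, so this calls spin_lock(&obj_lock) a second
	 * time; with CONFIG_DEBUG_SPINLOCK that spins forever in
	 * __spin_lock_debug().
	 */
	if (atomic_dec_and_lock(&obj->count, &obj_lock))
		free_obj(obj);	/* not reached: count stays above 0 */

	spin_unlock(&obj_lock);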
Signed-off-by: Jan Blunck <[email protected]>
Signed-off-by: Valerie Aurora (Henson) <[email protected]>
---
lib/dec_and_lock.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index a65c314..e73822a 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -19,11 +19,10 @@
*/
int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
-#ifdef CONFIG_SMP
/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
if (atomic_add_unless(atomic, -1, 1))
return 0;
-#endif
+
/* Otherwise do it the slow way */
spin_lock(lock);
if (atomic_dec_and_test(atomic))
--
1.6.0.6
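For reference, with the hunk applied the whole function reads as below;
the closing lines are the unchanged tail of the function, which the
hunk does not show, quoted on the assumption that the rest of
lib/dec_and_lock.c is the stock code:

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
		if (atomic_add_unless(atomic, -1, 1))
			return 0;

		/* Otherwise do it the slow way */
		spin_lock(lock);
		if (atomic_dec_and_test(atomic))
			return 1;
		spin_unlock(lock);
		return 0;
	}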
On Mon, 15 Jun 2009 14:11:13 -0400
Valerie Aurora <[email protected]> wrote:
> _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> enabled. Currently, on UP we unconditionally spin_lock() first, which
> calls __spin_lock_debug(), which takes the lock unconditionally even
> on UP. This will deadlock in situations in which we call
> atomic_dec_and_lock() knowing that the counter won't go to zero
> (because we hold another reference) and that we already hold the lock.
> Instead, we should use the SMP code path which only takes the lock if
> necessary.
Yup, I have this queued for 2.6.31 as
atomic-only-take-lock-when-the-counter-drops-to-zero-on-up-as-well.patch,
with a different changelog:
_atomic_dec_and_lock() should not unconditionally take the lock before
calling atomic_dec_and_test() in the UP case. For consistency, it should
behave exactly as in the SMP case.

Besides that, this works around the problem that, with
CONFIG_DEBUG_SPINLOCK, this spins in __spin_lock_debug() if the lock is
already taken, even if the counter doesn't drop to 0.
Signed-off-by: Jan Blunck <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Acked-by: Nick Piggin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
I can't remember why we decided that 2.6.30 doesn't need this.
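A side note on the __spin_lock_debug() behaviour mentioned above: on UP
without CONFIG_DEBUG_SPINLOCK the lock is compiled away, so re-taking a
held lock goes unnoticed, while with debugging the lock word is real and
the debug path spins on it. A rough sketch, simplified from the UP
spinlock and lib/spinlock_debug.c code paths of that era;
delay_and_warn() is a stand-in for the real delay/printk loop, not an
actual kernel function:

	/* UP, !CONFIG_DEBUG_SPINLOCK: no lock word to spin on, so taking
	 * an already-"held" lock again is invisible (and thus harmless). */
	#define spin_lock(lock)	preempt_disable()

	/* UP, CONFIG_DEBUG_SPINLOCK: the lock word is real, and the debug
	 * path loops until it is released, which never happens when the
	 * current (and only) CPU is itself the holder. */
	static void __spin_lock_debug(spinlock_t *lock)
	{
		for (;;) {
			if (__raw_spin_trylock(&lock->raw_lock))
				return;		/* acquired */
			delay_and_warn(lock);	/* stand-in: spin, then warn */
		}
	}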
On Mon, Jun 15, 2009 at 02:11:13PM -0400, Valerie Aurora wrote:
> From: Jan Blunck <[email protected]>
>
> _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> enabled. Currently, on UP we unconditionally spin_lock() first, which
> calls __spin_lock_debug(), which takes the lock unconditionally even
> on UP. This will deadlock in situations in which we call
> atomic_dec_and_lock() knowing that the counter won't go to zero
> (because we hold another reference) and that we already hold the lock.
> Instead, we should use the SMP code path which only takes the lock if
> necessary.
Reviewed-by: Paul E. McKenney <[email protected]>
> Signed-off-by: Jan Blunck <[email protected]>
> Signed-off-by: Valerie Aurora (Henson) <[email protected]>
> ---
> lib/dec_and_lock.c | 3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
> */
> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> {
> -#ifdef CONFIG_SMP
> /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> if (atomic_add_unless(atomic, -1, 1))
> return 0;
> -#endif
> +
> /* Otherwise do it the slow way */
> spin_lock(lock);
> if (atomic_dec_and_test(atomic))
> --
> 1.6.0.6
>
On Mon, Jun 15, 2009 at 11:45:43AM -0700, Andrew Morton wrote:
> On Mon, 15 Jun 2009 14:11:13 -0400
> Valerie Aurora <[email protected]> wrote:
>
> > _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> > enabled. Currently, on UP we unconditionally spin_lock() first, which
> > calls __spin_lock_debug(), which takes the lock unconditionally even
> > on UP. This will deadlock in situations in which we call
> > atomic_dec_and_lock() knowing that the counter won't go to zero
> > (because we hold another reference) and that we already hold the lock.
> > Instead, we should use the SMP code path which only takes the lock if
> > necessary.
>
> Yup, I have this queued for 2.6.31 as
> atomic-only-take-lock-when-the-counter-drops-to-zero-on-up-as-well.patch,
> with a different changelog:
>
> _atomic_dec_and_lock() should not unconditionally take the lock before
> calling atomic_dec_and_test() in the UP case. For consistency, it should
> behave exactly as in the SMP case.
> 
> Besides that, this works around the problem that, with
> CONFIG_DEBUG_SPINLOCK, this spins in __spin_lock_debug() if the lock is
> already taken, even if the counter doesn't drop to 0.
>
> Signed-off-by: Jan Blunck <[email protected]>
> Acked-by: Paul E. McKenney <[email protected]>
> Acked-by: Nick Piggin <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
>
>
> I can't remember why we decided that 2.6.30 doesn't need this.
Great, last I heard the changelog was still a problem. Thanks,
-VAL
On Mon, 15 Jun 2009 15:12:23 -0400
Valerie Aurora <[email protected]> wrote:
> On Mon, Jun 15, 2009 at 11:45:43AM -0700, Andrew Morton wrote:
> > On Mon, 15 Jun 2009 14:11:13 -0400
> > Valerie Aurora <[email protected]> wrote:
> >
> > > _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> > > enabled. Currently, on UP we unconditionally spin_lock() first, which
> > > calls __spin_lock_debug(), which takes the lock unconditionally even
> > > on UP. This will deadlock in situations in which we call
> > > atomic_dec_and_lock() knowing that the counter won't go to zero
> > > (because we hold another reference) and that we already hold the lock.
> > > Instead, we should use the SMP code path which only takes the lock if
> > > necessary.
> >
> > Yup, I have this queued for 2.6.31 as
> > atomic-only-take-lock-when-the-counter-drops-to-zero-on-up-as-well.patch,
> > with a different changelog:
> >
> > _atomic_dec_and_lock() should not unconditionally take the lock before
> > calling atomic_dec_and_test() in the UP case. For consistency, it should
> > behave exactly as in the SMP case.
> > 
> > Besides that, this works around the problem that, with
> > CONFIG_DEBUG_SPINLOCK, this spins in __spin_lock_debug() if the lock is
> > already taken, even if the counter doesn't drop to 0.
> >
> > Signed-off-by: Jan Blunck <[email protected]>
> > Acked-by: Paul E. McKenney <[email protected]>
> > Acked-by: Nick Piggin <[email protected]>
> > Signed-off-by: Andrew Morton <[email protected]>
> >
> >
> > I can't remember why we decided that 2.6.30 doesn't need this.
>
> Great, last I heard the changelog was still a problem. Thanks,
>
<goes back and checks>
OK, I decided that we didn't need this in 2.6.30 or earlier because
Jan's union mount code is the only known trigger of the problem.
However, the patch is clearly suitable for -stable. Opinions are
sought..