2009-04-11 14:18:14

by Jan Blunck

Subject: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

I think it is wrong to unconditionally take the lock before calling
atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
situation where it is known that the counter will not reach zero (e.g. when
holding another reference to the same object) but the lock is already taken.
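
A minimal sketch of the calling pattern in question (the object and its
fields are hypothetical, not taken from any particular caller):

	/*
	 * Hypothetical caller: "lock" is already held, and a second
	 * reference guarantees that obj->count cannot drop to zero.
	 * On UP the current code has no atomic_add_unless() fast path
	 * and goes straight to spin_lock(lock), i.e. it takes a lock
	 * the caller already owns.
	 */
	spin_lock(&obj->lock);			/* held for other reasons */
	/* ... obj->count is known to be > 1 here ... */
	if (atomic_dec_and_lock(&obj->count, &obj->lock)) {
		/* not reached in this scenario */
		free_object(obj);
		spin_unlock(&obj->lock);
	}
	spin_unlock(&obj->lock);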

Signed-off-by: Jan Blunck <[email protected]>
---
lib/dec_and_lock.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index a65c314..e73822a 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -19,11 +19,10 @@
  */
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
-#ifdef CONFIG_SMP
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
 		return 0;
-#endif
+
 	/* Otherwise do it the slow way */
 	spin_lock(lock);
 	if (atomic_dec_and_test(atomic))
--
1.6.0.2


2009-04-11 17:49:18

by Paul E. McKenney

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> I think it is wrong to unconditionally take the lock before calling
> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
> situation where it is known that the counter will not reach zero (e.g. when
> holding another reference to the same object) but the lock is already taken.

The thought of calling _atomic_dec_and_lock() when you already hold the
lock really really scares me.

Could you please give an example where you need to do this?

Thanx, Paul

> Signed-off-by: Jan Blunck <[email protected]>
> ---
> lib/dec_and_lock.c | 3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
> */
> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> {
> -#ifdef CONFIG_SMP
> /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> if (atomic_add_unless(atomic, -1, 1))
> return 0;
> -#endif
> +
> /* Otherwise do it the slow way */
> spin_lock(lock);
> if (atomic_dec_and_test(atomic))
> --
> 1.6.0.2
>

2009-04-12 11:33:36

by Jan Blunck

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On 11.04.2009 at 19:49, "Paul E. McKenney" <[email protected]> wrote:

> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
>> I think it is wrong to unconditionally take the lock before calling
>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
>> situation where it is known that the counter will not reach zero (e.g. when
>> holding another reference to the same object) but the lock is already taken.
>
> The thought of calling _atomic_dec_and_lock() when you already hold the
> lock really really scares me.
>
> Could you please give an example where you need to do this?
>

There is a part of the union mount patches that needs to do a
union_put() (which itself includes a path_put() that uses
atomic_dec_and_lock() in mntput()). Since it is changing the
namespace, I need to hold the vfsmount lock. I know that the mnt's
count is > 1, since it is a parent of the mnt I'm changing in the
mount tree. I could possibly delay the union_put().
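
Roughly, the path in question looks like this (union_put() and friends
are from the out-of-tree union mount patches; mntput_no_expire() using
atomic_dec_and_lock() on vfsmount_lock matches fs/namespace.c of this
era, the rest is an illustrative sketch):

	spin_lock(&vfsmount_lock);	/* held while editing the mount tree */
	/* ... mnt is a parent in the tree, so its count is known > 1 ... */
	union_put(un);		/* -> path_put() -> mntput()
				 * -> mntput_no_expire()
				 * -> atomic_dec_and_lock(&mnt->mnt_count,
				 *                        &vfsmount_lock) */
	spin_unlock(&vfsmount_lock);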

In general this lets atomic_dec_and_lock() behave similarly on SMP and
UP. Remember that this already works with CONFIG_SMP, as it did before
Nick's patch.


> Thanx, Paul
>
>> Signed-off-by: Jan Blunck <[email protected]>
>> ---
>> lib/dec_and_lock.c | 3 +--
>> 1 files changed, 1 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
>> index a65c314..e73822a 100644
>> --- a/lib/dec_and_lock.c
>> +++ b/lib/dec_and_lock.c
>> @@ -19,11 +19,10 @@
>> */
>> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>> {
>> -#ifdef CONFIG_SMP
>> /* Subtract 1 from counter unless that drops it to 0 (ie. it was
>> 1) */
>> if (atomic_add_unless(atomic, -1, 1))
>> return 0;
>> -#endif
>> +
>> /* Otherwise do it the slow way */
>> spin_lock(lock);
>> if (atomic_dec_and_test(atomic))
>> --
>> 1.6.0.2
>>

2009-04-13 06:02:48

by Paul E. McKenney

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Sun, Apr 12, 2009 at 01:32:54PM +0200, Jan Blunck wrote:
> On 11.04.2009 at 19:49, "Paul E. McKenney" <[email protected]> wrote:
>
>> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
>>> I think it is wrong to unconditionally take the lock before calling
>>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
>>> situation where it is known that the counter will not reach zero (e.g.
>>> when holding another reference to the same object) but the lock is
>>> already taken.
>>
>> The thought of calling _atomic_dec_and_lock() when you already hold the
>> lock really really scares me.
>>
>> Could you please give an example where you need to do this?
>>
>
> There is a part of the union mount patches that needs to do a union_put()
> (which itself includes a path_put() that uses atomic_dec_and_lock() in
> mntput()). Since it is changing the namespace, I need to hold the vfsmount
> lock. I know that the mnt's count is > 1, since it is a parent of the mnt
> I'm changing in the mount tree. I could possibly delay the union_put().
>
> In general this lets atomic_dec_and_lock() behave similarly on SMP and UP.
> Remember that this already works with CONFIG_SMP, as it did before Nick's
> patch.

I asked, I guess. ;-)

There is some sort of common code path, so that you cannot simply call
atomic_dec() when holding the lock?
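
As a sketch of that alternative (object name hypothetical): if the
caller already holds the lock and knows the count stays above zero, it
could drop the reference directly:

	/* lock is held, count is known to be > 1 */
	atomic_dec(&obj->count);	/* instead of atomic_dec_and_lock() */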

Thanx, Paul
>>
>>> Signed-off-by: Jan Blunck <[email protected]>
>>> ---
>>> lib/dec_and_lock.c | 3 +--
>>> 1 files changed, 1 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
>>> index a65c314..e73822a 100644
>>> --- a/lib/dec_and_lock.c
>>> +++ b/lib/dec_and_lock.c
>>> @@ -19,11 +19,10 @@
>>> */
>>> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>>> {
>>> -#ifdef CONFIG_SMP
>>> /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
>>> if (atomic_add_unless(atomic, -1, 1))
>>> return 0;
>>> -#endif
>>> +
>>> /* Otherwise do it the slow way */
>>> spin_lock(lock);
>>> if (atomic_dec_and_test(atomic))
>>> --
>>> 1.6.0.2
>>>

2009-04-14 06:52:55

by Nick Piggin

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> I think it is wrong to unconditionally take the lock before calling
> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
> situation where it is known that the counter will not reach zero (e.g. when
> holding another reference to the same object) but the lock is already taken.
>
> Signed-off-by: Jan Blunck <[email protected]>

Paul's worry about callers aside, I think it is probably a good idea
to reduce ifdefs and share more code.

So for this patch,

Acked-by: Nick Piggin <[email protected]>

> ---
> lib/dec_and_lock.c | 3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
> */
> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> {
> -#ifdef CONFIG_SMP
> /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> if (atomic_add_unless(atomic, -1, 1))
> return 0;
> -#endif
> +
> /* Otherwise do it the slow way */
> spin_lock(lock);
> if (atomic_dec_and_test(atomic))
> --
> 1.6.0.2

2009-04-14 16:49:20

by Paul E. McKenney

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Tue, Apr 14, 2009 at 08:52:39AM +0200, Nick Piggin wrote:
> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> > I think it is wrong to unconditionally take the lock before calling
> > atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
> > situation where it is known that the counter will not reach zero (e.g. when
> > holding another reference to the same object) but the lock is already taken.
> >
> > Signed-off-by: Jan Blunck <[email protected]>
>
> Paul's worry about callers aside, I think it is probably a good idea
> to reduce ifdefs and share more code.

I am also OK with this patch.

Acked-by: Paul E. McKenney <[email protected]>

> So for this patch,
>
> Acked-by: Nick Piggin <[email protected]>
>
> > ---
> > lib/dec_and_lock.c | 3 +--
> > 1 files changed, 1 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> > index a65c314..e73822a 100644
> > --- a/lib/dec_and_lock.c
> > +++ b/lib/dec_and_lock.c
> > @@ -19,11 +19,10 @@
> > */
> > int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> > {
> > -#ifdef CONFIG_SMP
> > /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> > if (atomic_add_unless(atomic, -1, 1))
> > return 0;
> > -#endif
> > +
> > /* Otherwise do it the slow way */
> > spin_lock(lock);
> > if (atomic_dec_and_test(atomic))
> > --
> > 1.6.0.2

2009-04-17 22:17:21

by Andrew Morton

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Fri, 10 Apr 2009 18:13:57 +0200
Jan Blunck <[email protected]> wrote:

> I think it is wrong to unconditionally take the lock before calling
> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
> situation where it is known that the counter will not reach zero (e.g. when
> holding another reference to the same object) but the lock is already taken.
>

It can't deadlock, because spin_lock() doesn't do anything on
CONFIG_SMP=n.

You might get lockdep whines on CONFIG_SMP=n, but they'd be false
positives because lockdep doesn't know that we generate additional code
for SMP builds.
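
For reference, on CONFIG_SMP=n without spinlock debugging the raw lock
operations compile away entirely (paraphrased from
include/linux/spinlock_up.h of this era):

	#define __raw_spin_lock(lock)		do { (void)(lock); } while (0)
	#define __raw_spin_unlock(lock)		do { (void)(lock); } while (0)
	#define __raw_spin_trylock(lock)	({ (void)(lock); 1; })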

> ---
> lib/dec_and_lock.c | 3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
> */
> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> {
> -#ifdef CONFIG_SMP
> /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> if (atomic_add_unless(atomic, -1, 1))
> return 0;
> -#endif
> +
> /* Otherwise do it the slow way */
> spin_lock(lock);
> if (atomic_dec_and_test(atomic))

The patch looks reasonable from a cleanup/consistency POV, but the
analysis and changelog need a bit of help, methinks.

2009-04-22 12:56:34

by Jan Blunck

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Sun, Apr 12, Paul E. McKenney wrote:

> On Sun, Apr 12, 2009 at 01:32:54PM +0200, Jan Blunck wrote:
> > On 11.04.2009 at 19:49, "Paul E. McKenney" <[email protected]> wrote:
> >
> >> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> >>> I think it is wrong to unconditionally take the lock before calling
> >>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
> >>> situation where it is known that the counter will not reach zero (e.g.
> >>> when holding another reference to the same object) but the lock is
> >>> already taken.
> >>
> >> The thought of calling _atomic_dec_and_lock() when you already hold the
> >> lock really really scares me.
> >>
> >> Could you please give an example where you need to do this?
> >>
> >
> > There is a part of the union mount patches that needs to do a union_put()
> > (which itself includes a path_put() that uses atomic_dec_and_lock() in
> > mntput()). Since it is changing the namespace, I need to hold the vfsmount
> > lock. I know that the mnt's count is > 1, since it is a parent of the mnt
> > I'm changing in the mount tree. I could possibly delay the union_put().
> >
> > In general this lets atomic_dec_and_lock() behave similarly on SMP and UP.
> > Remember that this already works with CONFIG_SMP, as it did before Nick's
> > patch.
>
> I asked, I guess. ;-)
>
> There is some sort of common code path, so that you cannot simply call
> atomic_dec() when holding the lock?

If it is possible I don't want to introduce another special mntput() variant
just for that code path.

Thanks,
Jan

2009-04-22 14:09:19

by Paul E. McKenney

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Wed, Apr 22, 2009 at 02:56:20PM +0200, Jan Blunck wrote:
> On Sun, Apr 12, Paul E. McKenney wrote:
>
> > On Sun, Apr 12, 2009 at 01:32:54PM +0200, Jan Blunck wrote:
> > > On 11.04.2009 at 19:49, "Paul E. McKenney" <[email protected]> wrote:
> > >
> > >> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> > >>> I think it is wrong to unconditionally take the lock before calling
> > >>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
> > >>> situation where it is known that the counter will not reach zero (e.g.
> > >>> when holding another reference to the same object) but the lock is
> > >>> already taken.
> > >>
> > >> The thought of calling _atomic_dec_and_lock() when you already hold the
> > >> lock really really scares me.
> > >>
> > >> Could you please give an example where you need to do this?
> > >>
> > >
> > > There is a part of the union mount patches that needs to do a union_put()
> > > (which itself includes a path_put() that uses atomic_dec_and_lock() in
> > > mntput()). Since it is changing the namespace, I need to hold the vfsmount
> > > lock. I know that the mnt's count is > 1, since it is a parent of the mnt
> > > I'm changing in the mount tree. I could possibly delay the union_put().
> > >
> > > In general this lets atomic_dec_and_lock() behave similarly on SMP and UP.
> > > Remember that this already works with CONFIG_SMP, as it did before Nick's
> > > patch.
> >
> > I asked, I guess. ;-)
> >
> > There is some sort of common code path, so that you cannot simply call
> > atomic_dec() when holding the lock?
>
> If it is possible I don't want to introduce another special mntput() variant
> just for that code path.

Fair enough!!!

Thanx, Paul

2009-04-23 13:32:27

by Jan Blunck

Subject: Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well

On Fri, Apr 17, Andrew Morton wrote:

> On Fri, 10 Apr 2009 18:13:57 +0200
> Jan Blunck <[email protected]> wrote:
>
> > I think it is wrong to unconditionally take the lock before calling
> > atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in a
> > situation where it is known that the counter will not reach zero (e.g. when
> > holding another reference to the same object) but the lock is already taken.
> >
>
> It can't deadlock, because spin_lock() doesn't do anything on
> CONFIG_SMP=n.
>
> You might get lockdep whines on CONFIG_SMP=n, but they'd be false
> positives because lockdep doesn't know that we generate additional code
> for SMP builds.

Sorry, you are right: spin_lock() itself isn't the problem here. With
CONFIG_DEBUG_SPINLOCK enabled, _raw_spin_lock() calls into
__spin_lock_debug():

static void __spin_lock_debug(spinlock_t *lock)
{
	u64 i;
	u64 loops = loops_per_jiffy * HZ;
	int print_once = 1;

	for (;;) {
		for (i = 0; i < loops; i++) {
			if (__raw_spin_trylock(&lock->raw_lock))
				return;
			__delay(1);
		}
		/* lockup suspected: */
		if (print_once) {
			print_once = 0;
			printk(KERN_EMERG "BUG: spinlock lockup on CPU#%d, "
					"%s/%d, %p\n",
				raw_smp_processor_id(), current->comm,
				task_pid_nr(current), lock);
			dump_stack();
#ifdef CONFIG_SMP
			trigger_all_cpu_backtrace();
#endif
		}
	}
}

This is an endless loop in this case, since the lock is already held and
therefore __raw_spin_trylock() never succeeds.

> > ---
> > lib/dec_and_lock.c | 3 +--
> > 1 files changed, 1 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> > index a65c314..e73822a 100644
> > --- a/lib/dec_and_lock.c
> > +++ b/lib/dec_and_lock.c
> > @@ -19,11 +19,10 @@
> > */
> > int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> > {
> > -#ifdef CONFIG_SMP
> > /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> > if (atomic_add_unless(atomic, -1, 1))
> > return 0;
> > -#endif
> > +
> > /* Otherwise do it the slow way */
> > spin_lock(lock);
> > if (atomic_dec_and_test(atomic))
>
> The patch looks reasonable from a cleanup/consistency POV, but the
> analysis and changelog need a bit of help, methinks.
>

Sorry, I'll come up with a more verbose description of the root cause of
the lockup.

Cheers,
Jan