The asm-generic rwsem implementation directly accesses sem->count when
performing a __down_read_trylock operation. Whilst this is probably safe
on all architectures, we should stick to the atomic_long_* API and use
atomic_long_read instead.
Signed-off-by: Will Deacon <[email protected]>
---
include/asm-generic/rwsem.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
index bb1e2cdeb9bf..75af612f54f8 100644
--- a/include/asm-generic/rwsem.h
+++ b/include/asm-generic/rwsem.h
@@ -41,7 +41,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
{
long tmp;
- while ((tmp = sem->count) >= 0) {
+ while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
if (tmp == cmpxchg(&sem->count, tmp,
tmp + RWSEM_ACTIVE_READ_BIAS)) {
return 1;
--
1.8.2.2
asm-generic/rwsem.h used to live under arch/powerpc. During its
liberation to common code, a few references to its former home were
preserved; in particular, the definition of RWSEM_ACTIVE_MASK is
predicated on CONFIG_PPC64.
This patch updates the ifdefs and comments to architecturally neutral
versions.
Signed-off-by: Will Deacon <[email protected]>
---
include/asm-generic/rwsem.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
index 75af612f54f8..603a0a11e592 100644
--- a/include/asm-generic/rwsem.h
+++ b/include/asm-generic/rwsem.h
@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_RWSEM_H
-#define _ASM_POWERPC_RWSEM_H
+#ifndef _ASM_GENERIC_RWSEM_H
+#define _ASM_GENERIC_RWSEM_H
#ifndef _LINUX_RWSEM_H
#error "Please don't include <asm/rwsem.h> directly, use <linux/rwsem.h> instead."
@@ -8,7 +8,7 @@
#ifdef __KERNEL__
/*
- * R/W semaphores for PPC using the stuff in lib/rwsem.c.
+ * R/W semaphores originally for PPC using the stuff in lib/rwsem.c.
* Adapted largely from include/asm-i386/rwsem.h
* by Paul Mackerras <[email protected]>.
*/
@@ -16,7 +16,7 @@
/*
* the semaphore definition
*/
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_64BIT
# define RWSEM_ACTIVE_MASK 0xffffffffL
#else
# define RWSEM_ACTIVE_MASK 0x0000ffffL
@@ -129,4 +129,4 @@ static inline long rwsem_atomic_update(long delta, struct rw_semaphore *sem)
}
#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_RWSEM_H */
+#endif /* _ASM_GENERIC_RWSEM_H */
--
1.8.2.2
On Fri, Feb 21, 2014 at 05:22:27PM +0000, Will Deacon wrote:
> asm-generic/rwsem.h used to live under arch/powerpc. During its
> liberation to common code, a few references to its former home were
> preserved; in particular, the definition of RWSEM_ACTIVE_MASK is
> predicated on CONFIG_PPC64.
>
> This patch updates the ifdefs and comments to architecturally neutral
> versions.
>
> Signed-off-by: Will Deacon <[email protected]>
> ---
> include/asm-generic/rwsem.h | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
Thanks; I probably should have noticed these when I copied the file over.
Acked-by: Richard Kuo <[email protected]>
--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
On Fri, 2014-02-21 at 17:22 +0000, Will Deacon wrote:
> The asm-generic rwsem implementation directly accesses sem->count when
> performing a __down_read_trylock operation. Whilst this is probably safe
> on all architectures, we should stick to the atomic_long_* API and use
> atomic_long_read instead.
>
> Signed-off-by: Will Deacon <[email protected]>
> ---
> include/asm-generic/rwsem.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
> index bb1e2cdeb9bf..75af612f54f8 100644
> --- a/include/asm-generic/rwsem.h
> +++ b/include/asm-generic/rwsem.h
> @@ -41,7 +41,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
> {
> long tmp;
>
> - while ((tmp = sem->count) >= 0) {
> + while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
That's pretty ugly; how about having an infinite loop and doing the tmp
assignment separately from the conditional?
It also looks like a cpu_relax() could help here between iterations.
Other than that:
Reviewed-by: Davidlohr Bueso <[email protected]>
On Thu, Feb 27, 2014 at 05:28:24AM +0000, Davidlohr Bueso wrote:
> On Fri, 2014-02-21 at 17:22 +0000, Will Deacon wrote:
> > The asm-generic rwsem implementation directly accesses sem->count when
> > performing a __down_read_trylock operation. Whilst this is probably safe
> > on all architectures, we should stick to the atomic_long_* API and use
> > atomic_long_read instead.
> >
> > Signed-off-by: Will Deacon <[email protected]>
> > ---
> > include/asm-generic/rwsem.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
> > index bb1e2cdeb9bf..75af612f54f8 100644
> > --- a/include/asm-generic/rwsem.h
> > +++ b/include/asm-generic/rwsem.h
> > @@ -41,7 +41,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
> > {
> > long tmp;
> >
> > - while ((tmp = sem->count) >= 0) {
> > + while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
>
> That's pretty ugly; how about having an infinite loop and doing the tmp
> assignment separately from the conditional?
>
> It also looks like a cpu_relax() could help here between iterations.
Sure, I can do these in a separate patch (this patch is just a simple fix).
> Reviewed-by: Davidlohr Bueso <[email protected]>
Thanks,
Will
On Thu, Feb 27, 2014 at 05:28:24AM +0000, Davidlohr Bueso wrote:
> On Fri, 2014-02-21 at 17:22 +0000, Will Deacon wrote:
> > The asm-generic rwsem implementation directly accesses sem->count when
> > performing a __down_read_trylock operation. Whilst this is probably safe
> > on all architectures, we should stick to the atomic_long_* API and use
> > atomic_long_read instead.
> >
> > Signed-off-by: Will Deacon <[email protected]>
> > ---
> > include/asm-generic/rwsem.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
> > index bb1e2cdeb9bf..75af612f54f8 100644
> > --- a/include/asm-generic/rwsem.h
> > +++ b/include/asm-generic/rwsem.h
> > @@ -41,7 +41,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
> > {
> > long tmp;
> >
> > - while ((tmp = sem->count) >= 0) {
> > + while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
>
> That's pretty ugly; how about having an infinite loop and doing the tmp
> assignment separately from the conditional?
>
> It also looks like a cpu_relax() could help here between iterations.
> Other than that:
>
> Reviewed-by: Davidlohr Bueso <[email protected]>
Actually, we should make that cmpxchg an atomic_long_cmpxchg, so the extra
diff ends up looking like below. It's ugly adding a cpu_relax(), since you
only want it when the cmpxchg fails (and we don't have such logic in the
asm-generic __atomic_add_unless, for example).
Will
--->8
diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
index 603a0a11e592..2b6401f9e428 100644
--- a/include/asm-generic/rwsem.h
+++ b/include/asm-generic/rwsem.h
@@ -40,14 +40,16 @@ static inline void __down_read(struct rw_semaphore *sem)
static inline int __down_read_trylock(struct rw_semaphore *sem)
{
long tmp;
+ atomic_long_t *cnt = (atomic_long_t *)&sem->count;
- while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
- if (tmp == cmpxchg(&sem->count, tmp,
- tmp + RWSEM_ACTIVE_READ_BIAS)) {
- return 1;
- }
- }
- return 0;
+ do {
+ tmp = atomic_long_read(cnt);
+ if (tmp < 0)
+ return 0;
+ } while (tmp != atomic_long_cmpxchg(cnt, tmp,
+ tmp + RWSEM_ACTIVE_READ_BIAS));
+
+ return 1;
}
/*
On 02/28/2014 07:13 AM, Will Deacon wrote:
> On Thu, Feb 27, 2014 at 05:28:24AM +0000, Davidlohr Bueso wrote:
>> On Fri, 2014-02-21 at 17:22 +0000, Will Deacon wrote:
>>> The asm-generic rwsem implementation directly accesses sem->count when
>>> performing a __down_read_trylock operation. Whilst this is probably safe
>>> on all architectures, we should stick to the atomic_long_* API and use
>>> atomic_long_read instead.
>>>
>>> Signed-off-by: Will Deacon <[email protected]>
>>> ---
>>> include/asm-generic/rwsem.h | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
>>> index bb1e2cdeb9bf..75af612f54f8 100644
>>> --- a/include/asm-generic/rwsem.h
>>> +++ b/include/asm-generic/rwsem.h
>>> @@ -41,7 +41,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
>>> {
>>> long tmp;
>>>
>>> - while ((tmp = sem->count) >= 0) {
>>> + while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
>>
>> That's pretty ugly; how about having an infinite loop and doing the tmp
>> assignment separately from the conditional?
>>
>> It also looks like a cpu_relax() could help here between iterations.
This is the read trylock, so no cpu_relax().
>> Other than that:
>>
>> Reviewed-by: Davidlohr Bueso <[email protected]>
>
> Actually, we should make that cmpxchg an atomic_long_cmpxchg, so the extra
> diff ends up looking like below. It's ugly adding a cpu_relax(), since you
> only want it when the cmpxchg fails (and we don't have such logic in the
> asm-generic __atomic_add_unless, for example).
>
> Will
>
> --->8
>
> diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
> index 603a0a11e592..2b6401f9e428 100644
> --- a/include/asm-generic/rwsem.h
> +++ b/include/asm-generic/rwsem.h
> @@ -40,14 +40,16 @@ static inline void __down_read(struct rw_semaphore *sem)
> static inline int __down_read_trylock(struct rw_semaphore *sem)
> {
> long tmp;
> + atomic_long_t *cnt = (atomic_long_t *)&sem->count;
The shared rwsem failure paths (kernel/locking/rwsem_xadd.c) peek at
sem->count as long type, so this isn't really necessary.
>
> - while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
> - if (tmp == cmpxchg(&sem->count, tmp,
> - tmp + RWSEM_ACTIVE_READ_BIAS)) {
> - return 1;
> - }
> - }
> - return 0;
> + do {
> + tmp = atomic_long_read(cnt);
> + if (tmp < 0)
> + return 0;
> + } while (tmp != atomic_long_cmpxchg(cnt, tmp,
> + tmp + RWSEM_ACTIVE_READ_BIAS));
> +
> + return 1;
> }
Regards,
Peter Hurley
On Fri, Feb 28, 2014 at 12:50:06PM +0000, Peter Hurley wrote:
> On 02/28/2014 07:13 AM, Will Deacon wrote:
> > diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
> > index 603a0a11e592..2b6401f9e428 100644
> > --- a/include/asm-generic/rwsem.h
> > +++ b/include/asm-generic/rwsem.h
> > @@ -40,14 +40,16 @@ static inline void __down_read(struct rw_semaphore *sem)
> > static inline int __down_read_trylock(struct rw_semaphore *sem)
> > {
> > long tmp;
> > + atomic_long_t *cnt = (atomic_long_t *)&sem->count;
>
> The shared rwsem failure paths (kernel/locking/rwsem_xadd.c) peek at
> sem->count as long type, so this isn't really necessary.
Yeah, none of this patch is necessary from a technical perspective. I was
just in the area and tried to make the file at least self-consistent wrt the
atomic API. It's a harmless cleanup.
Will