Changes in v2:
- hopefully fixed all whitespace (and similar) errors, at least
to my eye and to checkpatch
- changed the text on the ACQUIRE guarantees, as pointed out by
Andrea and Dmitry
- removed the mention of a control dependency for the
dec/sub_and_test variants

I also have to send it with a cover letter this time, because I
think it was the Intel mailer that malformed the patch last time,
for god knows what reason...
Elena Reshetova (1):
refcount_t: add ACQUIRE ordering on success for dec(sub)_and_test
variants
Documentation/core-api/refcount-vs-atomic.rst | 24 +++++++++++++++++++++---
arch/x86/include/asm/refcount.h | 22 ++++++++++++++++++----
lib/refcount.c | 18 +++++++++++++-----
3 files changed, 52 insertions(+), 12 deletions(-)
--
2.7.4
This adds an smp_acquire__after_ctrl_dep() barrier on a successful
decrease of the refcounter value from 1 to 0 for the
refcount_dec(sub)_and_test variants, and therefore gives stronger
memory ordering guarantees than prior versions of these functions.
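To illustrate what the change means for callers (this sketch is not
part of the patch; struct obj and obj_put() are hypothetical):

#include <linux/refcount.h>
#include <linux/slab.h>

struct obj {
	refcount_t ref;
	int data;
};

void obj_put(struct obj *o)
{
	o->data = 42;	/* A: po-earlier store on this CPU */

	/*
	 * RELEASE ordering: store A completes before the decrement
	 * becomes visible to other CPUs.
	 */
	if (refcount_dec_and_test(&o->ref)) {
		/*
		 * ACQUIRE ordering on success (added by this patch):
		 * the free and all other po-later accesses execute
		 * only after the final decrement to 0 was observed.
		 */
		kfree(o);
	}
}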
Co-developed-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Elena Reshetova <[email protected]>
---
Documentation/core-api/refcount-vs-atomic.rst | 24 +++++++++++++++++++++---
arch/x86/include/asm/refcount.h | 22 ++++++++++++++++++----
lib/refcount.c | 18 +++++++++++++-----
3 files changed, 52 insertions(+), 12 deletions(-)
diff --git a/Documentation/core-api/refcount-vs-atomic.rst b/Documentation/core-api/refcount-vs-atomic.rst
index 322851b..976e85a 100644
--- a/Documentation/core-api/refcount-vs-atomic.rst
+++ b/Documentation/core-api/refcount-vs-atomic.rst
@@ -54,6 +54,13 @@ must propagate to all other CPUs before the release operation
(A-cumulative property). This is implemented using
:c:func:`smp_store_release`.
+An ACQUIRE memory ordering guarantees that all subsequent loads
+and stores (all po-later instructions) on the same CPU are
+completed after the acquire operation. It also guarantees that all
+po-later stores on the same CPU must propagate to all other CPUs
+after the acquire operation executes. This is implemented using
+:c:func:`smp_acquire__after_ctrl_dep`.
+
A control dependency (on success) for refcounters guarantees that
if a reference for an object was successfully obtained (reference
counter increment or addition happened, function returned true),
@@ -119,13 +126,24 @@ Memory ordering guarantees changes:
result of obtaining pointer to the object!
-case 5) - decrement-based RMW ops that return a value
------------------------------------------------------
+case 5) - generic dec/sub decrement-based RMW ops that return a value
+---------------------------------------------------------------------
Function changes:
* :c:func:`atomic_dec_and_test` --> :c:func:`refcount_dec_and_test`
* :c:func:`atomic_sub_and_test` --> :c:func:`refcount_sub_and_test`
+
+Memory ordering guarantees changes:
+
+ * fully ordered --> RELEASE ordering + ACQUIRE ordering on success
+
+
+case 6) - other decrement-based RMW ops that return a value
+-----------------------------------------------------------
+
+Function changes:
+
* no atomic counterpart --> :c:func:`refcount_dec_if_one`
* ``atomic_add_unless(&var, -1, 1)`` --> ``refcount_dec_not_one(&var)``
@@ -136,7 +154,7 @@ Memory ordering guarantees changes:
.. note:: :c:func:`atomic_add_unless` only provides full order on success.
-case 6) - lock-based RMW
+case 7) - lock-based RMW
------------------------
Function changes:
diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
index dbaed55..232f856 100644
--- a/arch/x86/include/asm/refcount.h
+++ b/arch/x86/include/asm/refcount.h
@@ -67,16 +67,30 @@ static __always_inline void refcount_dec(refcount_t *r)
static __always_inline __must_check
bool refcount_sub_and_test(unsigned int i, refcount_t *r)
{
-	return GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
+	bool ret = GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
					 REFCOUNT_CHECK_LT_ZERO,
					 r->refs.counter, e, "er", i, "cx");
+
+	if (ret) {
+		smp_acquire__after_ctrl_dep();
+		return true;
+	}
+
+	return false;
}

static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
{
-	return GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
-					REFCOUNT_CHECK_LT_ZERO,
-					r->refs.counter, e, "cx");
+	bool ret = GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
+					    REFCOUNT_CHECK_LT_ZERO,
+					    r->refs.counter, e, "cx");
+
+	if (ret) {
+		smp_acquire__after_ctrl_dep();
+		return true;
+	}
+
+	return false;
}

static __always_inline __must_check
diff --git a/lib/refcount.c b/lib/refcount.c
index ebcf8cd..6e904af 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -33,6 +33,9 @@
* Note that the allocator is responsible for ordering things between free()
* and alloc().
*
+ * The decrements dec_and_test() and sub_and_test() also provide acquire
+ * ordering on success.
+ *
*/
#include <linux/mutex.h>
@@ -164,8 +167,8 @@ EXPORT_SYMBOL(refcount_inc_checked);
* at UINT_MAX.
*
* Provides release memory ordering, such that prior loads and stores are done
- * before, and provides a control dependency such that free() must come after.
- * See the comment on top.
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
*
* Use of this function is not recommended for the normal reference counting
* use case in which references are taken and released one at a time. In these
@@ -190,7 +193,12 @@ bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r)

	} while (!atomic_try_cmpxchg_release(&r->refs, &val, new));

-	return !new;
+	if (!new) {
+		smp_acquire__after_ctrl_dep();
+		return true;
+	}
+
+	return false;
}
EXPORT_SYMBOL(refcount_sub_and_test_checked);
@@ -202,8 +210,8 @@ EXPORT_SYMBOL(refcount_sub_and_test_checked);
* decrement when saturated at UINT_MAX.
*
* Provides release memory ordering, such that prior loads and stores are done
- * before, and provides a control dependency such that free() must come after.
- * See the comment on top.
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
*
* Return: true if the resulting refcount is 0, false otherwise
*/
--
2.7.4
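As a worked illustration of the ACQUIRE guarantee documented in the
hunk above (again not part of the patch; this reuses the hypothetical
struct obj from the earlier sketch, with the refcount initially 2 and
obj->data initially 0):

/* CPU0: drop one reference after a last store to the object. */
void cpu0(struct obj *obj)
{
	obj->data = 1;	/* po-earlier store, ordered by RELEASE */
	if (refcount_dec_and_test(&obj->ref))
		kfree(obj);
}

/* CPU1: may drop the final reference and free the object. */
void cpu1(struct obj *obj)
{
	if (refcount_dec_and_test(&obj->ref)) {
		/*
		 * If this was the final decrement, CPU0's RELEASE
		 * decrement is ordered before it, and the ACQUIRE on
		 * success guarantees the load below observes
		 * obj->data == 1 rather than a stale 0.
		 */
		WARN_ON(obj->data != 1);
		kfree(obj);
	}
}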
On Wed, Jan 30, 2019 at 01:18:51PM +0200, Elena Reshetova wrote:
> This adds an smp_acquire__after_ctrl_dep() barrier on a successful
> decrease of the refcounter value from 1 to 0 for the
> refcount_dec(sub)_and_test variants, and therefore gives stronger
> memory ordering guarantees than prior versions of these functions.
>
> Co-developed-by: Peter Zijlstra (Intel) <[email protected]>
> Signed-off-by: Elena Reshetova <[email protected]>
Reviewed-by: Andrea Parri <[email protected]>
Andrea
On Wed, Jan 30, 2019 at 01:31:31PM +0100, Andrea Parri wrote:
> On Wed, Jan 30, 2019 at 01:18:51PM +0200, Elena Reshetova wrote:
> > This adds an smp_acquire__after_ctrl_dep() barrier on a successful
> > decrease of the refcounter value from 1 to 0 for the
> > refcount_dec(sub)_and_test variants, and therefore gives stronger
> > memory ordering guarantees than prior versions of these functions.
> >
> > Co-developed-by: Peter Zijlstra (Intel) <[email protected]>
> > Signed-off-by: Elena Reshetova <[email protected]>
>
> Reviewed-by: Andrea Parri <[email protected]>
Thanks, got it queued now.
Commit-ID: 47b8f3ab9c49daa824af848f9e02889662d8638f
Gitweb: https://git.kernel.org/tip/47b8f3ab9c49daa824af848f9e02889662d8638f
Author: Elena Reshetova <[email protected]>
AuthorDate: Wed, 30 Jan 2019 13:18:51 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 4 Feb 2019 09:03:31 +0100
refcount_t: Add ACQUIRE ordering on success for dec(sub)_and_test() variants
This adds an smp_acquire__after_ctrl_dep() barrier on a successful
decrease of the refcounter value from 1 to 0 for the
refcount_dec(sub)_and_test variants, and therefore gives stronger
memory ordering guarantees than prior versions of these functions.
Co-developed-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Andrea Parri <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
Documentation/core-api/refcount-vs-atomic.rst | 24 +++++++++++++++++++++---
arch/x86/include/asm/refcount.h | 22 ++++++++++++++++++----
lib/refcount.c | 18 +++++++++++++-----
3 files changed, 52 insertions(+), 12 deletions(-)