The commit which fixed the concurrency issues of concurrent
static_key_slow_inc() invocations failed to fix the equivalent issues
vs. static_key_slow_dec():
CPU0                                           CPU1

static_key_slow_dec()
 static_key_slow_try_dec()
  key->enabled == 1
  val = atomic_fetch_add_unless(&key->enabled, -1, 1);
  if (val == 1)
       return false;

 jump_label_lock();
 if (atomic_dec_and_test(&key->enabled)) {
    --> key->enabled == 0
    __jump_label_update()

                                               static_key_slow_dec()
                                                static_key_slow_try_dec()
                                                 key->enabled == 0
                                                 val = atomic_fetch_add_unless(&key->enabled, -1, 1);
                                              --> key->enabled == -1 <- FAIL
There is another bug in that code: when a concurrent static_key_slow_inc()
is enabling the key for the first time it sets key->enabled to -1, so on
the other CPU

     val = atomic_fetch_add_unless(&key->enabled, -1, 1);

will succeed and decrement to -2, which is invalid.
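Both failure modes can be demonstrated in isolation. The following is a
minimal userspace sketch (plain C11 atomics, not kernel code);
fetch_add_unless() is a hand-rolled stand-in that mimics the semantics of
the kernel's atomic_fetch_add_unless(), i.e. it refuses the add only when
the current value is exactly the 'unless' value:

	#include <stdatomic.h>
	#include <stdio.h>

	/* Model of atomic_fetch_add_unless(v, a, u): add a to *v unless
	 * *v == u; return the old value. */
	static int fetch_add_unless(atomic_int *v, int a, int u)
	{
		int c = atomic_load(v);

		do {
			if (c == u)
				break;
		} while (!atomic_compare_exchange_weak(v, &c, c + a));

		return c;
	}

	int main(void)
	{
		atomic_int enabled = 0;	/* CPU1 in the scenario above */

		fetch_add_unless(&enabled, -1, 1);
		printf("%d\n", atomic_load(&enabled));	/* -1 <- FAIL */

		/* first-time static_key_slow_inc() still in flight */
		atomic_store(&enabled, -1);
		fetch_add_unless(&enabled, -1, 1);
		printf("%d\n", atomic_load(&enabled));	/* -2, invalid */

		return 0;
	}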
Cure all of this by replacing the atomic_fetch_add_unless() with an
atomic_try_cmpxchg() loop similar to the one in
static_key_fast_inc_not_disabled().
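For contrast, the replacement pattern in the same userspace sketch: the
v <= 1 cut-off sends both bad cases (0 and -1) to the locked slow path
instead of underflowing the count:

	static bool slow_try_dec(atomic_int *enabled)
	{
		int v = atomic_load(enabled);

		do {
			/* 1 is deferred to the locked slow path; 0 or
			 * negative is an imbalance or a first-time
			 * enable in progress. */
			if (v <= 1)
				return false;
		} while (!atomic_compare_exchange_weak(enabled, &v, v - 1));

		return true;
	}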
Fixes: 4c5ea0a9cd02 ("locking/static_key: Fix concurrent static_key_slow_inc()")
Reported-by: Yue Sun <[email protected]>
Reported-by: Xingwei Lee <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
kernel/jump_label.c | 38 ++++++++++++++++++++++----------------
1 file changed, 22 insertions(+), 16 deletions(-)
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -131,7 +131,7 @@ bool static_key_fast_inc_not_disabled(st
STATIC_KEY_CHECK_USE(key);
/*
* Negative key->enabled has a special meaning: it sends
- * static_key_slow_inc() down the slow path, and it is non-zero
+ * static_key_slow_inc/dec() down the slow path, and it is non-zero
* so it counts as "enabled" in jump_label_update(). Note that
* atomic_inc_unless_negative() checks >= 0, so roll our own.
*/
@@ -150,7 +150,7 @@ bool static_key_slow_inc_cpuslocked(stru
lockdep_assert_cpus_held();
/*
- * Careful if we get concurrent static_key_slow_inc() calls;
+ * Careful if we get concurrent static_key_slow_inc/dec() calls;
* later calls must wait for the first one to _finish_ the
* jump_label_update() process. At the same time, however,
* the jump_label_update() call below wants to see
@@ -247,20 +247,25 @@ EXPORT_SYMBOL_GPL(static_key_disable);
static bool static_key_slow_try_dec(struct static_key *key)
{
- int val;
-
- val = atomic_fetch_add_unless(&key->enabled, -1, 1);
- if (val == 1)
- return false;
+ int v;
/*
- * The negative count check is valid even when a negative
- * key->enabled is in use by static_key_slow_inc(); a
- * __static_key_slow_dec() before the first static_key_slow_inc()
- * returns is unbalanced, because all other static_key_slow_inc()
- * instances block while the update is in progress.
+ * Go into the slow path if key::enabled is less than or equal to
+ * one. One is valid to shut down the key, anything less than one
+ * is an imbalance, which is handled at the call site.
+ *
+ * That includes the special case of '-1' which is set in
+ * static_key_slow_inc_cpuslocked(), but that's harmless as it is
+ * fully serialized in the slow path below. By the time this task
+ * acquires the jump label lock the value is back to one and the
+ * retry under the lock must succeed.
*/
- WARN(val < 0, "jump label: negative count!\n");
+ v = atomic_read(&key->enabled);
+ do {
+ if (v <= 1)
+ return false;
+ } while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));
+
return true;
}
@@ -271,10 +276,11 @@ static void __static_key_slow_dec_cpuslo
if (static_key_slow_try_dec(key))
return;
- jump_label_lock();
- if (atomic_dec_and_test(&key->enabled))
+ guard(mutex)(&jump_label_mutex);
+ if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
jump_label_update(key);
- jump_label_unlock();
+ else
+ WARN_ON_ONCE(!static_key_slow_try_dec(key));
}
static void __static_key_slow_dec(struct static_key *key)
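[ Note: guard(mutex)(&jump_label_mutex) comes from <linux/cleanup.h>; it
  acquires the mutex immediately and releases it automatically when the
  scope is left, which is why the explicit jump_label_lock()/unlock()
  pair disappears. The new function body is roughly equivalent to this
  open-coded sketch:

	mutex_lock(&jump_label_mutex);
	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
		jump_label_update(key);
	else
		WARN_ON_ONCE(!static_key_slow_try_dec(key));
	mutex_unlock(&jump_label_mutex);
  ]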
On Mon, Jun 10, 2024 at 02:46:36PM +0200, Thomas Gleixner wrote:
> @@ -247,20 +247,25 @@ EXPORT_SYMBOL_GPL(static_key_disable);
>
> static bool static_key_slow_try_dec(struct static_key *key)
> {
> + int v;
>
> /*
> + * Go into the slow path if key::enabled is less than or equal to
> + * one. One is valid to shut down the key, anything less than one
> + * is an imbalance, which is handled at the call site.
> + *
> + * That includes the special case of '-1' which is set in
> + * static_key_slow_inc_cpuslocked(), but that's harmless as it is
> + * fully serialized in the slow path below. By the time this task
> + * acquires the jump label lock the value is back to one and the
> + * retry under the lock must succeed.
Harmless yes, but it really should not happen to begin with. If this
happens it means someone wants to disable a key that is in the middle of
getting enabled for the first time.
I'm tempted to want a WARN here instead. Hmm?
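Something like this completely untested sketch, perhaps:

	v = atomic_read(&key->enabled);
	do {
		/* -1 means a first-time static_key_slow_inc() is still
		 * in flight; a dec at that point is a caller bug. */
		WARN_ON_ONCE(v < 0);
		if (v <= 1)
			return false;
	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));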
> */
> + v = atomic_read(&key->enabled);
> + do {
> + if (v <= 1)
> + return false;
> + } while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));
> +
> return true;
> }
On Mon, Jun 10 2024 at 19:57, Peter Zijlstra wrote:
> On Mon, Jun 10, 2024 at 02:46:36PM +0200, Thomas Gleixner wrote:
>
>> @@ -247,20 +247,25 @@ EXPORT_SYMBOL_GPL(static_key_disable);
>>
>> static bool static_key_slow_try_dec(struct static_key *key)
>> {
>> + int v;
>>
>> /*
>> + * Go into the slow path if key::enabled is less than or equal to
>> + * one. One is valid to shut down the key, anything less than one
>> + * is an imbalance, which is handled at the call site.
>> + *
>> + * That includes the special case of '-1' which is set in
>> + * static_key_slow_inc_cpuslocked(), but that's harmless as it is
>> + * fully serialized in the slow path below. By the time this task
>> + * acquires the jump label lock the value is back to one and the
>> + * retry under the lock must succeed.
>
> Harmless yes, but it really should not happen to begin with. If this
> happens it means someone wants to disable a key that is in the middle of
> getting enabled for the first time.
>
> I'm tempted to want a WARN here instead. Hmm?
No strong opinion.