2023-03-22 19:45:42

by Frederic Weisbecker

Subject: [PATCH 0/4] rcu/nocb: Shrinker related boring fixes

Hi,

Just a couple of boring fixes related to the shrinker.

1/4: Related to (de-)offloading, so the issue currently only concerns
rcutorture (which doesn't seem to test lazy callbacks, making it hardly
possible to trigger).

2/4: Fix an ignored flush, making the shrinker more efficient, but nothing
critical.

3/4: optimization

4/4: optimization

As such, none of these patches carry a "Fixes:" tag because none of them
fix an actual regression. Nothing worth backporting.

Frederic Weisbecker (4):
rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading
rcu/nocb: Fix shrinker race against callback enqueuer
rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
rcu/nocb: Make shrinker iterate only over NOCB CPUs

kernel/rcu/tree_nocb.h | 43 +++++++++++++++++++++++++++++++++++++-----
1 file changed, 38 insertions(+), 5 deletions(-)

--
2.34.1


2023-03-22 19:46:00

by Frederic Weisbecker

Subject: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

The shrinker may run concurrently with callbacks (de-)offloading. As
such, calling rcu_nocb_lock() is very dangerous because it performs
conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
lock while rcu_nocb_unlock() eventually unlocks, or the reverse, creating
an imbalance.

Fix this by protecting against (de-)offloading with the barrier mutex.
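
For illustration, the hazardous pattern is roughly the following (a
simplified sketch, not a copy of the actual code):

	/* rdp not yet offloaded: this only disables IRQs, no lock is taken */
	rcu_nocb_lock_irqsave(rdp, flags);

	/* ... concurrent offloading of rdp completes here ... */

	/* rdp now offloaded: ->nocb_lock is released without having been taken */
	rcu_nocb_unlock_irqrestore(rdp, flags);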

Signed-off-by: Frederic Weisbecker <[email protected]>
---
kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index f2280616f9d5..dd9b655ae533 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
unsigned long flags;
unsigned long count = 0;

+ /*
+ * Protect against concurrent (de-)offloading. Otherwise nocb locking
+ * may be ignored or imbalanced.
+ */
+ mutex_lock(&rcu_state.barrier_mutex);
+
/* Snapshot count of all CPUs */
for_each_possible_cpu(cpu) {
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
- int _count = READ_ONCE(rdp->lazy_len);
+ int _count;
+
+ if (!rcu_rdp_is_offloaded(rdp))
+ continue;
+
+ _count = READ_ONCE(rdp->lazy_len);

if (_count == 0)
continue;
+
rcu_nocb_lock_irqsave(rdp, flags);
WRITE_ONCE(rdp->lazy_len, 0);
rcu_nocb_unlock_irqrestore(rdp, flags);
@@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
if (sc->nr_to_scan <= 0)
break;
}
+
+ mutex_unlock(&rcu_state.barrier_mutex);
+
return count ? count : SHRINK_STOP;
}

--
2.34.1

2023-03-22 19:46:09

by Frederic Weisbecker

Subject: [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker

The ->lazy_len is only checked locklessly. Recheck it under the
->nocb_lock to avoid spending more time on flushing/waking than
necessary. The ->lazy_len can still increment concurrently (from 1 to
infinity), but under the ->nocb_lock we at least know for sure whether
there are lazy callbacks at all (->lazy_len > 0).

Signed-off-by: Frederic Weisbecker <[email protected]>
---
kernel/rcu/tree_nocb.h | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index cb57e8312231..a3dc7465b0b2 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1350,12 +1350,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
if (!rcu_rdp_is_offloaded(rdp))
continue;

+ if (!READ_ONCE(rdp->lazy_len))
+ continue;
+
+ rcu_nocb_lock_irqsave(rdp, flags);
+ /*
+ * Recheck under the nocb lock. Since we are not holding the bypass
+ * lock we may still race with increments from the enqueuer but still
+ * we know for sure if there is at least one lazy callback.
+ */
_count = READ_ONCE(rdp->lazy_len);
-
- if (_count == 0)
+ if (!_count) {
+ rcu_nocb_unlock_irqrestore(rdp, flags);
continue;
-
- rcu_nocb_lock_irqsave(rdp, flags);
+ }
WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
rcu_nocb_unlock_irqrestore(rdp, flags);
wake_nocb_gp(rdp, false);
--
2.34.1

2023-03-22 19:47:06

by Frederic Weisbecker

Subject: [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer

The shrinker resets the lazy callbacks counter in order to trigger the
pending lazy queue flush through the rcuog kthread. The counter reset is
protected by the ->nocb_lock against concurrent accesses... except
for one of them. Here is a list of the existing synchronized readers/writers:

1) The first lazy enqueuer (incrementing ->lazy_len to 1) does so under
->nocb_lock and ->nocb_bypass_lock.

2) The further lazy enqueuers (incrementing ->lazy_len above 1) do so
under ->nocb_bypass_lock _only_.

3) The lazy flush checks and resets to 0 under ->nocb_lock and
->nocb_bypass_lock.

The shrinker protects its ->lazy_len reset against cases 1) and 3) but
not against 2). As such, setting ->lazy_len to 0 under the ->nocb_lock
may be cancelled right away by an overwrite from an enqueuer, leading
rcuog to ignore the flush.
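
For example, the following simplified interleaving loses the reset ("old"
is just an illustrative local standing for the enqueuer's non-atomic
read-modify-write, not actual code):

	/* Enqueuer, under ->nocb_bypass_lock only (case 2), starts its increment: */
	old = rdp->lazy_len;				/* say old == 5 */

		/* Shrinker, under ->nocb_lock but not ->nocb_bypass_lock: */
		WRITE_ONCE(rdp->lazy_len, 0);

	/* Enqueuer completes the increment, overwriting the reset: */
	WRITE_ONCE(rdp->lazy_len, old + 1);	/* back to 6, the flush request is lost */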

To avoid that, use the proper bypass flush API which takes care of all
those details.

Signed-off-by: Frederic Weisbecker <[email protected]>
---
kernel/rcu/tree_nocb.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index dd9b655ae533..cb57e8312231 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1356,7 +1356,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
continue;

rcu_nocb_lock_irqsave(rdp, flags);
- WRITE_ONCE(rdp->lazy_len, 0);
+ WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
rcu_nocb_unlock_irqrestore(rdp, flags);
wake_nocb_gp(rdp, false);
sc->nr_to_scan -= _count;
--
2.34.1

2023-03-22 19:53:00

by Frederic Weisbecker

Subject: [PATCH 4/4] rcu/nocb: Make shrinker iterate only over NOCB CPUs

Callbacks can only be queued as lazy on NOCB CPUs, so iterating over
the NOCB mask is enough for both counting and scanning. Also lock the
mostly uncontended barrier mutex on the counting side in order to keep
rcu_nocb_mask stable.

Signed-off-by: Frederic Weisbecker <[email protected]>
---
kernel/rcu/tree_nocb.h | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index a3dc7465b0b2..185c0c9a60d4 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1319,13 +1319,21 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
int cpu;
unsigned long count = 0;

+ if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+ return 0;
+
+ /* Protect rcu_nocb_mask against concurrent (de-)offloading. */
+ mutex_lock(&rcu_state.barrier_mutex);
+
/* Snapshot count of all CPUs */
- for_each_possible_cpu(cpu) {
+ for_each_cpu(cpu, rcu_nocb_mask) {
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

count += READ_ONCE(rdp->lazy_len);
}

+ mutex_unlock(&rcu_state.barrier_mutex);
+
return count ? count : SHRINK_EMPTY;
}

@@ -1336,6 +1344,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
unsigned long flags;
unsigned long count = 0;

+ if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
+ return 0;
/*
* Protect against concurrent (de-)offloading. Otherwise nocb locking
* may be ignored or imbalanced.
@@ -1343,7 +1353,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
mutex_lock(&rcu_state.barrier_mutex);

/* Snapshot count of all CPUs */
- for_each_possible_cpu(cpu) {
+ for_each_cpu(cpu, rcu_nocb_mask) {
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
int _count;

--
2.34.1

2023-03-22 23:22:24

by Paul E. McKenney

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> The shrinker may run concurrently with callbacks (de-)offloading. As
> such, calling rcu_nocb_lock() is very dangerous because it performs
> conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> lock while rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> an imbalance.
>
> Fix this by protecting against (de-)offloading with the barrier mutex.
>
> Signed-off-by: Frederic Weisbecker <[email protected]>

Good catch!!! A few questions, comments, and speculations below.

Thanx, Paul

> ---
> kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
> 1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index f2280616f9d5..dd9b655ae533 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> unsigned long flags;
> unsigned long count = 0;
>
> + /*
> + * Protect against concurrent (de-)offloading. Otherwise nocb locking
> + * may be ignored or imbalanced.
> + */
> + mutex_lock(&rcu_state.barrier_mutex);

I was worried about this possibly leading to out-of-memory deadlock,
but if I recall correctly, the (de-)offloading process never allocates
memory, so this should be OK?

The other concern was that the (de-)offloading operation might take a
long time, but the usual cause for that is huge numbers of callbacks,
in which case letting them free their memory is not necessarily a bad
strategy.

> +
> /* Snapshot count of all CPUs */
> for_each_possible_cpu(cpu) {
> struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> - int _count = READ_ONCE(rdp->lazy_len);
> + int _count;
> +
> + if (!rcu_rdp_is_offloaded(rdp))
> + continue;

If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?

Or can it contain garbage after a de-offloading operation?

> + _count = READ_ONCE(rdp->lazy_len);
>
> if (_count == 0)
> continue;
> +
> rcu_nocb_lock_irqsave(rdp, flags);
> WRITE_ONCE(rdp->lazy_len, 0);
> rcu_nocb_unlock_irqrestore(rdp, flags);
> @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> if (sc->nr_to_scan <= 0)
> break;
> }
> +
> + mutex_unlock(&rcu_state.barrier_mutex);
> +
> return count ? count : SHRINK_STOP;
> }
>
> --
> 2.34.1
>

2023-03-22 23:22:38

by Paul E. McKenney

Subject: Re: [PATCH 2/4] rcu/nocb: Fix shrinker race against callback enqueuer

On Wed, Mar 22, 2023 at 08:44:54PM +0100, Frederic Weisbecker wrote:
> The shrinker resets the lazy callbacks counter in order to trigger the
> pending lazy queue flush through the rcuog kthread. The counter reset is
> protected by the ->nocb_lock against concurrent accesses... except
> for one of them. Here is a list of the existing synchronized readers/writers:
>
> 1) The first lazy enqueuer (incrementing ->lazy_len to 1) does so under
> ->nocb_lock and ->nocb_bypass_lock.
>
> 2) The further lazy enqueuers (incrementing ->lazy_len above 1) do so
> under ->nocb_bypass_lock _only_.
>
> 3) The lazy flush checks and resets to 0 under ->nocb_lock and
> ->nocb_bypass_lock.
>
> The shrinker protects its ->lazy_len reset against cases 1) and 3) but
> not against 2). As such, setting ->lazy_len to 0 under the ->nocb_lock
> may be cancelled right away by an overwrite from an enqueuer, leading
> rcuog to ignore the flush.
>
> To avoid that, use the proper bypass flush API which takes care of all
> those details.
>
> Signed-off-by: Frederic Weisbecker <[email protected]>

Again, good catch, and this one looks good to me.

So what am I missing? ;-)

Thanx, Paul

> ---
> kernel/rcu/tree_nocb.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index dd9b655ae533..cb57e8312231 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1356,7 +1356,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> continue;
>
> rcu_nocb_lock_irqsave(rdp, flags);
> - WRITE_ONCE(rdp->lazy_len, 0);
> + WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
> rcu_nocb_unlock_irqrestore(rdp, flags);
> wake_nocb_gp(rdp, false);
> sc->nr_to_scan -= _count;
> --
> 2.34.1
>

2023-03-22 23:28:24

by Paul E. McKenney

Subject: Re: [PATCH 3/4] rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker

On Wed, Mar 22, 2023 at 08:44:55PM +0100, Frederic Weisbecker wrote:
> The ->lazy_len is only checked locklessly. Recheck it under the
> ->nocb_lock to avoid spending more time on flushing/waking than
> necessary. The ->lazy_len can still increment concurrently (from 1 to
> infinity), but under the ->nocb_lock we at least know for sure whether
> there are lazy callbacks at all (->lazy_len > 0).
>
> Signed-off-by: Frederic Weisbecker <[email protected]>

This one looks plausible, and might hold the answer to earlier
questions.

Thanx, Paul

> ---
> kernel/rcu/tree_nocb.h | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index cb57e8312231..a3dc7465b0b2 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1350,12 +1350,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> if (!rcu_rdp_is_offloaded(rdp))
> continue;
>
> + if (!READ_ONCE(rdp->lazy_len))
> + continue;
> +
> + rcu_nocb_lock_irqsave(rdp, flags);
> + /*
> + * Recheck under the nocb lock. Since we are not holding the bypass
> + * lock we may still race with increments from the enqueuer but still
> + * we know for sure if there is at least one lazy callback.
> + */
> _count = READ_ONCE(rdp->lazy_len);
> -
> - if (_count == 0)
> + if (!_count) {
> + rcu_nocb_unlock_irqrestore(rdp, flags);
> continue;
> -
> - rcu_nocb_lock_irqsave(rdp, flags);
> + }
> WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
> rcu_nocb_unlock_irqrestore(rdp, flags);
> wake_nocb_gp(rdp, false);
> --
> 2.34.1
>

2023-03-24 00:53:28

by Joel Fernandes

Subject: Re: [PATCH 4/4] rcu/nocb: Make shrinker iterate only over NOCB CPUs

On Wed, Mar 22, 2023 at 08:44:56PM +0100, Frederic Weisbecker wrote:
> Callbacks can only be queued as lazy on NOCB CPUs, so iterating over
> the NOCB mask is enough for both counting and scanning. Also lock the
> mostly uncontended barrier mutex on the counting side in order to keep
> rcu_nocb_mask stable.
>

Reviewed-by: Joel Fernandes (Google) <[email protected]>

thanks,

- Joel


> Signed-off-by: Frederic Weisbecker <[email protected]>
> ---
> kernel/rcu/tree_nocb.h | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index a3dc7465b0b2..185c0c9a60d4 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1319,13 +1319,21 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
> int cpu;
> unsigned long count = 0;
>
> + if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> + return 0;
> +
> + /* Protect rcu_nocb_mask against concurrent (de-)offloading. */
> + mutex_lock(&rcu_state.barrier_mutex);
> +
> /* Snapshot count of all CPUs */
> - for_each_possible_cpu(cpu) {
> + for_each_cpu(cpu, rcu_nocb_mask) {
> struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>
> count += READ_ONCE(rdp->lazy_len);
> }
>
> + mutex_unlock(&rcu_state.barrier_mutex);
> +
> return count ? count : SHRINK_EMPTY;
> }
>
> @@ -1336,6 +1344,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> unsigned long flags;
> unsigned long count = 0;
>
> + if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> + return 0;
> /*
> * Protect against concurrent (de-)offloading. Otherwise nocb locking
> * may be ignored or imbalanced.
> @@ -1343,7 +1353,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> mutex_lock(&rcu_state.barrier_mutex);
>
> /* Snapshot count of all CPUs */
> - for_each_possible_cpu(cpu) {
> + for_each_cpu(cpu, rcu_nocb_mask) {
> struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> int _count;
>
> --
> 2.34.1
>

2023-03-24 00:56:29

by Joel Fernandes

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> > The shrinker may run concurrently with callbacks (de-)offloading. As
> > such, calling rcu_nocb_lock() is very dangerous because it performs
> > conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> > lock while rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> > an imbalance.
> >
> > Fix this by protecting against (de-)offloading with the barrier mutex.
> >
> > Signed-off-by: Frederic Weisbecker <[email protected]>
>
> Good catch!!! A few questions, comments, and speculations below.

Added a few more. ;)

> > ---
> > kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
> > 1 file changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index f2280616f9d5..dd9b655ae533 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > unsigned long flags;
> > unsigned long count = 0;
> >
> > + /*
> > + * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > + * may be ignored or imbalanced.
> > + */
> > + mutex_lock(&rcu_state.barrier_mutex);
>
> I was worried about this possibly leading to out-of-memory deadlock,
> but if I recall correctly, the (de-)offloading process never allocates
> memory, so this should be OK?

Maybe trylock is better then? If we can't make progress, may be better to let
kswapd free memory by other means than blocking on the mutex.

ISTR, from my Android days that there are weird lockdep issues that happen
when locking in a shrinker (due to the 'fake lock' dependency added during
reclaim).

> The other concern was that the (de-)offloading operation might take a
> long time, but the usual cause for that is huge numbers of callbacks,
> in which case letting them free their memory is not necessarily a bad
> strategy.
>
> > +
> > /* Snapshot count of all CPUs */
> > for_each_possible_cpu(cpu) {
> > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > - int _count = READ_ONCE(rdp->lazy_len);
> > + int _count;
> > +
> > + if (!rcu_rdp_is_offloaded(rdp))
> > + continue;
>
> If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?

Did you mean de-offloaded? If it is offloaded, that means nocb is active so
there could be lazy CBs queued. Or did I miss something?

thanks,

- Joel


> Or can it contain garbage after a de-offloading operation?
>
> > + _count = READ_ONCE(rdp->lazy_len);
> >
> > if (_count == 0)
> > continue;
> > +
> > rcu_nocb_lock_irqsave(rdp, flags);
> > WRITE_ONCE(rdp->lazy_len, 0);
> > rcu_nocb_unlock_irqrestore(rdp, flags);
> > @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > if (sc->nr_to_scan <= 0)
> > break;
> > }
> > +
> > + mutex_unlock(&rcu_state.barrier_mutex);
> > +
> > return count ? count : SHRINK_STOP;
> > }
> >
> > --
> > 2.34.1
> >

2023-03-24 01:12:58

by Paul E. McKenney

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Fri, Mar 24, 2023 at 12:55:23AM +0000, Joel Fernandes wrote:
> On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> > > The shrinker may run concurrently with callbacks (de-)offloading. As
> > > such, calling rcu_nocb_lock() is very dangerous because it performs
> > > conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> > > lock while rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> > > an imbalance.
> > >
> > > Fix this by protecting against (de-)offloading with the barrier mutex.
> > >
> > > Signed-off-by: Frederic Weisbecker <[email protected]>
> >
> > Good catch!!! A few questions, comments, and speculations below.
>
> Added a few more. ;)
>
> > > ---
> > > kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
> > > 1 file changed, 16 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > > index f2280616f9d5..dd9b655ae533 100644
> > > --- a/kernel/rcu/tree_nocb.h
> > > +++ b/kernel/rcu/tree_nocb.h
> > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > unsigned long flags;
> > > unsigned long count = 0;
> > >
> > > + /*
> > > + * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > + * may be ignored or imbalanced.
> > > + */
> > > + mutex_lock(&rcu_state.barrier_mutex);
> >
> > I was worried about this possibly leading to out-of-memory deadlock,
> > but if I recall correctly, the (de-)offloading process never allocates
> > memory, so this should be OK?
>
> Maybe trylock is better then? If we can't make progress, may be better to let
> kswapd free memory by other means than blocking on the mutex.
>
> ISTR, from my Android days that there are weird lockdep issues that happen
> when locking in a shrinker (due to the 'fake lock' dependency added during
> reclaim).

This stuff gets tricky quickly. ;-)

> > The other concern was that the (de-)offloading operation might take a
> > long time, but the usual cause for that is huge numbers of callbacks,
> > in which case letting them free their memory is not necessarily a bad
> > strategy.
> >
> > > +
> > > /* Snapshot count of all CPUs */
> > > for_each_possible_cpu(cpu) {
> > > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > - int _count = READ_ONCE(rdp->lazy_len);
> > > + int _count;
> > > +
> > > + if (!rcu_rdp_is_offloaded(rdp))
> > > + continue;
> >
> > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
>
> Did you mean de-offloaded? If it is offloaded, that means nocb is active so
> there could be lazy CBs queued. Or did I miss something?

You are quite right, it should have been de-offloaded for ->lazy_len to be
guaranteed zero.

Thanx, Paul.

> thanks,
>
> - Joel
>
>
> > Or can it contain garbage after a de-offloading operation?
> >
> > > + _count = READ_ONCE(rdp->lazy_len);
> > >
> > > if (_count == 0)
> > > continue;
> > > +
> > > rcu_nocb_lock_irqsave(rdp, flags);
> > > WRITE_ONCE(rdp->lazy_len, 0);
> > > rcu_nocb_unlock_irqrestore(rdp, flags);
> > > @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > if (sc->nr_to_scan <= 0)
> > > break;
> > > }
> > > +
> > > + mutex_unlock(&rcu_state.barrier_mutex);
> > > +
> > > return count ? count : SHRINK_STOP;
> > > }
> > >
> > > --
> > > 2.34.1
> > >

2023-03-24 22:11:38

by Frederic Weisbecker

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > unsigned long flags;
> > unsigned long count = 0;
> >
> > + /*
> > + * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > + * may be ignored or imbalanced.
> > + */
> > + mutex_lock(&rcu_state.barrier_mutex);
>
> I was worried about this possibly leading to out-of-memory deadlock,
> but if I recall correctly, the (de-)offloading process never allocates
> memory, so this should be OK?

Good point. It _should_ be fine but like you, Joel and Hillf pointed out
it's asking for trouble.

We could try Joel's idea to use mutex_trylock() as a best effort, which
should be fine as it's mostly uncontended.
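Something like this (a rough, untested sketch):

	if (!mutex_trylock(&rcu_state.barrier_mutex))
		return SHRINK_STOP;

	/* Snapshot and flush as currently done... */

	mutex_unlock(&rcu_state.barrier_mutex);
	return count ? count : SHRINK_STOP;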

The alternative is to force nocb locking and check the offloading state
right after. So instead of:

rcu_nocb_lock_irqsave(rdp, flags);
//flush stuff
rcu_nocb_unlock_irqrestore(rdp, flags);

Have:

raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
if (!rcu_rdp_is_offloaded(rdp)) {
	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
	continue;
}
//flush stuff
rcu_nocb_unlock_irqrestore(rdp, flags);

But it's not pretty and also disqualifies the last two patches as
rcu_nocb_mask can't be iterated safely anymore.

What do you think?

> > /* Snapshot count of all CPUs */
> > for_each_possible_cpu(cpu) {
> > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > - int _count = READ_ONCE(rdp->lazy_len);
> > + int _count;
> > +
> > + if (!rcu_rdp_is_offloaded(rdp))
> > + continue;
>
> If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
>
> Or can it contain garbage after a de-offloading operation?

If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
Bypass is flushed and disabled atomically early on de-offloading and the
flush resets ->lazy_len.

Thanks.

2023-03-24 23:03:36

by Paul E. McKenney

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > unsigned long flags;
> > > unsigned long count = 0;
> > >
> > > + /*
> > > + * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > + * may be ignored or imbalanced.
> > > + */
> > > + mutex_lock(&rcu_state.barrier_mutex);
> >
> > I was worried about this possibly leading to out-of-memory deadlock,
> > but if I recall correctly, the (de-)offloading process never allocates
> > memory, so this should be OK?
>
> Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> it's asking for trouble.
>
> We could try Joel's idea to use mutex_trylock() as a best effort, which
> should be fine as it's mostly uncontended.
>
> The alternative is to force nocb locking and check the offloading state
> right after. So instead of:
>
> rcu_nocb_lock_irqsave(rdp, flags);
> //flush stuff
> rcu_nocb_unlock_irqrestore(rdp, flags);
>
> Have:
>
> raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
> if (!rcu_rdp_is_offloaded(rdp)) {
> 	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
> 	continue;
> }
> //flush stuff
> rcu_nocb_unlock_irqrestore(rdp, flags);
>
> But it's not pretty and also disqualifies the last two patches as
> rcu_nocb_mask can't be iterated safely anymore.
>
> What do you think?

The mutex_trylock() approach does have the advantage of simplicity,
and as you say should do well given low contention.

Which reminds me, what sort of test strategy did you have in mind?
Memory exhaustion can have surprising effects.

> > > /* Snapshot count of all CPUs */
> > > for_each_possible_cpu(cpu) {
> > > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > - int _count = READ_ONCE(rdp->lazy_len);
> > > + int _count;
> > > +
> > > + if (!rcu_rdp_is_offloaded(rdp))
> > > + continue;
> >
> > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> >
> > Or can it contain garbage after a de-offloading operation?
>
> If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> Bypass is flushed and disabled atomically early on de-offloading and the
> flush resets ->lazy_len.

Whew! At the moment, I don't feel strongly about whether or not
the following code should (1) read the value, (2) warn on non-zero,
(3) assume zero without reading, or (4) some other option that is not
occurring to me. Your choice!

Thanx, Paul

2023-03-26 20:07:07

by Frederic Weisbecker

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Fri, Mar 24, 2023 at 03:51:54PM -0700, Paul E. McKenney wrote:
> On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> > On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > > unsigned long flags;
> > > > unsigned long count = 0;
> > > >
> > > > + /*
> > > > + * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > > + * may be ignored or imbalanced.
> > > > + */
> > > > + mutex_lock(&rcu_state.barrier_mutex);
> > >
> > > I was worried about this possibly leading to out-of-memory deadlock,
> > > but if I recall correctly, the (de-)offloading process never allocates
> > > memory, so this should be OK?
> >
> > Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> > it's asking for trouble.
> >
> > We could try Joel's idea to use mutex_trylock() as a best effort, which
> > should be fine as it's mostly uncontended.
> >
> > The alternative is to force nocb locking and check the offloading state
> > right after. So instead of:
> >
> > rcu_nocb_lock_irqsave(rdp, flags);
> > //flush stuff
> > rcu_nocb_unlock_irqrestore(rdp, flags);
> >
> > Have:
> >
> > raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
> > if (!rcu_rdp_is_offloaded(rdp)) {
> > 	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
> > 	continue;
> > }
> > //flush stuff
> > rcu_nocb_unlock_irqrestore(rdp, flags);
> >
> > But it's not pretty and also disqualifies the last two patches as
> > rcu_nocb_mask can't be iterated safely anymore.
> >
> > What do you think?
>
> The mutex_trylock() approach does have the advantage of simplicity,
> and as you say should do well given low contention.
>
> Which reminds me, what sort of test strategy did you have in mind?
> Memory exhaustion can have surprising effects.

The best I can do is to trigger the count and scan callbacks through
the shrinker debugfs and see if it crashes or not :-)

>
> > > > /* Snapshot count of all CPUs */
> > > > for_each_possible_cpu(cpu) {
> > > > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > - int _count = READ_ONCE(rdp->lazy_len);
> > > > + int _count;
> > > > +
> > > > + if (!rcu_rdp_is_offloaded(rdp))
> > > > + continue;
> > >
> > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > >
> > > Or can it contain garbage after a de-offloading operation?
> >
> > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > Bypass is flushed and disabled atomically early on de-offloading and the
> > flush resets ->lazy_len.
>
> Whew! At the moment, I don't feel strongly about whether or not
> the following code should (1) read the value, (2) warn on non-zero,
> (3) assume zero without reading, or (4) some other option that is not
> occurring to me. Your choice!

(2) looks like a good idea!
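
i.e. something like this in the scan loop (untested sketch):

	if (!rcu_rdp_is_offloaded(rdp)) {
		WARN_ON_ONCE(READ_ONCE(rdp->lazy_len));
		continue;
	}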

Thanks.

2023-03-26 22:05:35

by Paul E. McKenney

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> On Fri, Mar 24, 2023 at 03:51:54PM -0700, Paul E. McKenney wrote:
> > On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> > > On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > > > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > > > unsigned long flags;
> > > > > unsigned long count = 0;
> > > > >
> > > > > + /*
> > > > > + * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > > > + * may be ignored or imbalanced.
> > > > > + */
> > > > > + mutex_lock(&rcu_state.barrier_mutex);
> > > >
> > > > I was worried about this possibly leading to out-of-memory deadlock,
> > > > but if I recall correctly, the (de-)offloading process never allocates
> > > > memory, so this should be OK?
> > >
> > > Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> > > it's asking for trouble.
> > >
> > > We could try Joel's idea to use mutex_trylock() as a best effort, which
> > > should be fine as it's mostly uncontended.
> > >
> > > The alternative is to force nocb locking and check the offloading state
> > > right after. So instead of:
> > >
> > > rcu_nocb_lock_irqsave(rdp, flags);
> > > //flush stuff
> > > rcu_nocb_unlock_irqrestore(rdp, flags);
> > >
> > > Have:
> > >
> > > raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
> > > if (!rcu_rdp_is_offloaded(rdp)) {
> > > 	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
> > > 	continue;
> > > }
> > > //flush stuff
> > > rcu_nocb_unlock_irqrestore(rdp, flags);
> > >
> > > But it's not pretty and also disqualifies the last two patches as
> > > rcu_nocb_mask can't be iterated safely anymore.
> > >
> > > What do you think?
> >
> > The mutex_trylock() approach does have the advantage of simplicity,
> > and as you say should do well given low contention.
> >
> > Which reminds me, what sort of test strategy did you have in mind?
> > Memory exhaustion can have surprising effects.
>
> The best I can do is to trigger the count and scan callbacks through
> the shrinker debugfs and see if it crashes or not :-)

Sounds like a good start. Maybe also a good finish? ;-)

> > > > > /* Snapshot count of all CPUs */
> > > > > for_each_possible_cpu(cpu) {
> > > > > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > - int _count = READ_ONCE(rdp->lazy_len);
> > > > > + int _count;
> > > > > +
> > > > > + if (!rcu_rdp_is_offloaded(rdp))
> > > > > + continue;
> > > >
> > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > >
> > > > Or can it contain garbage after a de-offloading operation?
> > >
> > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > flush resets ->lazy_len.
> >
> > Whew! At the moment, I don't feel strongly about whether or not
> > the following code should (1) read the value, (2) warn on non-zero,
> > (3) assume zero without reading, or (4) some other option that is not
> > occurring to me. Your choice!
>
> (2) looks like a good idea!

Sounds good to me!

Thanx, Paul

2023-03-29 16:31:41

by Frederic Weisbecker

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Sun, Mar 26, 2023 at 02:45:18PM -0700, Paul E. McKenney wrote:
> On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> > > > > > /* Snapshot count of all CPUs */
> > > > > > for_each_possible_cpu(cpu) {
> > > > > > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > > - int _count = READ_ONCE(rdp->lazy_len);
> > > > > > + int _count;
> > > > > > +
> > > > > > + if (!rcu_rdp_is_offloaded(rdp))
> > > > > > + continue;
> > > > >
> > > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > >
> > > > > Or can it contain garbage after a de-offloading operation?
> > > >
> > > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > > flush resets ->lazy_len.
> > >
> > > Whew! At the moment, I don't feel strongly about whether or not
> > > the following code should (1) read the value, (2) warn on non-zero,
> > > (3) assume zero without reading, or (4) some other option that is not
> > > occurring to me. Your choice!
> >
> > (2) looks like a good idea!
>
> Sounds good to me!

So since we now iterate rcu_nocb_mask after the patchset, there is no more
deoffloaded rdp to check. Meanwhile I put a WARN in the new series making
sure that an rdp in rcu_nocb_mask is also offloaded (heh!)
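
Roughly something like this in the new loops (sketch):

	for_each_cpu(cpu, rcu_nocb_mask) {
		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

		if (WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp)))
			continue;
		...
	}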

2023-03-29 20:49:52

by Paul E. McKenney

Subject: Re: [PATCH 1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

On Wed, Mar 29, 2023 at 06:07:58PM +0200, Frederic Weisbecker wrote:
> On Sun, Mar 26, 2023 at 02:45:18PM -0700, Paul E. McKenney wrote:
> > On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> > > > > > > /* Snapshot count of all CPUs */
> > > > > > > for_each_possible_cpu(cpu) {
> > > > > > > struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > > > - int _count = READ_ONCE(rdp->lazy_len);
> > > > > > > + int _count;
> > > > > > > +
> > > > > > > + if (!rcu_rdp_is_offloaded(rdp))
> > > > > > > + continue;
> > > > > >
> > > > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > > >
> > > > > > Or can it contain garbage after a de-offloading operation?
> > > > >
> > > > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > > > flush resets ->lazy_len.
> > > >
> > > > Whew! At the moment, I don't feel strongly about whether or not
> > > > the following code should (1) read the value, (2) warn on non-zero,
> > > > (3) assume zero without reading, or (4) some other option that is not
> > > > occurring to me. Your choice!
> > >
> > > (2) looks like a good idea!
> >
> > Sounds good to me!
>
> So since we now iterate rcu_nocb_mask after the patchset, there is no more
> deoffloaded rdp to check. Meanwhile I put a WARN in the new series making
> sure that an rdp in rcu_nocb_mask is also offloaded (heh!)

Sounds good, thank you!

Thanx, Paul