2019-01-23 21:21:53

by Josh Elsasser

[permalink] [raw]
Subject: [PATCH net] rhashtable: avoid reschedule loop after rapid growth and shrink

When running workloads with large bursts of fragmented packets, we've seen
a few machines stuck returning -EEXIST from rht_shrink() and endlessly
rescheduling their hash table's deferred work, pegging a CPU core.

Root cause is commit da20420f83ea ("rhashtable: Add nested tables"), which
stops ignoring the return code of rhashtable_shrink() and the reallocs
used to grow the hashtable. This uncovers a bug in the shrink logic where
"needs to shrink" check runs against the last table but the actual shrink
operation runs on the first bucket_table in the hashtable (see below):

+-------+ +--------------+ +---------------+
| ht | | "first" tbl | | "last" tbl |
| - tbl ---> | - future_tbl ---------> | - future_tbl ---> NULL
+-------+ +--------------+ +---------------+
^^^ ^^^
used by rhashtable_shrink() used by rht_shrink_below_30()

A rehash then stalls out when both the last table needs to shrink, the
first table has more elements than the target size, but rht_shrink() hits
a non-NULL future_tbl and returns -EEXIST. This skips the item rehashing
and kicks off a reschedule loop, as no forward progress can be made while
the rhashtable needs to shrink.

Extend rhashtable_shrink() with a "tbl" param to avoid endless exit-and-
reschedules after hitting the EEXIST, allowing it to check a future_tbl
pointer that can actually be non-NULL and make forward progress when the
hashtable needs to shrink.

Fixes: da20420f83ea ("rhashtable: Add nested tables")
Signed-off-by: Josh Elsasser <[email protected]>
---
lib/rhashtable.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 852ffa5160f1..98e91f9544fa 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -377,9 +377,9 @@ static int rhashtable_rehash_alloc(struct rhashtable *ht,
* It is valid to have concurrent insertions and deletions protected by per
* bucket locks or concurrent RCU protected lookups and traversals.
*/
-static int rhashtable_shrink(struct rhashtable *ht)
+static int rhashtable_shrink(struct rhashtable *ht,
+ struct bucket_table *old_tbl)
{
- struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
unsigned int nelems = atomic_read(&ht->nelems);
unsigned int size = 0;

@@ -412,7 +412,7 @@ static void rht_deferred_worker(struct work_struct *work)
if (rht_grow_above_75(ht, tbl))
err = rhashtable_rehash_alloc(ht, tbl, tbl->size * 2);
else if (ht->p.automatic_shrinking && rht_shrink_below_30(ht, tbl))
- err = rhashtable_shrink(ht);
+ err = rhashtable_shrink(ht, tbl);
else if (tbl->nest)
err = rhashtable_rehash_alloc(ht, tbl, tbl->size);

--
2.19.1



2019-01-24 03:09:13

by Herbert Xu

[permalink] [raw]
Subject: [v2 PATCH] rhashtable: Still do rehash when we get EEXIST

On Wed, Jan 23, 2019 at 01:17:58PM -0800, Josh Elsasser wrote:
> When running workloads with large bursts of fragmented packets, we've seen
> a few machines stuck returning -EEXIST from rht_shrink() and endlessly
> rescheduling their hash table's deferred work, pegging a CPU core.
>
> Root cause is commit da20420f83ea ("rhashtable: Add nested tables"), which
> stops ignoring the return code of rhashtable_shrink() and the reallocs
> used to grow the hashtable. This uncovers a bug in the shrink logic where
> "needs to shrink" check runs against the last table but the actual shrink
> operation runs on the first bucket_table in the hashtable (see below):
>
> +-------+ +--------------+ +---------------+
> | ht | | "first" tbl | | "last" tbl |
> | - tbl ---> | - future_tbl ---------> | - future_tbl ---> NULL
> +-------+ +--------------+ +---------------+
> ^^^ ^^^
> used by rhashtable_shrink() used by rht_shrink_below_30()
>
> A rehash then stalls out when both the last table needs to shrink, the
> first table has more elements than the target size, but rht_shrink() hits
> a non-NULL future_tbl and returns -EEXIST. This skips the item rehashing
> and kicks off a reschedule loop, as no forward progress can be made while
> the rhashtable needs to shrink.
>
> Extend rhashtable_shrink() with a "tbl" param to avoid endless exit-and-
> reschedules after hitting the EEXIST, allowing it to check a future_tbl
> pointer that can actually be non-NULL and make forward progress when the
> hashtable needs to shrink.
>
> Fixes: da20420f83ea ("rhashtable: Add nested tables")
> Signed-off-by: Josh Elsasser <[email protected]>

Thanks for catching this!

Although I think we should fix this in a different way. The problem
here is that the shrink cannot proceed because there was a previous
rehash that is still incomplete. We should wait for its completion
and then reattempt a shrinnk should it still be necessary.

So something like this:

---8<---
As it stands if a shrink is delayed because of an outstanding
rehash, we will go into a rescheduling loop without ever doing
the rehash.

This patch fixes this by still carrying out the rehash and then
rescheduling so that we can shrink after the completion of the
rehash should it still be necessary.

The return value of EEXIST captures this case and other cases
(e.g., another thread expanded/rehashed the table at the same
time) where we should still proceed with the rehash.

Fixes: da20420f83ea ("rhashtable: Add nested tables")
Reported-by: Josh Elsasser <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 852ffa5160f1..4edcf3310513 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -416,8 +416,12 @@ static void rht_deferred_worker(struct work_struct *work)
else if (tbl->nest)
err = rhashtable_rehash_alloc(ht, tbl, tbl->size);

- if (!err)
- err = rhashtable_rehash_table(ht);
+ if (!err || err == -EEXIST) {
+ int nerr;
+
+ nerr = rhashtable_rehash_table(ht);
+ err = err ?: nerr;
+ }

mutex_unlock(&ht->mutex);

--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-01-24 03:42:54

by Josh Elsasser

[permalink] [raw]
Subject: Re: [v2 PATCH] rhashtable: Still do rehash when we get EEXIST

On Jan 23, 2019, at 7:08 PM, Herbert Xu <[email protected]> wrote:

> Thanks for catching this!
>
> Although I think we should fix this in a different way. The problem
> here is that the shrink cannot proceed because there was a previous
> rehash that is still incomplete. We should wait for its completion
> and then reattempt a shrinnk should it still be necessary.
>
> So something like this:

SGTM.

I can't test this right now because our VM server's down after a power
outage this evening, but I tried a similar patch that swallowed the
-EEXIST err and even with that oversight the hashtable dodged the
reschedule loop.

- Josh

2019-01-26 22:03:27

by Josh Elsasser

[permalink] [raw]
Subject: Re: [v2 PATCH] rhashtable: Still do rehash when we get EEXIST

On Jan 23, 2019, at 7:40 PM, Josh Elsasser <[email protected]> wrote:
> On Jan 23, 2019, at 7:08 PM, Herbert Xu <[email protected]> wrote:
>
>> Thanks for catching this!
>>
>> Although I think we should fix this in a different way. The problem
>> here is that the shrink cannot proceed because there was a previous
>> rehash that is still incomplete. We should wait for its completion
>> and then reattempt a shrinnk should it still be necessary.
>
> I can't test this right now because our VM server's down

Got one of the poor little reproducer VM's back up and running and loaded
up this patch. Works like a charm. For the v2 PATCH, can add my:

Tested-by: Josh Elsasser <[email protected]>

2019-03-20 22:40:21

by Josh Hunt

[permalink] [raw]
Subject: Re: [v2 PATCH] rhashtable: Still do rehash when we get EEXIST

On Sat, Jan 26, 2019 at 2:03 PM Josh Elsasser <[email protected]> wrote:
>
> On Jan 23, 2019, at 7:40 PM, Josh Elsasser <[email protected]> wrote:
> > On Jan 23, 2019, at 7:08 PM, Herbert Xu <[email protected]> wrote:
> >
> >> Thanks for catching this!
> >>
> >> Although I think we should fix this in a different way. The problem
> >> here is that the shrink cannot proceed because there was a previous
> >> rehash that is still incomplete. We should wait for its completion
> >> and then reattempt a shrinnk should it still be necessary.
> >
> > I can't test this right now because our VM server's down
>
> Got one of the poor little reproducer VM's back up and running and loaded
> up this patch. Works like a charm. For the v2 PATCH, can add my:
>
> Tested-by: Josh Elsasser <[email protected]>

Trying again... gmail sent HTML mail first time.

Herbert

We're seeing this pretty regularly on 4.14 LTS kernels. I didn't see
your change in any of the regular trees. Are there plans to submit
this? If so, can it get queued up for 4.14 stable too?

Thanks!
--
Josh