2022-12-28 06:29:58

by David Rientjes

Subject: [patch] mm, slab: periodically resched in drain_freelist()

drain_freelist() can be called with a very large number of slabs to free,
such as from kmem_cache_shrink(), or during periodic reaping, depending on
the settings of the slab cache.

If there is a potentially long list of slabs to drain, call cond_resched()
periodically so that we don't hog the CPU for too long.

Signed-off-by: David Rientjes <[email protected]>
---
mm/slab.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
 		raw_spin_unlock_irq(&n->list_lock);
 		slab_destroy(cache, slab);
 		nr_freed++;
+
+		cond_resched();
 	}
 out:
 	return nr_freed;
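
The placement matters: cond_resched() may sleep, so it must not run with
n->list_lock held; the patch puts it after raw_spin_unlock_irq() and
slab_destroy(), where no locks are held. A minimal sketch of the same
pattern follows, with hypothetical names (struct item, drain_items(), and
the kfree() stand-in for slab_destroy() are illustrations, not the actual
slab internals):

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
	struct list_head list;
};

/* Drain up to @tofree entries from @items, yielding between passes. */
static int drain_items(spinlock_t *lock, struct list_head *items,
		       int tofree)
{
	int nr_freed = 0;

	while (nr_freed < tofree) {
		struct item *it;

		spin_lock_irq(lock);
		if (list_empty(items)) {
			spin_unlock_irq(lock);
			break;
		}
		/* Unlink one entry while the list is protected. */
		it = list_first_entry(items, struct item, list);
		list_del(&it->list);
		spin_unlock_irq(lock);

		/* Free it outside the lock; freeing may be slow. */
		kfree(it);
		nr_freed++;

		/*
		 * No locks are held here, so it is safe to yield the
		 * CPU if a reschedule is pending.
		 */
		cond_resched();
	}
	return nr_freed;
}

On a kernel without full preemption this bounds how long the drain loop
can run without offering the scheduler a chance to pick another task.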


2022-12-29 14:03:41

by Hyeonggon Yoo

Subject: Re: [patch] mm, slab: periodically resched in drain_freelist()

On Tue, Dec 27, 2022 at 10:05:48PM -0800, David Rientjes wrote:
> drain_freelist() can be called with a very large number of slabs to free,
> such as from kmem_cache_shrink(), or during periodic reaping, depending on
> the settings of the slab cache.
>
> If there is a potentially long list of slabs to drain, call cond_resched()
> periodically so that we don't hog the CPU for too long.
>
> Signed-off-by: David Rientjes <[email protected]>
> ---
> mm/slab.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/slab.c b/mm/slab.c
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
>  		raw_spin_unlock_irq(&n->list_lock);
>  		slab_destroy(cache, slab);
>  		nr_freed++;
> +
> +		cond_resched();
>  	}
>  out:
>  	return nr_freed;

Looks good to me,
Reviewed-by: Hyeonggon Yoo <[email protected]>

--
Thanks,
Hyeonggon

2023-01-02 08:35:12

by Vlastimil Babka

Subject: Re: [patch] mm, slab: periodically resched in drain_freelist()

On 12/28/22 07:05, David Rientjes wrote:
> drain_freelist() can be called with a very large number of slabs to free,
> such as from kmem_cache_shrink(), or during periodic reaping, depending on
> the settings of the slab cache.
>
> If there is a potentially long list of slabs to drain, call cond_resched()
> periodically so that we don't hog the CPU for too long.
>
> Signed-off-by: David Rientjes <[email protected]>

Thanks, added to slab/for-6.2-rc3/fixes

> ---
> mm/slab.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/slab.c b/mm/slab.c
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
>  		raw_spin_unlock_irq(&n->list_lock);
>  		slab_destroy(cache, slab);
>  		nr_freed++;
> +
> +		cond_resched();
>  	}
>  out:
>  	return nr_freed;