2022-06-06 03:45:36

by Daniel Vetter

Subject: [PATCH 1/3] mm/page_alloc: use might_alloc()

... instead of open coding it. Completely equivalent code, just
a notch more meaningful when reading.
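
For reference, might_alloc() from include/linux/sched/mm.h expands to
exactly the open-coded sequence removed here (gfpflags_allow_blocking()
tests __GFP_DIRECT_RECLAIM):

	static inline void might_alloc(gfp_t gfp_mask)
	{
		/* lockdep-only annotation of a potential fs reclaim recursion */
		fs_reclaim_acquire(gfp_mask);
		fs_reclaim_release(gfp_mask);

		/* an allocation that can direct-reclaim may sleep */
		might_sleep_if(gfpflags_allow_blocking(gfp_mask));
	}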

Signed-off-by: Daniel Vetter <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: [email protected]
---
mm/page_alloc.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2db95780e003..277774d170cb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
*alloc_flags |= ALLOC_CPUSET;
}

- fs_reclaim_acquire(gfp_mask);
- fs_reclaim_release(gfp_mask);
-
- might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
+ might_alloc(gfp_mask);

if (should_fail_alloc_page(gfp_mask, order))
return false;
--
2.36.0


2022-06-06 03:49:08

by Daniel Vetter

Subject: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()

It only does a might_sleep_if(gfpflags_allow_blocking(flags)) check,
which is already covered by the might_alloc() in slab_pre_alloc_hook().
And all callers of cache_alloc_debugcheck_before() call that hook
beforehand already.
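
For context, an abridged sketch of slab_pre_alloc_hook() from mm/slab.h
as of this series (the tail of the function is elided):

	static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
							     struct list_lru *lru,
							     struct obj_cgroup **objcgp,
							     size_t size, gfp_t flags)
	{
		flags &= gfp_allowed_mask;

		/* subsumes the might_sleep_if() deleted below, and more */
		might_alloc(flags);

		if (should_failslab(s, flags))
			return NULL;
		...
	}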

Signed-off-by: Daniel Vetter <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: [email protected]
---
mm/slab.c | 10 ----------
1 file changed, 10 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index b04e40078bdf..75779ac5f5ba 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2973,12 +2973,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
return ac->entry[--ac->avail];
}

-static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
- gfp_t flags)
-{
- might_sleep_if(gfpflags_allow_blocking(flags));
-}
-
#if DEBUG
static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
gfp_t flags, void *objp, unsigned long caller)
@@ -3219,7 +3213,6 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
if (unlikely(ptr))
goto out_hooks;

- cache_alloc_debugcheck_before(cachep, flags);
local_irq_save(save_flags);

if (nodeid == NUMA_NO_NODE)
@@ -3304,7 +3297,6 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
if (unlikely(objp))
goto out;

- cache_alloc_debugcheck_before(cachep, flags);
local_irq_save(save_flags);
objp = __do_cache_alloc(cachep, flags);
local_irq_restore(save_flags);
@@ -3541,8 +3533,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
if (!s)
return 0;

- cache_alloc_debugcheck_before(s, flags);
-
local_irq_disable();
for (i = 0; i < size; i++) {
void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
--
2.36.0

2022-06-06 05:02:59

by Daniel Vetter

Subject: [PATCH 3/3] mm/mempool: use might_alloc()

Mempools are generally used with GFP_NOIO, so this won't benefit all
that much, because might_alloc() currently only validates the GFP_NOFS
constraint. But it does validate against mmu notifier pte zapping, which
might catch some drivers doing really silly things, plus it's a bit more
meaningful in what we're checking for here.
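
To illustrate: might_alloc() calls fs_reclaim_acquire(), which (in an
abridged sketch of the mm/page_alloc.c code as of this series) only
takes the fs_reclaim lockdep map for __GFP_FS allocations, but
cross-checks the mmu notifier invalidation map for anything that may
direct-reclaim:

	void fs_reclaim_acquire(gfp_t gfp_mask)
	{
		gfp_mask = current_gfp_context(gfp_mask);

		if (__need_reclaim(gfp_mask)) {
			/* GFP_NOIO and GFP_NOFS lack __GFP_FS, so mempool
			 * users skip this part ... */
			if (gfp_mask & __GFP_FS)
				__fs_reclaim_acquire(_RET_IP_);
	#ifdef CONFIG_MMU_NOTIFIER
			/* ... but this still fires, catching allocations
			 * done from mmu notifier pte zapping paths */
			lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
			lock_map_release(&__mmu_notifier_invalidate_range_start_map);
	#endif
		}
	}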

Signed-off-by: Daniel Vetter <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: [email protected]
---
mm/mempool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mempool.c b/mm/mempool.c
index b933d0fc21b8..96488b13a1ef 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -379,7 +379,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
gfp_t gfp_temp;

VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
- might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
+ might_alloc(gfp_mask);

gfp_mask |= __GFP_NOMEMALLOC; /* don't allocate emergency reserves */
gfp_mask |= __GFP_NORETRY; /* don't loop in __alloc_pages */
--
2.36.0

2022-06-08 04:54:15

by David Hildenbrand

Subject: Re: [PATCH 1/3] mm/page_alloc: use might_alloc()

On 05.06.22 17:25, Daniel Vetter wrote:
> ... instead of open coding it. Completely equivalent code, just
> a notch more meaningful when reading.
>
> Signed-off-by: Daniel Vetter <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]
> ---
> mm/page_alloc.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2db95780e003..277774d170cb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> *alloc_flags |= ALLOC_CPUSET;
> }
>
> - fs_reclaim_acquire(gfp_mask);
> - fs_reclaim_release(gfp_mask);
> -
> - might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> + might_alloc(gfp_mask);
>
> if (should_fail_alloc_page(gfp_mask, order))
> return false;

Reviewed-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb

2022-06-08 05:20:37

by David Hildenbrand

Subject: Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()

On 05.06.22 17:25, Daniel Vetter wrote:
> It only does a might_sleep_if(gfpflags_allow_blocking(flags)) check,
> which is already covered by the might_alloc() in slab_pre_alloc_hook().
> And all callers of cache_alloc_debugcheck_before() call that hook
> beforehand already.
>
> Signed-off-by: Daniel Vetter <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Pekka Enberg <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Roman Gushchin <[email protected]>
> Cc: [email protected]
> ---

LGTM

Reviewed-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb

2022-06-13 03:11:20

by David Rientjes

Subject: Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()

On Sun, 5 Jun 2022, Daniel Vetter wrote:

> It only does a might_sleep_if(gfpflags_allow_blocking(flags)) check,
> which is already covered by the might_alloc() in slab_pre_alloc_hook().
> And all callers of cache_alloc_debugcheck_before() call that hook
> beforehand already.
>
> Signed-off-by: Daniel Vetter <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Pekka Enberg <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Roman Gushchin <[email protected]>
> Cc: [email protected]

Acked-by: David Rientjes <[email protected]>

2022-06-13 03:27:57

by Muchun Song

Subject: Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()

On Sun, Jun 05, 2022 at 05:25:38PM +0200, Daniel Vetter wrote:
> It only does a might_sleep_if(gfpflags_allow_blocking(flags)) check,
> which is already covered by the might_alloc() in slab_pre_alloc_hook().
> And all callers of cache_alloc_debugcheck_before() call that hook
> beforehand already.
>
> Signed-off-by: Daniel Vetter <[email protected]>

Nice cleanup.

Reviewed-by: Muchun Song <[email protected]>

Thanks.

2022-06-14 13:33:30

by Vlastimil Babka (SUSE)

Subject: Re: [PATCH 3/3] mm/mempool: use might_alloc()

On 6/5/22 17:25, Daniel Vetter wrote:
> Mempools are generally used with GFP_NOIO, so this won't benefit all
> that much, because might_alloc() currently only validates the GFP_NOFS
> constraint. But it does validate against mmu notifier pte zapping, which
> might catch some drivers doing really silly things, plus it's a bit more
> meaningful in what we're checking for here.
>
> Signed-off-by: Daniel Vetter <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]

Reviewed-by: Vlastimil Babka <[email protected]>

> ---
> mm/mempool.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mempool.c b/mm/mempool.c
> index b933d0fc21b8..96488b13a1ef 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -379,7 +379,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
> gfp_t gfp_temp;
>
> VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
> - might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> + might_alloc(gfp_mask);
>
> gfp_mask |= __GFP_NOMEMALLOC; /* don't allocate emergency reserves */
> gfp_mask |= __GFP_NORETRY; /* don't loop in __alloc_pages */

2022-06-14 13:35:52

by Vlastimil Babka

Subject: Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()

On 6/5/22 17:25, Daniel Vetter wrote:
> It only does a might_sleep_if(gfpflags_allow_blocking(flags)) check,
> which is already covered by the might_alloc() in slab_pre_alloc_hook().
> And all callers of cache_alloc_debugcheck_before() call that hook
> beforehand already.
>
> Signed-off-by: Daniel Vetter <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Pekka Enberg <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Roman Gushchin <[email protected]>
> Cc: [email protected]

Thanks, added to slab/for-5.20/cleanup as it's slab-specific and independent
of 1/3 and 3/3.

> ---
> mm/slab.c | 10 ----------
> 1 file changed, 10 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index b04e40078bdf..75779ac5f5ba 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2973,12 +2973,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
> return ac->entry[--ac->avail];
> }
>
> -static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
> - gfp_t flags)
> -{
> - might_sleep_if(gfpflags_allow_blocking(flags));
> -}
> -
> #if DEBUG
> static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
> gfp_t flags, void *objp, unsigned long caller)
> @@ -3219,7 +3213,6 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
> if (unlikely(ptr))
> goto out_hooks;
>
> - cache_alloc_debugcheck_before(cachep, flags);
> local_irq_save(save_flags);
>
> if (nodeid == NUMA_NO_NODE)
> @@ -3304,7 +3297,6 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
> if (unlikely(objp))
> goto out;
>
> - cache_alloc_debugcheck_before(cachep, flags);
> local_irq_save(save_flags);
> objp = __do_cache_alloc(cachep, flags);
> local_irq_restore(save_flags);
> @@ -3541,8 +3533,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> if (!s)
> return 0;
>
> - cache_alloc_debugcheck_before(s, flags);
> -
> local_irq_disable();
> for (i = 0; i < size; i++) {
> void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);

2022-06-14 13:55:19

by Vlastimil Babka (SUSE)

Subject: Re: [PATCH 1/3] mm/page_alloc: use might_alloc()

On 6/5/22 17:25, Daniel Vetter wrote:
> ... instead of open coding it. Completely equivalent code, just
> a notch more meaningful when reading.
>
> Signed-off-by: Daniel Vetter <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]

Reviewed-by: Vlastimil Babka <[email protected]>

> ---
> mm/page_alloc.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2db95780e003..277774d170cb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> *alloc_flags |= ALLOC_CPUSET;
> }
>
> - fs_reclaim_acquire(gfp_mask);
> - fs_reclaim_release(gfp_mask);
> -
> - might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> + might_alloc(gfp_mask);
>
> if (should_fail_alloc_page(gfp_mask, order))
> return false;