Some struct slab fields are initialized differently for SLAB and SLUB. With
SLUB now the only remaining allocator, we can simplify.
Reviewed-by: Kees Cook <[email protected]>
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
---
mm/kfence/core.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 3872528d0963..8350f5c06f2e 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -463,11 +463,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	/* Set required slab fields. */
 	slab = virt_to_slab((void *)meta->addr);
 	slab->slab_cache = cache;
-#if defined(CONFIG_SLUB)
 	slab->objects = 1;
-#elif defined(CONFIG_SLAB)
-	slab->s_mem = addr;
-#endif
 
 	/* Memory initialization. */
 	set_canary(meta);
--
2.42.1
On Mon, Nov 20, 2023 at 07:34:15PM +0100, Vlastimil Babka wrote:
> Some struct slab fields are initialized differently for SLAB and SLUB so
> we can simplify with SLUB being the only remaining allocator.
Looks good to me,
Reviewed-by: Hyeonggon Yoo <[email protected]>