2024-02-23 18:27:53

by Vlastimil Babka

Subject: [PATCH v2 0/3] cleanup of SLAB_ flags

This started with the report that the SLAB_MEM_SPREAD flag is dead (Patch 1).
Then, in the alloc profiling series, we realized it's too easy to mistakenly
reuse an existing SLAB_ flag's value when defining a new one. Thus, let the
compiler assign the values for us via a new helper enum (Patch 2). When
checking whether more flags are dead or could be removed, I didn't spot
any, but found that the SLAB_KASAN handling of preventing cache merging
can be simplified, since we now have an explicit SLAB_NO_MERGE (Patch 3).
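
To illustrate the hazard with a made-up example (a hypothetical SLAB_NEW
flag, not from the actual history): with hand-maintained values, nothing
stops a new flag from silently sharing a bit with an existing one:

	/* existing flag */
	#define SLAB_NO_MERGE	((slab_flags_t __force)0x01000000U)
	/* new flag added later, accidentally reusing the same bit value */
	#define SLAB_NEW	((slab_flags_t __force)0x01000000U)

The compiler accepts this without complaint; with the enum, each flag's
bit is assigned automatically, so such a collision cannot happen.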

The SLAB_MEM_SPREAD flag is now marked as unused and for removal, and
has a value of 0 so it's a no-op. Patches to remove its usage can/will
be submitted to respective subsystems independently of this series - the
flag is already dead as of v6.8-rc1 with SLAB removed. The removal of
dead cpuset_do_slab_mem_spread() code can also be submitted
independently.
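
For reference, removing the flag from a user is a one-liner; a
hypothetical example (the cache name and call site are made up):

	-	cache = kmem_cache_create("foo_inode", sizeof(struct foo_inode), 0,
	-				  SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD, NULL);
	+	cache = kmem_cache_create("foo_inode", sizeof(struct foo_inode), 0,
	+				  SLAB_RECLAIM_ACCOUNT, NULL);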

Signed-off-by: Vlastimil Babka <[email protected]>
---
Changes in v2:
- Collect R-b, T-b (thanks!)
- Unify all disabled flags' values to a sparse-happy zero with a new macro (lkp/sparse).
- Rename __SF_BIT to __SLAB_FLAG_BIT (Roman Gushchin)
- Reword kasan_cache_create() comment (Andrey Konovalov)
- Link to v1: https://lore.kernel.org/r/[email protected]

---
Vlastimil Babka (3):
mm, slab: deprecate SLAB_MEM_SPREAD flag
mm, slab: use an enum to define SLAB_ cache creation flags
mm, slab, kasan: replace kasan_never_merge() with SLAB_NO_MERGE

include/linux/kasan.h | 6 ----
include/linux/slab.h | 97 ++++++++++++++++++++++++++++++++++++---------------
mm/kasan/generic.c | 22 ++++--------
mm/slab.h | 1 -
mm/slab_common.c | 2 +-
mm/slub.c | 6 ++--
6 files changed, 79 insertions(+), 55 deletions(-)
---
base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d
change-id: 20240219-slab-cleanup-flags-c864415ecc8e

Best regards,
--
Vlastimil Babka <[email protected]>



2024-02-23 18:27:56

by Vlastimil Babka

Subject: [PATCH v2 3/3] mm, slab, kasan: replace kasan_never_merge() with SLAB_NO_MERGE

The SLAB_KASAN flag prevents merging of caches in some configurations,
which is handled in a rather complicated way via kasan_never_merge().
Since we now have a generic SLAB_NO_MERGE flag, we can instead use it
for KASAN caches in addition to SLAB_KASAN in those configurations,
and simplify the SLAB_NEVER_MERGE handling.
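
A minimal sketch of the resulting logic (simplified, not the verbatim
kernel code): when generic KASAN needs per-object metadata, the cache
now opts out of merging via the generic flag,

	/* in kasan_cache_create(), when per-object metadata is needed */
	*flags |= SLAB_KASAN | SLAB_NO_MERGE;

	/* merge decision, e.g. in slab_unmergeable() */
	if (s->flags & SLAB_NEVER_MERGE)
		return 1;	/* never merge this cache */

so SLAB_NEVER_MERGE becomes a plain constant mask that no longer has to
call into KASAN at runtime.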

Tested-by: Xiongwei Song <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
---
include/linux/kasan.h | 6 ------
mm/kasan/generic.c | 22 ++++++----------------
mm/slab_common.c | 2 +-
3 files changed, 7 insertions(+), 23 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index dbb06d789e74..70d6a8f6e25d 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -429,7 +429,6 @@ struct kasan_cache {
};

size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object);
-slab_flags_t kasan_never_merge(void);
void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
slab_flags_t *flags);

@@ -446,11 +445,6 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache,
{
return 0;
}
-/* And thus nothing prevents cache merging. */
-static inline slab_flags_t kasan_never_merge(void)
-{
- return 0;
-}
/* And no cache-related metadata initialization is required. */
static inline void kasan_cache_create(struct kmem_cache *cache,
unsigned int *size,
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index df6627f62402..27297dc4a55b 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -334,14 +334,6 @@ DEFINE_ASAN_SET_SHADOW(f3);
DEFINE_ASAN_SET_SHADOW(f5);
DEFINE_ASAN_SET_SHADOW(f8);

-/* Only allow cache merging when no per-object metadata is present. */
-slab_flags_t kasan_never_merge(void)
-{
- if (!kasan_requires_meta())
- return 0;
- return SLAB_KASAN;
-}
-
/*
* Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
* For larger allocations larger redzones are used.
@@ -370,15 +362,13 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
return;

/*
- * SLAB_KASAN is used to mark caches that are sanitized by KASAN
- * and that thus have per-object metadata.
- * Currently this flag is used in two places:
- * 1. In slab_ksize() to account for per-object metadata when
- * calculating the size of the accessible memory within the object.
- * 2. In slab_common.c via kasan_never_merge() to prevent merging of
- * caches with per-object metadata.
+ * SLAB_KASAN is used to mark caches that are sanitized by KASAN and
+ * that thus have per-object metadata. Currently, this flag is used in
+ * slab_ksize() to account for per-object metadata when calculating the
+ * size of the accessible memory within the object. Additionally, we use
+ * SLAB_NO_MERGE to prevent merging of caches with per-object metadata.
*/
- *flags |= SLAB_KASAN;
+ *flags |= SLAB_KASAN | SLAB_NO_MERGE;

ok_size = *size;

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 238293b1dbe1..7cfa2f1ce655 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -50,7 +50,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
*/
#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
- SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
+ SLAB_FAILSLAB | SLAB_NO_MERGE)

#define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
SLAB_CACHE_DMA32 | SLAB_ACCOUNT)

--
2.43.2


2024-02-23 18:28:00

by Vlastimil Babka

Subject: [PATCH v2 1/3] mm, slab: deprecate SLAB_MEM_SPREAD flag

The SLAB_MEM_SPREAD flag used to be implemented in SLAB, which was
removed. SLUB instead relies on the page allocator's NUMA policies.
Change the flag's value to 0 to free up the value it had, and mark it
for full removal once all users are gone.
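
Since OR-ing in a zero flag is an identity operation, existing callers
keep compiling and behave exactly as before; e.g. (hypothetical caller):

	/* SLAB_MEM_SPREAD is now 0, so this is equivalent to passing
	 * SLAB_RECLAIM_ACCOUNT alone */
	s = kmem_cache_create("bar_cache", sizeof(struct bar), 0,
			      SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD, NULL);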

Reported-by: Steven Rostedt <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]/
Reviewed-and-tested-by: Xiongwei Song <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
Reviewed-by: Roman Gushchin <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
---
include/linux/slab.h | 5 +++--
mm/slab.h | 1 -
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index b5f5ee8308d0..b1675ff6b904 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -96,8 +96,6 @@
*/
/* Defer freeing slabs to RCU */
#define SLAB_TYPESAFE_BY_RCU ((slab_flags_t __force)0x00080000U)
-/* Spread some memory over cpuset */
-#define SLAB_MEM_SPREAD ((slab_flags_t __force)0x00100000U)
/* Trace allocations and frees */
#define SLAB_TRACE ((slab_flags_t __force)0x00200000U)

@@ -164,6 +162,9 @@
#endif
#define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */

+/* Obsolete unused flag, to be removed */
+#define SLAB_MEM_SPREAD ((slab_flags_t __force)0U)
+
/*
* ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
*
diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..f4534eefb35d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -469,7 +469,6 @@ static inline bool is_kmalloc_cache(struct kmem_cache *s)
SLAB_STORE_USER | \
SLAB_TRACE | \
SLAB_CONSISTENCY_CHECKS | \
- SLAB_MEM_SPREAD | \
SLAB_NOLEAKTRACE | \
SLAB_RECLAIM_ACCOUNT | \
SLAB_TEMPORARY | \

--
2.43.2


2024-02-23 18:28:09

by Vlastimil Babka

Subject: [PATCH v2 2/3] mm, slab: use an enum to define SLAB_ cache creation flags

The values of SLAB_ cache creation flags are defined by hand, which is
tedious and error-prone. Use an enum to assign the bit number and a
__SLAB_FLAG_BIT() macro to #define the final flags.

This renumbers the flag values, which is OK as they are only used
internally.

Also define a __SLAB_FLAG_UNUSED macro to assign a value to flags disabled
by their respective config options in a unified and sparse-friendly way.
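
Under the new scheme, adding a flag (say, a hypothetical SLAB_EXAMPLE)
only requires a new enumerator; the compiler assigns the next free bit:

	enum _slab_flag_bits {
		/* ... existing _SLAB_* enumerators ... */
		_SLAB_EXAMPLE,		/* bit number assigned by the compiler */
		_SLAB_FLAGS_LAST_BIT
	};

	#define SLAB_EXAMPLE	__SLAB_FLAG_BIT(_SLAB_EXAMPLE)

_SLAB_FLAGS_LAST_BIT also makes it possible to check at compile time
that the flags still fit into slab_flags_t.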

Reviewed-and-tested-by: Xiongwei Song <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
Reviewed-by: Roman Gushchin <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
---
include/linux/slab.h | 94 +++++++++++++++++++++++++++++++++++++---------------
mm/slub.c | 6 ++--
2 files changed, 70 insertions(+), 30 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index b1675ff6b904..f6323763cd61 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -21,29 +21,69 @@
#include <linux/cleanup.h>
#include <linux/hash.h>

+enum _slab_flag_bits {
+ _SLAB_CONSISTENCY_CHECKS,
+ _SLAB_RED_ZONE,
+ _SLAB_POISON,
+ _SLAB_KMALLOC,
+ _SLAB_HWCACHE_ALIGN,
+ _SLAB_CACHE_DMA,
+ _SLAB_CACHE_DMA32,
+ _SLAB_STORE_USER,
+ _SLAB_PANIC,
+ _SLAB_TYPESAFE_BY_RCU,
+ _SLAB_TRACE,
+#ifdef CONFIG_DEBUG_OBJECTS
+ _SLAB_DEBUG_OBJECTS,
+#endif
+ _SLAB_NOLEAKTRACE,
+ _SLAB_NO_MERGE,
+#ifdef CONFIG_FAILSLAB
+ _SLAB_FAILSLAB,
+#endif
+#ifdef CONFIG_MEMCG_KMEM
+ _SLAB_ACCOUNT,
+#endif
+#ifdef CONFIG_KASAN_GENERIC
+ _SLAB_KASAN,
+#endif
+ _SLAB_NO_USER_FLAGS,
+#ifdef CONFIG_KFENCE
+ _SLAB_SKIP_KFENCE,
+#endif
+#ifndef CONFIG_SLUB_TINY
+ _SLAB_RECLAIM_ACCOUNT,
+#endif
+ _SLAB_OBJECT_POISON,
+ _SLAB_CMPXCHG_DOUBLE,
+ _SLAB_FLAGS_LAST_BIT
+};
+
+#define __SLAB_FLAG_BIT(nr) ((slab_flags_t __force)(1U << (nr)))
+#define __SLAB_FLAG_UNUSED ((slab_flags_t __force)(0U))

/*
* Flags to pass to kmem_cache_create().
* The ones marked DEBUG need CONFIG_SLUB_DEBUG enabled, otherwise are no-op
*/
/* DEBUG: Perform (expensive) checks on alloc/free */
-#define SLAB_CONSISTENCY_CHECKS ((slab_flags_t __force)0x00000100U)
+#define SLAB_CONSISTENCY_CHECKS __SLAB_FLAG_BIT(_SLAB_CONSISTENCY_CHECKS)
/* DEBUG: Red zone objs in a cache */
-#define SLAB_RED_ZONE ((slab_flags_t __force)0x00000400U)
+#define SLAB_RED_ZONE __SLAB_FLAG_BIT(_SLAB_RED_ZONE)
/* DEBUG: Poison objects */
-#define SLAB_POISON ((slab_flags_t __force)0x00000800U)
+#define SLAB_POISON __SLAB_FLAG_BIT(_SLAB_POISON)
/* Indicate a kmalloc slab */
-#define SLAB_KMALLOC ((slab_flags_t __force)0x00001000U)
+#define SLAB_KMALLOC __SLAB_FLAG_BIT(_SLAB_KMALLOC)
/* Align objs on cache lines */
-#define SLAB_HWCACHE_ALIGN ((slab_flags_t __force)0x00002000U)
+#define SLAB_HWCACHE_ALIGN __SLAB_FLAG_BIT(_SLAB_HWCACHE_ALIGN)
/* Use GFP_DMA memory */
-#define SLAB_CACHE_DMA ((slab_flags_t __force)0x00004000U)
+#define SLAB_CACHE_DMA __SLAB_FLAG_BIT(_SLAB_CACHE_DMA)
/* Use GFP_DMA32 memory */
-#define SLAB_CACHE_DMA32 ((slab_flags_t __force)0x00008000U)
+#define SLAB_CACHE_DMA32 __SLAB_FLAG_BIT(_SLAB_CACHE_DMA32)
/* DEBUG: Store the last owner for bug hunting */
-#define SLAB_STORE_USER ((slab_flags_t __force)0x00010000U)
+#define SLAB_STORE_USER __SLAB_FLAG_BIT(_SLAB_STORE_USER)
/* Panic if kmem_cache_create() fails */
-#define SLAB_PANIC ((slab_flags_t __force)0x00040000U)
+#define SLAB_PANIC __SLAB_FLAG_BIT(_SLAB_PANIC)
/*
* SLAB_TYPESAFE_BY_RCU - **WARNING** READ THIS!
*
@@ -95,19 +135,19 @@
* Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
*/
/* Defer freeing slabs to RCU */
-#define SLAB_TYPESAFE_BY_RCU ((slab_flags_t __force)0x00080000U)
+#define SLAB_TYPESAFE_BY_RCU __SLAB_FLAG_BIT(_SLAB_TYPESAFE_BY_RCU)
/* Trace allocations and frees */
-#define SLAB_TRACE ((slab_flags_t __force)0x00200000U)
+#define SLAB_TRACE __SLAB_FLAG_BIT(_SLAB_TRACE)

/* Flag to prevent checks on free */
#ifdef CONFIG_DEBUG_OBJECTS
-# define SLAB_DEBUG_OBJECTS ((slab_flags_t __force)0x00400000U)
+# define SLAB_DEBUG_OBJECTS __SLAB_FLAG_BIT(_SLAB_DEBUG_OBJECTS)
#else
-# define SLAB_DEBUG_OBJECTS 0
+# define SLAB_DEBUG_OBJECTS __SLAB_FLAG_UNUSED
#endif

/* Avoid kmemleak tracing */
-#define SLAB_NOLEAKTRACE ((slab_flags_t __force)0x00800000U)
+#define SLAB_NOLEAKTRACE __SLAB_FLAG_BIT(_SLAB_NOLEAKTRACE)

/*
* Prevent merging with compatible kmem caches. This flag should be used
@@ -119,25 +159,25 @@
* - performance critical caches, should be very rare and consulted with slab
* maintainers, and not used together with CONFIG_SLUB_TINY
*/
-#define SLAB_NO_MERGE ((slab_flags_t __force)0x01000000U)
+#define SLAB_NO_MERGE __SLAB_FLAG_BIT(_SLAB_NO_MERGE)

/* Fault injection mark */
#ifdef CONFIG_FAILSLAB
-# define SLAB_FAILSLAB ((slab_flags_t __force)0x02000000U)
+# define SLAB_FAILSLAB __SLAB_FLAG_BIT(_SLAB_FAILSLAB)
#else
-# define SLAB_FAILSLAB 0
+# define SLAB_FAILSLAB __SLAB_FLAG_UNUSED
#endif
/* Account to memcg */
#ifdef CONFIG_MEMCG_KMEM
-# define SLAB_ACCOUNT ((slab_flags_t __force)0x04000000U)
+# define SLAB_ACCOUNT __SLAB_FLAG_BIT(_SLAB_ACCOUNT)
#else
-# define SLAB_ACCOUNT 0
+# define SLAB_ACCOUNT __SLAB_FLAG_UNUSED
#endif

#ifdef CONFIG_KASAN_GENERIC
-#define SLAB_KASAN ((slab_flags_t __force)0x08000000U)
+#define SLAB_KASAN __SLAB_FLAG_BIT(_SLAB_KASAN)
#else
-#define SLAB_KASAN 0
+#define SLAB_KASAN __SLAB_FLAG_UNUSED
#endif

/*
@@ -145,25 +185,25 @@
* Intended for caches created for self-tests so they have only flags
* specified in the code and other flags are ignored.
*/
-#define SLAB_NO_USER_FLAGS ((slab_flags_t __force)0x10000000U)
+#define SLAB_NO_USER_FLAGS __SLAB_FLAG_BIT(_SLAB_NO_USER_FLAGS)

#ifdef CONFIG_KFENCE
-#define SLAB_SKIP_KFENCE ((slab_flags_t __force)0x20000000U)
+#define SLAB_SKIP_KFENCE __SLAB_FLAG_BIT(_SLAB_SKIP_KFENCE)
#else
-#define SLAB_SKIP_KFENCE 0
+#define SLAB_SKIP_KFENCE __SLAB_FLAG_UNUSED
#endif

/* The following flags affect the page allocator grouping pages by mobility */
/* Objects are reclaimable */
#ifndef CONFIG_SLUB_TINY
-#define SLAB_RECLAIM_ACCOUNT ((slab_flags_t __force)0x00020000U)
+#define SLAB_RECLAIM_ACCOUNT __SLAB_FLAG_BIT(_SLAB_RECLAIM_ACCOUNT)
#else
-#define SLAB_RECLAIM_ACCOUNT ((slab_flags_t __force)0)
+#define SLAB_RECLAIM_ACCOUNT __SLAB_FLAG_UNUSED
#endif
#define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */

/* Obsolete unused flag, to be removed */
-#define SLAB_MEM_SPREAD ((slab_flags_t __force)0U)
+#define SLAB_MEM_SPREAD __SLAB_FLAG_UNUSED

/*
* ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
diff --git a/mm/slub.c b/mm/slub.c
index 2ef88bbf56a3..2934ef5f3cff 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -306,13 +306,13 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)

/* Internal SLUB flags */
/* Poison object */
-#define __OBJECT_POISON ((slab_flags_t __force)0x80000000U)
+#define __OBJECT_POISON __SLAB_FLAG_BIT(_SLAB_OBJECT_POISON)
/* Use cmpxchg_double */

#ifdef system_has_freelist_aba
-#define __CMPXCHG_DOUBLE ((slab_flags_t __force)0x40000000U)
+#define __CMPXCHG_DOUBLE __SLAB_FLAG_BIT(_SLAB_CMPXCHG_DOUBLE)
#else
-#define __CMPXCHG_DOUBLE ((slab_flags_t __force)0U)
+#define __CMPXCHG_DOUBLE __SLAB_FLAG_UNUSED
#endif

/*

--
2.43.2


2024-02-24 21:01:05

by David Rientjes

Subject: Re: [PATCH v2 1/3] mm, slab: deprecate SLAB_MEM_SPREAD flag

On Fri, 23 Feb 2024, Vlastimil Babka wrote:

> The SLAB_MEM_SPREAD flag used to be implemented in SLAB, which was
> removed. SLUB instead relies on the page allocator's NUMA policies.
> Change the flag's value to 0 to free up the value it had, and mark it
> for full removal once all users are gone.
>
> Reported-by: Steven Rostedt <[email protected]>
> Closes: https://lore.kernel.org/all/[email protected]/
> Reviewed-and-tested-by: Xiongwei Song <[email protected]>
> Reviewed-by: Chengming Zhou <[email protected]>
> Reviewed-by: Roman Gushchin <[email protected]>
> Signed-off-by: Vlastimil Babka <[email protected]>

Acked-by: David Rientjes <[email protected]>

2024-02-24 21:02:20

by David Rientjes

Subject: Re: [PATCH v2 2/3] mm, slab: use an enum to define SLAB_ cache creation flags

On Fri, 23 Feb 2024, Vlastimil Babka wrote:

> The values of SLAB_ cache creation flags are defined by hand, which is
> tedious and error-prone. Use an enum to assign the bit number and a
> __SLAB_FLAG_BIT() macro to #define the final flags.
>
> This renumbers the flag values, which is OK as they are only used
> internally.
>
> Also define a __SLAB_FLAG_UNUSED macro to assign a value to flags disabled
> by their respective config options in a unified and sparse-friendly way.
>
> Reviewed-and-tested-by: Xiongwei Song <[email protected]>
> Reviewed-by: Chengming Zhou <[email protected]>
> Reviewed-by: Roman Gushchin <[email protected]>
> Signed-off-by: Vlastimil Babka <[email protected]>

Acked-by: David Rientjes <[email protected]>

2024-02-24 21:03:11

by David Rientjes

Subject: Re: [PATCH v2 3/3] mm, slab, kasan: replace kasan_never_merge() with SLAB_NO_MERGE

On Fri, 23 Feb 2024, Vlastimil Babka wrote:

> The SLAB_KASAN flag prevents merging of caches in some configurations,
> which is handled in a rather complicated way via kasan_never_merge().
> Since we now have a generic SLAB_NO_MERGE flag, we can instead use it
> for KASAN caches in addition to SLAB_KASAN in those configurations,
> and simplify the SLAB_NEVER_MERGE handling.
>
> Tested-by: Xiongwei Song <[email protected]>
> Reviewed-by: Chengming Zhou <[email protected]>
> Reviewed-by: Andrey Konovalov <[email protected]>
> Signed-off-by: Vlastimil Babka <[email protected]>

Tested-by: David Rientjes <[email protected]>

2024-02-26 09:46:09

by Vlastimil Babka

Subject: Re: [PATCH v2 0/3] cleanup of SLAB_ flags

On 2/23/24 19:27, Vlastimil Babka wrote:
> This started with the report that the SLAB_MEM_SPREAD flag is dead (Patch 1).
> Then, in the alloc profiling series, we realized it's too easy to mistakenly
> reuse an existing SLAB_ flag's value when defining a new one. Thus, let the
> compiler assign the values for us via a new helper enum (Patch 2). When
> checking whether more flags are dead or could be removed, I didn't spot
> any, but found that the SLAB_KASAN handling of preventing cache merging
> can be simplified, since we now have an explicit SLAB_NO_MERGE (Patch 3).
>
> The SLAB_MEM_SPREAD flag is now marked as unused and for removal, and
> has a value of 0 so it's a no-op. Patches to remove its usage can/will
> be submitted to respective subsystems independently of this series - the
> flag is already dead as of v6.8-rc1 with SLAB removed. The removal of
> dead cpuset_do_slab_mem_spread() code can also be submitted
> independently.
>
> Signed-off-by: Vlastimil Babka <[email protected]>

Pushed to slab/for-next

> ---
> Changes in v2:
> - Collect R-b, T-b (thanks!)
> - Unify all disabled flags' values to a sparse-happy zero with a new macro (lkp/sparse).
> - Rename __SF_BIT to __SLAB_FLAG_BIT (Roman Gushchin)
> - Reword kasan_cache_create() comment (Andrey Konovalov)
> - Link to v1: https://lore.kernel.org/r/[email protected]
>
> ---
> Vlastimil Babka (3):
> mm, slab: deprecate SLAB_MEM_SPREAD flag
> mm, slab: use an enum to define SLAB_ cache creation flags
> mm, slab, kasan: replace kasan_never_merge() with SLAB_NO_MERGE
>
> include/linux/kasan.h | 6 ----
> include/linux/slab.h | 97 ++++++++++++++++++++++++++++++++++++---------------
> mm/kasan/generic.c | 22 ++++--------
> mm/slab.h | 1 -
> mm/slab_common.c | 2 +-
> mm/slub.c | 6 ++--
> 6 files changed, 79 insertions(+), 55 deletions(-)
> ---
> base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d
> change-id: 20240219-slab-cleanup-flags-c864415ecc8e
>
> Best regards,