2023-11-20 09:21:00

by Xiongwei Song

Subject: [PATCH 0/4] supplement of slab removal

From: Xiongwei Song <[email protected]>

Hi,

These patches are based on [1] and repo [2], so they are a supplement to the
slab removal. There is no functional change.

[1] https://lore.kernel.org/linux-mm/[email protected]/T/#t
[2] https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-remove-slab-v1r4

Xiongwei Song (4):
Documentation: kernel-parameters: remove slab_max_order
mm/slab: remove slab_nomerge and slab_merge
mm/slab: make calculate_alignment() public
mm/slab: move slab merge from slab_common.c to slub.c

.../admin-guide/kernel-parameters.txt | 17 +--
mm/Kconfig | 2 +-
mm/slab.h | 5 +-
mm/slab_common.c | 103 +-----------------
mm/slub.c | 100 ++++++++++++++++-
5 files changed, 105 insertions(+), 122 deletions(-)

--
2.34.1


2023-11-20 09:22:28

by Xiongwei Song

Subject: [PATCH 3/4] mm/slab: make calculate_alignment() public

From: Xiongwei Song <[email protected]>

We are going to move the slab merge code from slab_common.c to slub.c;
calculate_alignment() is called in find_mergeable(), so make it public.

Signed-off-by: Xiongwei Song <[email protected]>
---
mm/slab.h | 2 ++
mm/slab_common.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slab.h b/mm/slab.h
index eb04c8a5dbd1..8d20f8c6269d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -427,6 +427,8 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
unsigned int size, slab_flags_t flags,
unsigned int useroffset, unsigned int usersize);

+unsigned int calculate_alignment(slab_flags_t flags,
+ unsigned int align, unsigned int size);
int slab_unmergeable(struct kmem_cache *s);
struct kmem_cache *find_mergeable(unsigned size, unsigned align,
slab_flags_t flags, const char *name, void (*ctor)(void *));
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d707abd31926..62eb77fdedf2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -106,7 +106,7 @@ static inline int kmem_cache_sanity_check(const char *name, unsigned int size)
* Figure out what the alignment of the objects will be given a set of
* flags, a user specified alignment and the size of the objects.
*/
-static unsigned int calculate_alignment(slab_flags_t flags,
+unsigned int calculate_alignment(slab_flags_t flags,
unsigned int align, unsigned int size)
{
/*
--
2.34.1

2023-11-20 09:24:06

by Xiongwei Song

Subject: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c

From: Xiongwei Song <[email protected]>

Since the SLAB allocator has been removed, there are no users of the slab
merge code except SLUB. This commit mostly reverts
commit 423c929cbbec ("mm/slab_common: commonize slab merge logic").

Also change the prefix of all slab merge related functions, variables and
definitions from "slab/SLAB" to "slub/SLUB".

Signed-off-by: Xiongwei Song <[email protected]>
---
mm/slab.h | 3 --
mm/slab_common.c | 98 ----------------------------------------------
mm/slub.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++-
3 files changed, 99 insertions(+), 102 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 8d20f8c6269d..cd52e705ce28 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -429,9 +429,6 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,

unsigned int calculate_alignment(slab_flags_t flags,
unsigned int align, unsigned int size);
-int slab_unmergeable(struct kmem_cache *s);
-struct kmem_cache *find_mergeable(unsigned size, unsigned align,
- slab_flags_t flags, const char *name, void (*ctor)(void *));
struct kmem_cache *
__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
slab_flags_t flags, void (*ctor)(void *));
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 62eb77fdedf2..6960ae5c35ee 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -45,36 +45,6 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
slab_caches_to_rcu_destroy_workfn);

-/*
- * Set of flags that will prevent slab merging
- */
-#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
- SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
- SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
-
-#define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
- SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
-
-/*
- * Merge control. If this is set then no merging of slab caches will occur.
- */
-static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
-
-static int __init setup_slab_nomerge(char *str)
-{
- slub_nomerge = true;
- return 1;
-}
-
-static int __init setup_slab_merge(char *str)
-{
- slub_nomerge = false;
- return 1;
-}
-
-__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
-__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
-
/*
* Determine the size of a slab object
*/
@@ -130,74 +100,6 @@ unsigned int calculate_alignment(slab_flags_t flags,
return ALIGN(align, sizeof(void *));
}

-/*
- * Find a mergeable slab cache
- */
-int slab_unmergeable(struct kmem_cache *s)
-{
- if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
- return 1;
-
- if (s->ctor)
- return 1;
-
-#ifdef CONFIG_HARDENED_USERCOPY
- if (s->usersize)
- return 1;
-#endif
-
- /*
- * We may have set a slab to be unmergeable during bootstrap.
- */
- if (s->refcount < 0)
- return 1;
-
- return 0;
-}
-
-struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
- slab_flags_t flags, const char *name, void (*ctor)(void *))
-{
- struct kmem_cache *s;
-
- if (slub_nomerge)
- return NULL;
-
- if (ctor)
- return NULL;
-
- size = ALIGN(size, sizeof(void *));
- align = calculate_alignment(flags, align, size);
- size = ALIGN(size, align);
- flags = kmem_cache_flags(size, flags, name);
-
- if (flags & SLAB_NEVER_MERGE)
- return NULL;
-
- list_for_each_entry_reverse(s, &slab_caches, list) {
- if (slab_unmergeable(s))
- continue;
-
- if (size > s->size)
- continue;
-
- if ((flags & SLAB_MERGE_SAME) != (s->flags & SLAB_MERGE_SAME))
- continue;
- /*
- * Check if alignment is compatible.
- * Courtesy of Adrian Drzewiecki
- */
- if ((s->size & ~(align - 1)) != s->size)
- continue;
-
- if (s->size - size >= sizeof(void *))
- continue;
-
- return s;
- }
- return NULL;
-}
-
static struct kmem_cache *create_cache(const char *name,
unsigned int object_size, unsigned int align,
slab_flags_t flags, unsigned int useroffset,
diff --git a/mm/slub.c b/mm/slub.c
index ae1e6e635253..435d9ed140e4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -709,6 +709,104 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
return false;
}

+/*
+ * Set of flags that will prevent slab merging
+ */
+#define SLUB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
+ SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
+ SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
+
+#define SLUB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
+ SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
+
+/*
+ * Merge control. If this is set then no merging of slab caches will occur.
+ */
+static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
+
+static int __init setup_slub_nomerge(char *str)
+{
+ slub_nomerge = true;
+ return 1;
+}
+
+static int __init setup_slub_merge(char *str)
+{
+ slub_nomerge = false;
+ return 1;
+}
+
+__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
+__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
+
+/*
+ * Find a mergeable slab cache
+ */
+static inline int slub_unmergeable(struct kmem_cache *s)
+{
+ if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
+ return 1;
+
+ if (s->ctor)
+ return 1;
+
+#ifdef CONFIG_HARDENED_USERCOPY
+ if (s->usersize)
+ return 1;
+#endif
+
+ /*
+ * We may have set a slab to be unmergeable during bootstrap.
+ */
+ if (s->refcount < 0)
+ return 1;
+
+ return 0;
+}
+
+static struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
+ slab_flags_t flags, const char *name, void (*ctor)(void *))
+{
+ struct kmem_cache *s;
+
+ if (slub_nomerge)
+ return NULL;
+
+ if (ctor)
+ return NULL;
+
+ size = ALIGN(size, sizeof(void *));
+ align = calculate_alignment(flags, align, size);
+ size = ALIGN(size, align);
+ flags = kmem_cache_flags(size, flags, name);
+
+ if (flags & SLUB_NEVER_MERGE)
+ return NULL;
+
+ list_for_each_entry_reverse(s, &slab_caches, list) {
+ if (slub_unmergeable(s))
+ continue;
+
+ if (size > s->size)
+ continue;
+
+ if ((flags & SLUB_MERGE_SAME) != (s->flags & SLUB_MERGE_SAME))
+ continue;
+ /*
+ * Check if alignment is compatible.
+ * Courtesy of Adrian Drzewiecki
+ */
+ if ((s->size & ~(align - 1)) != s->size)
+ continue;
+
+ if (s->size - size >= sizeof(void *))
+ continue;
+
+ return s;
+ }
+ return NULL;
+}
+
#ifdef CONFIG_SLUB_DEBUG
static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
static DEFINE_SPINLOCK(object_map_lock);
@@ -6679,7 +6777,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
int err;
const char *name;
struct kset *kset = cache_kset(s);
- int unmergeable = slab_unmergeable(s);
+ int unmergeable = slub_unmergeable(s);

if (!unmergeable && disable_higher_order_debug &&
(slub_debug & DEBUG_METADATA_FLAGS))
--
2.34.1

2023-11-20 09:24:07

by Xiongwei Song

Subject: [PATCH 1/4] Documentation: kernel-parameters: remove slab_max_order

From: Xiongwei Song <[email protected]>

Since the SLAB allocator has already been removed, there are no users of
this parameter, so remove it.

Signed-off-by: Xiongwei Song <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 6 ------
1 file changed, 6 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 65731b060e3f..c7709a11f8ce 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5887,12 +5887,6 @@
own.
For more information see Documentation/mm/slub.rst.

- slab_max_order= [MM, SLAB]
- Determines the maximum allowed order for slabs.
- A high setting may cause OOMs due to memory
- fragmentation. Defaults to 1 for systems with
- more than 32MB of RAM, 0 otherwise.
-
slub_debug[=options[,slabs][;[options[,slabs]]...] [MM, SLUB]
Enabling slub_debug allows one to determine the
culprit if slab objects become corrupted. Enabling
--
2.34.1

2023-11-20 09:44:14

by Xiongwei Song

Subject: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge

From: Xiongwei Song <[email protected]>

Since the SLAB allocator has already been removed, we should also remove
the related parameters. Also change the global flag from slab_nomerge
to slub_nomerge.

Signed-off-by: Xiongwei Song <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 11 ++---------
mm/Kconfig | 2 +-
mm/slab_common.c | 13 +++++--------
3 files changed, 8 insertions(+), 18 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c7709a11f8ce..afca9ff7c9f0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5870,11 +5870,11 @@

slram= [HW,MTD]

- slab_merge [MM]
+ slub_merge [MM]
Enable merging of slabs with similar size when the
kernel is built without CONFIG_SLAB_MERGE_DEFAULT.

- slab_nomerge [MM]
+ slub_nomerge [MM]
Disable merging of slabs with similar size. May be
necessary if there is some reason to distinguish
allocs to different slabs, especially in hardened
@@ -5915,13 +5915,6 @@
lower than slub_max_order.
For more information see Documentation/mm/slub.rst.

- slub_merge [MM, SLUB]
- Same with slab_merge.
-
- slub_nomerge [MM, SLUB]
- Same with slab_nomerge. This is supported for legacy.
- See slab_nomerge for more information.
-
smart2= [HW]
Format: <io1>[,<io2>[,...,<io8>]]

diff --git a/mm/Kconfig b/mm/Kconfig
index 766aa8f8e553..87c3f2e1d0d3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -255,7 +255,7 @@ config SLAB_MERGE_DEFAULT
cache layout), which makes such heap attacks easier to exploit
by attackers. By keeping caches unmerged, these kinds of exploits
can usually only damage objects in the same cache. To disable
- merging at runtime, "slab_nomerge" can be passed on the kernel
+ merging at runtime, "slub_nomerge" can be passed on the kernel
command line.

config SLAB_FREELIST_RANDOM
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 238293b1dbe1..d707abd31926 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -58,26 +58,23 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
/*
* Merge control. If this is set then no merging of slab caches will occur.
*/
-static bool slab_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
+static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);

static int __init setup_slab_nomerge(char *str)
{
- slab_nomerge = true;
+ slub_nomerge = true;
return 1;
}

static int __init setup_slab_merge(char *str)
{
- slab_nomerge = false;
+ slub_nomerge = false;
return 1;
}

__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);

-__setup("slab_nomerge", setup_slab_nomerge);
-__setup("slab_merge", setup_slab_merge);
-
/*
* Determine the size of a slab object
*/
@@ -138,7 +135,7 @@ static unsigned int calculate_alignment(slab_flags_t flags,
*/
int slab_unmergeable(struct kmem_cache *s)
{
- if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
+ if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
return 1;

if (s->ctor)
@@ -163,7 +160,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
{
struct kmem_cache *s;

- if (slab_nomerge)
+ if (slub_nomerge)
return NULL;

if (ctor)
--
2.34.1

2023-11-21 08:30:19

by Hyeonggon Yoo

Subject: Re: [PATCH 1/4] Documentation: kernel-parameters: remove slab_max_order

On Mon, Nov 20, 2023 at 6:12 PM <[email protected]> wrote:
>
> From: Xiongwei Song <[email protected]>
>
> Since the SLAB allocator has already been removed, there are no users of
> this parameter, so remove it.
>
> Signed-off-by: Xiongwei Song <[email protected]>
> ---
> Documentation/admin-guide/kernel-parameters.txt | 6 ------
> 1 file changed, 6 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 65731b060e3f..c7709a11f8ce 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -5887,12 +5887,6 @@
> own.
> For more information see Documentation/mm/slub.rst.
>
> - slab_max_order= [MM, SLAB]
> - Determines the maximum allowed order for slabs.
> - A high setting may cause OOMs due to memory
> - fragmentation. Defaults to 1 for systems with
> - more than 32MB of RAM, 0 otherwise.
> -

Good catch!

By the way, I think noaliencache can be removed in this patch as well:
> noaliencache [MM, NUMA, SLAB] Disables the allocation of alien
> caches in the slab allocator. Saves per-node memory,
> but will impact performance.

Thanks,
Hyeonggon

2023-11-21 08:44:40

by Hyeonggon Yoo

Subject: Re: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge

On Mon, Nov 20, 2023 at 6:12 PM <[email protected]> wrote:
>
> From: Xiongwei Song <[email protected]>
>
> Since the SLAB allocator has already been removed, we should also remove
> the related parameters. Also change the global flag from slab_nomerge
> to slub_nomerge.

No, kernel parameters should be changed only in a backward-compatible
way (if possible)

Before slab merging was supported in SLAB, only SLUB supported it.
After commit 423c929cbbec ("mm/slab_common: commonize slab merge logic"), using
slab_[no]merge parameters for CONFIG_SLUB builds became legal.

I think what the documentation says is "slab_[no]merge enables or
disables slab merging
and slub_[no]merge remain supported only for backward compatibility"
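
For reference, a minimal sketch of keeping both spellings wired to the same
handlers, which is roughly what mm/slab_common.c already does before this
series (exact guards and placement may differ):

static bool slab_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);

static int __init setup_slab_nomerge(char *str)
{
	slab_nomerge = true;
	return 1;
}

static int __init setup_slab_merge(char *str)
{
	slab_nomerge = false;
	return 1;
}

/* legacy SLUB-prefixed spellings stay as aliases of the same handlers */
__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);

/* canonical names */
__setup("slab_nomerge", setup_slab_nomerge);
__setup("slab_merge", setup_slab_merge);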

>
> Signed-off-by: Xiongwei Song <[email protected]>
> ---
> Documentation/admin-guide/kernel-parameters.txt | 11 ++---------
> mm/Kconfig | 2 +-
> mm/slab_common.c | 13 +++++--------
> 3 files changed, 8 insertions(+), 18 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index c7709a11f8ce..afca9ff7c9f0 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -5870,11 +5870,11 @@
>
> slram= [HW,MTD]
>
> - slab_merge [MM]
> + slub_merge [MM]
> Enable merging of slabs with similar size when the
> kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
>
> - slab_nomerge [MM]
> + slub_nomerge [MM]
> Disable merging of slabs with similar size. May be
> necessary if there is some reason to distinguish
> allocs to different slabs, especially in hardened
> @@ -5915,13 +5915,6 @@
> lower than slub_max_order.
> For more information see Documentation/mm/slub.rst.
>
> - slub_merge [MM, SLUB]
> - Same with slab_merge.
> -
> - slub_nomerge [MM, SLUB]
> - Same with slab_nomerge. This is supported for legacy.
> - See slab_nomerge for more information.
> -
> smart2= [HW]
> Format: <io1>[,<io2>[,...,<io8>]]

2023-11-21 08:54:59

by Hyeonggon Yoo

Subject: Re: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c

On Mon, Nov 20, 2023 at 6:13 PM <[email protected]> wrote:
>
> From: Xiongwei Song <[email protected]>
>
> Since the SLAB allocator has been removed, there are no users of the slab
> merge code except SLUB. This commit mostly reverts
> commit 423c929cbbec ("mm/slab_common: commonize slab merge logic").
>
> Also change the prefix of all slab merge related functions, variables and
> definitions from "slab/SLAB" to "slub/SLUB".

Could you please elaborate a little bit?
I am not sure if I understand what the last two patches of this series
are useful for.

- Why rename variable/function/macro names?
- Why move merge related functions from slab_common.c to slub.c?
(I mean merging slab_common.c and slub.c into single file might make sense
but why move only some parts of one into the other?)

> Signed-off-by: Xiongwei Song <[email protected]>
> ---
> mm/slab.h | 3 --
> mm/slab_common.c | 98 ----------------------------------------------
> mm/slub.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++-
> 3 files changed, 99 insertions(+), 102 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 8d20f8c6269d..cd52e705ce28 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -429,9 +429,6 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
>
> unsigned int calculate_alignment(slab_flags_t flags,
> unsigned int align, unsigned int size);
> -int slab_unmergeable(struct kmem_cache *s);
> -struct kmem_cache *find_mergeable(unsigned size, unsigned align,
> - slab_flags_t flags, const char *name, void (*ctor)(void *));
> struct kmem_cache *
> __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
> slab_flags_t flags, void (*ctor)(void *));
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 62eb77fdedf2..6960ae5c35ee 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -45,36 +45,6 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
> static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
> slab_caches_to_rcu_destroy_workfn);
>
> -/*
> - * Set of flags that will prevent slab merging
> - */
> -#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
> - SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
> - SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
> -
> -#define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> - SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> -
> -/*
> - * Merge control. If this is set then no merging of slab caches will occur.
> - */
> -static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
> -
> -static int __init setup_slab_nomerge(char *str)
> -{
> - slub_nomerge = true;
> - return 1;
> -}
> -
> -static int __init setup_slab_merge(char *str)
> -{
> - slub_nomerge = false;
> - return 1;
> -}
> -
> -__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
> -__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
> -
> /*
> * Determine the size of a slab object
> */
> @@ -130,74 +100,6 @@ unsigned int calculate_alignment(slab_flags_t flags,
> return ALIGN(align, sizeof(void *));
> }
>
> -/*
> - * Find a mergeable slab cache
> - */
> -int slab_unmergeable(struct kmem_cache *s)
> -{
> - if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
> - return 1;
> -
> - if (s->ctor)
> - return 1;
> -
> -#ifdef CONFIG_HARDENED_USERCOPY
> - if (s->usersize)
> - return 1;
> -#endif
> -
> - /*
> - * We may have set a slab to be unmergeable during bootstrap.
> - */
> - if (s->refcount < 0)
> - return 1;
> -
> - return 0;
> -}
> -
> -struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
> - slab_flags_t flags, const char *name, void (*ctor)(void *))
> -{
> - struct kmem_cache *s;
> -
> - if (slub_nomerge)
> - return NULL;
> -
> - if (ctor)
> - return NULL;
> -
> - size = ALIGN(size, sizeof(void *));
> - align = calculate_alignment(flags, align, size);
> - size = ALIGN(size, align);
> - flags = kmem_cache_flags(size, flags, name);
> -
> - if (flags & SLAB_NEVER_MERGE)
> - return NULL;
> -
> - list_for_each_entry_reverse(s, &slab_caches, list) {
> - if (slab_unmergeable(s))
> - continue;
> -
> - if (size > s->size)
> - continue;
> -
> - if ((flags & SLAB_MERGE_SAME) != (s->flags & SLAB_MERGE_SAME))
> - continue;
> - /*
> - * Check if alignment is compatible.
> - * Courtesy of Adrian Drzewiecki
> - */
> - if ((s->size & ~(align - 1)) != s->size)
> - continue;
> -
> - if (s->size - size >= sizeof(void *))
> - continue;
> -
> - return s;
> - }
> - return NULL;
> -}
> -
> static struct kmem_cache *create_cache(const char *name,
> unsigned int object_size, unsigned int align,
> slab_flags_t flags, unsigned int useroffset,
> diff --git a/mm/slub.c b/mm/slub.c
> index ae1e6e635253..435d9ed140e4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -709,6 +709,104 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
> return false;
> }
>
> +/*
> + * Set of flags that will prevent slab merging
> + */
> +#define SLUB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
> + SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
> + SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
> +
> +#define SLUB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> + SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> +
> +/*
> + * Merge control. If this is set then no merging of slab caches will occur.
> + */
> +static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
> +
> +static int __init setup_slub_nomerge(char *str)
> +{
> + slub_nomerge = true;
> + return 1;
> +}
> +
> +static int __init setup_slub_merge(char *str)
> +{
> + slub_nomerge = false;
> + return 1;
> +}
> +
> +__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
> +__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
> +
> +/*
> + * Find a mergeable slab cache
> + */
> +static inline int slub_unmergeable(struct kmem_cache *s)
> +{
> + if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
> + return 1;
> +
> + if (s->ctor)
> + return 1;
> +
> +#ifdef CONFIG_HARDENED_USERCOPY
> + if (s->usersize)
> + return 1;
> +#endif
> +
> + /*
> + * We may have set a slab to be unmergeable during bootstrap.
> + */
> + if (s->refcount < 0)
> + return 1;
> +
> + return 0;
> +}
> +
> +static struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
> + slab_flags_t flags, const char *name, void (*ctor)(void *))
> +{
> + struct kmem_cache *s;
> +
> + if (slub_nomerge)
> + return NULL;
> +
> + if (ctor)
> + return NULL;
> +
> + size = ALIGN(size, sizeof(void *));
> + align = calculate_alignment(flags, align, size);
> + size = ALIGN(size, align);
> + flags = kmem_cache_flags(size, flags, name);
> +
> + if (flags & SLUB_NEVER_MERGE)
> + return NULL;
> +
> + list_for_each_entry_reverse(s, &slab_caches, list) {
> + if (slub_unmergeable(s))
> + continue;
> +
> + if (size > s->size)
> + continue;
> +
> + if ((flags & SLUB_MERGE_SAME) != (s->flags & SLUB_MERGE_SAME))
> + continue;
> + /*
> + * Check if alignment is compatible.
> + * Courtesy of Adrian Drzewiecki
> + */
> + if ((s->size & ~(align - 1)) != s->size)
> + continue;
> +
> + if (s->size - size >= sizeof(void *))
> + continue;
> +
> + return s;
> + }
> + return NULL;
> +}
> +
> #ifdef CONFIG_SLUB_DEBUG
> static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
> static DEFINE_SPINLOCK(object_map_lock);
> @@ -6679,7 +6777,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
> int err;
> const char *name;
> struct kset *kset = cache_kset(s);
> - int unmergeable = slab_unmergeable(s);
> + int unmergeable = slub_unmergeable(s);
>
> if (!unmergeable && disable_higher_order_debug &&
> (slub_debug & DEBUG_METADATA_FLAGS))
> --
> 2.34.1
>

2023-11-21 09:03:45

by Hyeonggon Yoo

Subject: Re: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c

On Mon, Nov 20, 2023 at 6:13 PM <[email protected]> wrote:
>
> From: Xiongwei Song <[email protected]>
>
> Since the SLAB allocator has been removed, there are no users of the slab
> merge code except SLUB. This commit mostly reverts
> commit 423c929cbbec ("mm/slab_common: commonize slab merge logic").
>
> Also change the prefix of all slab merge related functions, variables and
> definitions from "slab/SLAB" to "slub/SLUB".
>
> Signed-off-by: Xiongwei Song <[email protected]>
> ---
> mm/slab.h | 3 --
> mm/slab_common.c | 98 ----------------------------------------------
> mm/slub.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++-
> 3 files changed, 99 insertions(+), 102 deletions(-)
[...]
> +/*
> + * Merge control. If this is set then no merging of slab caches will occur.
> + */
> +static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
> +
> +static int __init setup_slub_nomerge(char *str)
> +{
> + slub_nomerge = true;
> + return 1;
> +}
> +
> +static int __init setup_slub_merge(char *str)
> +{
> + slub_nomerge = false;
> + return 1;
> +}
> +
> +__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
> +__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);

FYI This hunk breaks kernel builds:

In file included from ./include/linux/printk.h:6,
from ./include/asm-generic/bug.h:22,
from ./arch/x86/include/asm/bug.h:87,
from ./include/linux/bug.h:5,
from ./include/linux/mmdebug.h:5,
from ./include/linux/mm.h:6,
from mm/slub.c:13:
mm/slub.c:748:45: error: ‘setup_slab_nomerge’ undeclared here (not in
a function); did you mean ‘setup_slub_nomerge’?
748 | __setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
| ^~~~~~~~~~~~~~~~~~
./include/linux/init.h:340:32: note: in definition of macro ‘__setup_param’
340 | = { __setup_str_##unique_id, fn, early }
| ^~
mm/slub.c:749:41: error: ‘setup_slab_merge’ undeclared here (not in a
function); did you mean ‘setup_slub_merge’?
749 | __setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
| ^~~~~~~~~~~~~~~~
./include/linux/init.h:340:32: note: in definition of macro ‘__setup_param’
340 | = { __setup_str_##unique_id, fn, early }
| ^~
CC kernel/time/ntp.o
mm/slub.c:742:19: warning: ‘setup_slub_merge’ defined but not used
[-Wunused-function]
742 | static int __init setup_slub_merge(char *str)
| ^~~~~~~~~~~~~~~~
mm/slub.c:736:19: warning: ‘setup_slub_nomerge’ defined but not used
[-Wunused-function]
736 | static int __init setup_slub_nomerge(char *str)
| ^~~~~~~~~~~~~~~~~~
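
Presumably those __setup_param() lines were meant to reference the renamed
handlers; a sketch of the fix (untested):

__setup_param("slub_nomerge", slub_nomerge, setup_slub_nomerge, 0);
__setup_param("slub_merge", slub_merge, setup_slub_merge, 0);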

2023-11-21 10:03:23

by Vlastimil Babka

Subject: Re: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c

On 11/21/23 09:54, Hyeonggon Yoo wrote:
> On Mon, Nov 20, 2023 at 6:13 PM <[email protected]> wrote:
>>
>> From: Xiongwei Song <[email protected]>
>>
>> Since the SLAB allocator has been removed, there are no users of the slab
>> merge code except SLUB. This commit mostly reverts
>> commit 423c929cbbec ("mm/slab_common: commonize slab merge logic").
>>
>> Also change the prefix of all slab merge related functions, variables and
>> definitions from "slab/SLAB" to "slub/SLUB".
>
> Could you please elaborate a little bit?
> I am not sure if I understand what the last two patches of this series
> are useful for.
>
> - Why rename variable/function/macro names?
> - Why move merge related functions from slab_common.c to slub.c?

In my series I have moved functions that were part of allocation/free hot
paths as there should be performance benefits if they are all in the same
compilation unit.

> (I mean merging slab_common.c and slub.c into single file might make sense
> but why move only some parts of one into the other?)

OTOH slub.c becomes quite big, so I think it would make sense to not merge
mm/slab_common.c fully. The non-hot code that's handling e.g. the caches
creation and management, such as what this patch is moving, could certainly
stay away from mm/slub.c. We could just pick a more descriptive name for
slab_common.c.

I'd even investigate if more parts of slub.c could be split out (to a new
file/files) without compromising the hot paths, i.e. sysfs, debugging etc.

2023-11-22 05:22:51

by Song, Xiongwei

Subject: RE: [PATCH 1/4] Documentation: kernel-parameters: remove slab_max_order



> -----Original Message-----
> From: [email protected] <[email protected]> On Behalf Of Hyeonggon
> Yoo
> Sent: Tuesday, November 21, 2023 4:30 PM
> To: [email protected]
> Cc: [email protected]; [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected]; [email protected]; linux-
> [email protected]; [email protected]
> Subject: Re: [PATCH 1/4] Documentation: kernel-parameters: remove slab_max_order
>
> On Mon, Nov 20, 2023 at 6:12 PM <[email protected]> wrote:
> >
> > From: Xiongwei Song <[email protected]>
> >
> > Since the SLAB allocator has already been removed, there are no users of
> > this parameter, so remove it.
> >
> > Signed-off-by: Xiongwei Song <[email protected]>
> > ---
> > Documentation/admin-guide/kernel-parameters.txt | 6 ------
> > 1 file changed, 6 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-
> guide/kernel-parameters.txt
> > index 65731b060e3f..c7709a11f8ce 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -5887,12 +5887,6 @@
> > own.
> > For more information see Documentation/mm/slub.rst.
> >
> > - slab_max_order= [MM, SLAB]
> > - Determines the maximum allowed order for slabs.
> > - A high setting may cause OOMs due to memory
> > - fragmentation. Defaults to 1 for systems with
> > - more than 32MB of RAM, 0 otherwise.
> > -
>
> Good catch!
>
> By the way, I think noaliencache can be removed in this patch as well:

Thanks Hyeonggon. Will do that.

Regards,
Xiongwei

> > noaliencache [MM, NUMA, SLAB] Disables the allocation of alien
> > caches in the slab allocator. Saves per-node memory,
> > but will impact performance.
>
> Thanks,
> Hyeonggon

2023-11-22 05:28:37

by Song, Xiongwei

Subject: RE: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge

Hi Hyeonggon,

> -----Original Message-----
> From: [email protected] <[email protected]> On Behalf Of Hyeonggon
> Yoo
> Sent: Tuesday, November 21, 2023 4:44 PM
> To: [email protected]
> Cc: [email protected]; [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected]; [email protected]; linux-
> [email protected]; [email protected]
> Subject: Re: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge
>
> On Mon, Nov 20, 2023 at 6:12 PM <[email protected]> wrote:
> >
> > From: Xiongwei Song <[email protected]>
> >
> > Since the SLAB allocator has already been removed, we should also remove
> > the related parameters. Also change the global flag from slab_nomerge
> > to slub_nomerge.
>
> No, kernel parameters should be changed only in a backward-compatible
> way (if possible)
>
> Before slab merging was supported in SLAB, only SLUB supported it.
> After commit 423c929cbbec ("mm/slab_common: commonize slab merge logic"), using
> slab_[no]merge parameters for CONFIG_SLUB builds became legal.
>
> I think what the documentation says is "slab_[no]merge enables or
> disables slab merging
> and slub_[no]merge remain supported only for backward compatibility"

Yes. But the SLAB allocator will not exist anymore. Is slab_[no]merge still a
proper name? Will the term "slab/SLAB" still be used in the future?

Regards,
Xiongwei
>
> >
> > Signed-off-by: Xiongwei Song <[email protected]>
> > ---
> > Documentation/admin-guide/kernel-parameters.txt | 11 ++---------
> > mm/Kconfig | 2 +-
> > mm/slab_common.c | 13 +++++--------
> > 3 files changed, 8 insertions(+), 18 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-
> guide/kernel-parameters.txt
> > index c7709a11f8ce..afca9ff7c9f0 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -5870,11 +5870,11 @@
> >
> > slram= [HW,MTD]
> >
> > - slab_merge [MM]
> > + slub_merge [MM]
> > Enable merging of slabs with similar size when the
> > kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
> >
> > - slab_nomerge [MM]
> > + slub_nomerge [MM]
> > Disable merging of slabs with similar size. May be
> > necessary if there is some reason to distinguish
> > allocs to different slabs, especially in hardened
> > @@ -5915,13 +5915,6 @@
> > lower than slub_max_order.
> > For more information see Documentation/mm/slub.rst.
> >
> > - slub_merge [MM, SLUB]
> > - Same with slab_merge.
> > -
> > - slub_nomerge [MM, SLUB]
> > - Same with slab_nomerge. This is supported for legacy.
> > - See slab_nomerge for more information.
> > -
> > smart2= [HW]
> > Format: <io1>[,<io2>[,...,<io8>]]

2023-11-22 05:34:17

by Song, Xiongwei

Subject: RE: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c



> -----Original Message-----
> From: [email protected] <[email protected]> On Behalf Of Vlastimil
> Babka
> Sent: Tuesday, November 21, 2023 6:03 PM
> To: Hyeonggon Yoo <[email protected]>; [email protected]
> Cc: [email protected]; [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected]; linux-
> [email protected]; [email protected]
> Subject: Re: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c
>
> On 11/21/23 09:54, Hyeonggon Yoo wrote:
> > On Mon, Nov 20, 2023 at 6:13 PM <[email protected]> wrote:
> >>
> >> From: Xiongwei Song <[email protected]>
> >>
> >> Since the SLAB allocator has been removed, there are no users of the slab
> >> merge code except SLUB. This commit mostly reverts
> >> commit 423c929cbbec ("mm/slab_common: commonize slab merge logic").
> >>
> >> Also change the prefix of all slab merge related functions, variables and
> >> definitions from "slab/SLAB" to "slub/SLUB".
> >
> > Could you please elaborate a little bit?
> > I am not sure if I understand what the last two patches of this series
> > are useful for.
> >
> > - Why rename variable/function/macro names?
> > - Why move merge related functions from slab_common.c to slub.c?
>
> In my series I have moved functions that were part of allocation/free hot
> paths as there should be performance benefits if they are all in the same
> compilation unit.
>
> > (I mean merging slab_common.c and slub.c into single file might make sense
> > but why move only some parts of one into the other?)
>
> OTOH slub.c becomes quite big, so I think it would make sense to not merge
> mm/slab_common.c fully. The non-hot code that's handling e.g. the caches
> creation and management, such as what this patch is moving, could certainly
> stay away from mm/slub.c. We could just pick a more descriptive name for
> slab_common.c.
>
> I'd even investigate if more parts of slub.c could be split out (to a new
> file/files) without compromising the hot paths, i.e. sysfs, debugging etc.

Ok, sure. Sounds good.

Regards,
Xiongwei

2023-11-22 06:00:02

by Hyeonggon Yoo

Subject: Re: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge

On Wed, Nov 22, 2023 at 2:27 PM Song, Xiongwei
<[email protected]> wrote:
>
> Hi Hyeonggon,
>
> > -----Original Message-----
> > From: [email protected] <[email protected]> On Behalf Of Hyeonggon
> > Yoo
> > Sent: Tuesday, November 21, 2023 4:44 PM
> > To: [email protected]
> > Cc: [email protected]; [email protected]; [email protected]; [email protected];
> > [email protected]; [email protected]; [email protected]; [email protected]; linux-
> > [email protected]; [email protected]
> > Subject: Re: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge
> >
> > On Mon, Nov 20, 2023 at 6:12 PM <[email protected]> wrote:
> > >
> > > From: Xiongwei Song <[email protected]>
> > >
> > > Since the SLAB allocator has already been removed, we should also remove
> > > the related parameters. Also change the global flag from slab_nomerge
> > > to slub_nomerge.
> >
> > No, kernel parameters should be changed only in a backward-compatible
> > way (if possible)
> >
> > Before slab merging was supported in SLAB, only SLUB supported it.
> > After commit 423c929cbbec ("mm/slab_common: commonize slab merge logic"), using
> > slab_[no]merge parameters for CONFIG_SLUB builds became legal.
> >
> > I think what the documentation says is "slab_[no]merge enables or
> > disables slab merging
> > and slub_[no]merge remain supported only for backward compatibility"
>
> Yes. But the SLAB allocator will not exist anymore. Is slab_[no]merge still a
> proper name? Will the term "slab/SLAB" still be used in the future?

Well, why break existing users for no strong reason?

The reason why commit 423c929c did not drop slub_[no]merge after commonization
was to support existing users and avoid breaking what worked before.

Removing slab_max_order made sense because SLAB is gone and it didn't have
any effect on SLUB, but that is not the case for slab_[no]merge.

Also, technically SLUB is an implementation of the slab allocator concept,
so IMHO it is not an improper name.

and (let's say) even if it is improper, I'm not sure if changing
everything would be worth it:
$ git grep 'slab' mm | wc -l
2365

--
Thanks!
Hyeonggon