2023-01-16 09:28:20

by Rong Tao

Subject: [PATCH] mm: Functions used internally should not be put into slub_def.h

From: Rong Tao <[email protected]>

Commit 40f3bf0cb04c ("mm: Convert struct page to struct slab in functions
used by other subsystems") introduced uses of 'slab_address()' and
'struct slab' into slab_def.h (CONFIG_SLAB) and slub_def.h (CONFIG_SLUB).
When a module or BPF program includes <linux/slub_def.h>, 'slab_address()'
and 'struct slab' are not declared, resulting in incomplete-type and
undefined-symbol errors (see the bcc slabratetop.py error [0]).

Moving the function definitions that reference 'struct slab' and
'slab_address()', namely nearest_obj(), obj_to_index(), and
objs_per_slab(), into the internal header mm/slab.h solves this
problem.

[0] https://github.com/iovisor/bcc/issues/4438

Signed-off-by: Rong Tao <[email protected]>
---
include/linux/slab_def.h | 33 --------------------
include/linux/slub_def.h | 32 -------------------
mm/slab.h | 66 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 66 insertions(+), 65 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 5834bad8ad78..5658b5fddf9b 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -88,37 +88,4 @@ struct kmem_cache {
struct kmem_cache_node *node[MAX_NUMNODES];
};

-static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
- void *x)
-{
- void *object = x - (x - slab->s_mem) % cache->size;
- void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
-
- if (unlikely(object > last_object))
- return last_object;
- else
- return object;
-}
-
-/*
- * We want to avoid an expensive divide : (offset / cache->size)
- * Using the fact that size is a constant for a particular cache,
- * we can replace (offset / cache->size) by
- * reciprocal_divide(offset, cache->reciprocal_buffer_size)
- */
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
- const struct slab *slab, void *obj)
-{
- u32 offset = (obj - slab->s_mem);
- return reciprocal_divide(offset, cache->reciprocal_buffer_size);
-}
-
-static inline int objs_per_slab(const struct kmem_cache *cache,
- const struct slab *slab)
-{
- if (is_kfence_address(slab_address(slab)))
- return 1;
- return cache->num;
-}
-
#endif /* _LINUX_SLAB_DEF_H */
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index aa0ee1678d29..660fd6b2a748 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -163,36 +163,4 @@ static inline void sysfs_slab_release(struct kmem_cache *s)

void *fixup_red_left(struct kmem_cache *s, void *p);

-static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
- void *x) {
- void *object = x - (x - slab_address(slab)) % cache->size;
- void *last_object = slab_address(slab) +
- (slab->objects - 1) * cache->size;
- void *result = (unlikely(object > last_object)) ? last_object : object;
-
- result = fixup_red_left(cache, result);
- return result;
-}
-
-/* Determine object index from a given position */
-static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
- void *addr, void *obj)
-{
- return reciprocal_divide(kasan_reset_tag(obj) - addr,
- cache->reciprocal_size);
-}
-
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
- const struct slab *slab, void *obj)
-{
- if (is_kfence_address(obj))
- return 0;
- return __obj_to_index(cache, slab_address(slab), obj);
-}
-
-static inline int objs_per_slab(const struct kmem_cache *cache,
- const struct slab *slab)
-{
- return slab->objects;
-}
#endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slab.h b/mm/slab.h
index 7cc432969945..38350a0efa91 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -227,10 +227,76 @@ struct kmem_cache {

#ifdef CONFIG_SLAB
#include <linux/slab_def.h>
+
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
+ void *x)
+{
+ void *object = x - (x - slab->s_mem) % cache->size;
+ void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
+
+ if (unlikely(object > last_object))
+ return last_object;
+ else
+ return object;
+}
+
+/*
+ * We want to avoid an expensive divide : (offset / cache->size)
+ * Using the fact that size is a constant for a particular cache,
+ * we can replace (offset / cache->size) by
+ * reciprocal_divide(offset, cache->reciprocal_buffer_size)
+ */
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+ const struct slab *slab, void *obj)
+{
+ u32 offset = (obj - slab->s_mem);
+ return reciprocal_divide(offset, cache->reciprocal_buffer_size);
+}
+
+static inline int objs_per_slab(const struct kmem_cache *cache,
+ const struct slab *slab)
+{
+ if (is_kfence_address(slab_address(slab)))
+ return 1;
+ return cache->num;
+}
#endif

#ifdef CONFIG_SLUB
#include <linux/slub_def.h>
+
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
+ void *x) {
+ void *object = x - (x - slab_address(slab)) % cache->size;
+ void *last_object = slab_address(slab) +
+ (slab->objects - 1) * cache->size;
+ void *result = (unlikely(object > last_object)) ? last_object : object;
+
+ result = fixup_red_left(cache, result);
+ return result;
+}
+
+/* Determine object index from a given position */
+static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
+ void *addr, void *obj)
+{
+ return reciprocal_divide(kasan_reset_tag(obj) - addr,
+ cache->reciprocal_size);
+}
+
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+ const struct slab *slab, void *obj)
+{
+ if (is_kfence_address(obj))
+ return 0;
+ return __obj_to_index(cache, slab_address(slab), obj);
+}
+
+static inline int objs_per_slab(const struct kmem_cache *cache,
+ const struct slab *slab)
+{
+ return slab->objects;
+}
#endif

#include <linux/memcontrol.h>
--
2.39.0


2023-01-16 12:24:21

by Hyeonggon Yoo

Subject: Re: [PATCH] mm: Functions used internally should not be put into slub_def.h

On Mon, Jan 16, 2023 at 04:50:05PM +0800, Rong Tao wrote:
> From: Rong Tao <[email protected]>
>
> Commit 40f3bf0cb04c ("mm: Convert struct page to struct slab in functions
> used by other subsystems") introduced uses of 'slab_address()' and
> 'struct slab' into slab_def.h (CONFIG_SLAB) and slub_def.h (CONFIG_SLUB).
> When a module or BPF program includes <linux/slub_def.h>, 'slab_address()'
> and 'struct slab' are not declared, resulting in incomplete-type and
> undefined-symbol errors (see the bcc slabratetop.py error [0]).
>

Hello Rong,

IMO sl*b_def.h is not intended to be used externally,
and I'm not sure it's worth a -stable release either.

IIUC, the reason slabratetop.py relies on sl*b_def.h is to read
cachep->name and cachep->size.

I think this could be solved by using a tool that supports the BPF Type
Format (BTF)?
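
For illustration, a minimal libbpf CO-RE sketch (hypothetical, not the
actual slabratetop code) that reads those two fields without including
sl*b_def.h: struct kmem_cache comes from a BTF-generated vmlinux.h and
the field offsets are relocated at load time. The probe name and attach
point below are only examples.

/* Hypothetical CO-RE sketch: kmem_cache layout comes from kernel BTF,
 * so no sl*b_def.h include is needed.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

SEC("kprobe/kmem_cache_alloc")
int BPF_KPROBE(count_alloc, struct kmem_cache *cachep)
{
	char name[32];
	const char *name_ptr;
	unsigned int size;

	/* Offsets of ->name and ->size are CO-RE relocated at load time. */
	name_ptr = BPF_CORE_READ(cachep, name);
	size = BPF_CORE_READ(cachep, size);
	bpf_probe_read_kernel_str(name, sizeof(name), name_ptr);

	bpf_printk("cache %s object size %u", name, size);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";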

> Moving the function definitions that reference 'struct slab' and
> 'slab_address()', namely nearest_obj(), obj_to_index(), and
> objs_per_slab(), into the internal header mm/slab.h solves this
> problem.
>
> [0] https://github.com/iovisor/bcc/issues/4438
>
> Signed-off-by: Rong Tao <[email protected]>

2023-01-17 02:16:40

by Rong Tao

Subject: Re: [PATCH] mm: Functions used internally should not be put into slub_def.h

Hi, dear Hyeonggon.

If sl*b_def.h is not intended to be used externally, does that mean we should
never include sl*b_def.h in a module? The following code no longer compiles
since commit 40f3bf0cb04c ("mm: Convert struct page to struct slab in functions
used by other subsystems"), although it built fine on older kernels. For modules
that use the kmem_cache structure, this is catastrophic.

#include <linux/module.h>	/* needed for module_init()/module_exit() */
#include <linux/init.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/slab_def.h>	/* this include triggers the build failure */

static int kernel_init(void)
{
	printk(KERN_INFO "my init.\n");
	return 0;
}

static void kernel_exit(void)
{
	printk(KERN_INFO "my exit.\n");
}

module_init(kernel_init);
module_exit(kernel_exit);
MODULE_LICENSE("GPL");

You are right: if sl*b_def.h is not intended to be used externally, then modules
should not be designed to rely on the kmem_cache structure in the first place.
Alternatively, the kernel could take this patch, and such modules would compile
successfully again.
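
For illustration only (a minimal sketch, not code from this thread): a module
that merely needs a cache's object size can stay on the public <linux/slab.h>
API instead of reaching into sl*b_def.h internals. The cache name and size
below are made up.

/* Hypothetical sketch: use the public slab API rather than sl*b_def.h. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/printk.h>
#include <linux/slab.h>

static struct kmem_cache *demo_cache;

static int __init demo_init(void)
{
	unsigned int objsize;

	demo_cache = kmem_cache_create("demo_cache", 128, 0, 0, NULL);
	if (!demo_cache)
		return -ENOMEM;

	/* kmem_cache_size() is exported; no need to peek at cachep->size. */
	objsize = kmem_cache_size(demo_cache);
	pr_info("demo_cache object size: %u\n", objsize);
	return 0;
}

static void __exit demo_exit(void)
{
	kmem_cache_destroy(demo_cache);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");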

Best wishes.

Rong Tao

2023-01-17 13:17:52

by Vlastimil Babka

Subject: Re: [PATCH] mm: Functions used internally should not be put into slub_def.h

On 1/17/23 03:01, Rong Tao wrote:
> Hi, dear Hyeonggon.
>
> If sl*b_def.h is not intended to be used externally, does that mean we should
> never include sl*b_def.h in a module? The following code no longer compiles
> since commit 40f3bf0cb04c ("mm: Convert struct page to struct slab in functions
> used by other subsystems"), although it built fine on older kernels. For modules
> that use the kmem_cache structure, this is catastrophic.

If such a module were part of the Linux tree, the breakage would be noticed
sooner and fixed, probably by changing the include to "mm/slab.h"
instead of sl*b_def.h.

...

> You are right: if sl*b_def.h is not intended to be used externally, then modules
> should not be designed to rely on the kmem_cache structure in the first place.
> Alternatively, the kernel could take this patch, and such modules would compile
> successfully again.

For out-of-tree modules we can't make any guarantees. It's best for
those if they don't rely on such specifics. As Hyeonggon pointed out,
there are more robust ways to get the object layout than including kernel
headers. Also, this has been broken since 5.17, which is quite some time
ago. Is the code you're fixing still relevant, or has it already been
replaced with something based on BTF?

> Best wishes.
>
> Rong Tao

2023-01-18 08:22:31

by Rong Tao

Subject: Re: [PATCH] mm: Functions used internally should not be put into slub_def.h

Hi, Vlastimil.

Thank you for your reply. Indeed, BTF can eliminate the data structure
parsing problem, but as far as the current bcc [0] code is concerned,
more surgery may be required.

The bcc problem [1] should probably be solved inside bcc itself using
BTF, and should not require kernel changes. I hope the bcc maintainers
can agree with this. Thanks again.

[0] https://github.com/iovisor/bcc
[1] https://github.com/iovisor/bcc/issues/4438
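
For illustration only (not bcc code): one small building block of such a
BTF-based approach is checking whether the running kernel exposes BTF at
all before taking that path; the helper below is hypothetical.

/* Hypothetical userspace check: CONFIG_DEBUG_INFO_BTF exposes the
 * vmlinux type information at this well-known path.
 */
#include <stdbool.h>
#include <unistd.h>

static bool kernel_btf_available(void)
{
	return access("/sys/kernel/btf/vmlinux", R_OK) == 0;
}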

Best wishes.

Rong Tao

2023-01-21 20:57:38

by Yonghong Song

Subject: Re: [PATCH] mm: Functions used internally should not be put into slub_def.h



On 1/17/23 11:23 PM, Rong Tao wrote:
> Hi, Vlastimil.
>
> Thank you for your reply. Indeed, BTF can eliminate the data structure
> parsing problem, but as far as the current bcc [0] code is concerned,
> more surgery may be required.
>
> The bcc problem [1] should probably be solved inside bcc itself using
> BTF, and should not require kernel changes. I hope the bcc maintainers
> can agree with this. Thanks again.
>
> [0] https://github.com/iovisor/bcc
> [1] https://github.com/iovisor/bcc/issues/4438

Rong, I have added a comment in [1] above. Let us try to resolve
this issue inside bcc. Thanks!

>
> Best wishes.
>
> Rong Tao