2021-06-30 13:54:36

by Yee Lee (李建誼)

Subject: [PATCH v3 1/1] kasan: Add memzero init for unaligned size under SLUB debug

From: Yee Lee <[email protected]>

Issue: when SLUB debug is on, the hardware tag-based (hwtag)
kasan_unpoison() would overwrite the redzone of an object whose size
is not aligned to the KASAN granule, because the range it unpoisons
(and, with init, zero-initializes) is rounded up to the granule size.

An additional memzero_explicit() path is added that initializes the
object with its precise size and disables the init done by the hwtag
instruction for such unaligned sizes in SLUB debug mode.

The performance penalty is acceptable since SLUB redzones are only
enabled in debug mode, not in production builds. A comment block is
added to explain this.
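
To illustrate the overlap (illustration only, not part of the patch;
the 16-byte granule and the 20-byte object are hypothetical), a
minimal userspace sketch of the arithmetic:

#include <stdio.h>

#define KASAN_GRANULE_SIZE 16UL
#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)

/* Mirrors round_up(size, KASAN_GRANULE_SIZE) in mm/kasan/kasan.h. */
static unsigned long granule_round_up(unsigned long size)
{
	return (size + KASAN_GRANULE_MASK) & ~KASAN_GRANULE_MASK;
}

int main(void)
{
	unsigned long object_size = 20;	/* unaligned: 20 & 15 != 0 */
	unsigned long init_size = granule_round_up(object_size);

	/* With SLUB debug on, the redzone starts right after the
	 * object, so the rounded-up init range spills into it. */
	printf("object %lu bytes, hwtag init covers %lu bytes\n",
	       object_size, init_size);
	printf("redzone bytes clobbered: %lu\n", init_size - object_size);
	return 0;
}

With SLUB debug on, bytes 20..31 here belong to the redzone, so
letting the hwtag instruction initialize the rounded-up range would
wipe them.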

Signed-off-by: Yee Lee <[email protected]>
Suggested-by: Marco Elver <[email protected]>
Suggested-by: Andrey Konovalov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrew Morton <[email protected]>
---
mm/kasan/kasan.h | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 8f450bc28045..6f698f13dbe6 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -387,6 +387,16 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)

 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
+	/*
+	 * Explicitly initialize the memory with the precise object size
+	 * to avoid overwriting the SLAB redzone. This disables initialization
+	 * in the arch code and may thus lead to performance penalty.
+	 * The penalty is accepted since SLAB redzones aren't enabled in production builds.
+	 */
+	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+		init = false;
+		memzero_explicit((void *)addr, size);
+	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
--
2.18.0


2021-06-30 19:15:03

by Marco Elver

Subject: Re: [PATCH v3 1/1] kasan: Add memzero init for unaligned size under SLUB debug

On Wed, Jun 30, 2021 at 09:49PM +0800, [email protected] wrote:
> From: Yee Lee <[email protected]>
>
> Issue: when SLUB debug is on, the hardware tag-based (hwtag)
> kasan_unpoison() would overwrite the redzone of an object whose size
> is not aligned to the KASAN granule, because the range it unpoisons
> (and, with init, zero-initializes) is rounded up to the granule size.
>
> An additional memzero_explicit() path is added that initializes the
> object with its precise size and disables the init done by the hwtag
> instruction for such unaligned sizes in SLUB debug mode.
>
> The performance penalty is acceptable since SLUB redzones are only
> enabled in debug mode, not in production builds. A comment block is
> added to explain this.
>
> Signed-off-by: Yee Lee <[email protected]>
> Suggested-by: Marco Elver <[email protected]>
> Suggested-by: Andrey Konovalov <[email protected]>
> Cc: Andrey Ryabinin <[email protected]>
> Cc: Alexander Potapenko <[email protected]>
> Cc: Dmitry Vyukov <[email protected]>
> Cc: Andrew Morton <[email protected]>

In future, please add changes to each version after an additional '---'.
Example:

---
v2:
* Use IS_ENABLED(CONFIG_SLUB_DEBUG) in if-statement.

> ---
> mm/kasan/kasan.h | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8f450bc28045..6f698f13dbe6 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -387,6 +387,16 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
>
> 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
> 		return;
> +	/*
> +	 * Explicitly initialize the memory with the precise object size
> +	 * to avoid overwriting the SLAB redzone. This disables initialization
> +	 * in the arch code and may thus lead to performance penalty.
> +	 * The penalty is accepted since SLAB redzones aren't enabled in production builds.
> +	 */

Can we please format the comment properly:

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6f698f13dbe6..1972ec5736cb 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -388,10 +388,10 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 	/*
-	 * Explicitly initialize the memory with the precise object size
-	 * to avoid overwriting the SLAB redzone. This disables initialization
-	 * in the arch code and may thus lead to performance penalty.
-	 * The penalty is accepted since SLAB redzones aren't enabled in production builds.
+	 * Explicitly initialize the memory with the precise object size to
+	 * avoid overwriting the SLAB redzone. This disables initialization in
+	 * the arch code and may thus lead to performance penalty. The penalty
+	 * is accepted since SLAB redzones aren't enabled in production builds.
 	 */
 	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
 		init = false;

> +	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> +		init = false;
> +		memzero_explicit((void *)addr, size);
> +	}
> 	size = round_up(size, KASAN_GRANULE_SIZE);
>
> 	hw_set_mem_tag_range((void *)addr, size, tag, init);

I think this solution might be fine for now, as I don't see an easy way
to do this without some major refactor to use kmem_cache_debug_flags().
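
For reference, a rough sketch of what a per-cache check might look
like; this is hypothetical, since kasan_unpoison() has no kmem_cache
pointer in scope (which is exactly what makes the refactor major), and
SLAB_RED_ZONE is just an example flag:

	/* Hypothetical: only possible if a kmem_cache were available here. */
	if (kmem_cache_debug_flags(cache, SLAB_RED_ZONE) &&
	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
		init = false;
		memzero_explicit((void *)addr, size);
	}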

However, I think there's an intermediate solution where we only check
the static key 'slub_debug_enabled'. I've checked, and various major
distros _do_ enable CONFIG_SLUB_DEBUG, but the static branch makes
sure there's no performance overhead when slub_debug is not actually
enabled at runtime.
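
For context, a simplified sketch of the static-key pattern involved
(the real key lives in mm/slub.c and is enabled from
setup_slub_debug(); the function names below are illustrative):

#include <linux/types.h>
#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);

/* Compiles to a NOP while the key is off and is patched to a jump
 * only when slub_debug is enabled at boot, so the check is
 * effectively free in production. */
static bool slub_debug_active(void)
{
	return static_branch_unlikely(&slub_debug_enabled);
}

/* Boot-time opt-in, e.g. from slub_debug= parameter parsing. */
static void enable_slub_debug(void)
{
	static_branch_enable(&slub_debug_enabled);
}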

Checking the static branch requires including mm/slab.h from
mm/kasan/kasan.h, which we currently don't do and perhaps wanted to
avoid. I don't see a reason to avoid it, though, since there's no
circular dependency even if we did.

Andrey, any opinion?

In case you guys think checking the static key is the better solution, I
think the below would work together with the pre-requisite patch at the
end:

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 1972ec5736cb..9130d025612c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,8 @@
 #include <linux/kfence.h>
 #include <linux/stackdepot.h>
 
+#include "../slab.h"
+
 #ifdef CONFIG_KASAN_HW_TAGS
 
 #include <linux/static_key.h>
@@ -393,7 +395,8 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 	 * the arch code and may thus lead to performance penalty. The penalty
 	 * is accepted since SLAB redzones aren't enabled in production builds.
 	 */
-	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+	if (slub_debug_enabled_unlikely() &&
+	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
 		init = false;
 		memzero_explicit((void *)addr, size);
 	}



[ Note: You can pick the below patch up by extracting it from the email
and running 'git am -s <file>'. You could then use it as part of a patch
series together with your original patch. ]

From: Marco Elver <[email protected]>
Date: Wed, 30 Jun 2021 20:56:57 +0200
Subject: [PATCH] mm: introduce helper to check slub_debug_enabled

Introduce a helper to check slub_debug_enabled, so that we can confine
the use of #ifdef to the definition of the slub_debug_enabled_unlikely()
helper.

Signed-off-by: Marco Elver <[email protected]>
---
mm/slab.h | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 18c1927cd196..9439da434712 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -215,10 +215,18 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
 DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
 extern void print_tracking(struct kmem_cache *s, void *object);
+static inline bool slub_debug_enabled_unlikely(void)
+{
+	return static_branch_unlikely(&slub_debug_enabled);
+}
 #else
 static inline void print_tracking(struct kmem_cache *s, void *object)
 {
 }
+static inline bool slub_debug_enabled_unlikely(void)
+{
+	return false;
+}
 #endif
 
 /*
@@ -228,11 +236,10 @@ static inline void print_tracking(struct kmem_cache *s, void *object)
  */
 static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
 {
-#ifdef CONFIG_SLUB_DEBUG
-	VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
-	if (static_branch_unlikely(&slub_debug_enabled))
+	if (IS_ENABLED(CONFIG_SLUB_DEBUG))
+		VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
+	if (slub_debug_enabled_unlikely())
 		return s->flags & flags;
-#endif
 	return false;
 }

--
2.32.0.93.g670b81a890-goog

2021-07-01 14:42:12

by Andrey Konovalov

Subject: Re: [PATCH v3 1/1] kasan: Add memzero init for unaligned size under SLUB debug

On Wed, Jun 30, 2021 at 10:13 PM Marco Elver <[email protected]> wrote:
>
> > +	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> > +		init = false;
> > +		memzero_explicit((void *)addr, size);
> > +	}
> > 	size = round_up(size, KASAN_GRANULE_SIZE);
> >
> > 	hw_set_mem_tag_range((void *)addr, size, tag, init);
>
> I think this solution might be fine for now, as I don't see an easy way
> to do this without some major refactor to use kmem_cache_debug_flags().
>
> However, I think there's an intermediate solution where we only check
> the static key 'slub_debug_enabled'. I've checked, and various major
> distros _do_ enable CONFIG_SLUB_DEBUG, but the static branch makes
> sure there's no performance overhead when slub_debug is not actually
> enabled at runtime.
>
> Checking the static branch requires including mm/slab.h from
> mm/kasan/kasan.h, which we currently don't do and perhaps wanted to
> avoid. I don't see a reason to avoid it, though, since there's no
> circular dependency even if we did.

Most likely this won't be a problem. We already include ../slab.h into
many mm/kasan/*.c files.

> Andrey, any opinion?

I like this approach. It's easy to implement and better than checking
only CONFIG_SLUB_DEBUG.

Thanks, Marco!