2021-12-20 21:59:01

by andrey.konovalov

Subject: [PATCH mm v4 00/39] kasan, vmalloc, arm64: add vmalloc tagging support for SW/HW_TAGS

From: Andrey Konovalov <[email protected]>

Hi,

This patchset adds vmalloc tagging support for SW_TAGS and HW_TAGS
KASAN modes.

The tree with patches is available here:

https://github.com/xairy/linux/tree/up-kasan-vmalloc-tags-v4-akpm

About half of the patches are cleanups I went for along the way. None of
them seem important enough to go through stable, so I decided not to
split them out into a separate patch series.

The patchset is partially based on an early version of the HW_TAGS
patchset by Vincenzo that had vmalloc support. Thus, I added a
Co-developed-by tag to a few patches.

SW_TAGS vmalloc tagging support is straightforward. It reuses all of
the generic KASAN machinery, but uses shadow memory to store tags
instead of magic values. Naturally, vmalloc tagging requires adding
a few kasan_reset_tag() annotations to the vmalloc code.
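
For illustration, here is the sort of annotation this requires (a hunk
from later in this series, simplified): address-range checks must
compare untagged addresses, so the pointer tag is stripped first.

	bool is_vmalloc_addr(const void *x)
	{
		/* Strip the KASAN pointer tag before the range check. */
		unsigned long addr = (unsigned long)kasan_reset_tag(x);

		return addr >= VMALLOC_START && addr < VMALLOC_END;
	}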

HW_TAGS vmalloc tagging support stands out. HW_TAGS KASAN is based on
Arm MTE, which can only assign tags to physical memory. As a result,
HW_TAGS KASAN only tags vmalloc() allocations, which are backed by
page_alloc memory. It ignores vmap() and other non-page_alloc-backed
mappings.
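
Roughly, the intended behavior is as follows (an illustration, not code
from this series; it assumes HW_TAGS KASAN is enabled and the allocation
is not excluded by the later patches in the series):

	void *example(struct page **pages)
	{
		/*
		 * Backed by page_alloc memory: the returned pointer
		 * carries a real MTE tag under HW_TAGS KASAN.
		 */
		void *p = vmalloc(PAGE_SIZE);

		/* Maps caller-provided pages: left untagged. */
		void *q = vmap(pages, 1, VM_MAP, PAGE_KERNEL);

		vunmap(q);
		return p;
	}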

Changes in v3->v4:
- Rebase onto fresh mm.
- Rename KASAN_VMALLOC_NOEXEC to KASAN_VMALLOC_PROT_NORMAL.
- Compare prot with PAGE_KERNEL instead of using pgprot_nx() to
identify normal non-executable mappings.
- Rename arch_vmalloc_pgprot_modify() to arch_vmap_pgprot_tagged().
- Move checks from arch_vmap_pgprot_tagged() to __vmalloc_node_range()
as the same condition is used for other things in subsequent patches.
- Use proper kasan_hw_tags_enabled() checks instead of
IS_ENABLED(CONFIG_KASAN_HW_TAGS).
- Set __GFP_SKIP_KASAN_UNPOISON and __GFP_SKIP_ZERO flags instead of
resetting.
- Only define KASAN GFP flags when HW_TAGS KASAN is enabled.
- Move setting KASAN GFP flags to __vmalloc_node_range() and do it
only for normal non-executable mappings when HW_TAGS KASAN is enabled.
- Add new GFP flags to include/trace/events/mmflags.h.
- Don't forget to save tagged addr to vm_struct->addr for VM_ALLOC
so that find_vm_area(addr)->addr == addr for vmalloc().
- Reset pointer tag in change_memory_common().
- Add test checks for set_memory_*() on vmalloc() allocations.
- Minor patch descriptions and comments fixes.

Changes in v2->v3:
- Rebase onto mm.
- New patch: "kasan, arm64: reset pointer tags of vmapped stacks".
- New patch: "kasan, vmalloc: don't tag executable vmalloc allocations".
- New patch: "kasan, arm64: don't tag executable vmalloc allocations".
- Allowing enabling KASAN_VMALLOC with SW/HW_TAGS is moved to
"kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS", as this can only
be done once executable allocations are no longer tagged.
- Minor fixes, see patches for lists of changes.

Changes in v1->v2:
- Move memory init for vmalloc() into vmalloc code for HW_TAGS KASAN.
- Minor fixes and code reshuffling, see patches for lists of changes.

Thanks!

Andrey Konovalov (39):
kasan, page_alloc: deduplicate should_skip_kasan_poison
kasan, page_alloc: move tag_clear_highpage out of
kernel_init_free_pages
kasan, page_alloc: merge kasan_free_pages into free_pages_prepare
kasan, page_alloc: simplify kasan_poison_pages call site
kasan, page_alloc: init memory of skipped pages on free
kasan: drop skip_kasan_poison variable in free_pages_prepare
mm: clarify __GFP_ZEROTAGS comment
kasan: only apply __GFP_ZEROTAGS when memory is zeroed
kasan, page_alloc: refactor init checks in post_alloc_hook
kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook
kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook
kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook
kasan, page_alloc: move kernel_init_free_pages in post_alloc_hook
kasan, page_alloc: rework kasan_unpoison_pages call site
kasan: clean up metadata byte definitions
kasan: define KASAN_VMALLOC_INVALID for SW_TAGS
kasan, x86, arm64, s390: rename functions for modules shadow
kasan, vmalloc: drop outdated VM_KASAN comment
kasan: reorder vmalloc hooks
kasan: add wrappers for vmalloc hooks
kasan, vmalloc: reset tags in vmalloc functions
kasan, fork: reset pointer tags of vmapped stacks
kasan, arm64: reset pointer tags of vmapped stacks
kasan, vmalloc: add vmalloc tagging for SW_TAGS
kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged
kasan, vmalloc: unpoison VM_ALLOC pages after mapping
kasan, mm: only define ___GFP_SKIP_KASAN_POISON with HW_TAGS
kasan, page_alloc: allow skipping unpoisoning for HW_TAGS
kasan, page_alloc: allow skipping memory init for HW_TAGS
kasan, vmalloc: add vmalloc tagging for HW_TAGS
kasan, vmalloc: only tag normal vmalloc allocations
kasan, arm64: don't tag executable vmalloc allocations
kasan: mark kasan_arg_stacktrace as __initdata
kasan: simplify kasan_init_hw_tags
kasan: add kasan.vmalloc command line flag
kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS
arm64: select KASAN_VMALLOC for SW/HW_TAGS modes
kasan: documentation updates
kasan: improve vmalloc tests

Documentation/dev-tools/kasan.rst | 17 ++-
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/vmalloc.h | 6 +
arch/arm64/include/asm/vmap_stack.h | 5 +-
arch/arm64/kernel/module.c | 5 +-
arch/arm64/mm/pageattr.c | 2 +-
arch/arm64/net/bpf_jit_comp.c | 3 +-
arch/s390/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/gfp.h | 38 ++++--
include/linux/kasan.h | 97 ++++++++------
include/linux/vmalloc.h | 18 ++-
include/trace/events/mmflags.h | 15 ++-
kernel/fork.c | 1 +
kernel/scs.c | 4 +-
lib/Kconfig.kasan | 20 +--
lib/test_kasan.c | 189 +++++++++++++++++++++++++++-
mm/kasan/common.c | 4 +-
mm/kasan/hw_tags.c | 167 +++++++++++++++++++-----
mm/kasan/kasan.h | 16 ++-
mm/kasan/shadow.c | 63 ++++++----
mm/page_alloc.c | 157 ++++++++++++++++-------
mm/vmalloc.c | 99 ++++++++++++---
23 files changed, 721 insertions(+), 211 deletions(-)

--
2.25.1



2021-12-20 21:59:04

by andrey.konovalov

Subject: [PATCH mm v4 01/39] kasan, page_alloc: deduplicate should_skip_kasan_poison

From: Andrey Konovalov <[email protected]>

Currently, should_skip_kasan_poison() has two definitions: one for when
CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, one for when it's not.

Instead of duplicating the checks, add a deferred_pages_enabled()
helper and use it in a single should_skip_kasan_poison() definition.

Also move should_skip_kasan_poison() closer to its caller and clarify
all conditions in the comment.

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Update patch description.
---
mm/page_alloc.c | 55 +++++++++++++++++++++++++++++--------------------
1 file changed, 33 insertions(+), 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index edfd6c81af82..f0bcecac19cd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -377,25 +377,9 @@ int page_group_by_mobility_disabled __read_mostly;
*/
static DEFINE_STATIC_KEY_TRUE(deferred_pages);

-/*
- * Calling kasan_poison_pages() only after deferred memory initialization
- * has completed. Poisoning pages during deferred memory init will greatly
- * lengthen the process and cause problem in large memory systems as the
- * deferred pages initialization is done with interrupt disabled.
- *
- * Assuming that there will be no reference to those newly initialized
- * pages before they are ever allocated, this should have no effect on
- * KASAN memory tracking as the poison will be properly inserted at page
- * allocation time. The only corner case is when pages are allocated by
- * on-demand allocation and then freed again before the deferred pages
- * initialization is done, but this is not likely to happen.
- */
-static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+static inline bool deferred_pages_enabled(void)
{
- return static_branch_unlikely(&deferred_pages) ||
- (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
- PageSkipKASanPoison(page);
+ return static_branch_unlikely(&deferred_pages);
}

/* Returns true if the struct page for the pfn is uninitialised */
@@ -446,11 +430,9 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
return false;
}
#else
-static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+static inline bool deferred_pages_enabled(void)
{
- return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
- PageSkipKASanPoison(page);
+ return false;
}

static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1270,6 +1252,35 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
return ret;
}

+/*
+ * Skip KASAN memory poisoning when either:
+ *
+ * 1. Deferred memory initialization has not yet completed,
+ * see the explanation below.
+ * 2. Skipping poisoning is requested via FPI_SKIP_KASAN_POISON,
+ * see the comment next to it.
+ * 3. Skipping poisoning is requested via __GFP_SKIP_KASAN_POISON,
+ * see the comment next to it.
+ *
+ * Poisoning pages during deferred memory init will greatly lengthen the
+ * process and cause problem in large memory systems as the deferred pages
+ * initialization is done with interrupt disabled.
+ *
+ * Assuming that there will be no reference to those newly initialized
+ * pages before they are ever allocated, this should have no effect on
+ * KASAN memory tracking as the poison will be properly inserted at page
+ * allocation time. The only corner case is when pages are allocated by
+ * on-demand allocation and then freed again before the deferred pages
+ * initialization is done, but this is not likely to happen.
+ */
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
+{
+ return deferred_pages_enabled() ||
+ (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+ PageSkipKASanPoison(page);
+}
+
static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
{
int i;
--
2.25.1


2021-12-20 21:59:06

by andrey.konovalov

Subject: [PATCH mm v4 02/39] kasan, page_alloc: move tag_clear_highpage out of kernel_init_free_pages

From: Andrey Konovalov <[email protected]>

Currently, kernel_init_free_pages() serves two purposes: it either only
zeroes memory or zeroes both memory and memory tags via a different
code path. As this function has only two callers, each using only one
code path, this behaviour is confusing.

Pull the code that zeroes both memory and tags out of
kernel_init_free_pages().

As a result of this change, the code in free_pages_prepare() starts to
look complicated, but this is improved in the next few patches.
Those improvements are not integrated into this patch to make the diffs
easier to read.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

---

Changes v2->v3:
- Update patch description.
---
mm/page_alloc.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f0bcecac19cd..7c2b29483b53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1281,16 +1281,10 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
PageSkipKASanPoison(page);
}

-static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
+static void kernel_init_free_pages(struct page *page, int numpages)
{
int i;

- if (zero_tags) {
- for (i = 0; i < numpages; i++)
- tag_clear_highpage(page + i);
- return;
- }
-
/* s390's use of memset() could override KASAN redzones. */
kasan_disable_current();
for (i = 0; i < numpages; i++) {
@@ -1386,7 +1380,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
bool init = want_init_on_free();

if (init)
- kernel_init_free_pages(page, 1 << order, false);
+ kernel_init_free_pages(page, 1 << order);
if (!skip_kasan_poison)
kasan_poison_pages(page, order, init);
}
@@ -2429,9 +2423,17 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);

kasan_unpoison_pages(page, order, init);
- if (init)
- kernel_init_free_pages(page, 1 << order,
- gfp_flags & __GFP_ZEROTAGS);
+
+ if (init) {
+ if (gfp_flags & __GFP_ZEROTAGS) {
+ int i;
+
+ for (i = 0; i < 1 << order; i++)
+ tag_clear_highpage(page + i);
+ } else {
+ kernel_init_free_pages(page, 1 << order);
+ }
+ }
}

set_page_owner(page, order, gfp_flags);
--
2.25.1


2021-12-20 21:59:09

by andrey.konovalov

Subject: [PATCH mm v4 03/39] kasan, page_alloc: merge kasan_free_pages into free_pages_prepare

From: Andrey Konovalov <[email protected]>

Currently, the code responsible for initializing and poisoning memory
in free_pages_prepare() is scattered across two locations:
kasan_free_pages() for HW_TAGS KASAN and free_pages_prepare() itself.
This is confusing.

This and a few following patches combine the code from these two
locations. Along the way, these patches also simplify the performed
checks to make them easier to follow.

Replace the only caller of kasan_free_pages() with its implementation.

As kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS
is enabled, moving the code causes no functional changes.

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

---

Changes v2->v3:
- Update patch description.
---
include/linux/kasan.h | 8 --------
mm/kasan/common.c | 2 +-
mm/kasan/hw_tags.c | 11 -----------
mm/page_alloc.c | 6 ++++--
4 files changed, 5 insertions(+), 22 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4a45562d8893..a8bfe9f157c9 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -96,7 +96,6 @@ static inline bool kasan_hw_tags_enabled(void)
}

void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
-void kasan_free_pages(struct page *page, unsigned int order);

#else /* CONFIG_KASAN_HW_TAGS */

@@ -117,13 +116,6 @@ static __always_inline void kasan_alloc_pages(struct page *page,
BUILD_BUG();
}

-static __always_inline void kasan_free_pages(struct page *page,
- unsigned int order)
-{
- /* Only available for integrated init. */
- BUILD_BUG();
-}
-
#endif /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_has_integrated_init(void)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 92196562687b..a0082fad48b1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -387,7 +387,7 @@ static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
}

/*
- * The object will be poisoned by kasan_free_pages() or
+ * The object will be poisoned by kasan_poison_pages() or
* kasan_slab_free_mempool().
*/

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 7355cb534e4f..0b8225add2e4 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -213,17 +213,6 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
}
}

-void kasan_free_pages(struct page *page, unsigned int order)
-{
- /*
- * This condition should match the one in free_pages_prepare() in
- * page_alloc.c.
- */
- bool init = want_init_on_free();
-
- kasan_poison_pages(page, order, init);
-}
-
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c2b29483b53..740fb01a27ed 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1367,15 +1367,17 @@ static __always_inline bool free_pages_prepare(struct page *page,

/*
* As memory initialization might be integrated into KASAN,
- * kasan_free_pages and kernel_init_free_pages must be
+ * KASAN poisoning and memory initialization code must be
* kept together to avoid discrepancies in behavior.
*
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
if (kasan_has_integrated_init()) {
+ bool init = want_init_on_free();
+
if (!skip_kasan_poison)
- kasan_free_pages(page, order);
+ kasan_poison_pages(page, order, init);
} else {
bool init = want_init_on_free();

--
2.25.1


2021-12-20 21:59:12

by andrey.konovalov

Subject: [PATCH mm v4 04/39] kasan, page_alloc: simplify kasan_poison_pages call site

From: Andrey Konovalov <[email protected]>

Simplify the code around calling kasan_poison_pages() in
free_pages_prepare().

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

---

Changes v1->v2:
- Don't reorder kasan_poison_pages() and free_pages_prepare().
---
mm/page_alloc.c | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 740fb01a27ed..db8cecdd0aaa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1301,6 +1301,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
{
int bad = 0;
bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
+ bool init = want_init_on_free();

VM_BUG_ON_PAGE(PageTail(page), page);

@@ -1373,19 +1374,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (kasan_has_integrated_init()) {
- bool init = want_init_on_free();
-
- if (!skip_kasan_poison)
- kasan_poison_pages(page, order, init);
- } else {
- bool init = want_init_on_free();
-
- if (init)
- kernel_init_free_pages(page, 1 << order);
- if (!skip_kasan_poison)
- kasan_poison_pages(page, order, init);
- }
+ if (init && !kasan_has_integrated_init())
+ kernel_init_free_pages(page, 1 << order);
+ if (!skip_kasan_poison)
+ kasan_poison_pages(page, order, init);

/*
* arch_free_page() can make the page's contents inaccessible. s390
--
2.25.1


2021-12-20 21:59:14

by andrey.konovalov

Subject: [PATCH mm v4 05/39] kasan, page_alloc: init memory of skipped pages on free

From: Andrey Konovalov <[email protected]>

Since commit 7a3b83537188 ("kasan: use separate (un)poison implementation
for integrated init"), when init, kasan_has_integrated_init(), and
skip_kasan_poison are all true, free_pages_prepare() doesn't initialize
the page. This is wrong.

Fix it by remembering whether kasan_poison_pages() performed
initialization, and calling kernel_init_free_pages() if it didn't.

Reordering kasan_poison_pages() and kernel_init_free_pages() is OK,
since kernel_init_free_pages() can handle poisoned memory.

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Drop Fixes tag, as the patch won't cleanly apply to older kernels
anyway. The commit is mentioned in the patch description.

Changes v1->v2:
- Reorder kasan_poison_pages() and free_pages_prepare() in this patch
instead of doing it in the previous one.
---
mm/page_alloc.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index db8cecdd0aaa..114d6b010331 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1374,11 +1374,16 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (init && !kasan_has_integrated_init())
- kernel_init_free_pages(page, 1 << order);
- if (!skip_kasan_poison)
+ if (!skip_kasan_poison) {
kasan_poison_pages(page, order, init);

+ /* Memory is already initialized if KASAN did it internally. */
+ if (kasan_has_integrated_init())
+ init = false;
+ }
+ if (init)
+ kernel_init_free_pages(page, 1 << order);
+
/*
* arch_free_page() can make the page's contents inaccessible. s390
* does this. So nothing which can access the page's contents should
--
2.25.1


2021-12-20 21:59:17

by andrey.konovalov

Subject: [PATCH mm v4 06/39] kasan: drop skip_kasan_poison variable in free_pages_prepare

From: Andrey Konovalov <[email protected]>

skip_kasan_poison is only used in a single place.
Call should_skip_kasan_poison() directly for simplicity.

Signed-off-by: Andrey Konovalov <[email protected]>
Suggested-by: Marco Elver <[email protected]>

---

Changes v1->v2:
- Add this patch.
---
mm/page_alloc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 114d6b010331..73280222e0e8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1300,7 +1300,6 @@ static __always_inline bool free_pages_prepare(struct page *page,
unsigned int order, bool check_free, fpi_t fpi_flags)
{
int bad = 0;
- bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
bool init = want_init_on_free();

VM_BUG_ON_PAGE(PageTail(page), page);
@@ -1374,7 +1373,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
* With hardware tag-based KASAN, memory tags must be set before the
* page becomes unavailable via debug_pagealloc or arch_free_page.
*/
- if (!skip_kasan_poison) {
+ if (!should_skip_kasan_poison(page, fpi_flags)) {
kasan_poison_pages(page, order, init);

/* Memory is already initialized if KASAN did it internally. */
--
2.25.1


2021-12-20 21:59:19

by andrey.konovalov

Subject: [PATCH mm v4 07/39] mm: clarify __GFP_ZEROTAGS comment

From: Andrey Konovalov <[email protected]>

__GFP_ZEROTAGS is intended as an optimization: if memory is zeroed during
allocation, it's possible to set memory tags at the same time with little
performance impact.

Clarify this intention of __GFP_ZEROTAGS in the comment.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/gfp.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0b2d2a636164..d6a184523ca2 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -228,8 +228,8 @@ struct vm_area_struct;
*
* %__GFP_ZERO returns a zeroed page on success.
*
- * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
- * __GFP_ZERO is set.
+ * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
+ * is being zeroed (either via __GFP_ZERO or via init_on_alloc).
*
* %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
* on deallocation. Typically used for userspace pages. Currently only has an
--
2.25.1


2021-12-20 21:59:23

by andrey.konovalov

Subject: [PATCH mm v4 08/39] kasan: only apply __GFP_ZEROTAGS when memory is zeroed

From: Andrey Konovalov <[email protected]>

__GFP_ZEROTAGS should only be effective if memory is being zeroed.
Currently, hardware tag-based KASAN violates this requirement.

Fix by including an initialization check along with checking for
__GFP_ZEROTAGS.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
---
mm/kasan/hw_tags.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 0b8225add2e4..c643740b8599 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -199,11 +199,12 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
* page_alloc.c.
*/
bool init = !want_init_on_free() && want_init_on_alloc(flags);
+ bool init_tags = init && (flags & __GFP_ZEROTAGS);

if (flags & __GFP_SKIP_KASAN_POISON)
SetPageSkipKASanPoison(page);

- if (flags & __GFP_ZEROTAGS) {
+ if (init_tags) {
int i;

for (i = 0; i != 1 << order; ++i)
--
2.25.1


2021-12-20 21:59:31

by andrey.konovalov

Subject: [PATCH mm v4 10/39] kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook

From: Andrey Konovalov <[email protected]>

Currently, the code responsible for initializing and poisoning memory in
post_alloc_hook() is scattered across two locations: kasan_alloc_pages()
hook for HW_TAGS KASAN and post_alloc_hook() itself. This is confusing.

This and a few following patches combine the code from these two
locations. Along the way, these patches restructure the performed
checks step by step to make them easier to follow.

Replace the only caller of kasan_alloc_pages() with its implementation.

As kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS
is enabled, moving the code causes no functional changes.

Also move the init and init_tags variable definitions out of the
kasan_has_integrated_init() clause in post_alloc_hook(), as they have
the same values regardless of what the if condition evaluates to.

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Update patch description.
---
include/linux/kasan.h | 9 ---------
mm/kasan/common.c | 2 +-
mm/kasan/hw_tags.c | 22 ----------------------
mm/page_alloc.c | 20 +++++++++++++++-----
4 files changed, 16 insertions(+), 37 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index a8bfe9f157c9..b88ca6b97ba3 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -95,8 +95,6 @@ static inline bool kasan_hw_tags_enabled(void)
return kasan_enabled();
}

-void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
-
#else /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_enabled(void)
@@ -109,13 +107,6 @@ static inline bool kasan_hw_tags_enabled(void)
return false;
}

-static __always_inline void kasan_alloc_pages(struct page *page,
- unsigned int order, gfp_t flags)
-{
- /* Only available for integrated init. */
- BUILD_BUG();
-}
-
#endif /* CONFIG_KASAN_HW_TAGS */

static inline bool kasan_has_integrated_init(void)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a0082fad48b1..d9079ec11f31 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -538,7 +538,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
return NULL;

/*
- * The object has already been unpoisoned by kasan_alloc_pages() for
+ * The object has already been unpoisoned by kasan_unpoison_pages() for
* alloc_pages() or by kasan_krealloc() for krealloc().
*/

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index c643740b8599..76cf2b6229c7 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -192,28 +192,6 @@ void __init kasan_init_hw_tags(void)
kasan_stack_collection_enabled() ? "on" : "off");
}

-void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
-{
- /*
- * This condition should match the one in post_alloc_hook() in
- * page_alloc.c.
- */
- bool init = !want_init_on_free() && want_init_on_alloc(flags);
- bool init_tags = init && (flags & __GFP_ZEROTAGS);
-
- if (flags & __GFP_SKIP_KASAN_POISON)
- SetPageSkipKASanPoison(page);
-
- if (init_tags) {
- int i;
-
- for (i = 0; i != 1 << order; ++i)
- tag_clear_highpage(page + i);
- } else {
- kasan_unpoison_pages(page, order, init);
- }
-}
-
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9ecdf2124ac1..a2e32a8abd7f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2397,6 +2397,9 @@ static bool check_new_pages(struct page *page, unsigned int order)
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
+ bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
+
set_page_private(page, 0);
set_page_refcounted(page);

@@ -2412,15 +2415,22 @@ inline void post_alloc_hook(struct page *page, unsigned int order,

/*
* As memory initialization might be integrated into KASAN,
- * kasan_alloc_pages and kernel_init_free_pages must be
+ * KASAN unpoisoning and memory initializion code must be
* kept together to avoid discrepancies in behavior.
*/
if (kasan_has_integrated_init()) {
- kasan_alloc_pages(page, order, gfp_flags);
- } else {
- bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
- bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
+ if (gfp_flags & __GFP_SKIP_KASAN_POISON)
+ SetPageSkipKASanPoison(page);
+
+ if (init_tags) {
+ int i;

+ for (i = 0; i != 1 << order; ++i)
+ tag_clear_highpage(page + i);
+ } else {
+ kasan_unpoison_pages(page, order, init);
+ }
+ } else {
kasan_unpoison_pages(page, order, init);

if (init_tags) {
--
2.25.1


2021-12-20 21:59:42

by andrey.konovalov

Subject: [PATCH mm v4 09/39] kasan, page_alloc: refactor init checks in post_alloc_hook

From: Andrey Konovalov <[email protected]>

Separate the code for zeroing memory from the code for clearing tags in
post_alloc_hook().

This patch is not useful by itself but makes the simplifications in
the following patches easier to follow.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

---

Changes v2->v3:
- Update patch description.
---
mm/page_alloc.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73280222e0e8..9ecdf2124ac1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2419,19 +2419,21 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
kasan_alloc_pages(page, order, gfp_flags);
} else {
bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);

kasan_unpoison_pages(page, order, init);

- if (init) {
- if (gfp_flags & __GFP_ZEROTAGS) {
- int i;
+ if (init_tags) {
+ int i;

- for (i = 0; i < 1 << order; i++)
- tag_clear_highpage(page + i);
- } else {
- kernel_init_free_pages(page, 1 << order);
- }
+ for (i = 0; i < 1 << order; i++)
+ tag_clear_highpage(page + i);
+
+ init = false;
}
+
+ if (init)
+ kernel_init_free_pages(page, 1 << order);
}

set_page_owner(page, order, gfp_flags);
--
2.25.1


2021-12-20 22:00:02

by andrey.konovalov

Subject: [PATCH mm v4 11/39] kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook

From: Andrey Konovalov <[email protected]>

Move tag_clear_highpage() loops out of the kasan_has_integrated_init()
clause as a code simplification.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

---

Changes v2->v3:
- Update patch description.
---
mm/page_alloc.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a2e32a8abd7f..2d1e63a01ed8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2418,30 +2418,30 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
* KASAN unpoisoning and memory initializion code must be
* kept together to avoid discrepancies in behavior.
*/
+
+ /*
+ * If memory tags should be zeroed (which happens only when memory
+ * should be initialized as well).
+ */
+ if (init_tags) {
+ int i;
+
+ /* Initialize both memory and tags. */
+ for (i = 0; i != 1 << order; ++i)
+ tag_clear_highpage(page + i);
+
+ /* Note that memory is already initialized by the loop above. */
+ init = false;
+ }
if (kasan_has_integrated_init()) {
if (gfp_flags & __GFP_SKIP_KASAN_POISON)
SetPageSkipKASanPoison(page);

- if (init_tags) {
- int i;
-
- for (i = 0; i != 1 << order; ++i)
- tag_clear_highpage(page + i);
- } else {
+ if (!init_tags)
kasan_unpoison_pages(page, order, init);
- }
} else {
kasan_unpoison_pages(page, order, init);

- if (init_tags) {
- int i;
-
- for (i = 0; i < 1 << order; i++)
- tag_clear_highpage(page + i);
-
- init = false;
- }
-
if (init)
kernel_init_free_pages(page, 1 << order);
}
--
2.25.1


2021-12-20 22:00:08

by andrey.konovalov

Subject: [PATCH mm v4 12/39] kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook

From: Andrey Konovalov <[email protected]>

Pull the SetPageSkipKASanPoison() call in post_alloc_hook() out of the
big if clause for better code readability. This also allows for more
simplifications in the following patches.

Also turn the kasan_has_integrated_init() check into the proper
kasan_hw_tags_enabled() one. These checks evaluate to the same value,
but logically, skipping KASAN poisoning has nothing to do with
integrated init.

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- Use proper kasan_hw_tags_enabled() check instead of
IS_ENABLED(CONFIG_KASAN_HW_TAGS).
---
mm/page_alloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d1e63a01ed8..076c43f369b4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2434,9 +2434,6 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
init = false;
}
if (kasan_has_integrated_init()) {
- if (gfp_flags & __GFP_SKIP_KASAN_POISON)
- SetPageSkipKASanPoison(page);
-
if (!init_tags)
kasan_unpoison_pages(page, order, init);
} else {
@@ -2445,6 +2442,9 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
if (init)
kernel_init_free_pages(page, 1 << order);
}
+ /* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
+ if (kasan_hw_tags_enabled() && (gfp_flags & __GFP_SKIP_KASAN_POISON))
+ SetPageSkipKASanPoison(page);

set_page_owner(page, order, gfp_flags);
page_table_check_alloc(page, order);
--
2.25.1


2021-12-20 22:00:11

by andrey.konovalov

Subject: [PATCH mm v4 13/39] kasan, page_alloc: move kernel_init_free_pages in post_alloc_hook

From: Andrey Konovalov <[email protected]>

Pull the kernel_init_free_pages() call in post_alloc_hook() out of the
big if clause for better code readability. This also allows for more
simplifications in the following patch.

This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
---
mm/page_alloc.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 076c43f369b4..205884e3520b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2434,14 +2434,18 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
init = false;
}
if (kasan_has_integrated_init()) {
- if (!init_tags)
+ if (!init_tags) {
kasan_unpoison_pages(page, order, init);
+
+ /* Note that memory is already initialized by KASAN. */
+ init = false;
+ }
} else {
kasan_unpoison_pages(page, order, init);
-
- if (init)
- kernel_init_free_pages(page, 1 << order);
}
+ /* If memory is still not initialized, do it now. */
+ if (init)
+ kernel_init_free_pages(page, 1 << order);
/* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
if (kasan_hw_tags_enabled() && (gfp_flags & __GFP_SKIP_KASAN_POISON))
SetPageSkipKASanPoison(page);
--
2.25.1


2021-12-20 22:00:14

by andrey.konovalov

Subject: [PATCH mm v4 14/39] kasan, page_alloc: rework kasan_unpoison_pages call site

From: Andrey Konovalov <[email protected]>

Rework the checks around kasan_unpoison_pages() call in
post_alloc_hook().

The logical condition for calling this function is:

- If a software KASAN mode is enabled, we need to mark shadow memory.
- Otherwise, HW_TAGS KASAN is enabled, and it only makes sense to
set tags if they haven't already been cleared by tag_clear_highpage(),
which is indicated by init_tags.

This patch concludes the changes for post_alloc_hook().

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- Make the condition checks more explicit.
- Update patch description.
---
mm/page_alloc.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 205884e3520b..2ef0f531e881 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2433,15 +2433,20 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/* Note that memory is already initialized by the loop above. */
init = false;
}
- if (kasan_has_integrated_init()) {
- if (!init_tags) {
- kasan_unpoison_pages(page, order, init);
+ /*
+ * If either a software KASAN mode is enabled, or,
+ * in the case of hardware tag-based KASAN,
+ * if memory tags have not been cleared via tag_clear_highpage().
+ */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+ IS_ENABLED(CONFIG_KASAN_SW_TAGS) ||
+ kasan_hw_tags_enabled() && !init_tags) {
+ /* Mark shadow memory or set memory tags. */
+ kasan_unpoison_pages(page, order, init);

- /* Note that memory is already initialized by KASAN. */
+ /* Note that memory is already initialized by KASAN. */
+ if (kasan_has_integrated_init())
init = false;
- }
- } else {
- kasan_unpoison_pages(page, order, init);
}
/* If memory is still not initialized, do it now. */
if (init)
--
2.25.1


2021-12-20 22:00:30

by andrey.konovalov

Subject: [PATCH mm v4 15/39] kasan: clean up metadata byte definitions

From: Andrey Konovalov <[email protected]>

Most of the metadata byte values are only used for Generic KASAN.

Remove the KASAN_KMALLOC_FREETRACK definition for the
!CONFIG_KASAN_GENERIC case, and put it, along with the other
Generic-mode metadata values, under a corresponding ifdef.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
---
mm/kasan/kasan.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index c17fa8d26ffe..952cd6f9ca46 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -71,15 +71,16 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
-#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#else
#define KASAN_FREE_PAGE KASAN_TAG_INVALID
#define KASAN_PAGE_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_FREE KASAN_TAG_INVALID
-#define KASAN_KMALLOC_FREETRACK KASAN_TAG_INVALID
#endif

+#ifdef CONFIG_KASAN_GENERIC
+
+#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#define KASAN_GLOBAL_REDZONE 0xF9 /* redzone for global variable */
#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */

@@ -110,6 +111,8 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_ABI_VERSION 1
#endif

+#endif /* CONFIG_KASAN_GENERIC */
+
/* Metadata layout customization. */
#define META_BYTES_PER_BLOCK 1
#define META_BLOCKS_PER_ROW 16
--
2.25.1


2021-12-20 22:00:39

by andrey.konovalov

Subject: [PATCH mm v4 16/39] kasan: define KASAN_VMALLOC_INVALID for SW_TAGS

From: Andrey Konovalov <[email protected]>

In preparation for adding vmalloc support to SW_TAGS KASAN,
provide a KASAN_VMALLOC_INVALID definition for it.

HW_TAGS KASAN won't be using this value, as it falls back onto
page_alloc for poisoning freed vmalloc() memory.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 952cd6f9ca46..020f3e57a03f 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -71,18 +71,19 @@ static inline bool kasan_sync_fault_possible(void)
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */
#else
#define KASAN_FREE_PAGE KASAN_TAG_INVALID
#define KASAN_PAGE_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_REDZONE KASAN_TAG_INVALID
#define KASAN_KMALLOC_FREE KASAN_TAG_INVALID
+#define KASAN_VMALLOC_INVALID KASAN_TAG_INVALID /* only for SW_TAGS */
#endif

#ifdef CONFIG_KASAN_GENERIC

#define KASAN_KMALLOC_FREETRACK 0xFA /* object was freed and has free track set */
#define KASAN_GLOBAL_REDZONE 0xF9 /* redzone for global variable */
-#define KASAN_VMALLOC_INVALID 0xF8 /* unallocated space in vmapped page */

/*
* Stack redzone shadow values
--
2.25.1


2021-12-20 22:00:45

by andrey.konovalov

Subject: [PATCH mm v4 18/39] kasan, vmalloc: drop outdated VM_KASAN comment

From: Andrey Konovalov <[email protected]>

The comment about VM_KASAN in include/linux/vmalloc.h is outdated.
VM_KASAN is currently only used to mark vm_areas allocated for
kernel modules when CONFIG_KASAN_VMALLOC is disabled.

Drop the comment.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
---
include/linux/vmalloc.h | 11 -----------
1 file changed, 11 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index cde400a9fd87..34ac66a656d4 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -35,17 +35,6 @@ struct notifier_block; /* in notifier.h */
#define VM_DEFER_KMEMLEAK 0
#endif

-/*
- * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
- *
- * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
- * shadow memory has been mapped. It's used to handle allocation errors so that
- * we don't try to poison shadow on free if it was never allocated.
- *
- * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
- * determine which allocations need the module shadow freed.
- */
-
/* bits [20..32] reserved for arch specific ioremap internals */

/*
--
2.25.1


2021-12-20 22:00:47

by andrey.konovalov

Subject: [PATCH mm v4 17/39] kasan, x86, arm64, s390: rename functions for modules shadow

From: Andrey Konovalov <[email protected]>

Rename kasan_free_shadow to kasan_free_module_shadow and
kasan_module_alloc to kasan_alloc_module_shadow.

These functions are used to allocate/free shadow memory for kernel
modules when KASAN_VMALLOC is not enabled. The new names better
reflect their purpose.

Also reword the comment next to their declaration to improve clarity.

Signed-off-by: Andrey Konovalov <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
---
arch/arm64/kernel/module.c | 2 +-
arch/s390/kernel/module.c | 2 +-
arch/x86/kernel/module.c | 2 +-
include/linux/kasan.h | 14 +++++++-------
mm/kasan/shadow.c | 4 ++--
mm/vmalloc.c | 2 +-
6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 309a27553c87..d3a1fa818348 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -58,7 +58,7 @@ void *module_alloc(unsigned long size)
PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));

- if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) {
vfree(p);
return NULL;
}
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index d52d85367bf7..b16bebd9a8b9 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -45,7 +45,7 @@ void *module_alloc(unsigned long size)
p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR, MODULES_END,
gfp_mask, PAGE_KERNEL_EXEC, VM_DEFER_KMEMLEAK, NUMA_NO_NODE,
__builtin_return_address(0));
- if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) {
vfree(p);
return NULL;
}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 95fa745e310a..c9eb8aa3b7b8 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -78,7 +78,7 @@ void *module_alloc(unsigned long size)
MODULES_END, gfp_mask,
PAGE_KERNEL, VM_DEFER_KMEMLEAK, NUMA_NO_NODE,
__builtin_return_address(0));
- if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
+ if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) {
vfree(p);
return NULL;
}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b88ca6b97ba3..55f1d4edf6b5 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -454,17 +454,17 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
!defined(CONFIG_KASAN_VMALLOC)

/*
- * These functions provide a special case to support backing module
- * allocations with real shadow memory. With KASAN vmalloc, the special
- * case is unnecessary, as the work is handled in the generic case.
+ * These functions allocate and free shadow memory for kernel modules.
+ * They are only required when KASAN_VMALLOC is not supported, as otherwise
+ * shadow memory is allocated by the generic vmalloc handlers.
*/
-int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask);
-void kasan_free_shadow(const struct vm_struct *vm);
+int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask);
+void kasan_free_module_shadow(const struct vm_struct *vm);

#else /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */

-static inline int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask) { return 0; }
-static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+static inline int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask) { return 0; }
+static inline void kasan_free_module_shadow(const struct vm_struct *vm) {}

#endif /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 94136f84b449..e5c4393eb861 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -498,7 +498,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,

#else /* CONFIG_KASAN_VMALLOC */

-int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask)
+int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
{
void *ret;
size_t scaled_size;
@@ -534,7 +534,7 @@ int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask)
return -ENOMEM;
}

-void kasan_free_shadow(const struct vm_struct *vm)
+void kasan_free_module_shadow(const struct vm_struct *vm)
{
if (vm->flags & VM_KASAN)
vfree(kasan_mem_to_shadow(vm->addr));
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index eb6e527a6b77..10011a07231d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2525,7 +2525,7 @@ struct vm_struct *remove_vm_area(const void *addr)
va->vm = NULL;
spin_unlock(&vmap_area_lock);

- kasan_free_shadow(vm);
+ kasan_free_module_shadow(vm);
free_unmap_vmap_area(va);

return vm;
--
2.25.1


2021-12-20 22:00:49

by andrey.konovalov

Subject: [PATCH mm v4 19/39] kasan: reorder vmalloc hooks

From: Andrey Konovalov <[email protected]>

Group functions that [de]populate shadow memory for vmalloc.
Group functions that [un]poison memory for vmalloc.

This patch makes no functional changes but prepares the KASAN code for
adding vmalloc support to HW_TAGS KASAN.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
---
include/linux/kasan.h | 20 +++++++++-----------
mm/kasan/shadow.c | 43 ++++++++++++++++++++++---------------------
2 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 55f1d4edf6b5..46a63374c86f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -418,34 +418,32 @@ static inline void kasan_init_hw_tags(void) { }

#ifdef CONFIG_KASAN_VMALLOC

+void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
-void kasan_poison_vmalloc(const void *start, unsigned long size);
-void kasan_unpoison_vmalloc(const void *start, unsigned long size);
void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+void kasan_unpoison_vmalloc(const void *start, unsigned long size);
+void kasan_poison_vmalloc(const void *start, unsigned long size);

#else /* CONFIG_KASAN_VMALLOC */

+static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size) { }
static inline int kasan_populate_vmalloc(unsigned long start,
unsigned long size)
{
return 0;
}
-
-static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
-{ }
-static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{ }
static inline void kasan_release_vmalloc(unsigned long start,
unsigned long end,
unsigned long free_region_start,
- unsigned long free_region_end) {}
+ unsigned long free_region_end) { }

-static inline void kasan_populate_early_vm_area_shadow(void *start,
- unsigned long size)
+static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{ }
+static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

#endif /* CONFIG_KASAN_VMALLOC */
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index e5c4393eb861..bf7ab62fbfb9 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -345,27 +345,6 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
return 0;
}

-/*
- * Poison the shadow for a vmalloc region. Called as part of the
- * freeing process at the time the region is freed.
- */
-void kasan_poison_vmalloc(const void *start, unsigned long size)
-{
- if (!is_vmalloc_or_module_addr(start))
- return;
-
- size = round_up(size, KASAN_GRANULE_SIZE);
- kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
-}
-
-void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{
- if (!is_vmalloc_or_module_addr(start))
- return;
-
- kasan_unpoison(start, size, false);
-}
-
static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
void *unused)
{
@@ -496,6 +475,28 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

+
+void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ kasan_unpoison(start, size, false);
+}
+
+/*
+ * Poison the shadow for a vmalloc region. Called as part of the
+ * freeing process at the time the region is freed.
+ */
+void kasan_poison_vmalloc(const void *start, unsigned long size)
+{
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ size = round_up(size, KASAN_GRANULE_SIZE);
+ kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
+}
+
#else /* CONFIG_KASAN_VMALLOC */

int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
--
2.25.1


2021-12-20 22:00:53

by andrey.konovalov

Subject: [PATCH mm v4 21/39] kasan, vmalloc: reset tags in vmalloc functions

From: Andrey Konovalov <[email protected]>

In preparation for adding vmalloc support to SW/HW_TAGS KASAN,
reset pointer tags in functions that use pointer values in
range checks.

vread() is a special case here. Despite the untagging of the addr
pointer in its prologue, the accesses performed by vread() are checked.

Instead of accessing the virtual mapping through addr directly, vread()
recovers the backing page's linear-map address via
page_address(vmalloc_to_page()) and accesses that. And as page_address()
recovers the pointer tag, the accesses get checked.
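
A simplified sketch of the copy step (variable names taken from vread()'s
aligned_vread() helper, but simplified; the real code goes through
kmap_atomic()):

	struct page *p = vmalloc_to_page(addr);	/* addr was untagged above */

	if (p) {
		/*
		 * The pointer obtained for the page carries the page's
		 * KASAN tag, so this access is still checked.
		 */
		memcpy(buf, page_address(p) + offset, length);
	}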

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- Clarified the description of untagging in vread().
---
mm/vmalloc.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 10011a07231d..eaacdf3abfa7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -73,7 +73,7 @@ static const bool vmap_allow_huge = false;

bool is_vmalloc_addr(const void *x)
{
- unsigned long addr = (unsigned long)x;
+ unsigned long addr = (unsigned long)kasan_reset_tag(x);

return addr >= VMALLOC_START && addr < VMALLOC_END;
}
@@ -631,7 +631,7 @@ int is_vmalloc_or_module_addr(const void *x)
* just put it in the vmalloc space.
*/
#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
- unsigned long addr = (unsigned long)x;
+ unsigned long addr = (unsigned long)kasan_reset_tag(x);
if (addr >= MODULES_VADDR && addr < MODULES_END)
return 1;
#endif
@@ -805,6 +805,8 @@ static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
struct vmap_area *va = NULL;
struct rb_node *n = vmap_area_root.rb_node;

+ addr = (unsigned long)kasan_reset_tag((void *)addr);
+
while (n) {
struct vmap_area *tmp;

@@ -826,6 +828,8 @@ static struct vmap_area *__find_vmap_area(unsigned long addr)
{
struct rb_node *n = vmap_area_root.rb_node;

+ addr = (unsigned long)kasan_reset_tag((void *)addr);
+
while (n) {
struct vmap_area *va;

@@ -2144,7 +2148,7 @@ EXPORT_SYMBOL_GPL(vm_unmap_aliases);
void vm_unmap_ram(const void *mem, unsigned int count)
{
unsigned long size = (unsigned long)count << PAGE_SHIFT;
- unsigned long addr = (unsigned long)mem;
+ unsigned long addr = (unsigned long)kasan_reset_tag(mem);
struct vmap_area *va;

might_sleep();
@@ -3406,6 +3410,8 @@ long vread(char *buf, char *addr, unsigned long count)
unsigned long buflen = count;
unsigned long n;

+ addr = kasan_reset_tag(addr);
+
/* Don't allow overflow */
if ((unsigned long) addr + count < count)
count = -(unsigned long) addr;
--
2.25.1


2021-12-20 22:00:58

by andrey.konovalov

Subject: [PATCH mm v4 20/39] kasan: add wrappers for vmalloc hooks

From: Andrey Konovalov <[email protected]>

Add wrappers around functions that [un]poison memory for vmalloc
allocations. These functions will be used by HW_TAGS KASAN and
therefore need to be disabled when the kasan=off command line argument
is provided.

This patch makes no functional changes for software KASAN modes.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 17 +++++++++++++++--
mm/kasan/shadow.c | 5 ++---
2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 46a63374c86f..da320069e7cf 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -424,8 +424,21 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void kasan_unpoison_vmalloc(const void *start, unsigned long size);
-void kasan_poison_vmalloc(const void *start, unsigned long size);
+void __kasan_unpoison_vmalloc(const void *start, unsigned long size);
+static __always_inline void kasan_unpoison_vmalloc(const void *start,
+ unsigned long size)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_vmalloc(start, size);
+}
+
+void __kasan_poison_vmalloc(const void *start, unsigned long size);
+static __always_inline void kasan_poison_vmalloc(const void *start,
+ unsigned long size)
+{
+ if (kasan_enabled())
+ __kasan_poison_vmalloc(start, size);
+}

#else /* CONFIG_KASAN_VMALLOC */

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index bf7ab62fbfb9..39d0b32ebf70 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,8 +475,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-
-void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void __kasan_unpoison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
return;
@@ -488,7 +487,7 @@ void kasan_unpoison_vmalloc(const void *start, unsigned long size)
* Poison the shadow for a vmalloc region. Called as part of the
* freeing process at the time the region is freed.
*/
-void kasan_poison_vmalloc(const void *start, unsigned long size)
+void __kasan_poison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
return;
--
2.25.1


2021-12-20 22:01:30

by andrey.konovalov

Subject: [PATCH mm v4 22/39] kasan, fork: reset pointer tags of vmapped stacks

From: Andrey Konovalov <[email protected]>

Once tag-based KASAN modes start tagging vmalloc() allocations,
kernel stacks start getting tagged if CONFIG_VMAP_STACK is enabled.

Reset the tag of kernel stack pointers after allocation in
alloc_thread_stack_node().

For SW_TAGS KASAN, when CONFIG_KASAN_STACK is enabled, the
instrumentation can't handle the SP register being tagged.

For HW_TAGS KASAN, there are no instrumentation-related issues. However,
the impact of having a tagged SP register needs to be properly evaluated,
so keep it non-tagged for now.

Note that the memory for the stack allocation still gets tagged to
catch vmalloc-into-stack out-of-bounds accesses.

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Update patch description.
---
kernel/fork.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 403b9dbbfb62..4125373dba4e 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -254,6 +254,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
* so cache the vm_struct.
*/
if (stack) {
+ stack = kasan_reset_tag(stack);
tsk->stack_vm_area = find_vm_area(stack);
tsk->stack = stack;
}
--
2.25.1


2021-12-20 22:01:35

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 23/39] kasan, arm64: reset pointer tags of vmapped stacks

From: Andrey Konovalov <[email protected]>

Once tag-based KASAN modes start tagging vmalloc() allocations,
kernel stacks start getting tagged if CONFIG_VMAP_STACK is enabled.

Reset the tag of kernel stack pointers after allocation in
arch_alloc_vmap_stack().

For SW_TAGS KASAN, when CONFIG_KASAN_STACK is enabled, the
instrumentation can't handle the SP register being tagged.

For HW_TAGS KASAN, there are no instrumentation-related issues. However,
the impact of having a tagged SP register needs to be properly evaluated,
so keep it non-tagged for now.

Note that the memory for the stack allocation still gets tagged to
catch vmalloc-into-stack out-of-bounds accesses.

Signed-off-by: Andrey Konovalov <[email protected]>
Acked-by: Catalin Marinas <[email protected]>

---

Changes v2->v3:
- Add this patch.
---
arch/arm64/include/asm/vmap_stack.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/vmap_stack.h b/arch/arm64/include/asm/vmap_stack.h
index 894e031b28d2..20873099c035 100644
--- a/arch/arm64/include/asm/vmap_stack.h
+++ b/arch/arm64/include/asm/vmap_stack.h
@@ -17,10 +17,13 @@
*/
static inline unsigned long *arch_alloc_vmap_stack(size_t stack_size, int node)
{
+ void *p;
+
BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));

- return __vmalloc_node(stack_size, THREAD_ALIGN, THREADINFO_GFP, node,
+ p = __vmalloc_node(stack_size, THREAD_ALIGN, THREADINFO_GFP, node,
__builtin_return_address(0));
+ return kasan_reset_tag(p);
}

#endif /* __ASM_VMAP_STACK_H */
--
2.25.1


2021-12-20 22:01:40

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 24/39] kasan, vmalloc: add vmalloc tagging for SW_TAGS

From: Andrey Konovalov <[email protected]>

Add vmalloc tagging support to SW_TAGS KASAN.

- __kasan_unpoison_vmalloc() now assigns a random pointer tag, tags
the virtual mapping accordingly, and embeds the tag into the returned
pointer.

- __get_vm_area_node() (used by vmalloc() and vmap()) and
pcpu_get_vm_areas() save the tagged pointer into vm_struct->addr
(note: not into vmap_area->addr). This requires putting
kasan_unpoison_vmalloc() after setup_vmalloc_vm[_locked]();
otherwise the latter will overwrite the tagged pointer.
The tagged pointer is then naturally propagated to vmalloc()
and vmap().

- vm_map_ram() returns the tagged pointer directly.

As a result of this change, vm_struct->addr is now tagged.

Enabling KASAN_VMALLOC with SW_TAGS is not yet allowed.
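
For illustration, the following hypothetical snippet holds under SW_TAGS
KASAN after this change, since vm_struct->addr stores the same tagged
pointer that vmalloc() returns:

	void *p = vmalloc(PAGE_SIZE);
	struct vm_struct *area = find_vm_area(p);

	/* Both pointers carry the same KASAN tag in their top byte. */
	WARN_ON(area->addr != p);
	vfree(p);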

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Drop accidentally added kasan_unpoison_vmalloc() argument for when
KASAN is off.
- Drop __must_check for kasan_unpoison_vmalloc(), as its result is
sometimes intentionally ignored.
- Move allowing enabling KASAN_VMALLOC with SW_TAGS into a separate
patch.
- Update patch description.

Changes v1->v2:
- Allow enabling KASAN_VMALLOC with SW_TAGS in this patch.
---
include/linux/kasan.h | 16 ++++++++++------
mm/kasan/shadow.c | 6 ++++--
mm/vmalloc.c | 14 ++++++++------
3 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index da320069e7cf..92c5dfa29a35 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -424,12 +424,13 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void __kasan_unpoison_vmalloc(const void *start, unsigned long size);
-static __always_inline void kasan_unpoison_vmalloc(const void *start,
- unsigned long size)
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size);
+static __always_inline void *kasan_unpoison_vmalloc(const void *start,
+ unsigned long size)
{
if (kasan_enabled())
- __kasan_unpoison_vmalloc(start, size);
+ return __kasan_unpoison_vmalloc(start, size);
+ return (void *)start;
}

void __kasan_poison_vmalloc(const void *start, unsigned long size);
@@ -454,8 +455,11 @@ static inline void kasan_release_vmalloc(unsigned long start,
unsigned long free_region_start,
unsigned long free_region_end) { }

-static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
-{ }
+static inline void *kasan_unpoison_vmalloc(const void *start,
+ unsigned long size)
+{
+ return (void *)start;
+}
static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 39d0b32ebf70..5a866f6663fc 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,12 +475,14 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-void __kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size)
{
if (!is_vmalloc_or_module_addr(start))
- return;
+ return (void *)start;

+ start = set_tag(start, kasan_random_tag());
kasan_unpoison(start, size, false);
+ return (void *)start;
}

/*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index eaacdf3abfa7..c0985f74c0c1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2209,7 +2209,7 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
mem = (void *)addr;
}

- kasan_unpoison_vmalloc(mem, size);
+ mem = kasan_unpoison_vmalloc(mem, size);

if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
pages, PAGE_SHIFT) < 0) {
@@ -2442,10 +2442,10 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
return NULL;
}

- kasan_unpoison_vmalloc((void *)va->va_start, requested_size);
-
setup_vmalloc_vm(area, va, flags, caller);

+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+
return area;
}

@@ -3797,9 +3797,6 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
for (area = 0; area < nr_vms; area++) {
if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
goto err_free_shadow;
-
- kasan_unpoison_vmalloc((void *)vas[area]->va_start,
- sizes[area]);
}

/* insert all vm's */
@@ -3812,6 +3809,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
}
spin_unlock(&vmap_area_lock);

+ /* mark allocated areas as accessible */
+ for (area = 0; area < nr_vms; area++)
+ vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
+ vms[area]->size);
+
kfree(vas);
return vms;

--
2.25.1


2021-12-20 22:02:22

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 25/39] kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged

From: Andrey Konovalov <[email protected]>

HW_TAGS KASAN relies on ARM Memory Tagging Extension (MTE). With MTE,
a memory region must be mapped as MT_NORMAL_TAGGED to allow setting
memory tags via MTE-specific instructions.

Add proper protection bits to vmalloc() allocations. These allocations
are always backed by page_alloc pages, so the tags will actually be
getting set on the corresponding physical memory.
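
For reference, on arm64 the new arch_vmap_pgprot_tagged() hook boils down
to switching the memory attribute index to MT_NORMAL_TAGGED, roughly:

	pgprot_t prot = arch_vmap_pgprot_tagged(PAGE_KERNEL);
	/* On arm64, prot is now equivalent to PAGE_KERNEL_TAGGED. */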

Signed-off-by: Andrey Konovalov <[email protected]>
Co-developed-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>

---

Changes v3->v4:
- Rename arch_vmalloc_pgprot_modify() to arch_vmap_pgprot_tagged()
to be consistent with other arch vmalloc hooks.
- Move checks from arch_vmap_pgprot_tagged() to __vmalloc_node_range()
as the same condition is used for other things in subsequent patches.

Changes v2->v3:
- Update patch description.
---
arch/arm64/include/asm/vmalloc.h | 6 ++++++
include/linux/vmalloc.h | 7 +++++++
mm/vmalloc.c | 9 +++++++++
3 files changed, 22 insertions(+)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index b9185503feae..38fafffe699f 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -25,4 +25,10 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)

#endif

+#define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
+static inline pgprot_t arch_vmap_pgprot_tagged(pgprot_t prot)
+{
+ return pgprot_tagged(prot);
+}
+
#endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 34ac66a656d4..0dc02a688207 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -115,6 +115,13 @@ static inline int arch_vmap_pte_supported_shift(unsigned long size)
}
#endif

+#ifndef arch_vmap_pgprot_tagged
+static inline pgprot_t arch_vmap_pgprot_tagged(pgprot_t prot)
+{
+ return prot;
+}
+#endif
+
/*
* Highlevel APIs for driver use
*/
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c0985f74c0c1..388a17c01376 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3102,6 +3102,15 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
goto fail;
}

+ /*
+ * Modify protection bits to allow tagging.
+ * This must be done before mapping by __vmalloc_area_node().
+ */
+ if (kasan_hw_tags_enabled() &&
+ pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
+ prot = arch_vmap_pgprot_tagged(prot);
+
+ /* Allocate physical pages and map them into vmalloc space. */
addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
if (!addr)
goto fail;
--
2.25.1


2021-12-20 22:02:24

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 26/39] kasan, vmalloc: unpoison VM_ALLOC pages after mapping

From: Andrey Konovalov <[email protected]>

Make KASAN unpoison vmalloc mappings after they have been mapped in
when possible: for vmalloc() (identified via VM_ALLOC) and
vm_map_ram().

The reasons for this are:

- For vmalloc() and vm_map_ram(): pages don't get unpoisoned in case
mapping them fails.
- For vmalloc(): HW_TAGS KASAN needs pages to be mapped to set tags via
kasan_unpoison_vmalloc().

As part of these changes, the return value of __vmalloc_node_range()
is changed to area->addr. This is a non-functional change, as
__vmalloc_area_node() returns area->addr anyway.

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- Don't forget to save tagged addr to vm_struct->addr for VM_ALLOC
so that find_vm_area(addr)->addr == addr for vmalloc().
- Reword comments.
- Update patch description.

Changes v2->v3:
- Update patch description.
---
mm/vmalloc.c | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 388a17c01376..cc23e181b0ec 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2209,14 +2209,15 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
mem = (void *)addr;
}

- mem = kasan_unpoison_vmalloc(mem, size);
-
if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
pages, PAGE_SHIFT) < 0) {
vm_unmap_ram(mem, count);
return NULL;
}

+ /* Mark the pages as accessible, now that they are mapped. */
+ mem = kasan_unpoison_vmalloc(mem, size);
+
return mem;
}
EXPORT_SYMBOL(vm_map_ram);
@@ -2444,7 +2445,14 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,

setup_vmalloc_vm(area, va, flags, caller);

- area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+ /*
+ * Mark pages for non-VM_ALLOC mappings as accessible. Do it now as a
+ * best-effort approach, as they can be mapped outside of vmalloc code.
+ * For VM_ALLOC mappings, the pages are marked as accessible after
+ * getting mapped in __vmalloc_node_range().
+ */
+ if (!(flags & VM_ALLOC))
+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);

return area;
}
@@ -3049,7 +3057,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
const void *caller)
{
struct vm_struct *area;
- void *addr;
+ void *ret;
unsigned long real_size = size;
unsigned long real_align = align;
unsigned int shift = PAGE_SHIFT;
@@ -3111,10 +3119,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
prot = arch_vmap_pgprot_tagged(prot);

/* Allocate physical pages and map them into vmalloc space. */
- addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
- if (!addr)
+ ret = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
+ if (!ret)
goto fail;

+ /* Mark the pages as accessible, now that they are mapped. */
+ area->addr = kasan_unpoison_vmalloc(area->addr, real_size);
+
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -3126,7 +3137,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!(vm_flags & VM_DEFER_KMEMLEAK))
kmemleak_vmalloc(area, size, gfp_mask);

- return addr;
+ return area->addr;

fail:
if (shift > PAGE_SHIFT) {
@@ -3818,7 +3829,10 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
}
spin_unlock(&vmap_area_lock);

- /* mark allocated areas as accessible */
+ /*
+ * Mark allocated areas as accessible. Do it now as a best-effort
+ * approach, as they can be mapped outside of vmalloc code.
+ */
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
vms[area]->size);
--
2.25.1


2021-12-20 22:02:27

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 27/39] kasan, mm: only define ___GFP_SKIP_KASAN_POISON with HW_TAGS

From: Andrey Konovalov <[email protected]>

Only define the ___GFP_SKIP_KASAN_POISON flag when CONFIG_KASAN_HW_TAGS
is enabled.

This patch is not useful by itself, but it prepares the code for the
addition of new KASAN-specific GFP flags.
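
For example, with CONFIG_KASAN_HW_TAGS=y and CONFIG_LOCKDEP=y the layout
defined in the diff below stays dense:

	___GFP_SKIP_KASAN_POISON = 0x1000000u	/* bit 24 */
	___GFP_NOLOCKDEP         = 0x2000000u	/* bit 25 */
	__GFP_BITS_SHIFT         = 24 + 1 + 1 = 26

With both options disabled, __GFP_BITS_SHIFT drops back to 24 and neither
bit is reserved.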

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- This is a new patch.
---
include/linux/gfp.h | 8 +++++++-
include/trace/events/mmflags.h | 12 +++++++++---
2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index d6a184523ca2..22709fcc4d3a 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -54,7 +54,11 @@ struct vm_area_struct;
#define ___GFP_THISNODE 0x200000u
#define ___GFP_ACCOUNT 0x400000u
#define ___GFP_ZEROTAGS 0x800000u
+#ifdef CONFIG_KASAN_HW_TAGS
#define ___GFP_SKIP_KASAN_POISON 0x1000000u
+#else
+#define ___GFP_SKIP_KASAN_POISON 0
+#endif
#ifdef CONFIG_LOCKDEP
#define ___GFP_NOLOCKDEP 0x2000000u
#else
@@ -245,7 +249,9 @@ struct vm_area_struct;
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)

/* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (24 + \
+ IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
+ IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))

/**
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 30f492256b8c..414bf4367283 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -48,12 +48,18 @@
{(unsigned long)__GFP_RECLAIM, "__GFP_RECLAIM"}, \
{(unsigned long)__GFP_DIRECT_RECLAIM, "__GFP_DIRECT_RECLAIM"},\
{(unsigned long)__GFP_KSWAPD_RECLAIM, "__GFP_KSWAPD_RECLAIM"},\
- {(unsigned long)__GFP_ZEROTAGS, "__GFP_ZEROTAGS"}, \
- {(unsigned long)__GFP_SKIP_KASAN_POISON,"__GFP_SKIP_KASAN_POISON"}\
+ {(unsigned long)__GFP_ZEROTAGS, "__GFP_ZEROTAGS"} \
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define __def_gfpflag_names_kasan \
+ , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"}
+#else
+#define __def_gfpflag_names_kasan
+#endif

#define show_gfp_flags(flags) \
(flags) ? __print_flags(flags, "|", \
- __def_gfpflag_names \
+ __def_gfpflag_names __def_gfpflag_names_kasan \
) : "none"

#ifdef CONFIG_MMU
--
2.25.1


2021-12-20 22:02:31

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 28/39] kasan, page_alloc: allow skipping unpoisoning for HW_TAGS

From: Andrey Konovalov <[email protected]>

Add a new GFP flag __GFP_SKIP_KASAN_UNPOISON that allows skipping KASAN
unpoisoning for page_alloc allocations. The flag is only effective with
HW_TAGS KASAN.

This flag will be used by vmalloc code for page_alloc allocations
backing vmalloc() mappings in a following patch. The reason to skip
KASAN unpoisoning for these pages in page_alloc is because vmalloc code
will be unpoisoning them instead.

Also reword the comment for __GFP_SKIP_KASAN_POISON.
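
As an illustration (hypothetical caller, not part of this patch), a user
that assigns memory tags itself could allocate the backing pages with:

	struct page *page;

	/* Skip KASAN unpoisoning; the caller tags the memory later. */
	page = alloc_pages(GFP_KERNEL | __GFP_SKIP_KASAN_UNPOISON, 0);

For any mode other than HW_TAGS KASAN, the flag has no effect.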

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- Only define __GFP_SKIP_KASAN_POISON when CONFIG_KASAN_HW_TAGS is enabled.

Changes v2->v3:
- Update patch description.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/gfp.h | 18 ++++++++++++------
include/trace/events/mmflags.h | 4 +++-
mm/page_alloc.c | 31 ++++++++++++++++++++++---------
3 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 22709fcc4d3a..600f0749c3f2 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -55,12 +55,14 @@ struct vm_area_struct;
#define ___GFP_ACCOUNT 0x400000u
#define ___GFP_ZEROTAGS 0x800000u
#ifdef CONFIG_KASAN_HW_TAGS
-#define ___GFP_SKIP_KASAN_POISON 0x1000000u
+#define ___GFP_SKIP_KASAN_UNPOISON 0x1000000u
+#define ___GFP_SKIP_KASAN_POISON 0x2000000u
#else
+#define ___GFP_SKIP_KASAN_UNPOISON 0
#define ___GFP_SKIP_KASAN_POISON 0
#endif
#ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP 0x2000000u
+#define ___GFP_NOLOCKDEP 0x4000000u
#else
#define ___GFP_NOLOCKDEP 0
#endif
@@ -235,21 +237,25 @@ struct vm_area_struct;
* %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
* is being zeroed (either via __GFP_ZERO or via init_on_alloc).
*
- * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
- * on deallocation. Typically used for userspace pages. Currently only has an
- * effect in HW tags mode.
+ * %__GFP_SKIP_KASAN_UNPOISON makes KASAN skip unpoisoning on page allocation.
+ * Only effective in HW_TAGS mode.
+ *
+ * %__GFP_SKIP_KASAN_POISON makes KASAN skip poisoning on page deallocation.
+ * Typically, used for userspace pages. Only effective in HW_TAGS mode.
*/
#define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
#define __GFP_COMP ((__force gfp_t)___GFP_COMP)
#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
-#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)
+#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON)
+#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)

/* Disable lockdep for GFP context tracking */
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)

/* Room for N __GFP_FOO bits */
#define __GFP_BITS_SHIFT (24 + \
+ IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 414bf4367283..1329d9c4df56 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -52,7 +52,9 @@

#ifdef CONFIG_KASAN_HW_TAGS
#define __def_gfpflag_names_kasan \
- , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"}
+ , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"} \
+ , {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, \
+ "__GFP_SKIP_KASAN_UNPOISON"}
#else
#define __def_gfpflag_names_kasan
#endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2ef0f531e881..2076b5cc7e2c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2394,6 +2394,26 @@ static bool check_new_pages(struct page *page, unsigned int order)
return false;
}

+static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
+{
+ /* Don't skip if a software KASAN mode is enabled. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+ IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ return false;
+
+ /* Skip, if hardware tag-based KASAN is not enabled. */
+ if (!kasan_hw_tags_enabled())
+ return true;
+
+ /*
+ * With hardware tag-based KASAN enabled, skip if either:
+ *
+ * 1. Memory tags have already been cleared via tag_clear_highpage().
+ * 2. Skipping has been requested via __GFP_SKIP_KASAN_UNPOISON.
+ */
+ return init_tags || (flags & __GFP_SKIP_KASAN_UNPOISON);
+}
+
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
@@ -2433,15 +2453,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/* Note that memory is already initialized by the loop above. */
init = false;
}
- /*
- * If either a software KASAN mode is enabled, or,
- * in the case of hardware tag-based KASAN,
- * if memory tags have not been cleared via tag_clear_highpage().
- */
- if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
- IS_ENABLED(CONFIG_KASAN_SW_TAGS) ||
- kasan_hw_tags_enabled() && !init_tags) {
- /* Mark shadow memory or set memory tags. */
+ if (!should_skip_kasan_unpoison(gfp_flags, init_tags)) {
+ /* Unpoison shadow memory or set memory tags. */
kasan_unpoison_pages(page, order, init);

/* Note that memory is already initialized by KASAN. */
--
2.25.1


2021-12-20 22:02:35

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 29/39] kasan, page_alloc: allow skipping memory init for HW_TAGS

From: Andrey Konovalov <[email protected]>

Add a new GFP flag __GFP_SKIP_ZERO that allows skipping memory
initialization. The flag is only effective with HW_TAGS KASAN.

This flag will be used by vmalloc code for page_alloc allocations
backing vmalloc() mappings in a following patch. The reason to skip
memory initialization for these pages in page_alloc is because vmalloc
code will be initializing them instead.

With the current implementation, when __GFP_SKIP_ZERO is provided,
__GFP_ZEROTAGS is ignored. This doesn't matter, as these two flags are
never provided at the same time. However, if this is changed in the
future, this particular implementation detail can be changed as well.
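
To illustrate the effect (hypothetical setup: HW_TAGS KASAN enabled,
init_on_alloc=1, init_on_free=0), a page allocated with __GFP_SKIP_ZERO
now resolves in post_alloc_hook() as:

	init = !want_init_on_free() &&		/* true */
	       want_init_on_alloc(gfp_flags) &&	/* true */
	       !should_skip_init(gfp_flags);	/* false: __GFP_SKIP_ZERO set */

Here init evaluates to false, so page_alloc skips zeroing and leaves
initialization to the caller.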

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- Only define __GFP_SKIP_ZERO when CONFIG_KASAN_HW_TAGS is enabled.
- Add __GFP_SKIP_ZERO to include/trace/events/mmflags.h.
- Use proper kasan_hw_tags_enabled() check instead of
IS_ENABLED(CONFIG_KASAN_HW_TAGS). Also add explicit checks for
software modes.

Changes v2->v3:
- Update patch description.

Changes v1->v2:
- Add this patch.
---
include/linux/gfp.h | 16 ++++++++++++----
include/trace/events/mmflags.h | 1 +
mm/page_alloc.c | 18 +++++++++++++++++-
3 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 600f0749c3f2..c7ebc93296ed 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -55,14 +55,16 @@ struct vm_area_struct;
#define ___GFP_ACCOUNT 0x400000u
#define ___GFP_ZEROTAGS 0x800000u
#ifdef CONFIG_KASAN_HW_TAGS
-#define ___GFP_SKIP_KASAN_UNPOISON 0x1000000u
-#define ___GFP_SKIP_KASAN_POISON 0x2000000u
+#define ___GFP_SKIP_ZERO 0x1000000u
+#define ___GFP_SKIP_KASAN_UNPOISON 0x2000000u
+#define ___GFP_SKIP_KASAN_POISON 0x4000000u
#else
+#define ___GFP_SKIP_ZERO 0
#define ___GFP_SKIP_KASAN_UNPOISON 0
#define ___GFP_SKIP_KASAN_POISON 0
#endif
#ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP 0x4000000u
+#define ___GFP_NOLOCKDEP 0x8000000u
#else
#define ___GFP_NOLOCKDEP 0
#endif
@@ -235,7 +237,11 @@ struct vm_area_struct;
* %__GFP_ZERO returns a zeroed page on success.
*
* %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
- * is being zeroed (either via __GFP_ZERO or via init_on_alloc).
+ * is being zeroed (either via __GFP_ZERO or via init_on_alloc, provided that
+ * __GFP_SKIP_ZERO is not set).
+ *
+ * %__GFP_SKIP_ZERO makes page_alloc skip zeroing memory.
+ * Only effective when HW_TAGS KASAN is enabled.
*
* %__GFP_SKIP_KASAN_UNPOISON makes KASAN skip unpoisoning on page allocation.
* Only effective in HW_TAGS mode.
@@ -247,6 +253,7 @@ struct vm_area_struct;
#define __GFP_COMP ((__force gfp_t)___GFP_COMP)
#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO)
#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON)
#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)

@@ -255,6 +262,7 @@ struct vm_area_struct;

/* Room for N __GFP_FOO bits */
#define __GFP_BITS_SHIFT (24 + \
+ IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
IS_ENABLED(CONFIG_LOCKDEP))
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 1329d9c4df56..f18eeb5fdde2 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -52,6 +52,7 @@

#ifdef CONFIG_KASAN_HW_TAGS
#define __def_gfpflag_names_kasan \
+ , {(unsigned long)__GFP_SKIP_ZERO, "__GFP_SKIP_ZERO"} \
, {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"} \
, {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, \
"__GFP_SKIP_KASAN_UNPOISON"}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2076b5cc7e2c..5e22068d4acb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2414,10 +2414,26 @@ static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
return init_tags || (flags & __GFP_SKIP_KASAN_UNPOISON);
}

+static inline bool should_skip_init(gfp_t flags)
+{
+ /* Don't skip if a software KASAN mode is enabled. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+ IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ return false;
+
+ /* Don't skip, if hardware tag-based KASAN is not enabled. */
+ if (!kasan_hw_tags_enabled())
+ return false;
+
+ /* For hardware tag-based KASAN, skip if requested. */
+ return (flags & __GFP_SKIP_ZERO);
+}
+
inline void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags)
{
- bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+ bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
+ !should_skip_init(gfp_flags);
bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);

set_page_private(page, 0);
--
2.25.1


2021-12-20 22:02:40

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 30/39] kasan, vmalloc: add vmalloc tagging for HW_TAGS

From: Andrey Konovalov <[email protected]>

Add vmalloc tagging support to HW_TAGS KASAN.

The key difference between HW_TAGS and the other two KASAN modes
when it comes to vmalloc: HW_TAGS KASAN can only assign tags to
physical memory. The other two modes have shadow memory covering
every mapped virtual memory region.

Make __kasan_unpoison_vmalloc() for HW_TAGS KASAN:

- Skip non-VM_ALLOC mappings as HW_TAGS KASAN can only tag a single
mapping of normal physical memory; see the comment in the function.
- Generate a random tag, tag the returned pointer and the allocation,
and initialize the allocation at the same time.
- Propagate the tag into the page structs to allow accesses through
page_address(vmalloc_to_page()).

The rest of vmalloc-related KASAN hooks are not needed:

- The shadow-related ones are fully skipped.
- __kasan_poison_vmalloc() is kept as a no-op with a comment.

Unpoisoning and zeroing of physical pages that are backing vmalloc()
allocations are skipped via __GFP_SKIP_KASAN_UNPOISON and
__GFP_SKIP_ZERO: __kasan_unpoison_vmalloc() does that instead.

Enabling CONFIG_KASAN_VMALLOC with HW_TAGS is not yet allowed.
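
To make the in-page redzone handling in the diff below concrete
(illustrative numbers, assuming 4K pages): for vmalloc(300), kasan_unpoison()
tags the granules covering bytes [0, 304) of the mapping (300 rounded up to
KASAN_GRANULE_SIZE), and the rest of the page is poisoned:

	redzone_start = round_up((unsigned long)start + 300, KASAN_GRANULE_SIZE);
	/* redzone_start == start + 304 */
	redzone_size = round_up(redzone_start, PAGE_SIZE) - redzone_start;
	/* redzone_size == 4096 - 304 == 3792 */

An access at offset 304 or beyond then hits KASAN_TAG_INVALID and is
reported.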

Signed-off-by: Andrey Konovalov <[email protected]>
Co-developed-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>

---

Changes v3->v4:
- Fix comment style in __kasan_unpoison_vmalloc().
- Set __GFP_SKIP_KASAN_UNPOISON and __GFP_SKIP_ZERO flags instead of
resetting.
- Move setting KASAN GFP flags to __vmalloc_node_range() and do it
only for normal non-executable mapping when HW_TAGS KASAN is enabled.

Changes v2->v3:
- Switch kasan_unpoison_vmalloc() to using a single flags argument.
- Update kasan_unpoison_vmalloc() arguments in kernel/scs.c.
- Move allowing enabling KASAN_VMALLOC with SW_TAGS into a separate
patch.
- Minor comments fixes.
- Update patch description.

Changes v1->v2:
- Allow enabling CONFIG_KASAN_VMALLOC with HW_TAGS in this patch.
- Move memory init for page_alloc pages backing vmalloc() into
kasan_unpoison_vmalloc().
---
include/linux/kasan.h | 36 +++++++++++++++--
kernel/scs.c | 4 +-
mm/kasan/hw_tags.c | 92 +++++++++++++++++++++++++++++++++++++++++++
mm/kasan/shadow.c | 10 ++++-
mm/vmalloc.c | 51 ++++++++++++++++++------
5 files changed, 175 insertions(+), 18 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 92c5dfa29a35..499f1573dba4 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -25,6 +25,12 @@ struct kunit_kasan_expectation {

#endif

+typedef unsigned int __bitwise kasan_vmalloc_flags_t;
+
+#define KASAN_VMALLOC_NONE 0x00u
+#define KASAN_VMALLOC_INIT 0x01u
+#define KASAN_VMALLOC_VM_ALLOC 0x02u
+
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)

#include <linux/pgtable.h>
@@ -418,18 +424,39 @@ static inline void kasan_init_hw_tags(void) { }

#ifdef CONFIG_KASAN_VMALLOC

+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+
void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end);

-void *__kasan_unpoison_vmalloc(const void *start, unsigned long size);
+#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
+{ }
+static inline int kasan_populate_vmalloc(unsigned long start,
+ unsigned long size)
+{
+ return 0;
+}
+static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+ unsigned long free_region_end) { }
+
+#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ kasan_vmalloc_flags_t flags);
static __always_inline void *kasan_unpoison_vmalloc(const void *start,
- unsigned long size)
+ unsigned long size,
+ kasan_vmalloc_flags_t flags)
{
if (kasan_enabled())
- return __kasan_unpoison_vmalloc(start, size);
+ return __kasan_unpoison_vmalloc(start, size, flags);
return (void *)start;
}

@@ -456,7 +483,8 @@ static inline void kasan_release_vmalloc(unsigned long start,
unsigned long free_region_end) { }

static inline void *kasan_unpoison_vmalloc(const void *start,
- unsigned long size)
+ unsigned long size,
+ kasan_vmalloc_flags_t flags)
{
return (void *)start;
}
diff --git a/kernel/scs.c b/kernel/scs.c
index 579841be8864..b83bc9251f99 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -32,7 +32,7 @@ static void *__scs_alloc(int node)
for (i = 0; i < NR_CACHED_SCS; i++) {
s = this_cpu_xchg(scs_cache[i], NULL);
if (s) {
- kasan_unpoison_vmalloc(s, SCS_SIZE);
+ kasan_unpoison_vmalloc(s, SCS_SIZE, KASAN_VMALLOC_NONE);
memset(s, 0, SCS_SIZE);
return s;
}
@@ -78,7 +78,7 @@ void scs_free(void *s)
if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
return;

- kasan_unpoison_vmalloc(s, SCS_SIZE);
+ kasan_unpoison_vmalloc(s, SCS_SIZE, KASAN_VMALLOC_NONE);
vfree_atomic(s);
}

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 76cf2b6229c7..21104fd51872 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -192,6 +192,98 @@ void __init kasan_init_hw_tags(void)
kasan_stack_collection_enabled() ? "on" : "off");
}

+#ifdef CONFIG_KASAN_VMALLOC
+
+static void unpoison_vmalloc_pages(const void *addr, u8 tag)
+{
+ struct vm_struct *area;
+ int i;
+
+ /*
+ * As hardware tag-based KASAN only tags VM_ALLOC vmalloc allocations
+ * (see the comment in __kasan_unpoison_vmalloc), all of the pages
+ * should belong to a single area.
+ */
+ area = find_vm_area((void *)addr);
+ if (WARN_ON(!area))
+ return;
+
+ for (i = 0; i < area->nr_pages; i++) {
+ struct page *page = area->pages[i];
+
+ page_kasan_tag_set(page, tag);
+ }
+}
+
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ kasan_vmalloc_flags_t flags)
+{
+ u8 tag;
+ unsigned long redzone_start, redzone_size;
+
+ if (!is_vmalloc_or_module_addr(start))
+ return (void *)start;
+
+ /*
+ * Skip unpoisoning and assigning a pointer tag for non-VM_ALLOC
+ * mappings as:
+ *
+ * 1. Unlike the software KASAN modes, hardware tag-based KASAN only
+ * supports tagging physical memory. Therefore, it can only tag a
+ * single mapping of normal physical pages.
+ * 2. Hardware tag-based KASAN can only tag memory mapped with special
+ * mapping protection bits, see arch_vmap_pgprot_tagged().
+ * As non-VM_ALLOC mappings can be mapped outside of vmalloc code,
+ * providing these bits would require tracking all non-VM_ALLOC
+ * mappers.
+ *
+ * Thus, for VM_ALLOC mappings, hardware tag-based KASAN only tags
+ * the first virtual mapping, which is created by vmalloc().
+ * Tagging the page_alloc memory backing that vmalloc() allocation is
+ * skipped, see ___GFP_SKIP_KASAN_UNPOISON.
+ *
+ * For non-VM_ALLOC allocations, page_alloc memory is tagged as usual.
+ */
+ if (!(flags & KASAN_VMALLOC_VM_ALLOC))
+ return (void *)start;
+
+ tag = kasan_random_tag();
+ start = set_tag(start, tag);
+
+ /* Unpoison and initialize memory up to size. */
+ kasan_unpoison(start, size, flags & KASAN_VMALLOC_INIT);
+
+ /*
+ * Explicitly poison and initialize the in-page vmalloc() redzone.
+ * Unlike software KASAN modes, hardware tag-based KASAN doesn't
+ * unpoison memory when populating shadow for vmalloc() space.
+ */
+ redzone_start = round_up((unsigned long)start + size,
+ KASAN_GRANULE_SIZE);
+ redzone_size = round_up(redzone_start, PAGE_SIZE) - redzone_start;
+ kasan_poison((void *)redzone_start, redzone_size, KASAN_TAG_INVALID,
+ flags & KASAN_VMALLOC_INIT);
+
+ /*
+ * Set per-page tag flags to allow accessing physical memory for the
+ * vmalloc() mapping through page_address(vmalloc_to_page()).
+ */
+ unpoison_vmalloc_pages(start, tag);
+
+ return (void *)start;
+}
+
+void __kasan_poison_vmalloc(const void *start, unsigned long size)
+{
+ /*
+ * No tagging here.
+ * The physical pages backing the vmalloc() allocation are poisoned
+ * through the usual page_alloc paths.
+ */
+}
+
+#endif
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_enable_tagging_sync(void)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 5a866f6663fc..b958babc8fed 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -475,8 +475,16 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
}
}

-void *__kasan_unpoison_vmalloc(const void *start, unsigned long size)
+void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
+ kasan_vmalloc_flags_t flags)
{
+ /*
+ * Software KASAN modes unpoison both VM_ALLOC and non-VM_ALLOC
+ * mappings, so the KASAN_VMALLOC_VM_ALLOC flag is ignored.
+ * Software KASAN modes can't optimize zeroing memory by combining it
+ * with setting memory tags, so the KASAN_VMALLOC_INIT flag is ignored.
+ */
+
if (!is_vmalloc_or_module_addr(start))
return (void *)start;

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index cc23e181b0ec..47f3de7a3396 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2215,8 +2215,12 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
return NULL;
}

- /* Mark the pages as accessible, now that they are mapped. */
- mem = kasan_unpoison_vmalloc(mem, size);
+ /*
+ * Mark the pages as accessible, now that they are mapped.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
+ */
+ mem = kasan_unpoison_vmalloc(mem, size, KASAN_VMALLOC_NONE);

return mem;
}
@@ -2450,9 +2454,12 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
* best-effort approach, as they can be mapped outside of vmalloc code.
* For VM_ALLOC mappings, the pages are marked as accessible after
* getting mapped in __vmalloc_node_range().
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
if (!(flags & VM_ALLOC))
- area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size,
+ KASAN_VMALLOC_NONE);

return area;
}
@@ -3058,6 +3065,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
{
struct vm_struct *area;
void *ret;
+ kasan_vmalloc_flags_t kasan_flags;
unsigned long real_size = size;
unsigned long real_align = align;
unsigned int shift = PAGE_SHIFT;
@@ -3110,21 +3118,39 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
goto fail;
}

- /*
- * Modify protection bits to allow tagging.
- * This must be done before mapping by __vmalloc_area_node().
- */
+ /* Prepare arguments for __vmalloc_area_node(). */
if (kasan_hw_tags_enabled() &&
- pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
+ pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
+ /*
+ * Modify protection bits to allow tagging.
+ * This must be done before mapping in __vmalloc_area_node().
+ */
prot = arch_vmap_pgprot_tagged(prot);

+ /*
+ * Skip page_alloc poisoning and zeroing for physical pages
+ * backing VM_ALLOC mapping. Memory is instead poisoned and
+ * zeroed by kasan_unpoison_vmalloc().
+ */
+ gfp_mask |= __GFP_SKIP_KASAN_UNPOISON | __GFP_SKIP_ZERO;
+ }
+
/* Allocate physical pages and map them into vmalloc space. */
ret = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
if (!ret)
goto fail;

- /* Mark the pages as accessible, now that they are mapped. */
- area->addr = kasan_unpoison_vmalloc(area->addr, real_size);
+ /*
+ * Mark the pages as accessible, now that they are mapped.
+ * The init condition should match the one in post_alloc_hook()
+ * (except for the should_skip_init() check) to make sure that memory
+ * is initialized under the same conditions regardless of the enabled
+ * KASAN mode.
+ */
+ kasan_flags = KASAN_VMALLOC_VM_ALLOC;
+ if (!want_init_on_free() && want_init_on_alloc(gfp_mask))
+ kasan_flags |= KASAN_VMALLOC_INIT;
+ area->addr = kasan_unpoison_vmalloc(area->addr, real_size, kasan_flags);

/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
@@ -3832,10 +3858,13 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
/*
* Mark allocated areas as accessible. Do it now as a best-effort
* approach, as they can be mapped outside of vmalloc code.
+ * With hardware tag-based KASAN, marking is skipped for
+ * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size);
+ vms[area]->size,
+ KASAN_VMALLOC_NONE);

kfree(vas);
return vms;
--
2.25.1


2021-12-20 22:02:44

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 31/39] kasan, vmalloc: only tag normal vmalloc allocations

From: Andrey Konovalov <[email protected]>

The kernel can use vmalloc to allocate executable memory. The only supported way
to do that is via __vmalloc_node_range() with the executable bit set in
the prot argument. (vmap() resets the bit via pgprot_nx()).

Once tag-based KASAN modes start tagging vmalloc allocations, executing
code from such allocations will lead to the PC register getting a tag,
which is not tolerated by the kernel.

Only tag the allocations for normal kernel pages.
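
As a hypothetical illustration, an executable allocation made the supported
way stays untagged after this change:

	void *p = __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
				       GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
				       NUMA_NO_NODE, __builtin_return_address(0));

	/*
	 * prot != PAGE_KERNEL here, so KASAN assigns no tag: the PC register
	 * stays untagged when code in this mapping is executed.
	 */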

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- Rename KASAN_VMALLOC_NOEXEC to KASAN_VMALLOC_PROT_NORMAL.
- Compare with PAGE_KERNEL instead of using pgprot_nx().
- Update patch description.

Changes v2->v3:
- Add this patch.
---
include/linux/kasan.h | 7 ++++---
mm/kasan/hw_tags.c | 7 +++++++
mm/kasan/shadow.c | 7 +++++++
mm/vmalloc.c | 49 +++++++++++++++++++++++++------------------
4 files changed, 47 insertions(+), 23 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 499f1573dba4..3593c95d1fa5 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -27,9 +27,10 @@ struct kunit_kasan_expectation {

typedef unsigned int __bitwise kasan_vmalloc_flags_t;

-#define KASAN_VMALLOC_NONE 0x00u
-#define KASAN_VMALLOC_INIT 0x01u
-#define KASAN_VMALLOC_VM_ALLOC 0x02u
+#define KASAN_VMALLOC_NONE 0x00u
+#define KASAN_VMALLOC_INIT 0x01u
+#define KASAN_VMALLOC_VM_ALLOC 0x02u
+#define KASAN_VMALLOC_PROT_NORMAL 0x04u

#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 21104fd51872..2e9378a4f07f 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -247,6 +247,13 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
if (!(flags & KASAN_VMALLOC_VM_ALLOC))
return (void *)start;

+ /*
+ * Don't tag executable memory.
+ * The kernel doesn't tolerate having the PC register tagged.
+ */
+ if (!(flags & KASAN_VMALLOC_PROT_NORMAL))
+ return (void *)start;
+
tag = kasan_random_tag();
start = set_tag(start, tag);

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index b958babc8fed..7272e248db87 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -488,6 +488,13 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
if (!is_vmalloc_or_module_addr(start))
return (void *)start;

+ /*
+ * Don't tag executable memory.
+ * The kernel doesn't tolerate having the PC register tagged.
+ */
+ if (!(flags & KASAN_VMALLOC_PROT_NORMAL))
+ return (void *)start;
+
start = set_tag(start, kasan_random_tag());
kasan_unpoison(start, size, false);
return (void *)start;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 47f3de7a3396..01ec2ef447af 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2220,7 +2220,7 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
* With hardware tag-based KASAN, marking is skipped for
* non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
- mem = kasan_unpoison_vmalloc(mem, size, KASAN_VMALLOC_NONE);
+ mem = kasan_unpoison_vmalloc(mem, size, KASAN_VMALLOC_PROT_NORMAL);

return mem;
}
@@ -2459,7 +2459,7 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
*/
if (!(flags & VM_ALLOC))
area->addr = kasan_unpoison_vmalloc(area->addr, requested_size,
- KASAN_VMALLOC_NONE);
+ KASAN_VMALLOC_PROT_NORMAL);

return area;
}
@@ -3065,7 +3065,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
{
struct vm_struct *area;
void *ret;
- kasan_vmalloc_flags_t kasan_flags;
+ kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_NONE;
unsigned long real_size = size;
unsigned long real_align = align;
unsigned int shift = PAGE_SHIFT;
@@ -3118,21 +3118,28 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
goto fail;
}

- /* Prepare arguments for __vmalloc_area_node(). */
- if (kasan_hw_tags_enabled() &&
- pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
- /*
- * Modify protection bits to allow tagging.
- * This must be done before mapping in __vmalloc_area_node().
- */
- prot = arch_vmap_pgprot_tagged(prot);
+ /*
+ * Prepare arguments for __vmalloc_area_node() and
+ * kasan_unpoison_vmalloc().
+ */
+ if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
+ if (kasan_hw_tags_enabled()) {
+ /*
+ * Modify protection bits to allow tagging.
+ * This must be done before mapping.
+ */
+ prot = arch_vmap_pgprot_tagged(prot);

- /*
- * Skip page_alloc poisoning and zeroing for physical pages
- * backing VM_ALLOC mapping. Memory is instead poisoned and
- * zeroed by kasan_unpoison_vmalloc().
- */
- gfp_mask |= __GFP_SKIP_KASAN_UNPOISON | __GFP_SKIP_ZERO;
+ /*
+ * Skip page_alloc poisoning and zeroing for physical
+ * pages backing VM_ALLOC mapping. Memory is instead
+ * poisoned and zeroed by kasan_unpoison_vmalloc().
+ */
+ gfp_mask |= __GFP_SKIP_KASAN_UNPOISON | __GFP_SKIP_ZERO;
+ }
+
+ /* Take note that the mapping is PAGE_KERNEL. */
+ kasan_flags |= KASAN_VMALLOC_PROT_NORMAL;
}

/* Allocate physical pages and map them into vmalloc space. */
@@ -3146,10 +3153,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
* (except for the should_skip_init() check) to make sure that memory
* is initialized under the same conditions regardless of the enabled
* KASAN mode.
+ * Tag-based KASAN modes only assign tags to normal non-executable
+ * allocations, see __kasan_unpoison_vmalloc().
*/
- kasan_flags = KASAN_VMALLOC_VM_ALLOC;
+ kasan_flags |= KASAN_VMALLOC_VM_ALLOC;
if (!want_init_on_free() && want_init_on_alloc(gfp_mask))
kasan_flags |= KASAN_VMALLOC_INIT;
+ /* KASAN_VMALLOC_PROT_NORMAL already set if required. */
area->addr = kasan_unpoison_vmalloc(area->addr, real_size, kasan_flags);

/*
@@ -3863,8 +3873,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
*/
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size,
- KASAN_VMALLOC_NONE);
+ vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);

kfree(vas);
return vms;
--
2.25.1


2021-12-20 22:02:53

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 32/39] kasan, arm64: don't tag executable vmalloc allocations

From: Andrey Konovalov <[email protected]>

Besides asking vmalloc memory to be executable via the prot argument
of __vmalloc_node_range() (see the previous patch), the kernel can skip
that bit and instead mark memory as executable via set_memory_x().

Once tag-based KASAN modes start tagging vmalloc allocations, executing
code from such allocations will lead to the PC register getting a tag,
which is not tolerated by the kernel.

Generic kernel code typically allocates memory via module_alloc() if
it intends to mark memory as executable. (On arm64 module_alloc()
uses __vmalloc_node_range() without setting the executable bit).

Thus, reset pointer tags of pointers returned from module_alloc().

However, on arm64 there's an exception: the eBPF subsystem. Instead of
using module_alloc(), it uses vmalloc() (via bpf_jit_alloc_exec())
to allocate its JIT region.

Thus, reset pointer tags of pointers returned from bpf_jit_alloc_exec().

Resetting tags for these pointers results in untagged pointers being
passed to set_memory_x(). This causes conflicts in the arithmetic checks
in change_memory_common(), as the vm_struct->addr pointer returned by
find_vm_area() is tagged while the address range being checked is not,
so the untagged range end spuriously compares as being out of bounds.

Reset pointer tag of find_vm_area(addr)->addr in change_memory_common().

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v3->v4:
- Reset pointer tag in change_memory_common().

Changes v2->v3:
- Add this patch.
---
arch/arm64/kernel/module.c | 3 ++-
arch/arm64/mm/pageattr.c | 2 +-
arch/arm64/net/bpf_jit_comp.c | 3 ++-
3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index d3a1fa818348..f2d4bb14bfab 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -63,7 +63,8 @@ void *module_alloc(unsigned long size)
return NULL;
}

- return p;
+ /* Memory is intended to be executable, reset the pointer tag. */
+ return kasan_reset_tag(p);
}

enum aarch64_reloc_op {
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index a3bacd79507a..64e985eaa52d 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -85,7 +85,7 @@ static int change_memory_common(unsigned long addr, int numpages,
*/
area = find_vm_area((void *)addr);
if (!area ||
- end > (unsigned long)area->addr + area->size ||
+ end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
!(area->flags & VM_ALLOC))
return -EINVAL;

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 07aad85848fa..381a67922c2d 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1147,7 +1147,8 @@ u64 bpf_jit_alloc_exec_limit(void)

void *bpf_jit_alloc_exec(unsigned long size)
{
- return vmalloc(size);
+ /* Memory is intended to be executable, reset the pointer tag. */
+ return kasan_reset_tag(vmalloc(size));
}

void bpf_jit_free_exec(void *addr)
--
2.25.1


2021-12-20 22:03:00

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 33/39] kasan: mark kasan_arg_stacktrace as __initdata

From: Andrey Konovalov <[email protected]>

As kasan_arg_stacktrace is only used in __init functions, mark it as
__initdata instead of __ro_after_init to allow it to be freed after boot.

The other enums for KASAN args are used in kasan_init_hw_tags_cpu(),
which is not marked as __init as a CPU can be hot-plugged after boot.
Clarify this in a comment.

Signed-off-by: Andrey Konovalov <[email protected]>
Suggested-by: Marco Elver <[email protected]>

---

Changes v1->v2:
- Add this patch.
---
mm/kasan/hw_tags.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 2e9378a4f07f..6509809dd5d8 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -40,7 +40,7 @@ enum kasan_arg_stacktrace {

static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
-static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;
+static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;

/* Whether KASAN is enabled at all. */
DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
@@ -116,7 +116,10 @@ static inline const char *kasan_mode_info(void)
return "sync";
}

-/* kasan_init_hw_tags_cpu() is called for each CPU. */
+/*
+ * kasan_init_hw_tags_cpu() is called for each CPU.
+ * Not marked as __init as a CPU can be hot-plugged after boot.
+ */
void kasan_init_hw_tags_cpu(void)
{
/*
--
2.25.1


2021-12-20 22:03:10

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 34/39] kasan: simplify kasan_init_hw_tags

From: Andrey Konovalov <[email protected]>

Simplify kasan_init_hw_tags():

- Remove excessive comments in kasan_arg_mode switch.
- Combine DEFAULT and ON cases in kasan_arg_stacktrace switch.

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- Add this patch.
---
mm/kasan/hw_tags.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 6509809dd5d8..99230e666c1b 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -159,20 +159,15 @@ void __init kasan_init_hw_tags(void)

switch (kasan_arg_mode) {
case KASAN_ARG_MODE_DEFAULT:
- /*
- * Default to sync mode.
- */
+ /* Default to sync mode. */
fallthrough;
case KASAN_ARG_MODE_SYNC:
- /* Sync mode enabled. */
kasan_mode = KASAN_MODE_SYNC;
break;
case KASAN_ARG_MODE_ASYNC:
- /* Async mode enabled. */
kasan_mode = KASAN_MODE_ASYNC;
break;
case KASAN_ARG_MODE_ASYMM:
- /* Asymm mode enabled. */
kasan_mode = KASAN_MODE_ASYMM;
break;
}
@@ -180,14 +175,13 @@ void __init kasan_init_hw_tags(void)
switch (kasan_arg_stacktrace) {
case KASAN_ARG_STACKTRACE_DEFAULT:
/* Default to enabling stack trace collection. */
+ fallthrough;
+ case KASAN_ARG_STACKTRACE_ON:
static_branch_enable(&kasan_flag_stacktrace);
break;
case KASAN_ARG_STACKTRACE_OFF:
/* Do nothing, kasan_flag_stacktrace keeps its default value. */
break;
- case KASAN_ARG_STACKTRACE_ON:
- static_branch_enable(&kasan_flag_stacktrace);
- break;
}

pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, stacktrace=%s)\n",
--
2.25.1


2021-12-20 22:03:15

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 35/39] kasan: add kasan.vmalloc command line flag

From: Andrey Konovalov <[email protected]>

Allow disabling vmalloc() tagging for HW_TAGS KASAN via a kasan.vmalloc
command line switch.

This is a fail-safe switch intended for production systems that enable
HW_TAGS KASAN. In case vmalloc() tagging ends up having an issue not
detected during testing but that manifests in production, kasan.vmalloc
allows turning vmalloc() tagging off while leaving page_alloc/slab
tagging on.
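
For example, such a production system could boot with something like
(illustrative):

	kasan=on kasan.mode=sync kasan.vmalloc=off

keeping page_alloc and slab tagging active while leaving vmalloc()
allocations untagged.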

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- Mark kasan_arg_stacktrace as __initdata instead of __ro_after_init.
- Combine KASAN_ARG_VMALLOC_DEFAULT and KASAN_ARG_VMALLOC_ON switch
cases.
---
mm/kasan/hw_tags.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
mm/kasan/kasan.h | 6 ++++++
2 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 99230e666c1b..657b23cebe28 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -32,6 +32,12 @@ enum kasan_arg_mode {
KASAN_ARG_MODE_ASYMM,
};

+enum kasan_arg_vmalloc {
+ KASAN_ARG_VMALLOC_DEFAULT,
+ KASAN_ARG_VMALLOC_OFF,
+ KASAN_ARG_VMALLOC_ON,
+};
+
enum kasan_arg_stacktrace {
KASAN_ARG_STACKTRACE_DEFAULT,
KASAN_ARG_STACKTRACE_OFF,
@@ -40,6 +46,7 @@ enum kasan_arg_stacktrace {

static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
+static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;

/* Whether KASAN is enabled at all. */
@@ -50,6 +57,9 @@ EXPORT_SYMBOL(kasan_flag_enabled);
enum kasan_mode kasan_mode __ro_after_init;
EXPORT_SYMBOL_GPL(kasan_mode);

+/* Whether to enable vmalloc tagging. */
+DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
+
/* Whether to collect alloc/free stack traces. */
DEFINE_STATIC_KEY_FALSE(kasan_flag_stacktrace);

@@ -89,6 +99,23 @@ static int __init early_kasan_mode(char *arg)
}
early_param("kasan.mode", early_kasan_mode);

+/* kasan.vmalloc=off/on */
+static int __init early_kasan_flag_vmalloc(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_vmalloc = KASAN_ARG_VMALLOC_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_vmalloc = KASAN_ARG_VMALLOC_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
+
/* kasan.stacktrace=off/on */
static int __init early_kasan_flag_stacktrace(char *arg)
{
@@ -172,6 +199,18 @@ void __init kasan_init_hw_tags(void)
break;
}

+ switch (kasan_arg_vmalloc) {
+ case KASAN_ARG_VMALLOC_DEFAULT:
+ /* Default to enabling vmalloc tagging. */
+ fallthrough;
+ case KASAN_ARG_VMALLOC_ON:
+ static_branch_enable(&kasan_flag_vmalloc);
+ break;
+ case KASAN_ARG_VMALLOC_OFF:
+ /* Do nothing, kasan_flag_vmalloc keeps its default value. */
+ break;
+ }
+
switch (kasan_arg_stacktrace) {
case KASAN_ARG_STACKTRACE_DEFAULT:
/* Default to enabling stack trace collection. */
@@ -184,8 +223,9 @@ void __init kasan_init_hw_tags(void)
break;
}

- pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, stacktrace=%s)\n",
+ pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
kasan_mode_info(),
+ kasan_vmalloc_enabled() ? "on" : "off",
kasan_stack_collection_enabled() ? "on" : "off");
}

@@ -218,6 +258,9 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
u8 tag;
unsigned long redzone_start, redzone_size;

+ if (!kasan_vmalloc_enabled())
+ return (void *)start;
+
if (!is_vmalloc_or_module_addr(start))
return (void *)start;

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 020f3e57a03f..49a5d5e2e948 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,6 +12,7 @@
#include <linux/static_key.h>
#include "../slab.h"

+DECLARE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);

enum kasan_mode {
@@ -22,6 +23,11 @@ enum kasan_mode {

extern enum kasan_mode kasan_mode __ro_after_init;

+static inline bool kasan_vmalloc_enabled(void)
+{
+ return static_branch_likely(&kasan_flag_vmalloc);
+}
+
static inline bool kasan_stack_collection_enabled(void)
{
return static_branch_unlikely(&kasan_flag_stacktrace);
--
2.25.1


2021-12-20 22:03:40

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 36/39] kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS

From: Andrey Konovalov <[email protected]>

Allow enabling CONFIG_KASAN_VMALLOC with SW_TAGS and HW_TAGS KASAN
modes.

Also adjust CONFIG_KASAN_VMALLOC description:

- Mention HW_TAGS support.
- Remove unneeded internal details: they have no place in Kconfig
description and are already explained in the documentation.

Signed-off-by: Andrey Konovalov <[email protected]>
---
lib/Kconfig.kasan | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 879757b6dd14..1f3e620188a2 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -178,17 +178,17 @@ config KASAN_TAGS_IDENTIFY
memory consumption.

config KASAN_VMALLOC
- bool "Back mappings in vmalloc space with real shadow memory"
- depends on KASAN_GENERIC && HAVE_ARCH_KASAN_VMALLOC
+ bool "Check accesses to vmalloc allocations"
+ depends on HAVE_ARCH_KASAN_VMALLOC
help
- By default, the shadow region for vmalloc space is the read-only
- zero page. This means that KASAN cannot detect errors involving
- vmalloc space.
-
- Enabling this option will hook in to vmap/vmalloc and back those
- mappings with real shadow memory allocated on demand. This allows
- for KASAN to detect more sorts of errors (and to support vmapped
- stacks), but at the cost of higher memory usage.
+ This mode makes KASAN check accesses to vmalloc allocations for
+ validity.
+
+ With software KASAN modes, checking is done for all types of vmalloc
+ allocations. Enabling this option leads to higher memory usage.
+
+ With hardware tag-based KASAN, only VM_ALLOC mappings are checked.
+ There is no additional memory usage.

config KASAN_KUNIT_TEST
tristate "KUnit-compatible tests of KASAN bug detection capabilities" if !KUNIT_ALL_TESTS
--
2.25.1


2021-12-20 22:03:44

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 37/39] arm64: select KASAN_VMALLOC for SW/HW_TAGS modes

From: Andrey Konovalov <[email protected]>

Generic KASAN already selects KASAN_VMALLOC to allow VMAP_STACK to be
selected unconditionally, see commit acc3042d62cb9 ("arm64: Kconfig:
select KASAN_VMALLOC if KANSAN_GENERIC is enabled").

The same change is needed for SW_TAGS KASAN.

HW_TAGS KASAN does not require enabling KASAN_VMALLOC for VMAP_STACK;
the two already work together as is. Still, selecting KASAN_VMALLOC
makes sense so that vmalloc() is always protected. In case any bugs in
KASAN's vmalloc() support are discovered, the kasan.vmalloc command
line flag can be used to disable vmalloc() checking.

Select KASAN_VMALLOC for all KASAN modes for arm64.

Signed-off-by: Andrey Konovalov <[email protected]>
Acked-by: Catalin Marinas <[email protected]>

---

Changes v2->v3:
- Update patch description.

Changes v1->v2:
- Split out this patch.
---
arch/arm64/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 508769fe5be5..0833b3e87724 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -205,7 +205,7 @@ config ARM64
select IOMMU_DMA if IOMMU_SUPPORT
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
- select KASAN_VMALLOC if KASAN_GENERIC
+ select KASAN_VMALLOC if KASAN
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
--
2.25.1


2021-12-20 22:03:47

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 38/39] kasan: documentation updates

From: Andrey Konovalov <[email protected]>

Update KASAN documentation:

- Bump Clang version requirement for HW_TAGS as ARM64_MTE depends on
AS_HAS_LSE_ATOMICS as of commit 2decad92f4731 ("arm64: mte: Ensure
TIF_MTE_ASYNC_FAULT is set atomically"), which requires Clang 12.
- Add description of the new kasan.vmalloc command line flag.
- Mention that SW_TAGS and HW_TAGS modes now support vmalloc tagging.
- Explicitly say that the "Shadow memory" section is only applicable
to software KASAN modes.
- Mention that shadow-based KASAN_VMALLOC is supported on arm64.

Signed-off-by: Andrey Konovalov <[email protected]>
---
Documentation/dev-tools/kasan.rst | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 8089c559d339..7614a1fc30fa 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -30,7 +30,7 @@ Software tag-based KASAN mode is only supported in Clang.

The hardware KASAN mode (#3) relies on hardware to perform the checks but
still requires a compiler version that supports memory tagging instructions.
-This mode is supported in GCC 10+ and Clang 11+.
+This mode is supported in GCC 10+ and Clang 12+.

Both software KASAN modes work with SLUB and SLAB memory allocators,
while the hardware tag-based KASAN currently only supports SLUB.
@@ -206,6 +206,9 @@ additional boot parameters that allow disabling KASAN or controlling features:
Asymmetric mode: a bad access is detected synchronously on reads and
asynchronously on writes.

+- ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
+ allocations (default: ``on``).
+
- ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack
traces collection (default: ``on``).

@@ -279,8 +282,8 @@ Software tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through
pointers with the 0xFF pointer tag are not checked). The value 0xFE is currently
reserved to tag freed memory regions.

-Software tag-based KASAN currently only supports tagging of slab and page_alloc
-memory.
+Software tag-based KASAN currently only supports tagging of slab, page_alloc,
+and vmalloc memory.

Hardware tag-based KASAN
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -303,8 +306,8 @@ Hardware tag-based KASAN uses 0xFF as a match-all pointer tag (accesses through
pointers with the 0xFF pointer tag are not checked). The value 0xFE is currently
reserved to tag freed memory regions.

-Hardware tag-based KASAN currently only supports tagging of slab and page_alloc
-memory.
+Hardware tag-based KASAN currently only supports tagging of slab, page_alloc,
+and VM_ALLOC-based vmalloc memory.

If the hardware does not support MTE (pre ARMv8.5), hardware tag-based KASAN
will not be enabled. In this case, all KASAN boot parameters are ignored.
@@ -319,6 +322,8 @@ checking gets disabled.
Shadow memory
-------------

+The contents of this section are only applicable to software KASAN modes.
+
The kernel maps memory in several different parts of the address space.
The range of kernel virtual addresses is large: there is not enough real
memory to support a real shadow region for every address that could be
@@ -349,7 +354,7 @@ CONFIG_KASAN_VMALLOC

With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
cost of greater memory usage. Currently, this is supported on x86,
-riscv, s390, and powerpc.
+arm64, riscv, s390, and powerpc.

This works by hooking into vmalloc and vmap and dynamically
allocating real shadow memory to back the mappings.
--
2.25.1


2021-12-20 22:03:52

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH mm v4 39/39] kasan: improve vmalloc tests

From: Andrey Konovalov <[email protected]>

Update the existing vmalloc_oob() test to account for the specifics
of the tag-based modes. Also add a few new checks and comments.

Add new vmalloc-related tests:

- vmalloc_helpers_tags() to check that exported vmalloc helpers can
handle tagged pointers.
- vmap_tags() to check that SW_TAGS mode properly tags vmap() mappings.
- vm_map_ram_tags() to check that SW_TAGS mode properly tags
vm_map_ram() mappings.
- vmalloc_percpu() to check that SW_TAGS mode tags regions allocated
for __alloc_percpu(). The tagging of per-cpu mappings is best-effort;
proper tagging is tracked in [1].

[1] https://bugzilla.kernel.org/show_bug.cgi?id=215019

Signed-off-by: Andrey Konovalov <[email protected]>
---
lib/test_kasan.c | 189 +++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 183 insertions(+), 6 deletions(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 847cdbefab46..ae7b2e703f1b 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -19,6 +19,7 @@
#include <linux/uaccess.h>
#include <linux/io.h>
#include <linux/vmalloc.h>
+#include <linux/set_memory.h>

#include <asm/page.h>

@@ -1049,21 +1050,181 @@ static void kmalloc_double_kzfree(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr));
}

+static void vmalloc_helpers_tags(struct kunit *test)
+{
+ void *ptr;
+ int rv;
+
+ /* This test is intended for tag-based modes. */
+ KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);
+
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
+
+ ptr = vmalloc(PAGE_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+ /* Check that the returned pointer is tagged. */
+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure exported vmalloc helpers handle tagged pointers. */
+ KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr));
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr));
+
+ /* Make sure vmalloc'ed memory permissions can be changed. */
+ rv = set_memory_ro((unsigned long)ptr, 1);
+ KUNIT_ASSERT_GE(test, rv, 0);
+ rv = set_memory_rw((unsigned long)ptr, 1);
+ KUNIT_ASSERT_GE(test, rv, 0);
+
+ vfree(ptr);
+}
+
static void vmalloc_oob(struct kunit *test)
{
- void *area;
+ char *v_ptr, *p_ptr;
+ struct page *page;
+ size_t size = PAGE_SIZE / 2 - KASAN_GRANULE_SIZE - 5;

KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);

+ v_ptr = vmalloc(size);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
/*
- * We have to be careful not to hit the guard page.
+ * We have to be careful not to hit the guard page in vmalloc tests.
* The MMU will catch that and crash us.
*/
- area = vmalloc(3000);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, area);

- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)area)[3100]);
- vfree(area);
+ /* Make sure in-bounds accesses are valid. */
+ v_ptr[0] = 0;
+ v_ptr[size - 1] = 0;
+
+ /*
+ * An unaligned access past the requested vmalloc size.
+ * Only generic KASAN can precisely detect these.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
+
+ /* An aligned access into the first out-of-bounds granule. */
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
+
+ /* Check that in-bounds accesses to the physical page are valid. */
+ page = vmalloc_to_page(v_ptr);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
+ p_ptr = page_address(page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+ p_ptr[0] = 0;
+
+ vfree(v_ptr);
+
+ /*
+ * We can't check for use-after-unmap bugs in this nor in the following
+ * vmalloc tests, as the page might be fully unmapped and accessing it
+ * will crash the kernel.
+ */
+}
+
+static void vmap_tags(struct kunit *test)
+{
+ char *p_ptr, *v_ptr;
+ struct page *p_page, *v_page;
+ size_t order = 1;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons vmap mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
+
+ p_page = alloc_pages(GFP_KERNEL, order);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_page);
+ p_ptr = page_address(p_page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+
+ v_ptr = vmap(&p_page, 1 << order, VM_MAP, PAGE_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
+ /*
+ * We can't check for out-of-bounds bugs in this nor in the following
+ * vmalloc tests, as allocations have page granularity and accessing
+ * the guard page will crash the kernel.
+ */
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses through both pointers work. */
+ *p_ptr = 0;
+ *v_ptr = 0;
+
+ /* Make sure vmalloc_to_page() correctly recovers the page pointer. */
+ v_page = vmalloc_to_page(v_ptr);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_page);
+ KUNIT_EXPECT_PTR_EQ(test, p_page, v_page);
+
+ vunmap(v_ptr);
+ free_pages((unsigned long)p_ptr, order);
+}
+
+static void vm_map_ram_tags(struct kunit *test)
+{
+ char *p_ptr, *v_ptr;
+ struct page *page;
+ size_t order = 1;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons vm_map_ram mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ page = alloc_pages(GFP_KERNEL, order);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
+ p_ptr = page_address(page);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
+
+ v_ptr = vm_map_ram(&page, 1 << order, -1);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses through both pointers work. */
+ *p_ptr = 0;
+ *v_ptr = 0;
+
+ vm_unmap_ram(v_ptr, 1 << order);
+ free_pages((unsigned long)p_ptr, order);
+}
+
+static void vmalloc_percpu(struct kunit *test)
+{
+ char __percpu *ptr;
+ int cpu;
+
+ /*
+ * This test is specifically crafted for the software tag-based mode,
+ * the only tag-based mode that poisons percpu mappings.
+ */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+
+ ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);
+
+ for_each_possible_cpu(cpu) {
+ char *c_ptr = per_cpu_ptr(ptr, cpu);
+
+ KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL);
+
+ /* Make sure that in-bounds accesses don't crash the kernel. */
+ *c_ptr = 0;
+ }
+
+ free_percpu(ptr);
}

/*
@@ -1097,6 +1258,18 @@ static void match_all_not_assigned(struct kunit *test)
KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
free_pages((unsigned long)ptr, order);
}
+
+ if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
+ return;
+
+ for (i = 0; i < 256; i++) {
+ size = (get_random_int() % 1024) + 1;
+ ptr = vmalloc(size);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
+ vfree(ptr);
+ }
}

/* Check that 0xff works as a match-all pointer tag for tag-based modes. */
@@ -1202,7 +1375,11 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
KUNIT_CASE(kmalloc_double_kzfree),
+ KUNIT_CASE(vmalloc_helpers_tags),
KUNIT_CASE(vmalloc_oob),
+ KUNIT_CASE(vmap_tags),
+ KUNIT_CASE(vm_map_ram_tags),
+ KUNIT_CASE(vmalloc_percpu),
KUNIT_CASE(match_all_not_assigned),
KUNIT_CASE(match_all_ptr_tag),
KUNIT_CASE(match_all_mem_tag),
--
2.25.1


2021-12-20 22:05:18

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v4 32/39] kasan, arm64: don't tag executable vmalloc allocations

On Mon, Dec 20, 2021 at 11:02 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Besides asking vmalloc memory to be executable via the prot argument
> of __vmalloc_node_range() (see the previous patch), the kernel can skip
> that bit and instead mark memory as executable via set_memory_x().
>
> Once tag-based KASAN modes start tagging vmalloc allocations, executing
> code from such allocations will lead to the PC register getting a tag,
> which is not tolerated by the kernel.
>
> Generic kernel code typically allocates memory via module_alloc() if
> it intends to mark memory as executable. (On arm64 module_alloc()
> uses __vmalloc_node_range() without setting the executable bit).
>
> Thus, reset pointer tags of pointers returned from module_alloc().
>
> However, on arm64 there's an exception: the eBPF subsystem. Instead of
> using module_alloc(), it uses vmalloc() (via bpf_jit_alloc_exec())
> to allocate its JIT region.
>
> Thus, reset pointer tags of pointers returned from bpf_jit_alloc_exec().
>
> Resetting tags for these pointers results in untagged pointers being
> passed to set_memory_x(). This causes conflicts in arithmetic checks
> in change_memory_common(), as vm_struct->addr pointer returned by
> find_vm_area() is tagged.
>
> Reset pointer tag of find_vm_area(addr)->addr in change_memory_common().
>
> Signed-off-by: Andrey Konovalov <[email protected]>
>
> ---
>
> Changes v3->v4:
> - Reset pointer tag in change_memory_common().
>
> Changes v2->v3:
> - Add this patch.
> ---
> arch/arm64/kernel/module.c | 3 ++-
> arch/arm64/mm/pageattr.c | 2 +-
> arch/arm64/net/bpf_jit_comp.c | 3 ++-
> 3 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
> index d3a1fa818348..f2d4bb14bfab 100644
> --- a/arch/arm64/kernel/module.c
> +++ b/arch/arm64/kernel/module.c
> @@ -63,7 +63,8 @@ void *module_alloc(unsigned long size)
> return NULL;
> }
>
> - return p;
> + /* Memory is intended to be executable, reset the pointer tag. */
> + return kasan_reset_tag(p);
> }
>
> enum aarch64_reloc_op {
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index a3bacd79507a..64e985eaa52d 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -85,7 +85,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> */
> area = find_vm_area((void *)addr);
> if (!area ||
> - end > (unsigned long)area->addr + area->size ||
> + end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> !(area->flags & VM_ALLOC))
> return -EINVAL;
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 07aad85848fa..381a67922c2d 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -1147,7 +1147,8 @@ u64 bpf_jit_alloc_exec_limit(void)
>
> void *bpf_jit_alloc_exec(unsigned long size)
> {
> - return vmalloc(size);
> + /* Memory is intended to be executable, reset the pointer tag. */
> + return kasan_reset_tag(vmalloc(size));
> }
>
> void bpf_jit_free_exec(void *addr)
> --
> 2.25.1

Hi Catalin,

I had to change this patch to fix an issue I discovered during
testing. Could you PTAL once again?

Thanks!

2021-12-21 09:17:20

by Alexander Potapenko

[permalink] [raw]
Subject: Re: [PATCH mm v4 07/39] mm: clarify __GFP_ZEROTAGS comment

On Mon, Dec 20, 2021 at 10:59 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>

Reviewed-by: Alexander Potapenko <[email protected]>

>
> __GFP_ZEROTAGS is intended as an optimization: if memory is zeroed during
> allocation, it's possible to set memory tags at the same time with little
> performance impact.
Perhaps you could mention this intention explicitly in the comment?
Right now it still doesn't reference performance.

>
> Clarify this intention of __GFP_ZEROTAGS in the comment.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> ---
> include/linux/gfp.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 0b2d2a636164..d6a184523ca2 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -228,8 +228,8 @@ struct vm_area_struct;
> *
> * %__GFP_ZERO returns a zeroed page on success.
> *
> - * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
> - * __GFP_ZERO is set.
> + * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
> + * is being zeroed (either via __GFP_ZERO or via init_on_alloc).
> *
> * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
> * on deallocation. Typically used for userspace pages. Currently only has an
> --
> 2.25.1
>



--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

2021-12-21 09:23:41

by Alexander Potapenko

[permalink] [raw]
Subject: Re: [PATCH mm v4 16/39] kasan: define KASAN_VMALLOC_INVALID for SW_TAGS

On Mon, Dec 20, 2021 at 11:00 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> In preparation for adding vmalloc support to SW_TAGS KASAN,
> provide a KASAN_VMALLOC_INVALID definition for it.
>
> HW_TAGS KASAN won't be using this value, as it falls back onto
> page_alloc for poisoning freed vmalloc() memory.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

2021-12-21 11:50:57

by Alexander Potapenko

[permalink] [raw]
Subject: Re: [PATCH mm v4 26/39] kasan, vmalloc: unpoison VM_ALLOC pages after mapping

On Mon, Dec 20, 2021 at 11:02 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Make KASAN unpoison vmalloc mappings after they have been mapped in
> when it's possible: for vmalloc() (identified via VM_ALLOC) and
> vm_map_ram().
>
> The reasons for this are:
>
> - For vmalloc() and vm_map_ram(): pages don't get unpoisoned in case
> mapping them fails.
> - For vmalloc(): HW_TAGS KASAN needs pages to be mapped to set tags via
> kasan_unpoison_vmalloc().
>
> As a part of these changes, the return value of __vmalloc_node_range()
> is changed to area->addr. This is a non-functional change, as
> __vmalloc_area_node() returns area->addr anyway.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

2021-12-21 12:05:04

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v4 28/39] kasan, page_alloc: allow skipping unpoisoning for HW_TAGS

On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
[...]
> #ifdef CONFIG_KASAN_HW_TAGS
> #define __def_gfpflag_names_kasan \
> - , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"}
> + , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"} \
> + , {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, \
> + "__GFP_SKIP_KASAN_UNPOISON"}
> #else
> #define __def_gfpflag_names_kasan
> #endif

Adhering to 80 cols here makes the above less readable. If you do a v5,
my suggestion is:

diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index f18eeb5fdde2..f9f0ae3a4b6b 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -51,11 +51,10 @@
{(unsigned long)__GFP_ZEROTAGS, "__GFP_ZEROTAGS"} \

#ifdef CONFIG_KASAN_HW_TAGS
-#define __def_gfpflag_names_kasan \
- , {(unsigned long)__GFP_SKIP_ZERO, "__GFP_SKIP_ZERO"} \
- , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"} \
- , {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, \
- "__GFP_SKIP_KASAN_UNPOISON"}
+#define __def_gfpflag_names_kasan , \
+ {(unsigned long)__GFP_SKIP_ZERO, "__GFP_SKIP_ZERO"}, \
+ {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"}, \
+ {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, "__GFP_SKIP_KASAN_UNPOISON"}
#else
#define __def_gfpflag_names_kasan
#endif

2021-12-21 12:12:05

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v4 29/39] kasan, page_alloc: allow skipping memory init for HW_TAGS

On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
> From: Andrey Konovalov <[email protected]>
[...]
> +static inline bool should_skip_init(gfp_t flags)
> +{
> + /* Don't skip if a software KASAN mode is enabled. */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> + IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> + return false;
> +
> + /* Don't skip, if hardware tag-based KASAN is not enabled. */
> + if (!kasan_hw_tags_enabled())
> + return false;

Why is the IS_ENABLED(CONFIG_KASAN_{GENERIC,SW_TAGS}) check above
required? Isn't kasan_hw_tags_enabled() always false if one of those is
configured?

2021-12-21 12:14:44

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v4 28/39] kasan, page_alloc: allow skipping unpoisoning for HW_TAGS

On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
[...]
> +static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
> +{
> + /* Don't skip if a software KASAN mode is enabled. */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> + IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> + return false;
> +
> + /* Skip, if hardware tag-based KASAN is not enabled. */
> + if (!kasan_hw_tags_enabled())
> + return true;

Same question here: why is IS_ENABLED(CONFIG_KASAN_{GENERIC,SW_TAGS})
check required if kasan_hw_tags_enabled() is always false if one of
those is configured?

2021-12-21 12:19:59

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v4 28/39] kasan, page_alloc: allow skipping unpoisoning for HW_TAGS

On Tue, 21 Dec 2021 at 13:14, Marco Elver <[email protected]> wrote:
>
> On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
> [...]
> > +static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
> > +{
> > + /* Don't skip if a software KASAN mode is enabled. */
> > + if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > + IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> > + return false;
> > +
> > + /* Skip, if hardware tag-based KASAN is not enabled. */
> > + if (!kasan_hw_tags_enabled())
> > + return true;
>
> Same question here: why is IS_ENABLED(CONFIG_KASAN_{GENERIC,SW_TAGS})
> check required if kasan_hw_tags_enabled() is always false if one of
> those is configured?

Hmm, I pattern-matched too quickly. In this case there's probably no
way around it, because the return value is different, so it's not
exactly like should_skip_init().

2021-12-21 12:30:35

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v4 29/39] kasan, page_alloc: allow skipping memory init for HW_TAGS

On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
[...]
> /* Room for N __GFP_FOO bits */
> #define __GFP_BITS_SHIFT (24 + \
> + IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
> IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
> IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
> IS_ENABLED(CONFIG_LOCKDEP))

Does '3 * IS_ENABLED(CONFIG_KASAN_HW_TAGS)' work?
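
For illustration, a minimal sketch of the suggested one-line form (the exact
v5 wording is hypothetical; this just folds the three identical terms into a
multiplier, which works because IS_ENABLED() evaluates to 0 or 1):

/* Room for N __GFP_FOO bits */
#define __GFP_BITS_SHIFT (24 +						\
			  3 * IS_ENABLED(CONFIG_KASAN_HW_TAGS) +	\
			  IS_ENABLED(CONFIG_LOCKDEP))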

2021-12-21 14:22:23

by Alexander Potapenko

[permalink] [raw]
Subject: Re: [PATCH mm v4 20/39] kasan: add wrappers for vmalloc hooks

On Mon, Dec 20, 2021 at 11:00 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Add wrappers around functions that [un]poison memory for vmalloc
> allocations. These functions will be used by HW_TAGS KASAN and
> therefore need to be disabled when kasan=off command line argument
> is provided.
>
> This patch makes no functional changes for software KASAN modes.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>

2021-12-21 14:43:41

by Alexander Potapenko

[permalink] [raw]
Subject: Re: [PATCH mm v4 35/39] kasan: add kasan.vmalloc command line flag

On Mon, Dec 20, 2021 at 11:02 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Allow disabling vmalloc() tagging for HW_TAGS KASAN via a kasan.vmalloc
> command line switch.
>
> This is a fail-safe switch intended for production systems that enable
> HW_TAGS KASAN. In case vmalloc() tagging ends up having an issue not
> detected during testing but that manifests in production, kasan.vmalloc
> allows to turn vmalloc() tagging off while leaving page_alloc/slab
> tagging on.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
>
> ---
>
> Changes v1->v2:
> - Mark kasan_arg_stacktrace as __initdata instead of __ro_after_init.
> - Combine KASAN_ARG_VMALLOC_DEFAULT and KASAN_ARG_VMALLOC_ON switch
> cases.
> ---
> mm/kasan/hw_tags.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
> mm/kasan/kasan.h | 6 ++++++
> 2 files changed, 50 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 99230e666c1b..657b23cebe28 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -32,6 +32,12 @@ enum kasan_arg_mode {
> KASAN_ARG_MODE_ASYMM,
> };
>
> +enum kasan_arg_vmalloc {
> + KASAN_ARG_VMALLOC_DEFAULT,
> + KASAN_ARG_VMALLOC_OFF,
> + KASAN_ARG_VMALLOC_ON,
> +};
> +
> enum kasan_arg_stacktrace {
> KASAN_ARG_STACKTRACE_DEFAULT,
> KASAN_ARG_STACKTRACE_OFF,
> @@ -40,6 +46,7 @@ enum kasan_arg_stacktrace {
>
> static enum kasan_arg kasan_arg __ro_after_init;
> static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> +static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
> static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;
>
> /* Whether KASAN is enabled at all. */
> @@ -50,6 +57,9 @@ EXPORT_SYMBOL(kasan_flag_enabled);
> enum kasan_mode kasan_mode __ro_after_init;
> EXPORT_SYMBOL_GPL(kasan_mode);
>
> +/* Whether to enable vmalloc tagging. */
> +DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
> +
> /* Whether to collect alloc/free stack traces. */
> DEFINE_STATIC_KEY_FALSE(kasan_flag_stacktrace);
>
> @@ -89,6 +99,23 @@ static int __init early_kasan_mode(char *arg)
> }
> early_param("kasan.mode", early_kasan_mode);
>
> +/* kasan.vmalloc=off/on */
> +static int __init early_kasan_flag_vmalloc(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_vmalloc = KASAN_ARG_VMALLOC_OFF;
> + else if (!strcmp(arg, "on"))
> + kasan_arg_vmalloc = KASAN_ARG_VMALLOC_ON;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
> +
> /* kasan.stacktrace=off/on */
> static int __init early_kasan_flag_stacktrace(char *arg)
> {
> @@ -172,6 +199,18 @@ void __init kasan_init_hw_tags(void)
> break;
> }
>
> + switch (kasan_arg_vmalloc) {
> + case KASAN_ARG_VMALLOC_DEFAULT:
> + /* Default to enabling vmalloc tagging. */
> + fallthrough;
> + case KASAN_ARG_VMALLOC_ON:
> + static_branch_enable(&kasan_flag_vmalloc);
> + break;
> + case KASAN_ARG_VMALLOC_OFF:
> + /* Do nothing, kasan_flag_vmalloc keeps its default value. */
> + break;
> + }

I think we should be setting the default when defining the static key
(e.g. in this case it should be DEFINE_STATIC_KEY_TRUE), so that:
- the _DEFAULT case is always empty;
- the _ON case explicitly enables the static branch
- the _OFF case explicitly disables the branch
This way we'll only need to change DEFINE_STATIC_KEY_TRUE to
DEFINE_STATIC_KEY_FALSE if we want to change the default, but we won't
have to mess with the rest of the code.
Right now the switch statement is confusing, because the _OFF case
refers to some "default" value, whereas the _DEFAULT one actively
changes the state.

I see that this code is copied from the kasan_flag_stacktrace
implementation, and my comment also applies there (but I don't insist
on fixing that one right now).
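
For illustration, a minimal sketch of that restructuring (hypothetical, not
the patch as posted): the default lives in the key definition, so each switch
case only handles an explicit override.

/* Default to vmalloc tagging enabled. */
DEFINE_STATIC_KEY_TRUE(kasan_flag_vmalloc);

	switch (kasan_arg_vmalloc) {
	case KASAN_ARG_VMALLOC_DEFAULT:
		/* Keep the default set by DEFINE_STATIC_KEY_TRUE above. */
		break;
	case KASAN_ARG_VMALLOC_ON:
		static_branch_enable(&kasan_flag_vmalloc);
		break;
	case KASAN_ARG_VMALLOC_OFF:
		static_branch_disable(&kasan_flag_vmalloc);
		break;
	}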

2021-12-21 15:11:54

by Alexander Potapenko

[permalink] [raw]
Subject: Re: [PATCH mm v4 22/39] kasan, fork: reset pointer tags of vmapped stacks

On Mon, Dec 20, 2021 at 11:01 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Once tag-based KASAN modes start tagging vmalloc() allocations,
> kernel stacks start getting tagged if CONFIG_VMAP_STACK is enabled.
>
> Reset the tag of kernel stack pointers after allocation in
> alloc_thread_stack_node().
>
> For SW_TAGS KASAN, when CONFIG_KASAN_STACK is enabled, the
> instrumentation can't handle the SP register being tagged.
>
> For HW_TAGS KASAN, there's no instrumentation-related issues. However,
> the impact of having a tagged SP register needs to be properly evaluated,
> so keep it non-tagged for now.
>
> Note, that the memory for the stack allocation still gets tagged to
> catch vmalloc-into-stack out-of-bounds accesses.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
>
> ---
>
> Changes v2->v3:
> - Update patch description.
> ---
> kernel/fork.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 403b9dbbfb62..4125373dba4e 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -254,6 +254,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> * so cache the vm_struct.
> */
> if (stack) {
> + stack = kasan_reset_tag(stack);
> tsk->stack_vm_area = find_vm_area(stack);
> tsk->stack = stack;
> }
> --
> 2.25.1
>


--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

2021-12-22 07:01:23

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v4 00/39] kasan, vmalloc, arm64: add vmalloc tagging support for SW/HW_TAGS

On Mon, 20 Dec 2021 at 22:58, <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Hi,
>
> This patchset adds vmalloc tagging support for SW_TAGS and HW_TAGS
> KASAN modes.
>
> The tree with patches is available here:
>
> https://github.com/xairy/linux/tree/up-kasan-vmalloc-tags-v4-akpm
>
> About half of patches are cleanups I went for along the way. None of
> them seem to be important enough to go through stable, so I decided
> not to split them out into separate patches/series.
>
> The patchset is partially based on an early version of the HW_TAGS
> patchset by Vincenzo that had vmalloc support. Thus, I added a
> Co-developed-by tag into a few patches.
>
> SW_TAGS vmalloc tagging support is straightforward. It reuses all of
> the generic KASAN machinery, but uses shadow memory to store tags
> instead of magic values. Naturally, vmalloc tagging requires adding
> a few kasan_reset_tag() annotations to the vmalloc code.
>
> HW_TAGS vmalloc tagging support stands out. HW_TAGS KASAN is based on
> Arm MTE, which can only assigns tags to physical memory. As a result,
> HW_TAGS KASAN only tags vmalloc() allocations, which are backed by
> page_alloc memory. It ignores vmap() and others.
>
> Changes in v3->v4:
[...]
> Andrey Konovalov (39):
> kasan, page_alloc: deduplicate should_skip_kasan_poison
> kasan, page_alloc: move tag_clear_highpage out of
> kernel_init_free_pages
> kasan, page_alloc: merge kasan_free_pages into free_pages_prepare
> kasan, page_alloc: simplify kasan_poison_pages call site
> kasan, page_alloc: init memory of skipped pages on free
> kasan: drop skip_kasan_poison variable in free_pages_prepare
> mm: clarify __GFP_ZEROTAGS comment
> kasan: only apply __GFP_ZEROTAGS when memory is zeroed
> kasan, page_alloc: refactor init checks in post_alloc_hook
> kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook
> kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook
> kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook
> kasan, page_alloc: move kernel_init_free_pages in post_alloc_hook
> kasan, page_alloc: rework kasan_unpoison_pages call site
> kasan: clean up metadata byte definitions
> kasan: define KASAN_VMALLOC_INVALID for SW_TAGS
> kasan, x86, arm64, s390: rename functions for modules shadow
> kasan, vmalloc: drop outdated VM_KASAN comment
> kasan: reorder vmalloc hooks
> kasan: add wrappers for vmalloc hooks
> kasan, vmalloc: reset tags in vmalloc functions
> kasan, fork: reset pointer tags of vmapped stacks
> kasan, arm64: reset pointer tags of vmapped stacks
> kasan, vmalloc: add vmalloc tagging for SW_TAGS
> kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged
> kasan, vmalloc: unpoison VM_ALLOC pages after mapping
> kasan, mm: only define ___GFP_SKIP_KASAN_POISON with HW_TAGS
> kasan, page_alloc: allow skipping unpoisoning for HW_TAGS
> kasan, page_alloc: allow skipping memory init for HW_TAGS
> kasan, vmalloc: add vmalloc tagging for HW_TAGS
> kasan, vmalloc: only tag normal vmalloc allocations
> kasan, arm64: don't tag executable vmalloc allocations
> kasan: mark kasan_arg_stacktrace as __initdata
> kasan: simplify kasan_init_hw_tags
> kasan: add kasan.vmalloc command line flag
> kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS
> arm64: select KASAN_VMALLOC for SW/HW_TAGS modes
> kasan: documentation updates
> kasan: improve vmalloc tests

Functionally it all looks good. So rather than acking every patch, for
the whole series:

Acked-by: Marco Elver <[email protected]>

... and in case you do a v5, I've left some minor comments.

Happy holidays!

Thanks,
-- Marco

2021-12-22 11:11:16

by Catalin Marinas

[permalink] [raw]
Subject: Re: [PATCH mm v4 32/39] kasan, arm64: don't tag executable vmalloc allocations

On Mon, Dec 20, 2021 at 11:02:04PM +0100, [email protected] wrote:
> From: Andrey Konovalov <[email protected]>
>
> Besides asking vmalloc memory to be executable via the prot argument
> of __vmalloc_node_range() (see the previous patch), the kernel can skip
> that bit and instead mark memory as executable via set_memory_x().
>
> Once tag-based KASAN modes start tagging vmalloc allocations, executing
> code from such allocations will lead to the PC register getting a tag,
> which is not tolerated by the kernel.
>
> Generic kernel code typically allocates memory via module_alloc() if
> it intends to mark memory as executable. (On arm64 module_alloc()
> uses __vmalloc_node_range() without setting the executable bit).
>
> Thus, reset pointer tags of pointers returned from module_alloc().
>
> However, on arm64 there's an exception: the eBPF subsystem. Instead of
> using module_alloc(), it uses vmalloc() (via bpf_jit_alloc_exec())
> to allocate its JIT region.
>
> Thus, reset pointer tags of pointers returned from bpf_jit_alloc_exec().
>
> Resetting tags for these pointers results in untagged pointers being
> passed to set_memory_x(). This causes conflicts in arithmetic checks
> in change_memory_common(), as vm_struct->addr pointer returned by
> find_vm_area() is tagged.
>
> Reset pointer tag of find_vm_area(addr)->addr in change_memory_common().
>
> Signed-off-by: Andrey Konovalov <[email protected]>
>

Acked-by: Catalin Marinas <[email protected]>

2021-12-30 19:11:39

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v4 07/39] mm: clarify __GFP_ZEROTAGS comment

On Tue, Dec 21, 2021 at 10:17 AM Alexander Potapenko <[email protected]> wrote:
>
> On Mon, Dec 20, 2021 at 10:59 PM <[email protected]> wrote:
> >
> > From: Andrey Konovalov <[email protected]>
>
> Reviewed-by: Alexander Potapenko <[email protected]>
>
> >
> > __GFP_ZEROTAGS is intended as an optimization: if memory is zeroed during
> > allocation, it's possible to set memory tags at the same time with little
> > performance impact.
> Perhaps you could mention this intention explicitly in the comment?
> Right now it still doesn't reference performance.

Sure, will do in v5. Thanks!
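
For illustration, one way the expanded comment could read (the wording is a
hypothetical sketch, not the v5 patch):

 * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
 * is being zeroed (either via __GFP_ZERO or via init_on_alloc). This is an
 * optimization: setting the tags at the same time as zeroing the memory has
 * little additional performance impact.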

2021-12-30 19:11:42

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v4 28/39] kasan, page_alloc: allow skipping unpoisoning for HW_TAGS

On Tue, Dec 21, 2021 at 1:05 PM Marco Elver <[email protected]> wrote:
>
> On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
> [...]
> > #ifdef CONFIG_KASAN_HW_TAGS
> > #define __def_gfpflag_names_kasan \
> > - , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"}
> > + , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"} \
> > + , {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, \
> > + "__GFP_SKIP_KASAN_UNPOISON"}
> > #else
> > #define __def_gfpflag_names_kasan
> > #endif
>
> Adhering to 80 cols here makes the above less readable. If you do a v5,
> my suggestion is:
>
> diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
> index f18eeb5fdde2..f9f0ae3a4b6b 100644
> --- a/include/trace/events/mmflags.h
> +++ b/include/trace/events/mmflags.h
> @@ -51,11 +51,10 @@
> {(unsigned long)__GFP_ZEROTAGS, "__GFP_ZEROTAGS"} \
>
> #ifdef CONFIG_KASAN_HW_TAGS
> -#define __def_gfpflag_names_kasan \
> - , {(unsigned long)__GFP_SKIP_ZERO, "__GFP_SKIP_ZERO"} \
> - , {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"} \
> - , {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, \
> - "__GFP_SKIP_KASAN_UNPOISON"}
> +#define __def_gfpflag_names_kasan , \
> + {(unsigned long)__GFP_SKIP_ZERO, "__GFP_SKIP_ZERO"}, \
> + {(unsigned long)__GFP_SKIP_KASAN_POISON, "__GFP_SKIP_KASAN_POISON"}, \
> + {(unsigned long)__GFP_SKIP_KASAN_UNPOISON, "__GFP_SKIP_KASAN_UNPOISON"}
> #else
> #define __def_gfpflag_names_kasan
> #endif

Will do in v5, thanks!

2021-12-30 19:11:49

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v4 29/39] kasan, page_alloc: allow skipping memory init for HW_TAGS

On Tue, Dec 21, 2021 at 1:12 PM Marco Elver <[email protected]> wrote:
>
> On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
> > From: Andrey Konovalov <[email protected]>
> [...]
> > +static inline bool should_skip_init(gfp_t flags)
> > +{
> > + /* Don't skip if a software KASAN mode is enabled. */
> > + if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > + IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> > + return false;
> > +
> > + /* Don't skip, if hardware tag-based KASAN is not enabled. */
> > + if (!kasan_hw_tags_enabled())
> > + return false;
>
> Why is the IS_ENABLED(CONFIG_KASAN_{GENERIC,SW_TAGS}) check above
> required? Isn't kasan_hw_tags_enabled() always false if one of those is
> configured?

It is. I wanted to include those checks for completeness, but maybe
they just cause confusion instead. Will drop them in v5. Thanks!
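
For illustration, a sketch of should_skip_init() with those checks dropped
(assuming the part of the function elided in the quote above simply tests
__GFP_SKIP_ZERO):

static inline bool should_skip_init(gfp_t flags)
{
	/* Don't skip, if hardware tag-based KASAN is not enabled. */
	if (!kasan_hw_tags_enabled())
		return false;

	/* For hardware tag-based KASAN, skip if requested. */
	return (flags & __GFP_SKIP_ZERO);
}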

2021-12-30 19:11:51

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v4 29/39] kasan, page_alloc: allow skipping memory init for HW_TAGS

On Tue, Dec 21, 2021 at 1:30 PM Marco Elver <[email protected]> wrote:
>
> On Mon, Dec 20, 2021 at 11:02PM +0100, [email protected] wrote:
> [...]
> > /* Room for N __GFP_FOO bits */
> > #define __GFP_BITS_SHIFT (24 + \
> > + IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
> > IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
> > IS_ENABLED(CONFIG_KASAN_HW_TAGS) + \
> > IS_ENABLED(CONFIG_LOCKDEP))
>
> Does '3 * IS_ENABLED(CONFIG_KASAN_HW_TAGS)' work?

Yes, will do in v5.

2021-12-30 19:11:55

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v4 35/39] kasan: add kasan.vmalloc command line flag

On Tue, Dec 21, 2021 at 3:43 PM Alexander Potapenko <[email protected]> wrote:
>
> >
> > + switch (kasan_arg_vmalloc) {
> > + case KASAN_ARG_VMALLOC_DEFAULT:
> > + /* Default to enabling vmalloc tagging. */
> > + fallthrough;
> > + case KASAN_ARG_VMALLOC_ON:
> > + static_branch_enable(&kasan_flag_vmalloc);
> > + break;
> > + case KASAN_ARG_VMALLOC_OFF:
> > + /* Do nothing, kasan_flag_vmalloc keeps its default value. */
> > + break;
> > + }
>
> I think we should be setting the default when defining the static key
> (e.g. in this case it should be DEFINE_STATIC_KEY_TRUE), so that:
> - the _DEFAULT case is always empty;
> - the _ON case explicitly enables the static branch
> - the _OFF case explicitly disables the branch
> This way we'll only need to change DEFINE_STATIC_KEY_TRUE to
> DEFINE_STATIC_KEY_FALSE if we want to change the default, but we don't
> have to mess up with the rest of the code.
> Right now the switch statement is confusing, because the _OFF case
> refers to some "default" value, whereas the _DEFAULT one actively
> changes the state.
>
> I see that this code is copied from kasan_flag_stacktrace
> implementation, and my comment also applies there (but I don't insist
> on fixing that one right now).

Will do in v5. Thanks!

2021-12-30 19:12:00

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v4 00/39] kasan, vmalloc, arm64: add vmalloc tagging support for SW/HW_TAGS

On Wed, Dec 22, 2021 at 8:01 AM Marco Elver <[email protected]> wrote:
>
> On Mon, 20 Dec 2021 at 22:58, <[email protected]> wrote:
> >
> > From: Andrey Konovalov <[email protected]>
> >
> > Hi,
> >
> > This patchset adds vmalloc tagging support for SW_TAGS and HW_TAGS
> > KASAN modes.
> >
> > The tree with patches is available here:
> >
> > https://github.com/xairy/linux/tree/up-kasan-vmalloc-tags-v4-akpm
> >
> > About half of patches are cleanups I went for along the way. None of
> > them seem to be important enough to go through stable, so I decided
> > not to split them out into separate patches/series.
> >
> > The patchset is partially based on an early version of the HW_TAGS
> > patchset by Vincenzo that had vmalloc support. Thus, I added a
> > Co-developed-by tag into a few patches.
> >
> > SW_TAGS vmalloc tagging support is straightforward. It reuses all of
> > the generic KASAN machinery, but uses shadow memory to store tags
> > instead of magic values. Naturally, vmalloc tagging requires adding
> > a few kasan_reset_tag() annotations to the vmalloc code.
> >
> > HW_TAGS vmalloc tagging support stands out. HW_TAGS KASAN is based on
> > Arm MTE, which can only assigns tags to physical memory. As a result,
> > HW_TAGS KASAN only tags vmalloc() allocations, which are backed by
> > page_alloc memory. It ignores vmap() and others.
> >
> > Changes in v3->v4:
> [...]
> > Andrey Konovalov (39):
> > kasan, page_alloc: deduplicate should_skip_kasan_poison
> > kasan, page_alloc: move tag_clear_highpage out of
> > kernel_init_free_pages
> > kasan, page_alloc: merge kasan_free_pages into free_pages_prepare
> > kasan, page_alloc: simplify kasan_poison_pages call site
> > kasan, page_alloc: init memory of skipped pages on free
> > kasan: drop skip_kasan_poison variable in free_pages_prepare
> > mm: clarify __GFP_ZEROTAGS comment
> > kasan: only apply __GFP_ZEROTAGS when memory is zeroed
> > kasan, page_alloc: refactor init checks in post_alloc_hook
> > kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook
> > kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook
> > kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook
> > kasan, page_alloc: move kernel_init_free_pages in post_alloc_hook
> > kasan, page_alloc: rework kasan_unpoison_pages call site
> > kasan: clean up metadata byte definitions
> > kasan: define KASAN_VMALLOC_INVALID for SW_TAGS
> > kasan, x86, arm64, s390: rename functions for modules shadow
> > kasan, vmalloc: drop outdated VM_KASAN comment
> > kasan: reorder vmalloc hooks
> > kasan: add wrappers for vmalloc hooks
> > kasan, vmalloc: reset tags in vmalloc functions
> > kasan, fork: reset pointer tags of vmapped stacks
> > kasan, arm64: reset pointer tags of vmapped stacks
> > kasan, vmalloc: add vmalloc tagging for SW_TAGS
> > kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged
> > kasan, vmalloc: unpoison VM_ALLOC pages after mapping
> > kasan, mm: only define ___GFP_SKIP_KASAN_POISON with HW_TAGS
> > kasan, page_alloc: allow skipping unpoisoning for HW_TAGS
> > kasan, page_alloc: allow skipping memory init for HW_TAGS
> > kasan, vmalloc: add vmalloc tagging for HW_TAGS
> > kasan, vmalloc: only tag normal vmalloc allocations
> > kasan, arm64: don't tag executable vmalloc allocations
> > kasan: mark kasan_arg_stacktrace as __initdata
> > kasan: simplify kasan_init_hw_tags
> > kasan: add kasan.vmalloc command line flag
> > kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS
> > arm64: select KASAN_VMALLOC for SW/HW_TAGS modes
> > kasan: documentation updates
> > kasan: improve vmalloc tests
>
> Functionally it all looks good. So rather than acking every patch, for
> the whole series:
>
> Acked-by: Marco Elver <[email protected]>
>
> ... and in case you do a v5, I've left some minor comments.

I will, thanks!

> Happy holidays!

Happy holidays to you too!