2020-10-22 20:12:36

by Andrey Konovalov

Subject: [PATCH RFC v2 00/21] kasan: hardware tag-based mode for production use on arm64

This patchset is not complete (hence sending as RFC), but I would like to
start the discussion now and hear people's opinions regarding the
questions mentioned below.

=== Overview

This patchset adopts the existing hardware tag-based KASAN mode [1] for
use in production as a memory corruption mitigation. Hardware tag-based
KASAN relies on arm64 Memory Tagging Extension (MTE) [2] to perform memory
and pointer tagging. Please see [3] and [4] for detailed analysis of how
MTE helps to fight memory safety problems.

The current plan is to reuse CONFIG_KASAN_HW_TAGS for production, but add a
boot-time switch that allows choosing between a debugging mode, which
includes all KASAN features as they are, and a production mode, which only
includes the essentials like tag checking.

It is essential that switching between these modes doesn't require
rebuilding the kernel with different configs, as this is required by the
Android GKI initiative [5].

The patch titled "kasan: add and integrate kasan boot parameters" of this
series adds a few new boot parameters:

kasan.mode allows choosing one of three main modes:

- kasan.mode=off - no checks at all
- kasan.mode=prod - only essential production features
- kasan.mode=full - all features

The chosen mode provides default values for the three parameters listed
below. However, it's also possible to override the default values by
providing:

- kasan.stack=off/on - enable alloc/free stack trace collection
(default: on for mode=full, otherwise off)
- kasan.trap=async/sync - use async or sync MTE mode
(default: sync for mode=full, otherwise async)
- kasan.fault=report/panic - only report MTE fault or also panic
(default: report)

=== Benchmarks

For now I've only performed a few simple benchmarks such as measuring
kernel boot time and slab memory usage after boot. The benchmarks were
performed in QEMU and the results below exclude the slowdown caused by
QEMU memory tagging emulation (as it's different from the slowdown that
will be introduced by hardware and therefore irrelevant).

KASAN_HW_TAGS=y + kasan.mode=off introduces no performance or memory
impact compared to KASAN_HW_TAGS=n.
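
The near-zero cost of kasan.mode=off comes from the kasan_enabled static key
added later in this series (see "kasan: check kasan_enabled in annotations"):
the actual hooks become out-of-line __kasan_*() functions, and the inline
wrappers only check the key, so each call site reduces to a patched-out
branch. A minimal sketch of the pattern:

  void __kasan_free_pages(struct page *page, unsigned int order);
  static inline void kasan_free_pages(struct page *page, unsigned int order)
  {
          if (static_branch_likely(&kasan_enabled))
                  __kasan_free_pages(page, order);
  }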

kasan.mode=prod (without executing the tagging instructions) introduces
a 7% impact on both performance and memory compared to kasan.mode=off.
Note that 4% of the performance impact and all 7% of the memory impact are
caused by the fact that enabling KASAN essentially results in
CONFIG_SLAB_MERGE_DEFAULT being disabled.

The recommended Android config has CONFIG_SLAB_MERGE_DEFAULT disabled (I
assume for security reasons), but Pixel 4 has it enabled. It's arguable whether
"disabling" CONFIG_SLAB_MERGE_DEFAULT introduces any security benefit on
top of MTE. Without MTE it makes exploiting some heap corruption harder.
With MTE it will only make it harder provided that the attacker is able to
predict allocation tags.

kasan.mode=full has a 40% performance and 30% memory impact over
kasan.mode=prod. Both come from alloc/free stack trace collection.

=== Questions

Any concerns about the boot parameters?

Should we try to deal with the CONFIG_SLAB_MERGE_DEFAULT-like behavior
mentioned above?

=== Notes

This patchset is available here:

https://github.com/xairy/linux/tree/up-prod-mte-rfc2

and on Gerrit here:

https://linux-review.googlesource.com/c/linux/kernel/git/torvalds/linux/+/3707

This patchset is based on v5 of "kasan: add hardware tag-based mode for
arm64" patchset [1] (along with some fixes).

For testing in QEMU hardware tag-based KASAN requires:

1. QEMU built from master [6] (use "-machine virt,mte=on -cpu max" arguments
to run).
2. GCC version 10.

[1] https://lore.kernel.org/linux-arm-kernel/[email protected]/
[2] https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety
[3] https://arxiv.org/pdf/1802.09517.pdf
[4] https://github.com/microsoft/MSRC-Security-Research/blob/master/papers/2020/Security%20analysis%20of%20memory%20tagging.pdf
[5] https://source.android.com/devices/architecture/kernel/generic-kernel-image
[6] https://github.com/qemu/qemu

=== History

Changes RFCv1->RFCv2:
- Rework boot parameters.
- Drop __init from empty kasan_init_tags() definition.
- Add cpu_supports_mte() helper that can be used during early boot and use
it in kasan_init_tags().
- Lots of new KASAN optimization commits.

Andrey Konovalov (21):
kasan: simplify quarantine_put call site
kasan: rename get_alloc/free_info
kasan: introduce set_alloc_info
kasan: unpoison stack only with CONFIG_KASAN_STACK
kasan: allow VMAP_STACK for HW_TAGS mode
kasan: mark kasan_init_tags as __init
kasan, arm64: move initialization message
kasan: remove __kasan_unpoison_stack
kasan: inline kasan_reset_tag for tag-based modes
kasan: inline random_tag for HW_TAGS
kasan: inline kasan_poison_memory and check_invalid_free
kasan: inline and rename kasan_unpoison_memory
arm64: kasan: Add cpu_supports_tags helper
kasan: add and integrate kasan boot parameters
kasan: check kasan_enabled in annotations
kasan: optimize poisoning in kmalloc and krealloc
kasan: simplify kasan_poison_kfree
kasan: rename kasan_poison_kfree
kasan: don't round_up too much
kasan: simplify assign_tag and set_tag calls
kasan: clarify comment in __kasan_kfree_large

arch/Kconfig | 2 +-
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 +
arch/arm64/kernel/mte.c | 20 +++
arch/arm64/kernel/sleep.S | 2 +-
arch/arm64/mm/kasan_init.c | 3 +
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
include/linux/kasan.h | 225 ++++++++++++++++++-------
include/linux/mm.h | 27 ++-
kernel/fork.c | 2 +-
mm/kasan/common.c | 256 ++++++++++++++++-------------
mm/kasan/generic.c | 19 ++-
mm/kasan/hw_tags.c | 182 +++++++++++++++++---
mm/kasan/kasan.h | 102 ++++++++----
mm/kasan/quarantine.c | 5 +-
mm/kasan/report.c | 26 ++-
mm/kasan/report_sw_tags.c | 2 +-
mm/kasan/shadow.c | 1 +
mm/kasan/sw_tags.c | 20 ++-
mm/mempool.c | 2 +-
mm/slab_common.c | 2 +-
mm/slub.c | 3 +-
22 files changed, 641 insertions(+), 269 deletions(-)

--
2.29.0.rc1.297.gfa9743e501-goog


2020-10-22 22:27:52

by Andrey Konovalov

Subject: [PATCH RFC v2 17/21] kasan: simplify kasan_poison_kfree

kasan_poison_kfree() is currently only called for mempool allocations
that are backed by either kmem_cache_alloc() or kmalloc(). Therefore, the
page passed to kasan_poison_kfree() is always PageSlab(), and there's no
need to check for the non-slab case.
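
For reference, a rough sketch of the relevant mempool call site (mm/mempool.c
as of this series; details may differ slightly): page-backed elements go
through kasan_free_pages() instead, so kasan_poison_kfree() only ever sees
slab-backed memory.

  static void kasan_poison_element(mempool_t *pool, void *element)
  {
          if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
                  kasan_poison_kfree(element, _RET_IP_);
          else if (pool->alloc == mempool_alloc_pages)
                  kasan_free_pages(element, (unsigned long)pool->pool_data);
  }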

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/If31f88726745da8744c6bea96fb32584e6c2778c
---
mm/kasan/common.c | 11 +----------
1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a581937c2a44..b82dbae0c5d6 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -441,16 +441,7 @@ void __kasan_poison_kfree(void *ptr, unsigned long ip)
struct page *page;

page = virt_to_head_page(ptr);
-
- if (unlikely(!PageSlab(page))) {
- if (ptr != page_address(page)) {
- kasan_report_invalid_free(ptr, ip);
- return;
- }
- kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
- } else {
- ____kasan_slab_free(page->slab_cache, ptr, ip, false);
- }
+ ____kasan_slab_free(page->slab_cache, ptr, ip, false);
}

void __kasan_kfree_large(void *ptr, unsigned long ip)
--
2.29.0.rc1.297.gfa9743e501-goog

2020-10-22 22:27:52

by Andrey Konovalov

Subject: [PATCH RFC v2 03/21] kasan: introduce set_alloc_info

Add set_alloc_info() helper and move kasan_set_track() into it. This will
simplify the code for one of the upcoming changes.

No functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/I0316193cbb4ecc9b87b7c2eee0dd79f8ec908c1a
---
mm/kasan/common.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8fd04415d8f4..a880e5a547ed 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -318,6 +318,11 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return __kasan_slab_free(cache, object, ip, true);
}

+static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+ kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+}
+
static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags, bool keep_tag)
{
@@ -345,7 +350,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);

if (cache->flags & SLAB_KASAN)
- kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+ set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
}
--
2.29.0.rc1.297.gfa9743e501-goog

2020-10-22 22:27:52

by Andrey Konovalov

Subject: [PATCH RFC v2 13/21] arm64: kasan: Add cpu_supports_tags helper

Add an arm64 helper called cpu_supports_mte() that exposes information
about whether the CPU supports memory tagging and that can be called
during early boot (unlike system_supports_mte()).

Use that helper to implement a generic cpu_supports_tags() helper that
will be used by hardware tag-based KASAN.
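
The intended usage (see "kasan: add and integrate kasan boot parameters"
later in this series) is an early-boot check along these lines:

  void __init kasan_init_tags(void)
  {
          if (!cpu_supports_tags())
                  return;

          /* ... parse boot parameters and call init_tags() ... */
  }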

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/Ib4b56a42c57c6293df29a0cdfee334c3ca7bdab4
---
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 ++++++
arch/arm64/kernel/mte.c | 20 ++++++++++++++++++++
mm/kasan/kasan.h | 4 ++++
4 files changed, 31 insertions(+)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b5d6b824c21c..f496abfcf7f5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -232,6 +232,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
}

#ifdef CONFIG_KASAN_HW_TAGS
+#define arch_cpu_supports_tags() cpu_supports_mte()
#define arch_init_tags(max_tag) mte_init_tags(max_tag)
#define arch_get_random_tag() mte_get_random_tag()
#define arch_get_mem_tag(addr) mte_get_mem_tag(addr)
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index a4c61b926d4a..4c3f2c6b4fe6 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -9,6 +9,7 @@

#ifndef __ASSEMBLY__

+#include <linux/init.h>
#include <linux/types.h>

/*
@@ -30,6 +31,7 @@ u8 mte_get_random_tag(void);
void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);

void mte_init_tags(u64 max_tag);
+bool __init cpu_supports_mte(void);

#else /* CONFIG_ARM64_MTE */

@@ -54,6 +56,10 @@ static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
static inline void mte_init_tags(u64 max_tag)
{
}
+static inline bool cpu_supports_mte(void)
+{
+ return false;
+}

#endif /* CONFIG_ARM64_MTE */

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index ca8206b7f9a6..8fcd17408515 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -134,6 +134,26 @@ void mte_init_tags(u64 max_tag)
gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
}

+/*
+ * This function can be used during early boot to determine whether the CPU
+ * supports MTE. The alternative that must be used after boot is completed is
+ * system_supports_mte(), but it only works after the cpufeature framework
+ * learns about MTE.
+ */
+bool __init cpu_supports_mte(void)
+{
+ u64 pfr1;
+ u32 val;
+
+ if (!IS_ENABLED(CONFIG_ARM64_MTE))
+ return false;
+
+ pfr1 = read_cpuid(ID_AA64PFR1_EL1);
+ val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT);
+
+ return val >= ID_AA64PFR1_MTE;
+}
+
static void update_sctlr_el1_tcf0(u64 tcf0)
{
/* ISB required for the kernel uaccess routines */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index da08b2533d73..f7ae0c23f023 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -240,6 +240,9 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag)))
#define get_tag(addr) arch_kasan_get_tag(addr)

+#ifndef arch_cpu_supports_tags
+#define arch_cpu_supports_tags() (false)
+#endif
#ifndef arch_init_tags
#define arch_init_tags(max_tag)
#endif
@@ -253,6 +256,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
#endif

+#define cpu_supports_tags() arch_cpu_supports_tags()
#define init_tags(max_tag) arch_init_tags(max_tag)
#define get_random_tag() arch_get_random_tag()
#define get_mem_tag(addr) arch_get_mem_tag(addr)
--
2.29.0.rc1.297.gfa9743e501-goog

2020-10-22 22:27:52

by Andrey Konovalov

Subject: [PATCH RFC v2 16/21] kasan: optimize poisoning in kmalloc and krealloc

Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
to unpoison the object data again; only the redzone needs to be poisoned.

This requires changing the KASAN annotation for the early SLUB cache to
kasan_slab_alloc(), as otherwise kasan_kmalloc() doesn't untag the object.
This doesn't introduce any functional changes, as kmem_cache_node->object_size
is equal to sizeof(struct kmem_cache_node).

Similarly, since kasan_krealloc() is called after ksize(), which has
already unpoisoned the object, there's no need to unpoison it again.
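
For reference, a simplified sketch of the SLUB kmalloc path as of this
series (not the exact mm/slub.c code): kasan_slab_alloc() runs from
slab_post_alloc_hook() inside slab_alloc(), and kasan_kmalloc() runs right
after it, so the object data is already unpoisoned when kasan_kmalloc() is
called:

  void *__kmalloc(size_t size, gfp_t flags)
  {
          struct kmem_cache *s = kmalloc_slab(size, flags);
          void *ret;

          ret = slab_alloc(s, flags, _RET_IP_);     /* calls kasan_slab_alloc() */
          ret = kasan_kmalloc(s, ret, size, flags); /* only needs to poison the redzone */
          return ret;
  }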

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/I4083d3b55605f70fef79bca9b90843c4390296f2
---
mm/kasan/common.c | 31 +++++++++++++++++++++----------
mm/slub.c | 3 +--
2 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index c5ec60e1a4d2..a581937c2a44 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -360,8 +360,14 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
tag = assign_tag(cache, object, false, keep_tag);

- /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
- kasan_unpoison_memory(set_tag(object, tag), size);
+ /*
+ * Don't unpoison the object when keeping the tag. Tag is kept for:
+ * 1. krealloc(), and then the memory has already been unpoisoned via ksize();
+ * 2. kmalloc(), and then the memory has already been unpoisoned by kasan_kmalloc().
+ * Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS.
+ */
+ if (!keep_tag)
+ kasan_unpoison_memory(set_tag(object, tag), size);
kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);

@@ -384,10 +390,9 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
}
EXPORT_SYMBOL(__kasan_kmalloc);

-void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
- gfp_t flags)
+static void * __must_check ____kasan_kmalloc_large(struct page *page, const void *ptr,
+ size_t size, gfp_t flags, bool realloc)
{
- struct page *page;
unsigned long redzone_start;
unsigned long redzone_end;

@@ -397,18 +402,24 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
if (unlikely(ptr == NULL))
return NULL;

- page = virt_to_page(ptr);
- redzone_start = round_up((unsigned long)(ptr + size),
- KASAN_GRANULE_SIZE);
+ redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
redzone_end = (unsigned long)ptr + page_size(page);

- kasan_unpoison_memory(ptr, size);
+ /* ksize() in __do_krealloc() already unpoisoned the memory. */
+ if (!realloc)
+ kasan_unpoison_memory(ptr, size);
kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
KASAN_PAGE_REDZONE);

return (void *)ptr;
}

+void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
+ gfp_t flags)
+{
+ return ____kasan_kmalloc_large(virt_to_page(ptr), ptr, size, flags, false);
+}
+
void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
struct page *page;
@@ -419,7 +430,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
page = virt_to_head_page(object);

if (unlikely(!PageSlab(page)))
- return __kasan_kmalloc_large(object, size, flags);
+ return ____kasan_kmalloc_large(page, object, size, flags, true);
else
return ____kasan_kmalloc(page->slab_cache, object, size,
flags, true);
diff --git a/mm/slub.c b/mm/slub.c
index 1d3f2355df3b..afb035b0bf2d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3535,8 +3535,7 @@ static void early_kmem_cache_node_alloc(int node)
init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
init_tracking(kmem_cache_node, n);
#endif
- n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
- GFP_KERNEL);
+ n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
page->freelist = get_freepointer(kmem_cache_node, n);
page->inuse = 1;
page->frozen = 0;
--
2.29.0.rc1.297.gfa9743e501-goog

2020-10-22 22:27:52

by Andrey Konovalov

Subject: [PATCH RFC v2 14/21] kasan: add and integrate kasan boot parameters

TODO: no meaningful description here yet, please see the cover letter
for this RFC series.

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
---
mm/kasan/common.c | 92 +++++++++++++-----------
mm/kasan/generic.c | 5 ++
mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
mm/kasan/kasan.h | 9 +++
mm/kasan/report.c | 14 +++-
mm/kasan/sw_tags.c | 5 ++
6 files changed, 250 insertions(+), 44 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 1a5e6c279a72..cc129ef62ab1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
unsigned int redzone_size;
int redzone_adjust;

- /* Add alloc meta. */
- cache->kasan_info.alloc_meta_offset = *size;
- *size += sizeof(struct kasan_alloc_meta);
-
- /* Add free meta. */
- if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
- cache->object_size < sizeof(struct kasan_free_meta))) {
- cache->kasan_info.free_meta_offset = *size;
- *size += sizeof(struct kasan_free_meta);
- }
-
- redzone_size = optimal_redzone(cache->object_size);
- redzone_adjust = redzone_size - (*size - cache->object_size);
- if (redzone_adjust > 0)
- *size += redzone_adjust;
-
- *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
- max(*size, cache->object_size + redzone_size));
+ if (static_branch_unlikely(&kasan_stack)) {
+ /* Add alloc meta. */
+ cache->kasan_info.alloc_meta_offset = *size;
+ *size += sizeof(struct kasan_alloc_meta);
+
+ /* Add free meta. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+ cache->object_size < sizeof(struct kasan_free_meta))) {
+ cache->kasan_info.free_meta_offset = *size;
+ *size += sizeof(struct kasan_free_meta);
+ }

- /*
- * If the metadata doesn't fit, don't enable KASAN at all.
- */
- if (*size <= cache->kasan_info.alloc_meta_offset ||
- *size <= cache->kasan_info.free_meta_offset) {
- cache->kasan_info.alloc_meta_offset = 0;
- cache->kasan_info.free_meta_offset = 0;
- *size = orig_size;
- return;
+ redzone_size = optimal_redzone(cache->object_size);
+ redzone_adjust = redzone_size - (*size - cache->object_size);
+ if (redzone_adjust > 0)
+ *size += redzone_adjust;
+
+ *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
+ max(*size, cache->object_size + redzone_size));
+
+ /*
+ * If the metadata doesn't fit, don't enable KASAN at all.
+ */
+ if (*size <= cache->kasan_info.alloc_meta_offset ||
+ *size <= cache->kasan_info.free_meta_offset) {
+ cache->kasan_info.alloc_meta_offset = 0;
+ cache->kasan_info.free_meta_offset = 0;
+ *size = orig_size;
+ return;
+ }
}

*flags |= SLAB_KASAN;
@@ -165,10 +167,12 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,

size_t kasan_metadata_size(struct kmem_cache *cache)
{
- return (cache->kasan_info.alloc_meta_offset ?
- sizeof(struct kasan_alloc_meta) : 0) +
- (cache->kasan_info.free_meta_offset ?
- sizeof(struct kasan_free_meta) : 0);
+ if (static_branch_unlikely(&kasan_stack))
+ return (cache->kasan_info.alloc_meta_offset ?
+ sizeof(struct kasan_alloc_meta) : 0) +
+ (cache->kasan_info.free_meta_offset ?
+ sizeof(struct kasan_free_meta) : 0);
+ return 0;
}

struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
@@ -270,8 +274,10 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
if (!(cache->flags & SLAB_KASAN))
return (void *)object;

- alloc_meta = kasan_get_alloc_meta(cache, object);
- __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ if (static_branch_unlikely(&kasan_stack)) {
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ }

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
object = set_tag(object, assign_tag(cache, object, true, false));
@@ -308,15 +314,19 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);

- if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
- unlikely(!(cache->flags & SLAB_KASAN)))
- return false;
+ if (static_branch_unlikely(&kasan_stack)) {
+ if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+ unlikely(!(cache->flags & SLAB_KASAN)))
+ return false;
+
+ kasan_set_free_info(cache, object, tag);

- kasan_set_free_info(cache, object, tag);
+ quarantine_put(cache, object);

- quarantine_put(cache, object);
+ return IS_ENABLED(CONFIG_KASAN_GENERIC);
+ }

- return IS_ENABLED(CONFIG_KASAN_GENERIC);
+ return false;
}

bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -355,7 +365,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);

- if (cache->flags & SLAB_KASAN)
+ if (static_branch_unlikely(&kasan_stack) && (cache->flags & SLAB_KASAN))
set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d259e4c3aefd..20a1e753e0c5 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -33,6 +33,11 @@
#include "kasan.h"
#include "../slab.h"

+/* See the comments in hw_tags.c */
+DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
+EXPORT_SYMBOL(kasan_enabled);
+DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
+
/*
* All functions below always inlined so compiler could
* perform better optimizations in each of __asan_loadX/__assn_storeX
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 915142da6b57..bccd781011ad 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -8,6 +8,8 @@

#define pr_fmt(fmt) "kasan: " fmt

+#include <linux/init.h>
+#include <linux/jump_label.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/memory.h>
@@ -17,10 +19,175 @@

#include "kasan.h"

+enum kasan_arg_mode {
+ KASAN_ARG_MODE_OFF,
+ KASAN_ARG_MODE_PROD,
+ KASAN_ARG_MODE_FULL,
+};
+
+enum kasan_arg_stack {
+ KASAN_ARG_STACK_DEFAULT,
+ KASAN_ARG_STACK_OFF,
+ KASAN_ARG_STACK_ON,
+};
+
+enum kasan_arg_trap {
+ KASAN_ARG_TRAP_DEFAULT,
+ KASAN_ARG_TRAP_ASYNC,
+ KASAN_ARG_TRAP_SYNC,
+};
+
+enum kasan_arg_fault {
+ KASAN_ARG_FAULT_DEFAULT,
+ KASAN_ARG_FAULT_REPORT,
+ KASAN_ARG_FAULT_PANIC,
+};
+
+static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
+static enum kasan_arg_stack kasan_arg_stack __ro_after_init;
+static enum kasan_arg_fault kasan_arg_fault __ro_after_init;
+static enum kasan_arg_trap kasan_arg_trap __ro_after_init;
+
+/* Whether KASAN is enabled at all. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_enabled);
+EXPORT_SYMBOL(kasan_enabled);
+
+/* Whether to collect alloc/free stack traces. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_stack);
+
+/* Whether to use synchronous or asynchronous tag checking. */
+static bool kasan_sync __ro_after_init;
+
+/* Whether panic or disable tag checking on fault. */
+bool kasan_panic __ro_after_init;
+
+/* kasan.mode=off/prod/full */
+static int __init early_kasan_mode(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_mode = KASAN_ARG_MODE_OFF;
+ else if (!strcmp(arg, "prod"))
+ kasan_arg_mode = KASAN_ARG_MODE_PROD;
+ else if (!strcmp(arg, "full"))
+ kasan_arg_mode = KASAN_ARG_MODE_FULL;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.mode", early_kasan_mode);
+
+/* kasan.stack=off/on */
+static int __init early_kasan_stack(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_stack = KASAN_ARG_STACK_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_stack = KASAN_ARG_STACK_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.stack", early_kasan_stack);
+
+/* kasan.trap=sync/async */
+static int __init early_kasan_trap(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "async"))
+ kasan_arg_trap = KASAN_ARG_TRAP_ASYNC;
+ else if (!strcmp(arg, "sync"))
+ kasan_arg_trap = KASAN_ARG_TRAP_SYNC;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.trap", early_kasan_trap);
+
+/* kasan.fault=report/panic */
+static int __init early_kasan_fault(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "report"))
+ kasan_arg_fault = KASAN_ARG_FAULT_REPORT;
+ else if (!strcmp(arg, "panic"))
+ kasan_arg_fault = KASAN_ARG_FAULT_PANIC;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.fault", early_kasan_fault);
+
void __init kasan_init_tags(void)
{
- init_tags(KASAN_TAG_MAX);
+ if (!cpu_supports_tags())
+ return;
+
+ /* First, preset values based on the mode. */
+
+ switch (kasan_arg_mode) {
+ case KASAN_ARG_MODE_OFF:
+ return;
+ case KASAN_ARG_MODE_PROD:
+ static_branch_enable(&kasan_enabled);
+ break;
+ case KASAN_ARG_MODE_FULL:
+ static_branch_enable(&kasan_enabled);
+ static_branch_enable(&kasan_stack);
+ kasan_sync = true;
+ break;
+ }
+
+ /* Now, optionally override the presets. */

+ switch (kasan_arg_stack) {
+ case KASAN_ARG_STACK_OFF:
+ static_branch_disable(&kasan_stack);
+ break;
+ case KASAN_ARG_STACK_ON:
+ static_branch_enable(&kasan_stack);
+ break;
+ default:
+ break;
+ }
+
+ switch (kasan_arg_trap) {
+ case KASAN_ARG_TRAP_ASYNC:
+ kasan_sync = false;
+ break;
+ case KASAN_ARG_TRAP_SYNC:
+ kasan_sync = true;
+ break;
+ default:
+ break;
+ }
+
+ switch (kasan_arg_fault) {
+ case KASAN_ARG_FAULT_REPORT:
+ kasan_panic = false;
+ break;
+ case KASAN_ARG_FAULT_PANIC:
+ kasan_panic = true;
+ break;
+ default:
+ break;
+ }
+
+ /* TODO: choose between sync and async based on kasan_sync. */
+ init_tags(KASAN_TAG_MAX);
pr_info("KernelAddressSanitizer initialized\n");
}

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index f7ae0c23f023..00b47bc753aa 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -2,9 +2,18 @@
#ifndef __MM_KASAN_KASAN_H
#define __MM_KASAN_KASAN_H

+#include <linux/jump_label.h>
#include <linux/kasan.h>
#include <linux/stackdepot.h>

+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_stack);
+#else
+DECLARE_STATIC_KEY_TRUE(kasan_stack);
+#endif
+
+extern bool kasan_panic __ro_after_init;
+
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#else
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index dee5350b459c..426dd1962d3c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -97,6 +97,10 @@ static void end_report(unsigned long *flags)
panic_on_warn = 0;
panic("panic_on_warn set ...\n");
}
+#ifdef CONFIG_KASAN_HW_TAGS
+ if (kasan_panic)
+ panic("kasan.fault=panic set ...\n");
+#endif
kasan_enable_current();
}

@@ -159,8 +163,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
(void *)(object_addr + cache->object_size));
}

-static void describe_object(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object_stacks(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
{
struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);

@@ -188,7 +192,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
}
#endif
}
+}

+static void describe_object(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
+{
+ if (static_branch_unlikely(&kasan_stack))
+ describe_object_stacks(cache, object, addr, tag);
describe_object_addr(cache, object, addr);
}

diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index 4db41f274702..b6d185adf2c5 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -33,6 +33,11 @@
#include "kasan.h"
#include "../slab.h"

+/* See the comments in hw_tags.c */
+DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
+EXPORT_SYMBOL(kasan_enabled);
+DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
+
static DEFINE_PER_CPU(u32, prng_state);

void __init kasan_init_tags(void)
--
2.29.0.rc1.297.gfa9743e501-goog

2020-10-22 22:27:52

by Andrey Konovalov

Subject: [PATCH RFC v2 15/21] kasan: check kasan_enabled in annotations

Declare the kasan_enabled static key in include/linux/kasan.h and in
include/linux/mm.h and check it in all kasan annotations. This allows
avoiding any slowdown caused by function calls when kasan_enabled is
disabled.

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e
---
include/linux/kasan.h | 210 ++++++++++++++++++++++++++++++++----------
include/linux/mm.h | 27 ++++--
mm/kasan/common.c | 60 ++++++------
3 files changed, 211 insertions(+), 86 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 2b9023224474..8654275aa62e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -2,6 +2,7 @@
#ifndef _LINUX_KASAN_H
#define _LINUX_KASAN_H

+#include <linux/jump_label.h>
#include <linux/types.h>

struct kmem_cache;
@@ -66,40 +67,154 @@ static inline void kasan_disable_current(void) {}

#ifdef CONFIG_KASAN

-void kasan_alloc_pages(struct page *page, unsigned int order);
-void kasan_free_pages(struct page *page, unsigned int order);
+struct kasan_cache {
+ int alloc_meta_offset;
+ int free_meta_offset;
+};
+
+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_enabled);
+#else
+DECLARE_STATIC_KEY_TRUE(kasan_enabled);
+#endif

-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags);
+void __kasan_alloc_pages(struct page *page, unsigned int order);
+static inline void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_alloc_pages(page, order);
+}

-void kasan_unpoison_data(const void *address, size_t size);
-void kasan_unpoison_slab(const void *ptr);
+void __kasan_free_pages(struct page *page, unsigned int order);
+static inline void kasan_free_pages(struct page *page, unsigned int order)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_free_pages(page, order);
+}

-void kasan_poison_slab(struct page *page);
-void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
-void kasan_poison_object_data(struct kmem_cache *cache, void *object);
-void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
- const void *object);
+void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags);
+static inline void kasan_cache_create(struct kmem_cache *cache,
+ unsigned int *size, slab_flags_t *flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_cache_create(cache, size, flags);
+}

-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
- gfp_t flags);
-void kasan_kfree_large(void *ptr, unsigned long ip);
-void kasan_poison_kfree(void *ptr, unsigned long ip);
-void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags);
+size_t __kasan_metadata_size(struct kmem_cache *cache);
+static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_metadata_size(cache);
+ return 0;
+}

-void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
- gfp_t flags);
-bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+void __kasan_unpoison_data(const void *addr, size_t size);
+static inline void kasan_unpoison_data(const void *addr, size_t size)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_unpoison_data(addr, size);
+}

-struct kasan_cache {
- int alloc_meta_offset;
- int free_meta_offset;
-};
+void __kasan_unpoison_slab(const void *ptr);
+static inline void kasan_unpoison_slab(const void *ptr)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_unpoison_slab(ptr);
+}
+
+void __kasan_poison_slab(struct page *page);
+static inline void kasan_poison_slab(struct page *page)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_poison_slab(page);
+}
+
+void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_unpoison_object_data(cache, object);
+}
+
+void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
+static inline void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_poison_object_data(cache, object);
+}
+
+void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
+ const void *object);
+static inline void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
+ const void *object)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_init_slab_obj(cache, object);
+ return (void *)object;
+}
+
+bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_slab_free(s, object, ip);
+ return false;
+}

-size_t kasan_metadata_size(struct kmem_cache *cache);
+void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
+ void *object, gfp_t flags);
+static inline void * __must_check kasan_slab_alloc(struct kmem_cache *s,
+ void *object, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_slab_alloc(s, object, flags);
+ return object;
+}
+
+void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags);
+static inline void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_kmalloc(s, object, size, flags);
+ return (void *)object;
+}
+
+void * __must_check __kasan_kmalloc_large(const void *ptr,
+ size_t size, gfp_t flags);
+static inline void * __must_check kasan_kmalloc_large(const void *ptr,
+ size_t size, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_kmalloc_large(ptr, size, flags);
+ return (void *)ptr;
+}
+
+void * __must_check __kasan_krealloc(const void *object,
+ size_t new_size, gfp_t flags);
+static inline void * __must_check kasan_krealloc(const void *object,
+ size_t new_size, gfp_t flags)
+{
+ if (static_branch_likely(&kasan_enabled))
+ return __kasan_krealloc(object, new_size, flags);
+ return (void *)object;
+}
+
+void __kasan_poison_kfree(void *ptr, unsigned long ip);
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_poison_kfree(ptr, ip);
+}
+
+void __kasan_kfree_large(void *ptr, unsigned long ip);
+static inline void kasan_kfree_large(void *ptr, unsigned long ip)
+{
+ if (static_branch_likely(&kasan_enabled))
+ __kasan_kfree_large(ptr, ip);
+}

bool kasan_save_enable_multi_shot(void);
void kasan_restore_multi_shot(bool enabled);
@@ -108,14 +223,12 @@ void kasan_restore_multi_shot(bool enabled);

static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}
-
static inline void kasan_cache_create(struct kmem_cache *cache,
unsigned int *size,
slab_flags_t *flags) {}
-
-static inline void kasan_unpoison_data(const void *address, size_t size) { }
-static inline void kasan_unpoison_slab(const void *ptr) { }
-
+static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
+static inline void kasan_unpoison_data(const void *address, size_t size) {}
+static inline void kasan_unpoison_slab(const void *ptr) {}
static inline void kasan_poison_slab(struct page *page) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
void *object) {}
@@ -126,36 +239,33 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
{
return (void *)object;
}
-
-static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
+ unsigned long ip)
{
- return ptr;
+ return false;
}
-static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
-static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags)
+static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
+ gfp_t flags)
{
- return (void *)object;
+ return object;
}
-static inline void *kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags)
+static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags)
{
return (void *)object;
}

-static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
- gfp_t flags)
+static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
{
- return object;
+ return (void *)ptr;
}
-static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
- unsigned long ip)
+static inline void *kasan_krealloc(const void *object, size_t new_size,
+ gfp_t flags)
{
- return false;
+ return (void *)object;
}
-
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}

#endif /* CONFIG_KASAN */

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a3cac68c737c..701e9d7666d6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1412,22 +1412,36 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
#endif /* CONFIG_NUMA_BALANCING */

#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+
+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_enabled);
+#else
+DECLARE_STATIC_KEY_TRUE(kasan_enabled);
+#endif
+
static inline u8 page_kasan_tag(const struct page *page)
{
- return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+ if (static_branch_likely(&kasan_enabled))
+ return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+ return 0xff;
}

static inline void page_kasan_tag_set(struct page *page, u8 tag)
{
- page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
- page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+ if (static_branch_likely(&kasan_enabled)) {
+ page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+ page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+ }
}

static inline void page_kasan_tag_reset(struct page *page)
{
- page_kasan_tag_set(page, 0xff);
+ if (static_branch_likely(&kasan_enabled))
+ page_kasan_tag_set(page, 0xff);
}
-#else
+
+#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
static inline u8 page_kasan_tag(const struct page *page)
{
return 0xff;
@@ -1435,7 +1449,8 @@ static inline u8 page_kasan_tag(const struct page *page)

static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
static inline void page_kasan_tag_reset(struct page *page) { }
-#endif
+
+#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */

static inline struct zone *page_zone(const struct page *page)
{
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index cc129ef62ab1..c5ec60e1a4d2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -81,7 +81,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
}
#endif /* CONFIG_KASAN_STACK */

-void kasan_alloc_pages(struct page *page, unsigned int order)
+void __kasan_alloc_pages(struct page *page, unsigned int order)
{
u8 tag;
unsigned long i;
@@ -95,7 +95,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
kasan_unpoison_memory(page_address(page), PAGE_SIZE << order);
}

-void kasan_free_pages(struct page *page, unsigned int order)
+void __kasan_free_pages(struct page *page, unsigned int order)
{
if (likely(!PageHighMem(page)))
kasan_poison_memory(page_address(page),
@@ -122,8 +122,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
object_size <= (1 << 16) - 1024 ? 1024 : 2048;
}

-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags)
+void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags)
{
unsigned int orig_size = *size;
unsigned int redzone_size;
@@ -165,7 +165,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
*flags |= SLAB_KASAN;
}

-size_t kasan_metadata_size(struct kmem_cache *cache)
+size_t __kasan_metadata_size(struct kmem_cache *cache)
{
if (static_branch_unlikely(&kasan_stack))
return (cache->kasan_info.alloc_meta_offset ?
@@ -188,17 +188,17 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
}

-void kasan_unpoison_data(const void *address, size_t size)
+void __kasan_unpoison_data(const void *addr, size_t size)
{
- kasan_unpoison_memory(address, size);
+ kasan_unpoison_memory(addr, size);
}

-void kasan_unpoison_slab(const void *ptr)
+void __kasan_unpoison_slab(const void *ptr)
{
kasan_unpoison_memory(ptr, __ksize(ptr));
}

-void kasan_poison_slab(struct page *page)
+void __kasan_poison_slab(struct page *page)
{
unsigned long i;

@@ -208,12 +208,12 @@ void kasan_poison_slab(struct page *page)
KASAN_KMALLOC_REDZONE);
}

-void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
{
kasan_unpoison_memory(object, cache->object_size);
}

-void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
{
kasan_poison_memory(object,
round_up(cache->object_size, KASAN_GRANULE_SIZE),
@@ -266,7 +266,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
#endif
}

-void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
+void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
{
struct kasan_alloc_meta *alloc_meta;
@@ -285,7 +285,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
return (void *)object;
}

-static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
+static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
unsigned long ip, bool quarantine)
{
u8 tag;
@@ -329,9 +329,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
return false;
}

-bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
{
- return __kasan_slab_free(cache, object, ip, true);
+ return ____kasan_slab_free(cache, object, ip, true);
}

static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
@@ -339,7 +339,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
}

-static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
+static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags, bool keep_tag)
{
unsigned long redzone_start;
@@ -371,20 +371,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
return set_tag(object, tag);
}

-void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
- gfp_t flags)
+void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
+ void *object, gfp_t flags)
{
- return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
+ return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
}

-void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags)
+void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
+ size_t size, gfp_t flags)
{
- return __kasan_kmalloc(cache, object, size, flags, true);
+ return ____kasan_kmalloc(cache, object, size, flags, true);
}
-EXPORT_SYMBOL(kasan_kmalloc);
+EXPORT_SYMBOL(__kasan_kmalloc);

-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
gfp_t flags)
{
struct page *page;
@@ -409,7 +409,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
return (void *)ptr;
}

-void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
struct page *page;

@@ -419,13 +419,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
page = virt_to_head_page(object);

if (unlikely(!PageSlab(page)))
- return kasan_kmalloc_large(object, size, flags);
+ return __kasan_kmalloc_large(object, size, flags);
else
- return __kasan_kmalloc(page->slab_cache, object, size,
+ return ____kasan_kmalloc(page->slab_cache, object, size,
flags, true);
}

-void kasan_poison_kfree(void *ptr, unsigned long ip)
+void __kasan_poison_kfree(void *ptr, unsigned long ip)
{
struct page *page;

@@ -438,11 +438,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
}
kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
} else {
- __kasan_slab_free(page->slab_cache, ptr, ip, false);
+ ____kasan_slab_free(page->slab_cache, ptr, ip, false);
}
}

-void kasan_kfree_large(void *ptr, unsigned long ip)
+void __kasan_kfree_large(void *ptr, unsigned long ip)
{
if (ptr != page_address(virt_to_head_page(ptr)))
kasan_report_invalid_free(ptr, ip);
--
2.29.0.rc1.297.gfa9743e501-goog

2020-10-22 23:59:13

by Dmitry Vyukov

Subject: Re: [PATCH RFC v2 00/21] kasan: hardware tag-based mode for production use on arm64

On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <[email protected]> wrote:
>
> This patchset is not complete (hence sending as RFC), but I would like to
> start the discussion now and hear people's opinions regarding the
> questions mentioned below.
>
> === Overview
>
> This patchset adopts the existing hardware tag-based KASAN mode [1] for
> use in production as a memory corruption mitigation. Hardware tag-based
> KASAN relies on arm64 Memory Tagging Extension (MTE) [2] to perform memory
> and pointer tagging. Please see [3] and [4] for detailed analysis of how
> MTE helps to fight memory safety problems.
>
> The current plan is to reuse CONFIG_KASAN_HW_TAGS for production, but add a
> boot-time switch that allows choosing between a debugging mode, which
> includes all KASAN features as they are, and a production mode, which only
> includes the essentials like tag checking.
>
> It is essential that switching between these modes doesn't require
> rebuilding the kernel with different configs, as this is required by the
> Android GKI initiative [5].
>
> The patch titled "kasan: add and integrate kasan boot parameters" of this
> series adds a few new boot parameters:
>
> kasan.mode allows choosing one of three main modes:
>
> - kasan.mode=off - no checks at all
> - kasan.mode=prod - only essential production features
> - kasan.mode=full - all features
>
> The chosen mode provides default values for the three parameters listed
> below. However, it's also possible to override the default values by
> providing:
>
> - kasan.stack=off/on - enable alloc/free stack trace collection
> (default: on for mode=full, otherwise off)
> - kasan.trap=async/sync - use async or sync MTE mode
> (default: sync for mode=full, otherwise async)
> - kasan.fault=report/panic - only report MTE fault or also panic
> (default: report)
>
> === Benchmarks
>
> For now I've only performed a few simple benchmarks such as measuring
> kernel boot time and slab memory usage after boot. The benchmarks were
> performed in QEMU and the results below exclude the slowdown caused by
> QEMU memory tagging emulation (as it's different from the slowdown that
> will be introduced by hardware and therefore irrelevant).
>
> KASAN_HW_TAGS=y + kasan.mode=off introduces no performance or memory
> impact compared to KASAN_HW_TAGS=n.
>
> kasan.mode=prod (without executing the tagging instructions) introduces
> a 7% impact on both performance and memory compared to kasan.mode=off.
> Note that 4% of the performance impact and all 7% of the memory impact are
> caused by the fact that enabling KASAN essentially results in
> CONFIG_SLAB_MERGE_DEFAULT being disabled.
>
> The recommended Android config has CONFIG_SLAB_MERGE_DEFAULT disabled (I
> assume for security reasons), but Pixel 4 has it enabled. It's arguable whether
> "disabling" CONFIG_SLAB_MERGE_DEFAULT introduces any security benefit on
> top of MTE. Without MTE it makes exploiting some heap corruption harder.
> With MTE it will only make it harder provided that the attacker is able to
> predict allocation tags.
>
> kasan.mode=full has a 40% performance and 30% memory impact over
> kasan.mode=prod. Both come from alloc/free stack trace collection.
>
> === Questions
>
> Any concerns about the boot parameters?

For boot parameters I think we are now "safe" in the sense that we
provide maximum possible flexibility and can defer any actual
decisions.

> Should we try to deal with the CONFIG_SLAB_MERGE_DEFAULT-like behavior
> mentioned above?

How hard is it to allow KASAN with CONFIG_SLAB_MERGE_DEFAULT? Are
there any fundamental conflicts?
The numbers you provided look quite substantial (on par with what MTE
itself may introduce). So I would assume that if a vendor does not have
CONFIG_SLAB_MERGE_DEFAULT disabled, it may not want to disable it
because of MTE (as that would effectively double the overhead).

2020-10-28 07:37:21

by Dmitry Vyukov

Subject: Re: [PATCH RFC v2 03/21] kasan: introduce set_alloc_info

On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <[email protected]> wrote:
>
> Add set_alloc_info() helper and move kasan_set_track() into it. This will
> simplify the code for one of the upcoming changes.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/I0316193cbb4ecc9b87b7c2eee0dd79f8ec908c1a

Reviewed-by: Dmitry Vyukov <[email protected]>

> ---
> mm/kasan/common.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 8fd04415d8f4..a880e5a547ed 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -318,6 +318,11 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> return __kasan_slab_free(cache, object, ip, true);
> }
>
> +static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> +{
> + kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> +}
> +
> static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> size_t size, gfp_t flags, bool keep_tag)
> {
> @@ -345,7 +350,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> KASAN_KMALLOC_REDZONE);
>
> if (cache->flags & SLAB_KASAN)
> - kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> + set_alloc_info(cache, (void *)object, flags);
>
> return set_tag(object, tag);
> }
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

2020-10-28 22:15:42

by Dmitry Vyukov

Subject: Re: [PATCH RFC v2 15/21] kasan: check kasan_enabled in annotations

On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <[email protected]> wrote:
>
> Declare the kasan_enabled static key in include/linux/kasan.h and in
> include/linux/mm.h and check it in all kasan annotations. This allows
> avoiding any slowdown caused by function calls when kasan_enabled is
> disabled.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e
> ---
> include/linux/kasan.h | 210 ++++++++++++++++++++++++++++++++----------
> include/linux/mm.h | 27 ++++--
> mm/kasan/common.c | 60 ++++++------
> 3 files changed, 211 insertions(+), 86 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 2b9023224474..8654275aa62e 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -2,6 +2,7 @@
> #ifndef _LINUX_KASAN_H
> #define _LINUX_KASAN_H
>
> +#include <linux/jump_label.h>
> #include <linux/types.h>
>
> struct kmem_cache;
> @@ -66,40 +67,154 @@ static inline void kasan_disable_current(void) {}
>
> #ifdef CONFIG_KASAN
>
> -void kasan_alloc_pages(struct page *page, unsigned int order);
> -void kasan_free_pages(struct page *page, unsigned int order);
> +struct kasan_cache {
> + int alloc_meta_offset;
> + int free_meta_offset;
> +};
> +
> +#ifdef CONFIG_KASAN_HW_TAGS
> +DECLARE_STATIC_KEY_FALSE(kasan_enabled);
> +#else
> +DECLARE_STATIC_KEY_TRUE(kasan_enabled);
> +#endif
>
> -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> - slab_flags_t *flags);
> +void __kasan_alloc_pages(struct page *page, unsigned int order);
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_alloc_pages(page, order);

The patch looks fine per se, but I think with the suggestion in the
previous patch, this should be:

if (kasan_is_enabled())
__kasan_alloc_pages(page, order);

No overhead for other modes and less logic duplication.
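
For illustration only, one hypothetical shape of such a helper (not code
from the posted series), hiding the mode-specific static key behind a
single predicate:

  static inline bool kasan_is_enabled(void)
  {
  #ifdef CONFIG_KASAN_HW_TAGS
          return static_branch_likely(&kasan_enabled);
  #else
          return true;    /* generic and SW_TAGS modes are always on */
  #endif
  }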

> +}
>
> -void kasan_unpoison_data(const void *address, size_t size);
> -void kasan_unpoison_slab(const void *ptr);
> +void __kasan_free_pages(struct page *page, unsigned int order);
> +static inline void kasan_free_pages(struct page *page, unsigned int order)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_free_pages(page, order);
> +}
>
> -void kasan_poison_slab(struct page *page);
> -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> -void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> - const void *object);
> +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags);
> +static inline void kasan_cache_create(struct kmem_cache *cache,
> + unsigned int *size, slab_flags_t *flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_cache_create(cache, size, flags);
> +}
>
> -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> - gfp_t flags);
> -void kasan_kfree_large(void *ptr, unsigned long ip);
> -void kasan_poison_kfree(void *ptr, unsigned long ip);
> -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> - size_t size, gfp_t flags);
> -void * __must_check kasan_krealloc(const void *object, size_t new_size,
> - gfp_t flags);
> +size_t __kasan_metadata_size(struct kmem_cache *cache);
> +static inline size_t kasan_metadata_size(struct kmem_cache *cache)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_metadata_size(cache);
> + return 0;
> +}
>
> -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
> - gfp_t flags);
> -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> +void __kasan_unpoison_data(const void *addr, size_t size);
> +static inline void kasan_unpoison_data(const void *addr, size_t size)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_unpoison_data(addr, size);
> +}
>
> -struct kasan_cache {
> - int alloc_meta_offset;
> - int free_meta_offset;
> -};
> +void __kasan_unpoison_slab(const void *ptr);
> +static inline void kasan_unpoison_slab(const void *ptr)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_unpoison_slab(ptr);
> +}
> +
> +void __kasan_poison_slab(struct page *page);
> +static inline void kasan_poison_slab(struct page *page)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_poison_slab(page);
> +}
> +
> +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> +static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_unpoison_object_data(cache, object);
> +}
> +
> +void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
> +static inline void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_poison_object_data(cache, object);
> +}
> +
> +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> + const void *object);
> +static inline void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> + const void *object)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_init_slab_obj(cache, object);
> + return (void *)object;
> +}
> +
> +bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> +static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_slab_free(s, object, ip);
> + return false;
> +}
>
> -size_t kasan_metadata_size(struct kmem_cache *cache);
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> + void *object, gfp_t flags);
> +static inline void * __must_check kasan_slab_alloc(struct kmem_cache *s,
> + void *object, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_slab_alloc(s, object, flags);
> + return object;
> +}
> +
> +void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags);
> +static inline void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_kmalloc(s, object, size, flags);
> + return (void *)object;
> +}
> +
> +void * __must_check __kasan_kmalloc_large(const void *ptr,
> + size_t size, gfp_t flags);
> +static inline void * __must_check kasan_kmalloc_large(const void *ptr,
> + size_t size, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_kmalloc_large(ptr, size, flags);
> + return (void *)ptr;
> +}
> +
> +void * __must_check __kasan_krealloc(const void *object,
> + size_t new_size, gfp_t flags);
> +static inline void * __must_check kasan_krealloc(const void *object,
> + size_t new_size, gfp_t flags)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + return __kasan_krealloc(object, new_size, flags);
> + return (void *)object;
> +}
> +
> +void __kasan_poison_kfree(void *ptr, unsigned long ip);
> +static inline void kasan_poison_kfree(void *ptr, unsigned long ip)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_poison_kfree(ptr, ip);
> +}
> +
> +void __kasan_kfree_large(void *ptr, unsigned long ip);
> +static inline void kasan_kfree_large(void *ptr, unsigned long ip)
> +{
> + if (static_branch_likely(&kasan_enabled))
> + __kasan_kfree_large(ptr, ip);
> +}
>
> bool kasan_save_enable_multi_shot(void);
> void kasan_restore_multi_shot(bool enabled);
> @@ -108,14 +223,12 @@ void kasan_restore_multi_shot(bool enabled);
>
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> -
> static inline void kasan_cache_create(struct kmem_cache *cache,
> unsigned int *size,
> slab_flags_t *flags) {}
> -
> -static inline void kasan_unpoison_data(const void *address, size_t size) { }
> -static inline void kasan_unpoison_slab(const void *ptr) { }
> -
> +static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> +static inline void kasan_unpoison_data(const void *address, size_t size) {}
> +static inline void kasan_unpoison_slab(const void *ptr) {}
> static inline void kasan_poison_slab(struct page *page) {}
> static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> void *object) {}
> @@ -126,36 +239,33 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
> {
> return (void *)object;
> }
> -
> -static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
> +static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> + unsigned long ip)
> {
> - return ptr;
> + return false;
> }
> -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> -static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> - size_t size, gfp_t flags)
> +static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> + gfp_t flags)
> {
> - return (void *)object;
> + return object;
> }
> -static inline void *kasan_krealloc(const void *object, size_t new_size,
> - gfp_t flags)
> +static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags)
> {
> return (void *)object;
> }
>
> -static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> - gfp_t flags)
> +static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
> {
> - return object;
> + return (void *)ptr;
> }
> -static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> - unsigned long ip)
> +static inline void *kasan_krealloc(const void *object, size_t new_size,
> + gfp_t flags)
> {
> - return false;
> + return (void *)object;
> }
> -
> -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
>
> #endif /* CONFIG_KASAN */
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a3cac68c737c..701e9d7666d6 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1412,22 +1412,36 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
> #endif /* CONFIG_NUMA_BALANCING */
>
> #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +
> +#ifdef CONFIG_KASAN_HW_TAGS
> +DECLARE_STATIC_KEY_FALSE(kasan_enabled);
> +#else
> +DECLARE_STATIC_KEY_TRUE(kasan_enabled);
> +#endif
> +
> static inline u8 page_kasan_tag(const struct page *page)
> {
> - return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> + if (static_branch_likely(&kasan_enabled))
> + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> + return 0xff;
> }
>
> static inline void page_kasan_tag_set(struct page *page, u8 tag)
> {
> - page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> - page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> + if (static_branch_likely(&kasan_enabled)) {
> + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> + }
> }
>
> static inline void page_kasan_tag_reset(struct page *page)
> {
> - page_kasan_tag_set(page, 0xff);
> + if (static_branch_likely(&kasan_enabled))
> + page_kasan_tag_set(page, 0xff);
> }
> -#else
> +
> +#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> +
> static inline u8 page_kasan_tag(const struct page *page)
> {
> return 0xff;
> @@ -1435,7 +1449,8 @@ static inline u8 page_kasan_tag(const struct page *page)
>
> static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
> static inline void page_kasan_tag_reset(struct page *page) { }
> -#endif
> +
> +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
>
> static inline struct zone *page_zone(const struct page *page)
> {
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index cc129ef62ab1..c5ec60e1a4d2 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -81,7 +81,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
> }
> #endif /* CONFIG_KASAN_STACK */
>
> -void kasan_alloc_pages(struct page *page, unsigned int order)
> +void __kasan_alloc_pages(struct page *page, unsigned int order)
> {
> u8 tag;
> unsigned long i;
> @@ -95,7 +95,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
> kasan_unpoison_memory(page_address(page), PAGE_SIZE << order);
> }
>
> -void kasan_free_pages(struct page *page, unsigned int order)
> +void __kasan_free_pages(struct page *page, unsigned int order)
> {
> if (likely(!PageHighMem(page)))
> kasan_poison_memory(page_address(page),
> @@ -122,8 +122,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
> object_size <= (1 << 16) - 1024 ? 1024 : 2048;
> }
>
> -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> - slab_flags_t *flags)
> +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags)
> {
> unsigned int orig_size = *size;
> unsigned int redzone_size;
> @@ -165,7 +165,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> *flags |= SLAB_KASAN;
> }
>
> -size_t kasan_metadata_size(struct kmem_cache *cache)
> +size_t __kasan_metadata_size(struct kmem_cache *cache)
> {
> if (static_branch_unlikely(&kasan_stack))
> return (cache->kasan_info.alloc_meta_offset ?
> @@ -188,17 +188,17 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
> }
>
> -void kasan_unpoison_data(const void *address, size_t size)
> +void __kasan_unpoison_data(const void *addr, size_t size)
> {
> - kasan_unpoison_memory(address, size);
> + kasan_unpoison_memory(addr, size);
> }
>
> -void kasan_unpoison_slab(const void *ptr)
> +void __kasan_unpoison_slab(const void *ptr)
> {
> kasan_unpoison_memory(ptr, __ksize(ptr));
> }
>
> -void kasan_poison_slab(struct page *page)
> +void __kasan_poison_slab(struct page *page)
> {
> unsigned long i;
>
> @@ -208,12 +208,12 @@ void kasan_poison_slab(struct page *page)
> KASAN_KMALLOC_REDZONE);
> }
>
> -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> {
> kasan_unpoison_memory(object, cache->object_size);
> }
>
> -void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> +void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> {
> kasan_poison_memory(object,
> round_up(cache->object_size, KASAN_GRANULE_SIZE),
> @@ -266,7 +266,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
> #endif
> }
>
> -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> const void *object)
> {
> struct kasan_alloc_meta *alloc_meta;
> @@ -285,7 +285,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> return (void *)object;
> }
>
> -static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> +static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> unsigned long ip, bool quarantine)
> {
> u8 tag;
> @@ -329,9 +329,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> return false;
> }
>
> -bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> +bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> {
> - return __kasan_slab_free(cache, object, ip, true);
> + return ____kasan_slab_free(cache, object, ip, true);
> }
>
> static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> @@ -339,7 +339,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> }
>
> -static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> +static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> size_t size, gfp_t flags, bool keep_tag)
> {
> unsigned long redzone_start;
> @@ -371,20 +371,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> return set_tag(object, tag);
> }
>
> -void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
> - gfp_t flags)
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> + void *object, gfp_t flags)
> {
> - return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
> + return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
> }
>
> -void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
> - size_t size, gfp_t flags)
> +void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
> + size_t size, gfp_t flags)
> {
> - return __kasan_kmalloc(cache, object, size, flags, true);
> + return ____kasan_kmalloc(cache, object, size, flags, true);
> }
> -EXPORT_SYMBOL(kasan_kmalloc);
> +EXPORT_SYMBOL(__kasan_kmalloc);
>
> -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> gfp_t flags)
> {
> struct page *page;
> @@ -409,7 +409,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> return (void *)ptr;
> }
>
> -void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> +void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> {
> struct page *page;
>
> @@ -419,13 +419,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> page = virt_to_head_page(object);
>
> if (unlikely(!PageSlab(page)))
> - return kasan_kmalloc_large(object, size, flags);
> + return __kasan_kmalloc_large(object, size, flags);
> else
> - return __kasan_kmalloc(page->slab_cache, object, size,
> + return ____kasan_kmalloc(page->slab_cache, object, size,
> flags, true);
> }
>
> -void kasan_poison_kfree(void *ptr, unsigned long ip)
> +void __kasan_poison_kfree(void *ptr, unsigned long ip)
> {
> struct page *page;
>
> @@ -438,11 +438,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
> }
> kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
> } else {
> - __kasan_slab_free(page->slab_cache, ptr, ip, false);
> + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> }
> }
>
> -void kasan_kfree_large(void *ptr, unsigned long ip)
> +void __kasan_kfree_large(void *ptr, unsigned long ip)
> {
> if (ptr != page_address(virt_to_head_page(ptr)))
> kasan_report_invalid_free(ptr, ip);
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

2020-10-29 07:45:05

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 16/21] kasan: optimize poisoning in kmalloc and krealloc

On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <[email protected]> wrote:
>
> Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
> to reunpoison the object data, only to poison the redzone.
>
> This requires changing kasan annotation for early SLUB cache to
> kasan_slab_alloc(). Otherwise kasan_kmalloc() doesn't untag the object.
> This doesn't do any functional changes, as kmem_cache_node->object_size
> is equal to sizeof(struct kmem_cache_node).
>
> Similarly for kasan_krealloc(), as it's called after ksize(), which
> already unpoisoned the object, there's no need to do it again.

Have you considered doing this the other way around: make krealloc
call __ksize and unpoison in kasan_krealloc?
This has the advantage of more precise poisoning as ksize will
unpoison the whole underlying object.

But then maybe we will need to move the first checks in ksize into
__ksize, as we may need them in krealloc as well.
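
Just to sketch the idea (untested): krealloc() would call __ksize()
directly, and __kasan_krealloc() would do the unpoisoning itself, e.g.
near the top:

	if (unlikely(object == ZERO_SIZE_PTR))
		return (void *)object;

	/*
	 * Unpoison the whole underlying object, as ksize() used to do,
	 * before the redzone is poisoned below.
	 */
	kasan_unpoison_memory(object, __ksize(object));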





> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/I4083d3b55605f70fef79bca9b90843c4390296f2
> ---
> mm/kasan/common.c | 31 +++++++++++++++++++++----------
> mm/slub.c | 3 +--
> 2 files changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index c5ec60e1a4d2..a581937c2a44 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -360,8 +360,14 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> tag = assign_tag(cache, object, false, keep_tag);
>
> - /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> - kasan_unpoison_memory(set_tag(object, tag), size);
> + /*
> + * Don't unpoison the object when keeping the tag. Tag is kept for:
> + * 1. krealloc(), and then the memory has already been unpoisoned via ksize();
> + * 2. kmalloc(), and then the memory has already been unpoisoned by kasan_kmalloc().
> + * Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS.
> + */
> + if (!keep_tag)
> + kasan_unpoison_memory(set_tag(object, tag), size);
> kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> KASAN_KMALLOC_REDZONE);
>
> @@ -384,10 +390,9 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
> }
> EXPORT_SYMBOL(__kasan_kmalloc);
>
> -void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> - gfp_t flags)
> +static void * __must_check ____kasan_kmalloc_large(struct page *page, const void *ptr,
> + size_t size, gfp_t flags, bool realloc)
> {
> - struct page *page;
> unsigned long redzone_start;
> unsigned long redzone_end;
>
> @@ -397,18 +402,24 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> if (unlikely(ptr == NULL))
> return NULL;
>
> - page = virt_to_page(ptr);
> - redzone_start = round_up((unsigned long)(ptr + size),
> - KASAN_GRANULE_SIZE);
> + redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
> redzone_end = (unsigned long)ptr + page_size(page);
>
> - kasan_unpoison_memory(ptr, size);
> + /* ksize() in __do_krealloc() already unpoisoned the memory. */
> + if (!realloc)
> + kasan_unpoison_memory(ptr, size);
> kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> KASAN_PAGE_REDZONE);
>
> return (void *)ptr;
> }
>
> +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> + gfp_t flags)
> +{
> + return ____kasan_kmalloc_large(virt_to_page(ptr), ptr, size, flags, false);
> +}
> +
> void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> {
> struct page *page;
> @@ -419,7 +430,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
> page = virt_to_head_page(object);
>
> if (unlikely(!PageSlab(page)))
> - return __kasan_kmalloc_large(object, size, flags);
> + return ____kasan_kmalloc_large(page, object, size, flags, true);
> else
> return ____kasan_kmalloc(page->slab_cache, object, size,
> flags, true);
> diff --git a/mm/slub.c b/mm/slub.c
> index 1d3f2355df3b..afb035b0bf2d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3535,8 +3535,7 @@ static void early_kmem_cache_node_alloc(int node)
> init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
> init_tracking(kmem_cache_node, n);
> #endif
> - n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
> - GFP_KERNEL);
> + n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
> page->freelist = get_freepointer(kmem_cache_node, n);
> page->inuse = 1;
> page->frozen = 0;
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

2020-10-29 07:48:29

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 17/21] kasan: simplify kasan_poison_kfree

On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <[email protected]> wrote:
>
> kasan_poison_kfree() is currently only called for mempool allocations
> that are backed by either kmem_cache_alloc() or kmalloc(). Therefore, the
> page passed to kasan_poison_kfree() is always PageSlab() and there's no
> need to do the check.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/If31f88726745da8744c6bea96fb32584e6c2778c

Reviewed-by: Dmitry Vyukov <[email protected]>

> ---
> mm/kasan/common.c | 11 +----------
> 1 file changed, 1 insertion(+), 10 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a581937c2a44..b82dbae0c5d6 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -441,16 +441,7 @@ void __kasan_poison_kfree(void *ptr, unsigned long ip)
> struct page *page;
>
> page = virt_to_head_page(ptr);
> -
> - if (unlikely(!PageSlab(page))) {
> - if (ptr != page_address(page)) {
> - kasan_report_invalid_free(ptr, ip);
> - return;
> - }
> - kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
> - } else {
> - ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> - }
> + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> }
>
> void __kasan_kfree_large(void *ptr, unsigned long ip)
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

2020-10-29 08:30:40

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 13/21] arm64: kasan: Add cpu_supports_tags helper

On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <[email protected]> wrote:
>
> Add an arm64 helper called cpu_supports_mte() that exposes information
> about whether the CPU supports memory tagging and that can be called
> during early boot (unlike system_supports_mte()).
>
> Use that helper to implement a generic cpu_supports_tags() helper, that
> will be used by hardware tag-based KASAN.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/Ib4b56a42c57c6293df29a0cdfee334c3ca7bdab4

Reviewed-by: Dmitry Vyukov <[email protected]>

> ---
> arch/arm64/include/asm/memory.h | 1 +
> arch/arm64/include/asm/mte-kasan.h | 6 ++++++
> arch/arm64/kernel/mte.c | 20 ++++++++++++++++++++
> mm/kasan/kasan.h | 4 ++++
> 4 files changed, 31 insertions(+)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index b5d6b824c21c..f496abfcf7f5 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -232,6 +232,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> }
>
> #ifdef CONFIG_KASAN_HW_TAGS
> +#define arch_cpu_supports_tags() cpu_supports_mte()
> #define arch_init_tags(max_tag) mte_init_tags(max_tag)
> #define arch_get_random_tag() mte_get_random_tag()
> #define arch_get_mem_tag(addr) mte_get_mem_tag(addr)
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index a4c61b926d4a..4c3f2c6b4fe6 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -9,6 +9,7 @@
>
> #ifndef __ASSEMBLY__
>
> +#include <linux/init.h>
> #include <linux/types.h>
>
> /*
> @@ -30,6 +31,7 @@ u8 mte_get_random_tag(void);
> void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
>
> void mte_init_tags(u64 max_tag);
> +bool __init cpu_supports_mte(void);
>
> #else /* CONFIG_ARM64_MTE */
>
> @@ -54,6 +56,10 @@ static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> static inline void mte_init_tags(u64 max_tag)
> {
> }
> +static inline bool cpu_supports_mte(void)
> +{
> + return false;
> +}
>
> #endif /* CONFIG_ARM64_MTE */
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index ca8206b7f9a6..8fcd17408515 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -134,6 +134,26 @@ void mte_init_tags(u64 max_tag)
> gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
> }
>
> +/*
> + * This function can be used during early boot to determine whether the CPU
> + * supports MTE. The alternative that must be used after boot is completed is
> + * system_supports_mte(), but it only works after the cpufeature framework
> + * learns about MTE.
> + */
> +bool __init cpu_supports_mte(void)
> +{
> + u64 pfr1;
> + u32 val;
> +
> + if (!IS_ENABLED(CONFIG_ARM64_MTE))
> + return false;
> +
> + pfr1 = read_cpuid(ID_AA64PFR1_EL1);
> + val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT);
> +
> + return val >= ID_AA64PFR1_MTE;
> +}
> +
> static void update_sctlr_el1_tcf0(u64 tcf0)
> {
> /* ISB required for the kernel uaccess routines */
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index da08b2533d73..f7ae0c23f023 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -240,6 +240,9 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> #define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag)))
> #define get_tag(addr) arch_kasan_get_tag(addr)
>
> +#ifndef arch_cpu_supports_tags
> +#define arch_cpu_supports_tags() (false)
> +#endif
> #ifndef arch_init_tags
> #define arch_init_tags(max_tag)
> #endif
> @@ -253,6 +256,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> #define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
> #endif
>
> +#define cpu_supports_tags() arch_cpu_supports_tags()
> #define init_tags(max_tag) arch_init_tags(max_tag)
> #define get_random_tag() arch_get_random_tag()
> #define get_mem_tag(addr) arch_get_mem_tag(addr)
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

2020-10-29 10:00:19

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 14/21] kasan: add and integrate kasan boot parameters

On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <[email protected]> wrote:
>
> TODO: no meaningful description here yet, please see the cover letter
> for this RFC series.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
> ---
> mm/kasan/common.c | 92 +++++++++++++-----------
> mm/kasan/generic.c | 5 ++
> mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
> mm/kasan/kasan.h | 9 +++
> mm/kasan/report.c | 14 +++-
> mm/kasan/sw_tags.c | 5 ++
> 6 files changed, 250 insertions(+), 44 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 1a5e6c279a72..cc129ef62ab1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> unsigned int redzone_size;
> int redzone_adjust;
>
> - /* Add alloc meta. */
> - cache->kasan_info.alloc_meta_offset = *size;
> - *size += sizeof(struct kasan_alloc_meta);
> -
> - /* Add free meta. */
> - if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> - (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> - cache->object_size < sizeof(struct kasan_free_meta))) {
> - cache->kasan_info.free_meta_offset = *size;
> - *size += sizeof(struct kasan_free_meta);
> - }
> -
> - redzone_size = optimal_redzone(cache->object_size);
> - redzone_adjust = redzone_size - (*size - cache->object_size);
> - if (redzone_adjust > 0)
> - *size += redzone_adjust;
> -
> - *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> - max(*size, cache->object_size + redzone_size));
> + if (static_branch_unlikely(&kasan_stack)) {

Initially I thought kasan_stack is related to stack instrumentation.
And then wondered why we check it during slab creation.
I suggest giving it a slightly longer and more descriptive name.

... reading code further, it also disables quarantine, right?
Something to mention somewhere.



> + /* Add alloc meta. */
> + cache->kasan_info.alloc_meta_offset = *size;
> + *size += sizeof(struct kasan_alloc_meta);
> +
> + /* Add free meta. */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> + (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> + cache->object_size < sizeof(struct kasan_free_meta))) {
> + cache->kasan_info.free_meta_offset = *size;
> + *size += sizeof(struct kasan_free_meta);
> + }
>
> - /*
> - * If the metadata doesn't fit, don't enable KASAN at all.
> - */
> - if (*size <= cache->kasan_info.alloc_meta_offset ||
> - *size <= cache->kasan_info.free_meta_offset) {
> - cache->kasan_info.alloc_meta_offset = 0;
> - cache->kasan_info.free_meta_offset = 0;
> - *size = orig_size;
> - return;
> + redzone_size = optimal_redzone(cache->object_size);
> + redzone_adjust = redzone_size - (*size - cache->object_size);
> + if (redzone_adjust > 0)
> + *size += redzone_adjust;
> +
> + *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> + max(*size, cache->object_size + redzone_size));
> +
> + /*
> + * If the metadata doesn't fit, don't enable KASAN at all.
> + */
> + if (*size <= cache->kasan_info.alloc_meta_offset ||
> + *size <= cache->kasan_info.free_meta_offset) {
> + cache->kasan_info.alloc_meta_offset = 0;
> + cache->kasan_info.free_meta_offset = 0;
> + *size = orig_size;
> + return;
> + }
> }
>
> *flags |= SLAB_KASAN;
> @@ -165,10 +167,12 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>
> size_t kasan_metadata_size(struct kmem_cache *cache)
> {
> - return (cache->kasan_info.alloc_meta_offset ?
> - sizeof(struct kasan_alloc_meta) : 0) +
> - (cache->kasan_info.free_meta_offset ?
> - sizeof(struct kasan_free_meta) : 0);
> + if (static_branch_unlikely(&kasan_stack))
> + return (cache->kasan_info.alloc_meta_offset ?
> + sizeof(struct kasan_alloc_meta) : 0) +
> + (cache->kasan_info.free_meta_offset ?
> + sizeof(struct kasan_free_meta) : 0);
> + return 0;
> }
>
> struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> @@ -270,8 +274,10 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> if (!(cache->flags & SLAB_KASAN))
> return (void *)object;
>
> - alloc_meta = kasan_get_alloc_meta(cache, object);
> - __memset(alloc_meta, 0, sizeof(*alloc_meta));
> + if (static_branch_unlikely(&kasan_stack)) {

Interestingly, now SLAB_KASAN is always set when kasan_stack is not
enabled. So it seems to me we can move the SLAB_KASAN check into this
unlikely branch now.
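
I.e. roughly (just a sketch):

	if (static_branch_unlikely(&kasan_stack)) {
		if (unlikely(!(cache->flags & SLAB_KASAN)))
			return (void *)object;

		alloc_meta = kasan_get_alloc_meta(cache, object);
		__memset(alloc_meta, 0, sizeof(*alloc_meta));
	}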

> + alloc_meta = kasan_get_alloc_meta(cache, object);
> + __memset(alloc_meta, 0, sizeof(*alloc_meta));
> + }
>
> if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> object = set_tag(object, assign_tag(cache, object, true, false));
> @@ -308,15 +314,19 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
> kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);
>
> - if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> - unlikely(!(cache->flags & SLAB_KASAN)))
> - return false;
> + if (static_branch_unlikely(&kasan_stack)) {
> + if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> + unlikely(!(cache->flags & SLAB_KASAN)))
> + return false;
> +
> + kasan_set_free_info(cache, object, tag);
>
> - kasan_set_free_info(cache, object, tag);
> + quarantine_put(cache, object);
>
> - quarantine_put(cache, object);
> + return IS_ENABLED(CONFIG_KASAN_GENERIC);
> + }
>
> - return IS_ENABLED(CONFIG_KASAN_GENERIC);
> + return false;
> }
>
> bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> @@ -355,7 +365,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> KASAN_KMALLOC_REDZONE);
>
> - if (cache->flags & SLAB_KASAN)
> + if (static_branch_unlikely(&kasan_stack) && (cache->flags & SLAB_KASAN))
> set_alloc_info(cache, (void *)object, flags);
>
> return set_tag(object, tag);
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index d259e4c3aefd..20a1e753e0c5 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -33,6 +33,11 @@
> #include "kasan.h"
> #include "../slab.h"
>
> +/* See the comments in hw_tags.c */
> +DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
> +EXPORT_SYMBOL(kasan_enabled);
> +DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
> +
> /*
> * All functions below always inlined so compiler could
> * perform better optimizations in each of __asan_loadX/__assn_storeX
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 915142da6b57..bccd781011ad 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -8,6 +8,8 @@
>
> #define pr_fmt(fmt) "kasan: " fmt
>
> +#include <linux/init.h>
> +#include <linux/jump_label.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> #include <linux/memory.h>
> @@ -17,10 +19,175 @@
>
> #include "kasan.h"
>
> +enum kasan_arg_mode {
> + KASAN_ARG_MODE_OFF,
> + KASAN_ARG_MODE_PROD,
> + KASAN_ARG_MODE_FULL,
> +};
> +
> +enum kasan_arg_stack {
> + KASAN_ARG_STACK_DEFAULT,
> + KASAN_ARG_STACK_OFF,
> + KASAN_ARG_STACK_ON,
> +};
> +
> +enum kasan_arg_trap {
> + KASAN_ARG_TRAP_DEFAULT,
> + KASAN_ARG_TRAP_ASYNC,
> + KASAN_ARG_TRAP_SYNC,
> +};
> +
> +enum kasan_arg_fault {
> + KASAN_ARG_FAULT_DEFAULT,
> + KASAN_ARG_FAULT_REPORT,
> + KASAN_ARG_FAULT_PANIC,
> +};
> +
> +static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> +static enum kasan_arg_stack kasan_arg_stack __ro_after_init;
> +static enum kasan_arg_fault kasan_arg_fault __ro_after_init;
> +static enum kasan_arg_trap kasan_arg_trap __ro_after_init;
> +
> +/* Whether KASAN is enabled at all. */
> +DEFINE_STATIC_KEY_FALSE_RO(kasan_enabled);
> +EXPORT_SYMBOL(kasan_enabled);
> +
> +/* Whether to collect alloc/free stack traces. */
> +DEFINE_STATIC_KEY_FALSE_RO(kasan_stack);
> +
> +/* Whether to use synchronous or asynchronous tag checking. */
> +static bool kasan_sync __ro_after_init;
> +
> +/* Whether panic or disable tag checking on fault. */
> +bool kasan_panic __ro_after_init;
> +
> +/* kasan.mode=off/prod/full */
> +static int __init early_kasan_mode(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_mode = KASAN_ARG_MODE_OFF;
> + else if (!strcmp(arg, "prod"))
> + kasan_arg_mode = KASAN_ARG_MODE_PROD;
> + else if (!strcmp(arg, "full"))
> + kasan_arg_mode = KASAN_ARG_MODE_FULL;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.mode", early_kasan_mode);
> +
> +/* kasan.stack=off/on */
> +static int __init early_kasan_stack(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_stack = KASAN_ARG_STACK_OFF;
> + else if (!strcmp(arg, "on"))
> + kasan_arg_stack = KASAN_ARG_STACK_ON;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.stack", early_kasan_stack);
> +
> +/* kasan.trap=sync/async */
> +static int __init early_kasan_trap(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "ASYNC"))
> + kasan_arg_trap = KASAN_ARG_TRAP_ASYNC;
> + else if (!strcmp(arg, "sync"))
> + kasan_arg_trap = KASAN_ARG_TRAP_SYNC;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.trap", early_kasan_trap);
> +
> +/* kasan.fault=report/panic */
> +static int __init early_kasan_fault(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "report"))
> + kasan_arg_fault = KASAN_ARG_FAULT_REPORT;
> + else if (!strcmp(arg, "panic"))
> + kasan_arg_fault = KASAN_ARG_FAULT_PANIC;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.fault", early_kasan_fault);
> +
> void __init kasan_init_tags(void)
> {
> - init_tags(KASAN_TAG_MAX);
> + if (!cpu_supports_tags())
> + return;
> +
> + /* First, preset values based on the mode. */
> +
> + switch (kasan_arg_mode) {
> + case KASAN_ARG_MODE_OFF:
> + return;
> + case KASAN_ARG_MODE_PROD:
> + static_branch_enable(&kasan_enabled);
> + break;
> + case KASAN_ARG_MODE_FULL:
> + static_branch_enable(&kasan_enabled);
> + static_branch_enable(&kasan_stack);
> + kasan_sync = true;
> + break;
> + }
> +
> + /* Now, optionally override the presets. */
>
> + switch (kasan_arg_stack) {
> + case KASAN_ARG_STACK_OFF:
> + static_branch_disable(&kasan_stack);
> + break;
> + case KASAN_ARG_STACK_ON:
> + static_branch_enable(&kasan_stack);
> + break;
> + default:
> + break;
> + }
> +
> + switch (kasan_arg_trap) {
> + case KASAN_ARG_TRAP_ASYNC:
> + kasan_sync = false;
> + break;
> + case KASAN_ARG_TRAP_SYNC:
> + kasan_sync = true;
> + break;
> + default:
> + break;
> + }
> +
> + switch (kasan_arg_fault) {
> + case KASAN_ARG_FAULT_REPORT:
> + kasan_panic = false;
> + break;
> + case KASAN_ARG_FAULT_PANIC:
> + kasan_panic = true;
> + break;
> + default:
> + break;
> + }
> +
> + /* TODO: choose between sync and async based on kasan_sync. */
> + init_tags(KASAN_TAG_MAX);
> pr_info("KernelAddressSanitizer initialized\n");
> }
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index f7ae0c23f023..00b47bc753aa 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -2,9 +2,18 @@
> #ifndef __MM_KASAN_KASAN_H
> #define __MM_KASAN_KASAN_H
>
> +#include <linux/jump_label.h>
> #include <linux/kasan.h>
> #include <linux/stackdepot.h>
>
> +#ifdef CONFIG_KASAN_HW_TAGS
> +DECLARE_STATIC_KEY_FALSE(kasan_stack);
> +#else
> +DECLARE_STATIC_KEY_TRUE(kasan_stack);
> +#endif

kasan_stack and kasan_enabled only make sense and are only changed in
hw_tags mode. It would be cleaner (and faster for the other modes) to
abstract the static keys as:

#ifdef CONFIG_KASAN_HW_TAGS
#include <linux/jump_label.h>
DECLARE_STATIC_KEY_FALSE(kasan_stack);
static inline bool kasan_stack_collection_enabled(void)
{
	return static_branch_unlikely(&kasan_stack);
}
#else
static inline bool kasan_stack_collection_enabled(void) { return true; }
#endif

This way we don't need to include and define static keys for other modes.
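
A caller would then read, for example (sketch based on the
kasan_metadata_size() hunk above):

size_t kasan_metadata_size(struct kmem_cache *cache)
{
	if (!kasan_stack_collection_enabled())
		return 0;
	return (cache->kasan_info.alloc_meta_offset ?
		sizeof(struct kasan_alloc_meta) : 0) +
	       (cache->kasan_info.free_meta_offset ?
		sizeof(struct kasan_free_meta) : 0);
}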

> +extern bool kasan_panic __ro_after_init;
> +
> #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> #define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> #else
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index dee5350b459c..426dd1962d3c 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -97,6 +97,10 @@ static void end_report(unsigned long *flags)
> panic_on_warn = 0;
> panic("panic_on_warn set ...\n");
> }
> +#ifdef CONFIG_KASAN_HW_TAGS
> + if (kasan_panic)
> + panic("kasan.fault=panic set ...\n");
> +#endif
> kasan_enable_current();
> }
>
> @@ -159,8 +163,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
> (void *)(object_addr + cache->object_size));
> }
>
> -static void describe_object(struct kmem_cache *cache, void *object,
> - const void *addr, u8 tag)
> +static void describe_object_stacks(struct kmem_cache *cache, void *object,
> + const void *addr, u8 tag)
> {
> struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);
>
> @@ -188,7 +192,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
> }
> #endif
> }
> +}
>
> +static void describe_object(struct kmem_cache *cache, void *object,
> + const void *addr, u8 tag)
> +{
> + if (static_branch_unlikely(&kasan_stack))
> + describe_object_stacks(cache, object, addr, tag);
> describe_object_addr(cache, object, addr);
> }
>
> diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> index 4db41f274702..b6d185adf2c5 100644
> --- a/mm/kasan/sw_tags.c
> +++ b/mm/kasan/sw_tags.c
> @@ -33,6 +33,11 @@
> #include "kasan.h"
> #include "../slab.h"
>
> +/* See the comments in hw_tags.c */
> +DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
> +EXPORT_SYMBOL(kasan_enabled);
> +DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
> +
> static DEFINE_PER_CPU(u32, prng_state);
>
> void __init kasan_init_tags(void)
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>

2020-10-30 15:01:12

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH RFC v2 14/21] kasan: add and integrate kasan boot parameters

On Thu, 22 Oct 2020 at 15:19, Andrey Konovalov <[email protected]> wrote:
>
> TODO: no meaningful description here yet, please see the cover letter
> for this RFC series.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
> ---
> mm/kasan/common.c | 92 +++++++++++++-----------
> mm/kasan/generic.c | 5 ++
> mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
> mm/kasan/kasan.h | 9 +++
> mm/kasan/report.c | 14 +++-
> mm/kasan/sw_tags.c | 5 ++
> 6 files changed, 250 insertions(+), 44 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 1a5e6c279a72..cc129ef62ab1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> unsigned int redzone_size;
> int redzone_adjust;
>
> - /* Add alloc meta. */
> - cache->kasan_info.alloc_meta_offset = *size;
> - *size += sizeof(struct kasan_alloc_meta);
> -
> - /* Add free meta. */
> - if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> - (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> - cache->object_size < sizeof(struct kasan_free_meta))) {
> - cache->kasan_info.free_meta_offset = *size;
> - *size += sizeof(struct kasan_free_meta);
> - }
> -
> - redzone_size = optimal_redzone(cache->object_size);
> - redzone_adjust = redzone_size - (*size - cache->object_size);
> - if (redzone_adjust > 0)
> - *size += redzone_adjust;
> -
> - *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> - max(*size, cache->object_size + redzone_size));
> + if (static_branch_unlikely(&kasan_stack)) {

I just looked at this file in your Github repo, and noticed that this
could just be

	if (!static_branch_unlikely(&kasan_stack))
		return;

since the if-block ends at the function. That might hopefully make the
diff a bit smaller.

Thanks,
-- Marco

2020-10-30 19:33:41

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 14/21] kasan: add and integrate kasan boot parameters

On Wed, Oct 28, 2020 at 1:27 PM Dmitry Vyukov <[email protected]> wrote:
>
> On Thu, Oct 22, 2020 at 3:19 PM Andrey Konovalov <[email protected]> wrote:
> >
> > TODO: no meaningful description here yet, please see the cover letter
> > for this RFC series.
> >
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
> > ---
> > mm/kasan/common.c | 92 +++++++++++++-----------
> > mm/kasan/generic.c | 5 ++
> > mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
> > mm/kasan/kasan.h | 9 +++
> > mm/kasan/report.c | 14 +++-
> > mm/kasan/sw_tags.c | 5 ++
> > 6 files changed, 250 insertions(+), 44 deletions(-)
> >
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index 1a5e6c279a72..cc129ef62ab1 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > unsigned int redzone_size;
> > int redzone_adjust;
> >
> > - /* Add alloc meta. */
> > - cache->kasan_info.alloc_meta_offset = *size;
> > - *size += sizeof(struct kasan_alloc_meta);
> > -
> > - /* Add free meta. */
> > - if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > - (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> > - cache->object_size < sizeof(struct kasan_free_meta))) {
> > - cache->kasan_info.free_meta_offset = *size;
> > - *size += sizeof(struct kasan_free_meta);
> > - }
> > -
> > - redzone_size = optimal_redzone(cache->object_size);
> > - redzone_adjust = redzone_size - (*size - cache->object_size);
> > - if (redzone_adjust > 0)
> > - *size += redzone_adjust;
> > -
> > - *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> > - max(*size, cache->object_size + redzone_size));
> > + if (static_branch_unlikely(&kasan_stack)) {
>
> Initially I thought kasan_stack is related to stack instrumentation.
> And then wondered why we check it during slab creation.
> I suggest giving it a slightly longer and more descriptive name.

Will do.

> ... reading code further, it also disables quarantine, right?
> Something to mention somewhere.

Quarantine is not supported for anything but generic KASAN. Maybe it
makes sense to put this into documentation...

> > + /* Add alloc meta. */
> > + cache->kasan_info.alloc_meta_offset = *size;
> > + *size += sizeof(struct kasan_alloc_meta);
> > +
> > + /* Add free meta. */
> > + if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > + (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> > + cache->object_size < sizeof(struct kasan_free_meta))) {
> > + cache->kasan_info.free_meta_offset = *size;
> > + *size += sizeof(struct kasan_free_meta);
> > + }
> >
> > - /*
> > - * If the metadata doesn't fit, don't enable KASAN at all.
> > - */
> > - if (*size <= cache->kasan_info.alloc_meta_offset ||
> > - *size <= cache->kasan_info.free_meta_offset) {
> > - cache->kasan_info.alloc_meta_offset = 0;
> > - cache->kasan_info.free_meta_offset = 0;
> > - *size = orig_size;
> > - return;
> > + redzone_size = optimal_redzone(cache->object_size);
> > + redzone_adjust = redzone_size - (*size - cache->object_size);
> > + if (redzone_adjust > 0)
> > + *size += redzone_adjust;
> > +
> > + *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> > + max(*size, cache->object_size + redzone_size));
> > +
> > + /*
> > + * If the metadata doesn't fit, don't enable KASAN at all.
> > + */
> > + if (*size <= cache->kasan_info.alloc_meta_offset ||
> > + *size <= cache->kasan_info.free_meta_offset) {
> > + cache->kasan_info.alloc_meta_offset = 0;
> > + cache->kasan_info.free_meta_offset = 0;
> > + *size = orig_size;
> > + return;
> > + }
> > }
> >
> > *flags |= SLAB_KASAN;
> > @@ -165,10 +167,12 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> >
> > size_t kasan_metadata_size(struct kmem_cache *cache)
> > {
> > - return (cache->kasan_info.alloc_meta_offset ?
> > - sizeof(struct kasan_alloc_meta) : 0) +
> > - (cache->kasan_info.free_meta_offset ?
> > - sizeof(struct kasan_free_meta) : 0);
> > + if (static_branch_unlikely(&kasan_stack))
> > + return (cache->kasan_info.alloc_meta_offset ?
> > + sizeof(struct kasan_alloc_meta) : 0) +
> > + (cache->kasan_info.free_meta_offset ?
> > + sizeof(struct kasan_free_meta) : 0);
> > + return 0;
> > }
> >
> > struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> > @@ -270,8 +274,10 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > if (!(cache->flags & SLAB_KASAN))
> > return (void *)object;
> >
> > - alloc_meta = kasan_get_alloc_meta(cache, object);
> > - __memset(alloc_meta, 0, sizeof(*alloc_meta));
> > + if (static_branch_unlikely(&kasan_stack)) {
>
> Interestingly, now SLAB_KASAN is always set when kasan_stack is not
> enabled. So it seems to me we can move the SLAB_KASAN check into this
> unlikely branch now.

Yes, will fix. I'll also include a complete rework of SLAB_KASAN into
the next version.

>
> > + alloc_meta = kasan_get_alloc_meta(cache, object);
> > + __memset(alloc_meta, 0, sizeof(*alloc_meta));
> > + }
> >
> > if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> > object = set_tag(object, assign_tag(cache, object, true, false));
> > @@ -308,15 +314,19 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> > rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
> > kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);
> >
> > - if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> > - unlikely(!(cache->flags & SLAB_KASAN)))
> > - return false;
> > + if (static_branch_unlikely(&kasan_stack)) {
> > + if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> > + unlikely(!(cache->flags & SLAB_KASAN)))
> > + return false;
> > +
> > + kasan_set_free_info(cache, object, tag);
> >
> > - kasan_set_free_info(cache, object, tag);
> > + quarantine_put(cache, object);
> >
> > - quarantine_put(cache, object);
> > + return IS_ENABLED(CONFIG_KASAN_GENERIC);
> > + }
> >
> > - return IS_ENABLED(CONFIG_KASAN_GENERIC);
> > + return false;
> > }
> >
> > bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> > @@ -355,7 +365,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> > KASAN_KMALLOC_REDZONE);
> >
> > - if (cache->flags & SLAB_KASAN)
> > + if (static_branch_unlikely(&kasan_stack) && (cache->flags & SLAB_KASAN))
> > set_alloc_info(cache, (void *)object, flags);
> >
> > return set_tag(object, tag);
> > diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> > index d259e4c3aefd..20a1e753e0c5 100644
> > --- a/mm/kasan/generic.c
> > +++ b/mm/kasan/generic.c
> > @@ -33,6 +33,11 @@
> > #include "kasan.h"
> > #include "../slab.h"
> >
> > +/* See the comments in hw_tags.c */
> > +DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
> > +EXPORT_SYMBOL(kasan_enabled);
> > +DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
> > +
> > /*
> > * All functions below always inlined so compiler could
> > * perform better optimizations in each of __asan_loadX/__assn_storeX
> > diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> > index 915142da6b57..bccd781011ad 100644
> > --- a/mm/kasan/hw_tags.c
> > +++ b/mm/kasan/hw_tags.c
> > @@ -8,6 +8,8 @@
> >
> > #define pr_fmt(fmt) "kasan: " fmt
> >
> > +#include <linux/init.h>
> > +#include <linux/jump_label.h>
> > #include <linux/kasan.h>
> > #include <linux/kernel.h>
> > #include <linux/memory.h>
> > @@ -17,10 +19,175 @@
> >
> > #include "kasan.h"
> >
> > +enum kasan_arg_mode {
> > + KASAN_ARG_MODE_OFF,
> > + KASAN_ARG_MODE_PROD,
> > + KASAN_ARG_MODE_FULL,
> > +};
> > +
> > +enum kasan_arg_stack {
> > + KASAN_ARG_STACK_DEFAULT,
> > + KASAN_ARG_STACK_OFF,
> > + KASAN_ARG_STACK_ON,
> > +};
> > +
> > +enum kasan_arg_trap {
> > + KASAN_ARG_TRAP_DEFAULT,
> > + KASAN_ARG_TRAP_ASYNC,
> > + KASAN_ARG_TRAP_SYNC,
> > +};
> > +
> > +enum kasan_arg_fault {
> > + KASAN_ARG_FAULT_DEFAULT,
> > + KASAN_ARG_FAULT_REPORT,
> > + KASAN_ARG_FAULT_PANIC,
> > +};
> > +
> > +static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> > +static enum kasan_arg_stack kasan_arg_stack __ro_after_init;
> > +static enum kasan_arg_fault kasan_arg_fault __ro_after_init;
> > +static enum kasan_arg_trap kasan_arg_trap __ro_after_init;
> > +
> > +/* Whether KASAN is enabled at all. */
> > +DEFINE_STATIC_KEY_FALSE_RO(kasan_enabled);
> > +EXPORT_SYMBOL(kasan_enabled);
> > +
> > +/* Whether to collect alloc/free stack traces. */
> > +DEFINE_STATIC_KEY_FALSE_RO(kasan_stack);
> > +
> > +/* Whether to use synchronous or asynchronous tag checking. */
> > +static bool kasan_sync __ro_after_init;
> > +
> > +/* Whether panic or disable tag checking on fault. */
> > +bool kasan_panic __ro_after_init;
> > +
> > +/* kasan.mode=off/prod/full */
> > +static int __init early_kasan_mode(char *arg)
> > +{
> > + if (!arg)
> > + return -EINVAL;
> > +
> > + if (!strcmp(arg, "off"))
> > + kasan_arg_mode = KASAN_ARG_MODE_OFF;
> > + else if (!strcmp(arg, "prod"))
> > + kasan_arg_mode = KASAN_ARG_MODE_PROD;
> > + else if (!strcmp(arg, "full"))
> > + kasan_arg_mode = KASAN_ARG_MODE_FULL;
> > + else
> > + return -EINVAL;
> > +
> > + return 0;
> > +}
> > +early_param("kasan.mode", early_kasan_mode);
> > +
> > +/* kasan.stack=off/on */
> > +static int __init early_kasan_stack(char *arg)
> > +{
> > + if (!arg)
> > + return -EINVAL;
> > +
> > + if (!strcmp(arg, "off"))
> > + kasan_arg_stack = KASAN_ARG_STACK_OFF;
> > + else if (!strcmp(arg, "on"))
> > + kasan_arg_stack = KASAN_ARG_STACK_ON;
> > + else
> > + return -EINVAL;
> > +
> > + return 0;
> > +}
> > +early_param("kasan.stack", early_kasan_stack);
> > +
> > +/* kasan.trap=sync/async */
> > +static int __init early_kasan_trap(char *arg)
> > +{
> > + if (!arg)
> > + return -EINVAL;
> > +
> > + if (!strcmp(arg, "ASYNC"))
> > + kasan_arg_trap = KASAN_ARG_TRAP_ASYNC;
> > + else if (!strcmp(arg, "sync"))
> > + kasan_arg_trap = KASAN_ARG_TRAP_SYNC;
> > + else
> > + return -EINVAL;
> > +
> > + return 0;
> > +}
> > +early_param("kasan.trap", early_kasan_trap);
> > +
> > +/* kasan.fault=report/panic */
> > +static int __init early_kasan_fault(char *arg)
> > +{
> > + if (!arg)
> > + return -EINVAL;
> > +
> > + if (!strcmp(arg, "report"))
> > + kasan_arg_fault = KASAN_ARG_FAULT_REPORT;
> > + else if (!strcmp(arg, "panic"))
> > + kasan_arg_fault = KASAN_ARG_FAULT_PANIC;
> > + else
> > + return -EINVAL;
> > +
> > + return 0;
> > +}
> > +early_param("kasan.fault", early_kasan_fault);
> > +
> > void __init kasan_init_tags(void)
> > {
> > - init_tags(KASAN_TAG_MAX);
> > + if (!cpu_supports_tags())
> > + return;
> > +
> > + /* First, preset values based on the mode. */
> > +
> > + switch (kasan_arg_mode) {
> > + case KASAN_ARG_MODE_OFF:
> > + return;
> > + case KASAN_ARG_MODE_PROD:
> > + static_branch_enable(&kasan_enabled);
> > + break;
> > + case KASAN_ARG_MODE_FULL:
> > + static_branch_enable(&kasan_enabled);
> > + static_branch_enable(&kasan_stack);
> > + kasan_sync = true;
> > + break;
> > + }
> > +
> > + /* Now, optionally override the presets. */
> >
> > + switch (kasan_arg_stack) {
> > + case KASAN_ARG_STACK_OFF:
> > + static_branch_disable(&kasan_stack);
> > + break;
> > + case KASAN_ARG_STACK_ON:
> > + static_branch_enable(&kasan_stack);
> > + break;
> > + default:
> > + break;
> > + }
> > +
> > + switch (kasan_arg_trap) {
> > + case KASAN_ARG_TRAP_ASYNC:
> > + kasan_sync = false;
> > + break;
> > + case KASAN_ARG_TRAP_SYNC:
> > + kasan_sync = true;
> > + break;
> > + default:
> > + break;
> > + }
> > +
> > + switch (kasan_arg_fault) {
> > + case KASAN_ARG_FAULT_REPORT:
> > + kasan_panic = false;
> > + break;
> > + case KASAN_ARG_FAULT_PANIC:
> > + kasan_panic = true;
> > + break;
> > + default:
> > + break;
> > + }
> > +
> > + /* TODO: choose between sync and async based on kasan_sync. */
> > + init_tags(KASAN_TAG_MAX);
> > pr_info("KernelAddressSanitizer initialized\n");
> > }
> >
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index f7ae0c23f023..00b47bc753aa 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -2,9 +2,18 @@
> > #ifndef __MM_KASAN_KASAN_H
> > #define __MM_KASAN_KASAN_H
> >
> > +#include <linux/jump_label.h>
> > #include <linux/kasan.h>
> > #include <linux/stackdepot.h>
> >
> > +#ifdef CONFIG_KASAN_HW_TAGS
> > +DECLARE_STATIC_KEY_FALSE(kasan_stack);
> > +#else
> > +DECLARE_STATIC_KEY_TRUE(kasan_stack);
> > +#endif
>
> kasan_stack and kasan_enabled only make sense and are only changed in
> hw_tags mode. It would be cleaner (and faster for the other modes) to
> abstract the static keys as:
>
> #ifdef CONFIG_KASAN_HW_TAGS
> #include <linux/jump_label.h>
> DECLARE_STATIC_KEY_FALSE(kasan_stack);
> static inline bool kasan_stack_collection_enabled(void)
> {
> 	return static_branch_unlikely(&kasan_stack);
> }
> #else
> static inline bool kasan_stack_collection_enabled(void) { return true; }
> #endif
>
> This way we don't need to include and define static keys for other modes.

Sounds good, will do.

>
> > +extern bool kasan_panic __ro_after_init;
> > +
> > #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> > #define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> > #else
> > diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> > index dee5350b459c..426dd1962d3c 100644
> > --- a/mm/kasan/report.c
> > +++ b/mm/kasan/report.c
> > @@ -97,6 +97,10 @@ static void end_report(unsigned long *flags)
> > panic_on_warn = 0;
> > panic("panic_on_warn set ...\n");
> > }
> > +#ifdef CONFIG_KASAN_HW_TAGS
> > + if (kasan_panic)
> > + panic("kasan.fault=panic set ...\n");
> > +#endif
> > kasan_enable_current();
> > }
> >
> > @@ -159,8 +163,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
> > (void *)(object_addr + cache->object_size));
> > }
> >
> > -static void describe_object(struct kmem_cache *cache, void *object,
> > - const void *addr, u8 tag)
> > +static void describe_object_stacks(struct kmem_cache *cache, void *object,
> > + const void *addr, u8 tag)
> > {
> > struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);
> >
> > @@ -188,7 +192,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
> > }
> > #endif
> > }
> > +}
> >
> > +static void describe_object(struct kmem_cache *cache, void *object,
> > + const void *addr, u8 tag)
> > +{
> > + if (static_branch_unlikely(&kasan_stack))
> > + describe_object_stacks(cache, object, addr, tag);
> > describe_object_addr(cache, object, addr);
> > }
> >
> > diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> > index 4db41f274702..b6d185adf2c5 100644
> > --- a/mm/kasan/sw_tags.c
> > +++ b/mm/kasan/sw_tags.c
> > @@ -33,6 +33,11 @@
> > #include "kasan.h"
> > #include "../slab.h"
> >
> > +/* See the comments in hw_tags.c */
> > +DEFINE_STATIC_KEY_TRUE_RO(kasan_enabled);
> > +EXPORT_SYMBOL(kasan_enabled);
> > +DEFINE_STATIC_KEY_TRUE_RO(kasan_stack);
> > +
> > static DEFINE_PER_CPU(u32, prng_state);
> >
> > void __init kasan_init_tags(void)
> > --
> > 2.29.0.rc1.297.gfa9743e501-goog
> >

2020-10-30 19:35:52

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 14/21] kasan: add and integrate kasan boot parameters

On Fri, Oct 30, 2020 at 3:45 PM Marco Elver <[email protected]> wrote:
>
> On Thu, 22 Oct 2020 at 15:19, Andrey Konovalov <[email protected]> wrote:
> >
> > TODO: no meaningful description here yet, please see the cover letter
> > for this RFC series.
> >
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
> > ---
> > mm/kasan/common.c | 92 +++++++++++++-----------
> > mm/kasan/generic.c | 5 ++
> > mm/kasan/hw_tags.c | 169 ++++++++++++++++++++++++++++++++++++++++++++-
> > mm/kasan/kasan.h | 9 +++
> > mm/kasan/report.c | 14 +++-
> > mm/kasan/sw_tags.c | 5 ++
> > 6 files changed, 250 insertions(+), 44 deletions(-)
> >
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index 1a5e6c279a72..cc129ef62ab1 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -129,35 +129,37 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > unsigned int redzone_size;
> > int redzone_adjust;
> >
> > - /* Add alloc meta. */
> > - cache->kasan_info.alloc_meta_offset = *size;
> > - *size += sizeof(struct kasan_alloc_meta);
> > -
> > - /* Add free meta. */
> > - if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > - (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> > - cache->object_size < sizeof(struct kasan_free_meta))) {
> > - cache->kasan_info.free_meta_offset = *size;
> > - *size += sizeof(struct kasan_free_meta);
> > - }
> > -
> > - redzone_size = optimal_redzone(cache->object_size);
> > - redzone_adjust = redzone_size - (*size - cache->object_size);
> > - if (redzone_adjust > 0)
> > - *size += redzone_adjust;
> > -
> > - *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> > - max(*size, cache->object_size + redzone_size));
> > + if (static_branch_unlikely(&kasan_stack)) {
>
> I just looked at this file in your GitHub repo, and noticed that this
> could just be
>
> if (!static_branch_unlikely(&kasan_stack))
> return;
>
> since the if-block extends to the end of the function. That should also make
> the diff a bit smaller.

Will do, thanks!
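
For reference, a rough sketch of __kasan_cache_create() with the early return
(based on the hunk above; the tail of the function, including setting
SLAB_KASAN, stays as it is and is omitted here):

void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
			  slab_flags_t *flags)
{
	unsigned int redzone_size;
	int redzone_adjust;

	/* Alloc/free metadata and redzones are only needed when collecting stacks. */
	if (!static_branch_unlikely(&kasan_stack))
		return;

	/* Add alloc meta. */
	cache->kasan_info.alloc_meta_offset = *size;
	*size += sizeof(struct kasan_alloc_meta);

	/* Add free meta. */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
	    (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
	     cache->object_size < sizeof(struct kasan_free_meta))) {
		cache->kasan_info.free_meta_offset = *size;
		*size += sizeof(struct kasan_free_meta);
	}

	redzone_size = optimal_redzone(cache->object_size);
	redzone_adjust = redzone_size - (*size - cache->object_size);
	if (redzone_adjust > 0)
		*size += redzone_adjust;

	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
		      max(*size, cache->object_size + redzone_size));

	/* ... rest of __kasan_cache_create() unchanged ... */
}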

2020-10-30 19:51:46

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 15/21] kasan: check kasan_enabled in annotations

On Wed, Oct 28, 2020 at 5:47 PM Dmitry Vyukov <[email protected]> wrote:
>
> On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <[email protected]> wrote:
> >
> > Declare the kasan_enabled static key in include/linux/kasan.h and in
> > include/linux/mm.h and check it in all kasan annotations. This allows to
> > avoid any slowdown caused by function calls when kasan_enabled is
> > disabled.
> >
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e
> > ---
> > include/linux/kasan.h | 210 ++++++++++++++++++++++++++++++++----------
> > include/linux/mm.h | 27 ++++--
> > mm/kasan/common.c | 60 ++++++------
> > 3 files changed, 211 insertions(+), 86 deletions(-)
> >
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index 2b9023224474..8654275aa62e 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -2,6 +2,7 @@
> > #ifndef _LINUX_KASAN_H
> > #define _LINUX_KASAN_H
> >
> > +#include <linux/jump_label.h>
> > #include <linux/types.h>
> >
> > struct kmem_cache;
> > @@ -66,40 +67,154 @@ static inline void kasan_disable_current(void) {}
> >
> > #ifdef CONFIG_KASAN
> >
> > -void kasan_alloc_pages(struct page *page, unsigned int order);
> > -void kasan_free_pages(struct page *page, unsigned int order);
> > +struct kasan_cache {
> > + int alloc_meta_offset;
> > + int free_meta_offset;
> > +};
> > +
> > +#ifdef CONFIG_KASAN_HW_TAGS
> > +DECLARE_STATIC_KEY_FALSE(kasan_enabled);
> > +#else
> > +DECLARE_STATIC_KEY_TRUE(kasan_enabled);
> > +#endif
> >
> > -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > - slab_flags_t *flags);
> > +void __kasan_alloc_pages(struct page *page, unsigned int order);
> > +static inline void kasan_alloc_pages(struct page *page, unsigned int order)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_alloc_pages(page, order);
>
> The patch looks fine per se, but I think with the suggestion in the
> previous patch, this should be:
>
> if (kasan_is_enabled())
> __kasan_alloc_pages(page, order);
>
> No overhead for other modes and less logic duplication.

Will do, thanks!
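
Combined with the helper suggested for the previous patch, the header would
then look roughly like this (kasan_is_enabled() is the name from the comment
above; the exact declarations are my assumption, not the final interface):

#ifdef CONFIG_KASAN_HW_TAGS
/* Off by default; enabled during early boot depending on kasan.mode. */
DECLARE_STATIC_KEY_FALSE(kasan_enabled);
static inline bool kasan_is_enabled(void)
{
	return static_branch_likely(&kasan_enabled);
}
#else
/* Generic and SW_TAGS modes need no static key: always on when built in. */
static inline bool kasan_is_enabled(void)
{
	return true;
}
#endif

void __kasan_alloc_pages(struct page *page, unsigned int order);
static inline void kasan_alloc_pages(struct page *page, unsigned int order)
{
	if (kasan_is_enabled())
		__kasan_alloc_pages(page, order);
}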

>
> > +}
> >
> > -void kasan_unpoison_data(const void *address, size_t size);
> > -void kasan_unpoison_slab(const void *ptr);
> > +void __kasan_free_pages(struct page *page, unsigned int order);
> > +static inline void kasan_free_pages(struct page *page, unsigned int order)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_free_pages(page, order);
> > +}
> >
> > -void kasan_poison_slab(struct page *page);
> > -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> > -void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> > -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > - const void *object);
> > +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > + slab_flags_t *flags);
> > +static inline void kasan_cache_create(struct kmem_cache *cache,
> > + unsigned int *size, slab_flags_t *flags)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_cache_create(cache, size, flags);
> > +}
> >
> > -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> > - gfp_t flags);
> > -void kasan_kfree_large(void *ptr, unsigned long ip);
> > -void kasan_poison_kfree(void *ptr, unsigned long ip);
> > -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> > - size_t size, gfp_t flags);
> > -void * __must_check kasan_krealloc(const void *object, size_t new_size,
> > - gfp_t flags);
> > +size_t __kasan_metadata_size(struct kmem_cache *cache);
> > +static inline size_t kasan_metadata_size(struct kmem_cache *cache)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_metadata_size(cache);
> > + return 0;
> > +}
> >
> > -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
> > - gfp_t flags);
> > -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> > +void __kasan_unpoison_data(const void *addr, size_t size);
> > +static inline void kasan_unpoison_data(const void *addr, size_t size)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_unpoison_data(addr, size);
> > +}
> >
> > -struct kasan_cache {
> > - int alloc_meta_offset;
> > - int free_meta_offset;
> > -};
> > +void __kasan_unpoison_slab(const void *ptr);
> > +static inline void kasan_unpoison_slab(const void *ptr)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_unpoison_slab(ptr);
> > +}
> > +
> > +void __kasan_poison_slab(struct page *page);
> > +static inline void kasan_poison_slab(struct page *page)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_poison_slab(page);
> > +}
> > +
> > +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> > +static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_unpoison_object_data(cache, object);
> > +}
> > +
> > +void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
> > +static inline void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_poison_object_data(cache, object);
> > +}
> > +
> > +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> > + const void *object);
> > +static inline void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > + const void *object)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_init_slab_obj(cache, object);
> > + return (void *)object;
> > +}
> > +
> > +bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> > +static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_slab_free(s, object, ip);
> > + return false;
> > +}
> >
> > -size_t kasan_metadata_size(struct kmem_cache *cache);
> > +void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> > + void *object, gfp_t flags);
> > +static inline void * __must_check kasan_slab_alloc(struct kmem_cache *s,
> > + void *object, gfp_t flags)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_slab_alloc(s, object, flags);
> > + return object;
> > +}
> > +
> > +void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
> > + size_t size, gfp_t flags);
> > +static inline void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> > + size_t size, gfp_t flags)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_kmalloc(s, object, size, flags);
> > + return (void *)object;
> > +}
> > +
> > +void * __must_check __kasan_kmalloc_large(const void *ptr,
> > + size_t size, gfp_t flags);
> > +static inline void * __must_check kasan_kmalloc_large(const void *ptr,
> > + size_t size, gfp_t flags)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_kmalloc_large(ptr, size, flags);
> > + return (void *)ptr;
> > +}
> > +
> > +void * __must_check __kasan_krealloc(const void *object,
> > + size_t new_size, gfp_t flags);
> > +static inline void * __must_check kasan_krealloc(const void *object,
> > + size_t new_size, gfp_t flags)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + return __kasan_krealloc(object, new_size, flags);
> > + return (void *)object;
> > +}
> > +
> > +void __kasan_poison_kfree(void *ptr, unsigned long ip);
> > +static inline void kasan_poison_kfree(void *ptr, unsigned long ip)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_poison_kfree(ptr, ip);
> > +}
> > +
> > +void __kasan_kfree_large(void *ptr, unsigned long ip);
> > +static inline void kasan_kfree_large(void *ptr, unsigned long ip)
> > +{
> > + if (static_branch_likely(&kasan_enabled))
> > + __kasan_kfree_large(ptr, ip);
> > +}
> >
> > bool kasan_save_enable_multi_shot(void);
> > void kasan_restore_multi_shot(bool enabled);
> > @@ -108,14 +223,12 @@ void kasan_restore_multi_shot(bool enabled);
> >
> > static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> > static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> > -
> > static inline void kasan_cache_create(struct kmem_cache *cache,
> > unsigned int *size,
> > slab_flags_t *flags) {}
> > -
> > -static inline void kasan_unpoison_data(const void *address, size_t size) { }
> > -static inline void kasan_unpoison_slab(const void *ptr) { }
> > -
> > +static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> > +static inline void kasan_unpoison_data(const void *address, size_t size) {}
> > +static inline void kasan_unpoison_slab(const void *ptr) {}
> > static inline void kasan_poison_slab(struct page *page) {}
> > static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> > void *object) {}
> > @@ -126,36 +239,33 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
> > {
> > return (void *)object;
> > }
> > -
> > -static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
> > +static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> > + unsigned long ip)
> > {
> > - return ptr;
> > + return false;
> > }
> > -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> > -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> > -static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> > - size_t size, gfp_t flags)
> > +static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> > + gfp_t flags)
> > {
> > - return (void *)object;
> > + return object;
> > }
> > -static inline void *kasan_krealloc(const void *object, size_t new_size,
> > - gfp_t flags)
> > +static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> > + size_t size, gfp_t flags)
> > {
> > return (void *)object;
> > }
> >
> > -static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> > - gfp_t flags)
> > +static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
> > {
> > - return object;
> > + return (void *)ptr;
> > }
> > -static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> > - unsigned long ip)
> > +static inline void *kasan_krealloc(const void *object, size_t new_size,
> > + gfp_t flags)
> > {
> > - return false;
> > + return (void *)object;
> > }
> > -
> > -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> > +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> > +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> >
> > #endif /* CONFIG_KASAN */
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index a3cac68c737c..701e9d7666d6 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1412,22 +1412,36 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
> > #endif /* CONFIG_NUMA_BALANCING */
> >
> > #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> > +
> > +#ifdef CONFIG_KASAN_HW_TAGS
> > +DECLARE_STATIC_KEY_FALSE(kasan_enabled);
> > +#else
> > +DECLARE_STATIC_KEY_TRUE(kasan_enabled);
> > +#endif
> > +
> > static inline u8 page_kasan_tag(const struct page *page)
> > {
> > - return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> > + if (static_branch_likely(&kasan_enabled))
> > + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> > + return 0xff;
> > }
> >
> > static inline void page_kasan_tag_set(struct page *page, u8 tag)
> > {
> > - page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> > - page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> > + if (static_branch_likely(&kasan_enabled)) {
> > + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> > + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> > + }
> > }
> >
> > static inline void page_kasan_tag_reset(struct page *page)
> > {
> > - page_kasan_tag_set(page, 0xff);
> > + if (static_branch_likely(&kasan_enabled))
> > + page_kasan_tag_set(page, 0xff);
> > }
> > -#else
> > +
> > +#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> > +
> > static inline u8 page_kasan_tag(const struct page *page)
> > {
> > return 0xff;
> > @@ -1435,7 +1449,8 @@ static inline u8 page_kasan_tag(const struct page *page)
> >
> > static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
> > static inline void page_kasan_tag_reset(struct page *page) { }
> > -#endif
> > +
> > +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> >
> > static inline struct zone *page_zone(const struct page *page)
> > {
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index cc129ef62ab1..c5ec60e1a4d2 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -81,7 +81,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
> > }
> > #endif /* CONFIG_KASAN_STACK */
> >
> > -void kasan_alloc_pages(struct page *page, unsigned int order)
> > +void __kasan_alloc_pages(struct page *page, unsigned int order)
> > {
> > u8 tag;
> > unsigned long i;
> > @@ -95,7 +95,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
> > kasan_unpoison_memory(page_address(page), PAGE_SIZE << order);
> > }
> >
> > -void kasan_free_pages(struct page *page, unsigned int order)
> > +void __kasan_free_pages(struct page *page, unsigned int order)
> > {
> > if (likely(!PageHighMem(page)))
> > kasan_poison_memory(page_address(page),
> > @@ -122,8 +122,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
> > object_size <= (1 << 16) - 1024 ? 1024 : 2048;
> > }
> >
> > -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > - slab_flags_t *flags)
> > +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > + slab_flags_t *flags)
> > {
> > unsigned int orig_size = *size;
> > unsigned int redzone_size;
> > @@ -165,7 +165,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > *flags |= SLAB_KASAN;
> > }
> >
> > -size_t kasan_metadata_size(struct kmem_cache *cache)
> > +size_t __kasan_metadata_size(struct kmem_cache *cache)
> > {
> > if (static_branch_unlikely(&kasan_stack))
> > return (cache->kasan_info.alloc_meta_offset ?
> > @@ -188,17 +188,17 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> > return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
> > }
> >
> > -void kasan_unpoison_data(const void *address, size_t size)
> > +void __kasan_unpoison_data(const void *addr, size_t size)
> > {
> > - kasan_unpoison_memory(address, size);
> > + kasan_unpoison_memory(addr, size);
> > }
> >
> > -void kasan_unpoison_slab(const void *ptr)
> > +void __kasan_unpoison_slab(const void *ptr)
> > {
> > kasan_unpoison_memory(ptr, __ksize(ptr));
> > }
> >
> > -void kasan_poison_slab(struct page *page)
> > +void __kasan_poison_slab(struct page *page)
> > {
> > unsigned long i;
> >
> > @@ -208,12 +208,12 @@ void kasan_poison_slab(struct page *page)
> > KASAN_KMALLOC_REDZONE);
> > }
> >
> > -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> > +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> > {
> > kasan_unpoison_memory(object, cache->object_size);
> > }
> >
> > -void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> > +void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> > {
> > kasan_poison_memory(object,
> > round_up(cache->object_size, KASAN_GRANULE_SIZE),
> > @@ -266,7 +266,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
> > #endif
> > }
> >
> > -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> > const void *object)
> > {
> > struct kasan_alloc_meta *alloc_meta;
> > @@ -285,7 +285,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > return (void *)object;
> > }
> >
> > -static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> > +static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> > unsigned long ip, bool quarantine)
> > {
> > u8 tag;
> > @@ -329,9 +329,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> > return false;
> > }
> >
> > -bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> > +bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> > {
> > - return __kasan_slab_free(cache, object, ip, true);
> > + return ____kasan_slab_free(cache, object, ip, true);
> > }
> >
> > static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> > @@ -339,7 +339,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> > kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> > }
> >
> > -static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > +static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > size_t size, gfp_t flags, bool keep_tag)
> > {
> > unsigned long redzone_start;
> > @@ -371,20 +371,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > return set_tag(object, tag);
> > }
> >
> > -void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
> > - gfp_t flags)
> > +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> > + void *object, gfp_t flags)
> > {
> > - return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
> > + return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
> > }
> >
> > -void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > - size_t size, gfp_t flags)
> > +void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > + size_t size, gfp_t flags)
> > {
> > - return __kasan_kmalloc(cache, object, size, flags, true);
> > + return ____kasan_kmalloc(cache, object, size, flags, true);
> > }
> > -EXPORT_SYMBOL(kasan_kmalloc);
> > +EXPORT_SYMBOL(__kasan_kmalloc);
> >
> > -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> > +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> > gfp_t flags)
> > {
> > struct page *page;
> > @@ -409,7 +409,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> > return (void *)ptr;
> > }
> >
> > -void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> > +void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> > {
> > struct page *page;
> >
> > @@ -419,13 +419,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> > page = virt_to_head_page(object);
> >
> > if (unlikely(!PageSlab(page)))
> > - return kasan_kmalloc_large(object, size, flags);
> > + return __kasan_kmalloc_large(object, size, flags);
> > else
> > - return __kasan_kmalloc(page->slab_cache, object, size,
> > + return ____kasan_kmalloc(page->slab_cache, object, size,
> > flags, true);
> > }
> >
> > -void kasan_poison_kfree(void *ptr, unsigned long ip)
> > +void __kasan_poison_kfree(void *ptr, unsigned long ip)
> > {
> > struct page *page;
> >
> > @@ -438,11 +438,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
> > }
> > kasan_poison_memory(ptr, page_size(page), KASAN_FREE_PAGE);
> > } else {
> > - __kasan_slab_free(page->slab_cache, ptr, ip, false);
> > + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> > }
> > }
> >
> > -void kasan_kfree_large(void *ptr, unsigned long ip)
> > +void __kasan_kfree_large(void *ptr, unsigned long ip)
> > {
> > if (ptr != page_address(virt_to_head_page(ptr)))
> > kasan_report_invalid_free(ptr, ip);
> > --
> > 2.29.0.rc1.297.gfa9743e501-goog
> >

2020-11-02 15:19:15

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH RFC v2 16/21] kasan: optimize poisoning in kmalloc and krealloc

On Wed, Oct 28, 2020 at 5:55 PM Dmitry Vyukov <[email protected]> wrote:
>
> On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov <[email protected]> wrote:
> >
> > Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
> > to unpoison the object data again; only the redzone needs to be poisoned.
> >
> > This requires changing the kasan annotation for the early SLUB cache to
> > kasan_slab_alloc(). Otherwise kasan_kmalloc() doesn't untag the object.
> > This doesn't introduce any functional changes, as kmem_cache_node->object_size
> > is equal to sizeof(struct kmem_cache_node).
> >
> > Similarly for kasan_krealloc(): since it's called after ksize(), which has
> > already unpoisoned the object, there's no need to do it again.
>
> Have you considered doing this the other way around: make krealloc
> call __ksize and unpoison in kasan_krealloc?
> This has the advantage of more precise poisoning as ksize will
> unpoison the whole underlying object.
>
> But then maybe we will need to move the first checks in ksize into __ksize,
> as we may need them in krealloc as well.

This might be a good idea. I won't implement this for the next
version, but will look into this after that. Thanks!
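
For the record, the alternative would look roughly like the sketch below
(hypothetical, not part of this series, and it glosses over the
ksize()/__ksize() checks mentioned above):

/* In __do_krealloc(): size the old object without unpoisoning it... */
static __always_inline void *__do_krealloc(const void *p, size_t new_size,
					   gfp_t flags)
{
	void *ret;
	size_t ks;

	ks = __ksize(p);
	if (ks >= new_size)
		/* ...and let kasan_krealloc() do the unpoisoning itself. */
		return kasan_krealloc((void *)p, new_size, flags);

	ret = kmalloc_track_caller(new_size, flags);
	if (ret && p)
		memcpy(ret, p, ks);

	return ret;
}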

>
>
>
>
>
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/I4083d3b55605f70fef79bca9b90843c4390296f2
> > ---
> > mm/kasan/common.c | 31 +++++++++++++++++++++----------
> > mm/slub.c | 3 +--
> > 2 files changed, 22 insertions(+), 12 deletions(-)
> >
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index c5ec60e1a4d2..a581937c2a44 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -360,8 +360,14 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> > tag = assign_tag(cache, object, false, keep_tag);
> >
> > - /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> > - kasan_unpoison_memory(set_tag(object, tag), size);
> > + /*
> > + * Don't unpoison the object when keeping the tag. Tag is kept for:
> > + * 1. krealloc(), and then the memory has already been unpoisoned via ksize();
> > + * 2. kmalloc(), and then the memory has already been unpoisoned by kasan_slab_alloc().
> > + * Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS.
> > + */
> > + if (!keep_tag)
> > + kasan_unpoison_memory(set_tag(object, tag), size);
> > kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> > KASAN_KMALLOC_REDZONE);
> >
> > @@ -384,10 +390,9 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
> > }
> > EXPORT_SYMBOL(__kasan_kmalloc);
> >
> > -void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> > - gfp_t flags)
> > +static void * __must_check ____kasan_kmalloc_large(struct page *page, const void *ptr,
> > + size_t size, gfp_t flags, bool realloc)
> > {
> > - struct page *page;
> > unsigned long redzone_start;
> > unsigned long redzone_end;
> >
> > @@ -397,18 +402,24 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> > if (unlikely(ptr == NULL))
> > return NULL;
> >
> > - page = virt_to_page(ptr);
> > - redzone_start = round_up((unsigned long)(ptr + size),
> > - KASAN_GRANULE_SIZE);
> > + redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
> > redzone_end = (unsigned long)ptr + page_size(page);
> >
> > - kasan_unpoison_memory(ptr, size);
> > + /* ksize() in __do_krealloc() already unpoisoned the memory. */
> > + if (!realloc)
> > + kasan_unpoison_memory(ptr, size);
> > kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
> > KASAN_PAGE_REDZONE);
> >
> > return (void *)ptr;
> > }
> >
> > +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> > + gfp_t flags)
> > +{
> > + return ____kasan_kmalloc_large(virt_to_page(ptr), ptr, size, flags, false);
> > +}
> > +
> > void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> > {
> > struct page *page;
> > @@ -419,7 +430,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
> > page = virt_to_head_page(object);
> >
> > if (unlikely(!PageSlab(page)))
> > - return __kasan_kmalloc_large(object, size, flags);
> > + return ____kasan_kmalloc_large(page, object, size, flags, true);
> > else
> > return ____kasan_kmalloc(page->slab_cache, object, size,
> > flags, true);
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 1d3f2355df3b..afb035b0bf2d 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3535,8 +3535,7 @@ static void early_kmem_cache_node_alloc(int node)
> > init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
> > init_tracking(kmem_cache_node, n);
> > #endif
> > - n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
> > - GFP_KERNEL);
> > + n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
> > page->freelist = get_freepointer(kmem_cache_node, n);
> > page->inuse = 1;
> > page->frozen = 0;
> > --
> > 2.29.0.rc1.297.gfa9743e501-goog
> >