2020-11-13 22:25:23

by Andrey Konovalov

Subject: [PATCH mm v3 00/19] kasan: boot parameters for hardware tag-based mode

=== Overview

Hardware tag-based KASAN mode [1] is intended to eventually be used in
production as a security mitigation. Therefore there's a need for finer
control over KASAN features and for a kill switch.

This patchset adds a few boot parameters for hardware tag-based KASAN that
allow disabling or otherwise controlling particular KASAN features, and
provides some initial optimizations for running KASAN in production.

There's another planned patchset that will further optimize hardware
tag-based KASAN, provide proper benchmarking and tests, and fully
enable tag-based KASAN for production use.

Hardware tag-based KASAN relies on arm64 Memory Tagging Extension (MTE)
[2] to perform memory and pointer tagging. Please see [3] and [4] for a
detailed analysis of how MTE helps to fight memory safety problems.

The features that can be controlled are:

1. Whether KASAN is enabled at all.
2. Whether KASAN collects and saves alloc/free stacks.
3. Whether KASAN panics on a detected bug or not.

The patch titled "kasan: add and integrate kasan boot parameters" in this
series adds a few new boot parameters.

kasan.mode allows choosing one of three main modes:

- kasan.mode=off - KASAN is disabled, no tag checks are performed
- kasan.mode=prod - only essential production features are enabled
- kasan.mode=full - all KASAN features are enabled

The chosen mode provides default control values for the features mentioned
above. However, it's also possible to override the default values by
providing:

- kasan.stacktrace=off/on - enable alloc/free stack collection
(default: on for mode=full, otherwise off)
- kasan.fault=report/panic - only report tag fault or also panic
(default: report)

If the kasan.mode parameter is not provided, it defaults to full when
CONFIG_DEBUG_KERNEL is enabled, and to prod otherwise.
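
As an illustration, here's how the parameters compose on the kernel
command line (hypothetical combinations; the defaults are as described
above):

  kasan.mode=prod                      - production defaults
  kasan.mode=prod kasan.stacktrace=on  - prod checks plus stack collection
  kasan.mode=full kasan.fault=panic    - all features, panic on a bug
  kasan.mode=off                       - kill switch, no tag checks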

It is essential that switching between these modes doesn't require
rebuilding the kernel with different configs, as this is required by
the Android GKI (Generic Kernel Image) initiative [5].

=== Benchmarks

For now I've only performed a few simple benchmarks, such as measuring
kernel boot time and slab memory usage after boot. There's an upcoming
patchset that will optimize KASAN further and include more detailed
benchmarking results.

The benchmarks were performed in QEMU and the results below exclude the
slowdown caused by QEMU memory tagging emulation (as it's different from
the slowdown that will be introduced by hardware and is therefore
irrelevant).

KASAN_HW_TAGS=y + kasan.mode=off introduces no performance or memory
impact compared to KASAN_HW_TAGS=n.

kasan.mode=prod (manually excluding tagging) introduces a 3% performance
impact and no memory impact (except the memory used by hardware to store
tags) compared to kasan.mode=off.

kasan.mode=full has about a 40% performance and a 30% memory impact over
kasan.mode=prod. Both come from alloc/free stack collection.

=== Notes

This patchset is available here:

https://github.com/xairy/linux/tree/up-boot-mte-v3

This patchset is based on v10 of the "kasan: add hardware tag-based mode
for arm64" patchset [1].

For testing in QEMU, hardware tag-based KASAN requires:

1. QEMU built from master [6] (use "-machine virt,mte=on -cpu max" arguments
to run).
2. GCC version 10.
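
For reference, a minimal QEMU command line might look as follows (the
kernel image path, memory size and console setup are illustrative
assumptions):

  qemu-system-aarch64 -machine virt,mte=on -cpu max -m 2G \
    -kernel arch/arm64/boot/Image \
    -append "console=ttyAMA0 kasan.mode=full" \
    -nographic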

[1] https://lkml.org/lkml/2020/11/13/1154
[2] https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety
[3] https://arxiv.org/pdf/1802.09517.pdf
[4] https://github.com/microsoft/MSRC-Security-Research/blob/master/papers/2020/Security%20analysis%20of%20memory%20tagging.pdf
[5] https://source.android.com/devices/architecture/kernel/generic-kernel-image
[6] https://github.com/qemu/qemu

=== History

Changes v2 -> v3:
- Rebase onto v10 of the HW_TAGS series.
- Add missing return type for kasan_enabled().
- Always define random_tag() as a function.
- Mark kasan wrappers as __always_inline.
- Don't "kasan: simplify kasan_poison_kfree" as it's based on a false
assumption, add a comment instead.
- Address documentation comments.
- Use <linux/static_key.h> instead of <linux/jump_label.h>.
- Rework switches in mm/kasan/hw_tags.c.
- Don't init tag in ____kasan_kmalloc().
- Correctly check SLAB_TYPESAFE_BY_RCU flag in mm/kasan/common.c.
- Readability fixes for "kasan: clean up metadata allocation and usage".
- Change kasan_never_merge() to return SLAB_KASAN instead of excluding it
from flags.
- (Vincenzo) Address concerns from checkpatch.pl (courtesy of Marco Elver).

Changes v1 -> v2:
- Rebased onto v9 of the HW_TAGS patchset.
- Don't initialize static branches in kasan_init_hw_tags_cpu(), as
cpu_enable_mte() can't sleep; do it in kasan_init_hw_tags() instead.
- Rename kasan.stacks to kasan.stacktrace.

Changes RFC v2 -> v1:
- Rebrand the patchset from fully enabling production use to partially
addressing that; another optimization and testing patchset will be
required.
- Rebase onto v8 of KASAN_HW_TAGS series.
- Fix "ASYNC" -> "async" typo.
- Rework depends condition for VMAP_STACK and update config text.
- Remove unneeded reset_tag() macro, use kasan_reset_tag() instead.
- Rename kasan.stack to kasan.stacks to avoid confusion with stack
instrumentation.
- Introduce kasan_stack_collection_enabled() and kasan_is_enabled()
helpers.
- Simplify kasan_stack_collection_enabled() usage.
- Rework SLAB_KASAN flag and metadata allocation (see the corresponding
patch for details).
- Allow cache merging with KASAN_HW_TAGS when kasan.stacks is off.
- Use sync mode by default for both prod and full KASAN modes.
- Drop kasan.trap=sync/async boot parameter, as async mode isn't supported
yet.
- Choose prod or full mode depending on CONFIG_DEBUG_KERNEL when no
kasan.mode boot parameter is provided.
- Drop krealloc optimization changes, those will be included in a separate
patchset.
- Update KASAN documentation to mention boot parameters.

Changes RFC v1 -> RFC v2:
- Rework boot parameters.
- Drop __init from empty kasan_init_tags() definition.
- Add a cpu_supports_mte() helper that can be used during early boot and
use it in kasan_init_tags().
- Lots of new KASAN optimization commits.

Andrey Konovalov (19):
kasan: simplify quarantine_put call site
kasan: rename get_alloc/free_info
kasan: introduce set_alloc_info
kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK
kasan: allow VMAP_STACK for HW_TAGS mode
kasan: remove __kasan_unpoison_stack
kasan: inline kasan_reset_tag for tag-based modes
kasan: inline random_tag for HW_TAGS
kasan: open-code kasan_unpoison_slab
kasan: inline (un)poison_range and check_invalid_free
kasan: add and integrate kasan boot parameters
kasan, mm: check kasan_enabled in annotations
kasan, mm: rename kasan_poison_kfree
kasan: don't round_up too much
kasan: simplify assign_tag and set_tag calls
kasan: clarify comment in __kasan_kfree_large
kasan: clean up metadata allocation and usage
kasan, mm: allow cache merging with no metadata
kasan: update documentation

Documentation/dev-tools/kasan.rst | 186 ++++++++++++--------
arch/Kconfig | 8 +-
arch/arm64/kernel/sleep.S | 2 +-
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
include/linux/kasan.h | 245 ++++++++++++++++++++------
include/linux/mm.h | 22 ++-
mm/kasan/common.c | 283 ++++++++++++++++++------------
mm/kasan/generic.c | 27 +--
mm/kasan/hw_tags.c | 185 +++++++++++++++----
mm/kasan/kasan.h | 120 +++++++++----
mm/kasan/quarantine.c | 13 +-
mm/kasan/report.c | 61 ++++---
mm/kasan/report_hw_tags.c | 2 +-
mm/kasan/report_sw_tags.c | 15 +-
mm/kasan/shadow.c | 5 +-
mm/kasan/sw_tags.c | 17 +-
mm/mempool.c | 4 +-
mm/slab_common.c | 3 +-
18 files changed, 824 insertions(+), 376 deletions(-)

--
2.29.2.299.gdc1121823c-goog


2020-11-13 22:25:42

by Andrey Konovalov

Subject: [PATCH mm v3 10/19] kasan: inline (un)poison_range and check_invalid_free

Using (un)poison_range() or check_invalid_free() currently results in
function calls. Move their definitions to mm/kasan/kasan.h and turn them
into static inline functions for hardware tag-based mode to avoid the
unneeded call overhead.

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/Ia9d8191024a12d1374675b3d27197f10193f50bb
---
mm/kasan/hw_tags.c | 30 ------------------------------
mm/kasan/kasan.h | 45 ++++++++++++++++++++++++++++++++++++++++-----
2 files changed, 40 insertions(+), 35 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 3cdd87d189f6..863fed4edd3f 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -10,7 +10,6 @@

#include <linux/kasan.h>
#include <linux/kernel.h>
-#include <linux/kfence.h>
#include <linux/memory.h>
#include <linux/mm.h>
#include <linux/string.h>
@@ -31,35 +30,6 @@ void __init kasan_init_hw_tags(void)
pr_info("KernelAddressSanitizer initialized\n");
}

-void poison_range(const void *address, size_t size, u8 value)
-{
- /* Skip KFENCE memory if called explicitly outside of sl*b. */
- if (is_kfence_address(address))
- return;
-
- hw_set_mem_tag_range(kasan_reset_tag(address),
- round_up(size, KASAN_GRANULE_SIZE), value);
-}
-
-void unpoison_range(const void *address, size_t size)
-{
- /* Skip KFENCE memory if called explicitly outside of sl*b. */
- if (is_kfence_address(address))
- return;
-
- hw_set_mem_tag_range(kasan_reset_tag(address),
- round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
-}
-
-bool check_invalid_free(void *addr)
-{
- u8 ptr_tag = get_tag(addr);
- u8 mem_tag = hw_get_mem_tag(addr);
-
- return (mem_tag == KASAN_TAG_INVALID) ||
- (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
-}
-
void kasan_set_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 7876a2547b7d..8aa83b7ad79e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -3,6 +3,7 @@
#define __MM_KASAN_KASAN_H

#include <linux/kasan.h>
+#include <linux/kfence.h>
#include <linux/stackdepot.h>

#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
@@ -154,9 +155,6 @@ struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
const void *object);

-void poison_range(const void *address, size_t size, u8 value);
-void unpoison_range(const void *address, size_t size);
-
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)

static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
@@ -196,8 +194,6 @@ void print_tags(u8 addr_tag, const void *addr);
static inline void print_tags(u8 addr_tag, const void *addr) { }
#endif

-bool check_invalid_free(void *addr);
-
void *find_first_bad_addr(void *addr, size_t size);
const char *get_bug_type(struct kasan_access_info *info);
void metadata_fetch_row(char *buffer, void *row);
@@ -278,6 +274,45 @@ static inline u8 random_tag(void) { return hw_get_random_tag(); }
static inline u8 random_tag(void) { return 0; }
#endif

+#ifdef CONFIG_KASAN_HW_TAGS
+
+static inline void poison_range(const void *address, size_t size, u8 value)
+{
+ /* Skip KFENCE memory if called explicitly outside of sl*b. */
+ if (is_kfence_address(address))
+ return;
+
+ hw_set_mem_tag_range(kasan_reset_tag(address),
+ round_up(size, KASAN_GRANULE_SIZE), value);
+}
+
+static inline void unpoison_range(const void *address, size_t size)
+{
+ /* Skip KFENCE memory if called explicitly outside of sl*b. */
+ if (is_kfence_address(address))
+ return;
+
+ hw_set_mem_tag_range(kasan_reset_tag(address),
+ round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
+}
+
+static inline bool check_invalid_free(void *addr)
+{
+ u8 ptr_tag = get_tag(addr);
+ u8 mem_tag = hw_get_mem_tag(addr);
+
+ return (mem_tag == KASAN_TAG_INVALID) ||
+ (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
+}
+
+#else /* CONFIG_KASAN_HW_TAGS */
+
+void poison_range(const void *address, size_t size, u8 value);
+void unpoison_range(const void *address, size_t size);
+bool check_invalid_free(void *addr);
+
+#endif /* CONFIG_KASAN_HW_TAGS */
+
/*
* Exported functions for interfaces called from assembly or from generated
* code. Declarations here to avoid warning about missing declarations.
--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:25:48

by Andrey Konovalov

Subject: [PATCH mm v3 11/19] kasan: add and integrate kasan boot parameters

Hardware tag-based KASAN mode is intended to eventually be used in
production as a security mitigation. Therefore there's a need for finer
control over KASAN features and for a kill switch.

This change adds a few boot parameters for hardware tag-based KASAN that
allow disabling or otherwise controlling particular KASAN features.

The features that can be controlled are:

1. Whether KASAN is enabled at all.
2. Whether KASAN collects and saves alloc/free stacks.
3. Whether KASAN panics on a detected bug or not.

With this change, a new boot parameter kasan.mode allows choosing one of
three main modes:

- kasan.mode=off - KASAN is disabled, no tag checks are performed
- kasan.mode=prod - only essential production features are enabled
- kasan.mode=full - all KASAN features are enabled

The chosen mode provides default control values for the features mentioned
above. However it's also possible to override the default values by
providing:

- kasan.stacktrace=off/on - enable alloc/free stack collection
(default: on for mode=full, otherwise off)
- kasan.fault=report/panic - only report tag fault or also panic
(default: report)

If the kasan.mode parameter is not provided, it defaults to full when
CONFIG_DEBUG_KERNEL is enabled, and to prod otherwise.

It is essential that switching between these modes doesn't require
rebuilding the kernel with different configs, as this is required by
the Android GKI (Generic Kernel Image) initiative [1].

[1] https://source.android.com/devices/architecture/kernel/generic-kernel-image

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
---
mm/kasan/common.c | 22 +++++--
mm/kasan/hw_tags.c | 151 +++++++++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 16 +++++
mm/kasan/report.c | 14 ++++-
4 files changed, 196 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 1ac4f435c679..a11e3e75eb08 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -135,6 +135,11 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
unsigned int redzone_size;
int redzone_adjust;

+ if (!kasan_stack_collection_enabled()) {
+ *flags |= SLAB_KASAN;
+ return;
+ }
+
/* Add alloc meta. */
cache->kasan_info.alloc_meta_offset = *size;
*size += sizeof(struct kasan_alloc_meta);
@@ -171,6 +176,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,

size_t kasan_metadata_size(struct kmem_cache *cache)
{
+ if (!kasan_stack_collection_enabled())
+ return 0;
return (cache->kasan_info.alloc_meta_offset ?
sizeof(struct kasan_alloc_meta) : 0) +
(cache->kasan_info.free_meta_offset ?
@@ -263,11 +270,13 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

- if (!(cache->flags & SLAB_KASAN))
- return (void *)object;
+ if (kasan_stack_collection_enabled()) {
+ if (!(cache->flags & SLAB_KASAN))
+ return (void *)object;

- alloc_meta = kasan_get_alloc_meta(cache, object);
- __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));
+ }

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
object = set_tag(object, assign_tag(cache, object, true, false));
@@ -307,6 +316,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
poison_range(object, rounded_up_size, KASAN_KMALLOC_FREE);

+ if (!kasan_stack_collection_enabled())
+ return false;
+
if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
unlikely(!(cache->flags & SLAB_KASAN)))
return false;
@@ -357,7 +369,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
poison_range((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);

- if (cache->flags & SLAB_KASAN)
+ if (kasan_stack_collection_enabled() && (cache->flags & SLAB_KASAN))
set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 863fed4edd3f..30ce88935e9d 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -8,18 +8,115 @@

#define pr_fmt(fmt) "kasan: " fmt

+#include <linux/init.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/memory.h>
#include <linux/mm.h>
+#include <linux/static_key.h>
#include <linux/string.h>
#include <linux/types.h>

#include "kasan.h"

+enum kasan_arg_mode {
+ KASAN_ARG_MODE_DEFAULT,
+ KASAN_ARG_MODE_OFF,
+ KASAN_ARG_MODE_PROD,
+ KASAN_ARG_MODE_FULL,
+};
+
+enum kasan_arg_stacktrace {
+ KASAN_ARG_STACKTRACE_DEFAULT,
+ KASAN_ARG_STACKTRACE_OFF,
+ KASAN_ARG_STACKTRACE_ON,
+};
+
+enum kasan_arg_fault {
+ KASAN_ARG_FAULT_DEFAULT,
+ KASAN_ARG_FAULT_REPORT,
+ KASAN_ARG_FAULT_PANIC,
+};
+
+static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
+static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;
+static enum kasan_arg_fault kasan_arg_fault __ro_after_init;
+
+/* Whether KASAN is enabled at all. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_flag_enabled);
+EXPORT_SYMBOL(kasan_flag_enabled);
+
+/* Whether to collect alloc/free stack traces. */
+DEFINE_STATIC_KEY_FALSE_RO(kasan_flag_stacktrace);
+
+/* Whether to panic or disable tag checking on fault. */
+bool kasan_flag_panic __ro_after_init;
+
+/* kasan.mode=off/prod/full */
+static int __init early_kasan_mode(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_mode = KASAN_ARG_MODE_OFF;
+ else if (!strcmp(arg, "prod"))
+ kasan_arg_mode = KASAN_ARG_MODE_PROD;
+ else if (!strcmp(arg, "full"))
+ kasan_arg_mode = KASAN_ARG_MODE_FULL;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.mode", early_kasan_mode);
+
+/* kasan.stacktrace=off/on */
+static int __init early_kasan_flag_stacktrace(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.stacktrace", early_kasan_flag_stacktrace);
+
+/* kasan.fault=report/panic */
+static int __init early_kasan_fault(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "report"))
+ kasan_arg_fault = KASAN_ARG_FAULT_REPORT;
+ else if (!strcmp(arg, "panic"))
+ kasan_arg_fault = KASAN_ARG_FAULT_PANIC;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.fault", early_kasan_fault);
+
/* kasan_init_hw_tags_cpu() is called for each CPU. */
void kasan_init_hw_tags_cpu(void)
{
+ /*
+ * There's no need to check that the hardware is MTE-capable here,
+ * as this function is only called for MTE-capable hardware.
+ */
+
+ /* If KASAN is disabled, do nothing. */
+ if (kasan_arg_mode == KASAN_ARG_MODE_OFF)
+ return;
+
hw_init_tags(KASAN_TAG_MAX);
hw_enable_tagging();
}
@@ -27,6 +124,60 @@ void kasan_init_hw_tags_cpu(void)
/* kasan_init_hw_tags() is called once on boot CPU. */
void __init kasan_init_hw_tags(void)
{
+ /* If hardware doesn't support MTE, do nothing. */
+ if (!system_supports_mte())
+ return;
+
+ /* Choose KASAN mode if kasan boot parameter is not provided. */
+ if (kasan_arg_mode == KASAN_ARG_MODE_DEFAULT) {
+ if (IS_ENABLED(CONFIG_DEBUG_KERNEL))
+ kasan_arg_mode = KASAN_ARG_MODE_FULL;
+ else
+ kasan_arg_mode = KASAN_ARG_MODE_PROD;
+ }
+
+ /* Preset parameter values based on the mode. */
+ switch (kasan_arg_mode) {
+ case KASAN_ARG_MODE_DEFAULT:
+ /* Shouldn't happen as per the check above. */
+ WARN_ON(1);
+ return;
+ case KASAN_ARG_MODE_OFF:
+ /* If KASAN is disabled, do nothing. */
+ return;
+ case KASAN_ARG_MODE_PROD:
+ static_branch_enable(&kasan_flag_enabled);
+ break;
+ case KASAN_ARG_MODE_FULL:
+ static_branch_enable(&kasan_flag_enabled);
+ static_branch_enable(&kasan_flag_stacktrace);
+ break;
+ }
+
+ /* Now, optionally override the presets. */
+
+ switch (kasan_arg_stacktrace) {
+ case KASAN_ARG_STACKTRACE_DEFAULT:
+ break;
+ case KASAN_ARG_STACKTRACE_OFF:
+ static_branch_disable(&kasan_flag_stacktrace);
+ break;
+ case KASAN_ARG_STACKTRACE_ON:
+ static_branch_enable(&kasan_flag_stacktrace);
+ break;
+ }
+
+ switch (kasan_arg_fault) {
+ case KASAN_ARG_FAULT_DEFAULT:
+ break;
+ case KASAN_ARG_FAULT_REPORT:
+ kasan_flag_panic = false;
+ break;
+ case KASAN_ARG_FAULT_PANIC:
+ kasan_flag_panic = true;
+ break;
+ }
+
pr_info("KernelAddressSanitizer initialized\n");
}

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 8aa83b7ad79e..d01a5ac34f70 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,22 @@
#include <linux/kfence.h>
#include <linux/stackdepot.h>

+#ifdef CONFIG_KASAN_HW_TAGS
+#include <linux/static_key.h>
+DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);
+static inline bool kasan_stack_collection_enabled(void)
+{
+ return static_branch_unlikely(&kasan_flag_stacktrace);
+}
+#else
+static inline bool kasan_stack_collection_enabled(void)
+{
+ return true;
+}
+#endif
+
+extern bool kasan_flag_panic __ro_after_init;
+
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#else
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 76a0e3ae2049..ffa6076b1710 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -99,6 +99,10 @@ static void end_report(unsigned long *flags)
panic_on_warn = 0;
panic("panic_on_warn set ...\n");
}
+#ifdef CONFIG_KASAN_HW_TAGS
+ if (kasan_flag_panic)
+ panic("kasan.fault=panic set ...\n");
+#endif
kasan_enable_current();
}

@@ -161,8 +165,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
(void *)(object_addr + cache->object_size));
}

-static void describe_object(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object_stacks(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
{
struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);

@@ -190,7 +194,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
}
#endif
}
+}

+static void describe_object(struct kmem_cache *cache, void *object,
+ const void *addr, u8 tag)
+{
+ if (kasan_stack_collection_enabled())
+ describe_object_stacks(cache, object, addr, tag);
describe_object_addr(cache, object, addr);
}

--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:25:48

by Andrey Konovalov

Subject: [PATCH mm v3 12/19] kasan, mm: check kasan_enabled in annotations

Declare the kasan_enabled static key in include/linux/kasan.h and in
include/linux/mm.h and check it in all kasan annotations. This makes it
possible to avoid any slowdown caused by function calls when KASAN is
disabled.
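
For illustration, every annotation below follows the same pattern; a
minimal sketch with a hypothetical hook name:

	/* Inline fast path; the slow path stays out of line. */
	static __always_inline void kasan_example_hook(void *ptr)
	{
		if (kasan_enabled())	/* static_branch_likely() */
			__kasan_example_hook(ptr);
	}

When the kasan_flag_enabled static key is off, the check is patched into
a jump over the call, so no conditional test or function call happens on
the fast path.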

Co-developed-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e
---
include/linux/kasan.h | 213 ++++++++++++++++++++++++++++++++----------
include/linux/mm.h | 22 +++--
mm/kasan/common.c | 56 +++++------
3 files changed, 210 insertions(+), 81 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 872bf145ddde..6bd95243a583 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -2,6 +2,7 @@
#ifndef _LINUX_KASAN_H
#define _LINUX_KASAN_H

+#include <linux/static_key.h>
#include <linux/types.h>

struct kmem_cache;
@@ -74,54 +75,176 @@ static inline void kasan_disable_current(void) {}

#ifdef CONFIG_KASAN

-void kasan_unpoison_range(const void *address, size_t size);
+struct kasan_cache {
+ int alloc_meta_offset;
+ int free_meta_offset;
+};

-void kasan_alloc_pages(struct page *page, unsigned int order);
-void kasan_free_pages(struct page *page, unsigned int order);
+#ifdef CONFIG_KASAN_HW_TAGS
+DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
+static __always_inline bool kasan_enabled(void)
+{
+ return static_branch_likely(&kasan_flag_enabled);
+}
+#else
+static inline bool kasan_enabled(void)
+{
+ return true;
+}
+#endif

-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags);
+void __kasan_unpoison_range(const void *addr, size_t size);
+static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_range(addr, size);
+}

-void kasan_poison_slab(struct page *page);
-void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
-void kasan_poison_object_data(struct kmem_cache *cache, void *object);
-void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
- const void *object);
+void __kasan_alloc_pages(struct page *page, unsigned int order);
+static __always_inline void kasan_alloc_pages(struct page *page,
+ unsigned int order)
+{
+ if (kasan_enabled())
+ __kasan_alloc_pages(page, order);
+}

-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
- gfp_t flags);
-void kasan_kfree_large(void *ptr, unsigned long ip);
-void kasan_poison_kfree(void *ptr, unsigned long ip);
-void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags);
+void __kasan_free_pages(struct page *page, unsigned int order);
+static __always_inline void kasan_free_pages(struct page *page,
+ unsigned int order)
+{
+ if (kasan_enabled())
+ __kasan_free_pages(page, order);
+}

-void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
- gfp_t flags);
-bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags);
+static __always_inline void kasan_cache_create(struct kmem_cache *cache,
+ unsigned int *size, slab_flags_t *flags)
+{
+ if (kasan_enabled())
+ __kasan_cache_create(cache, size, flags);
+}

-struct kasan_cache {
- int alloc_meta_offset;
- int free_meta_offset;
-};
+size_t __kasan_metadata_size(struct kmem_cache *cache);
+static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+ if (kasan_enabled())
+ return __kasan_metadata_size(cache);
+ return 0;
+}
+
+void __kasan_poison_slab(struct page *page);
+static __always_inline void kasan_poison_slab(struct page *page)
+{
+ if (kasan_enabled())
+ return __kasan_poison_slab(page);
+}
+
+void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+static __always_inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+ void *object)
+{
+ if (kasan_enabled())
+ return __kasan_unpoison_object_data(cache, object);
+}
+
+void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
+static __always_inline void kasan_poison_object_data(struct kmem_cache *cache,
+ void *object)
+{
+ if (kasan_enabled())
+ __kasan_poison_object_data(cache, object);
+}
+
+void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
+ const void *object);
+static __always_inline void * __must_check kasan_init_slab_obj(
+ struct kmem_cache *cache, const void *object)
+{
+ if (kasan_enabled())
+ return __kasan_init_slab_obj(cache, object);
+ return (void *)object;
+}
+
+bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object,
+ unsigned long ip)
+{
+ if (kasan_enabled())
+ return __kasan_slab_free(s, object, ip);
+ return false;
+}
+
+void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
+ void *object, gfp_t flags);
+static __always_inline void * __must_check kasan_slab_alloc(
+ struct kmem_cache *s, void *object, gfp_t flags)
+{
+ if (kasan_enabled())
+ return __kasan_slab_alloc(s, object, flags);
+ return object;
+}
+
+void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags);
+static __always_inline void * __must_check kasan_kmalloc(struct kmem_cache *s,
+ const void *object, size_t size, gfp_t flags)
+{
+ if (kasan_enabled())
+ return __kasan_kmalloc(s, object, size, flags);
+ return (void *)object;
+}

-size_t kasan_metadata_size(struct kmem_cache *cache);
+void * __must_check __kasan_kmalloc_large(const void *ptr,
+ size_t size, gfp_t flags);
+static __always_inline void * __must_check kasan_kmalloc_large(const void *ptr,
+ size_t size, gfp_t flags)
+{
+ if (kasan_enabled())
+ return __kasan_kmalloc_large(ptr, size, flags);
+ return (void *)ptr;
+}
+
+void * __must_check __kasan_krealloc(const void *object,
+ size_t new_size, gfp_t flags);
+static __always_inline void * __must_check kasan_krealloc(const void *object,
+ size_t new_size, gfp_t flags)
+{
+ if (kasan_enabled())
+ return __kasan_krealloc(object, new_size, flags);
+ return (void *)object;
+}
+
+void __kasan_poison_kfree(void *ptr, unsigned long ip);
+static __always_inline void kasan_poison_kfree(void *ptr, unsigned long ip)
+{
+ if (kasan_enabled())
+ __kasan_poison_kfree(ptr, ip);
+}
+
+void __kasan_kfree_large(void *ptr, unsigned long ip);
+static __always_inline void kasan_kfree_large(void *ptr, unsigned long ip)
+{
+ if (kasan_enabled())
+ __kasan_kfree_large(ptr, ip);
+}

bool kasan_save_enable_multi_shot(void);
void kasan_restore_multi_shot(bool enabled);

#else /* CONFIG_KASAN */

+static inline bool kasan_enabled(void)
+{
+ return false;
+}
static inline void kasan_unpoison_range(const void *address, size_t size) {}
-
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}
-
static inline void kasan_cache_create(struct kmem_cache *cache,
unsigned int *size,
slab_flags_t *flags) {}
-
+static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
static inline void kasan_poison_slab(struct page *page) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
void *object) {}
@@ -132,36 +255,32 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
{
return (void *)object;
}
-
-static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
+ unsigned long ip)
{
- return ptr;
+ return false;
+}
+static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
+ gfp_t flags)
+{
+ return object;
}
-static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
size_t size, gfp_t flags)
{
return (void *)object;
}
+static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+{
+ return (void *)ptr;
+}
static inline void *kasan_krealloc(const void *object, size_t new_size,
gfp_t flags)
{
return (void *)object;
}
-
-static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
- gfp_t flags)
-{
- return object;
-}
-static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
- unsigned long ip)
-{
- return false;
-}
-
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}

#endif /* CONFIG_KASAN */

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 947f4f1a6536..24f47e140a4c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -31,6 +31,7 @@
#include <linux/sizes.h>
#include <linux/sched.h>
#include <linux/pgtable.h>
+#include <linux/kasan.h>

struct mempolicy;
struct anon_vma;
@@ -1415,22 +1416,30 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
#endif /* CONFIG_NUMA_BALANCING */

#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+
static inline u8 page_kasan_tag(const struct page *page)
{
- return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+ if (kasan_enabled())
+ return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+ return 0xff;
}

static inline void page_kasan_tag_set(struct page *page, u8 tag)
{
- page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
- page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+ if (kasan_enabled()) {
+ page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+ page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+ }
}

static inline void page_kasan_tag_reset(struct page *page)
{
- page_kasan_tag_set(page, 0xff);
+ if (kasan_enabled())
+ page_kasan_tag_set(page, 0xff);
}
-#else
+
+#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
static inline u8 page_kasan_tag(const struct page *page)
{
return 0xff;
@@ -1438,7 +1447,8 @@ static inline u8 page_kasan_tag(const struct page *page)

static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
static inline void page_kasan_tag_reset(struct page *page) { }
-#endif
+
+#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */

static inline struct zone *page_zone(const struct page *page)
{
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a11e3e75eb08..17918bd20ed9 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -59,7 +59,7 @@ void kasan_disable_current(void)
}
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

-void kasan_unpoison_range(const void *address, size_t size)
+void __kasan_unpoison_range(const void *address, size_t size)
{
unpoison_range(address, size);
}
@@ -87,7 +87,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
}
#endif /* CONFIG_KASAN_STACK */

-void kasan_alloc_pages(struct page *page, unsigned int order)
+void __kasan_alloc_pages(struct page *page, unsigned int order)
{
u8 tag;
unsigned long i;
@@ -101,7 +101,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
unpoison_range(page_address(page), PAGE_SIZE << order);
}

-void kasan_free_pages(struct page *page, unsigned int order)
+void __kasan_free_pages(struct page *page, unsigned int order)
{
if (likely(!PageHighMem(page)))
poison_range(page_address(page),
@@ -128,8 +128,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
object_size <= (1 << 16) - 1024 ? 1024 : 2048;
}

-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags)
+void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags)
{
unsigned int orig_size = *size;
unsigned int redzone_size;
@@ -174,7 +174,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
*flags |= SLAB_KASAN;
}

-size_t kasan_metadata_size(struct kmem_cache *cache)
+size_t __kasan_metadata_size(struct kmem_cache *cache)
{
if (!kasan_stack_collection_enabled())
return 0;
@@ -197,7 +197,7 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset;
}

-void kasan_poison_slab(struct page *page)
+void __kasan_poison_slab(struct page *page)
{
unsigned long i;

@@ -207,12 +207,12 @@ void kasan_poison_slab(struct page *page)
KASAN_KMALLOC_REDZONE);
}

-void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
{
unpoison_range(object, cache->object_size);
}

-void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
{
poison_range(object,
round_up(cache->object_size, KASAN_GRANULE_SIZE),
@@ -265,7 +265,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
#endif
}

-void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
+void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
{
struct kasan_alloc_meta *alloc_meta;
@@ -284,7 +284,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
return (void *)object;
}

-static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
+static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
unsigned long ip, bool quarantine)
{
u8 tag;
@@ -330,9 +330,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
return IS_ENABLED(CONFIG_KASAN_GENERIC);
}

-bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
{
- return __kasan_slab_free(cache, object, ip, true);
+ return ____kasan_slab_free(cache, object, ip, true);
}

static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
@@ -340,7 +340,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
}

-static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
+static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags, bool keep_tag)
{
unsigned long redzone_start;
@@ -375,20 +375,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
return set_tag(object, tag);
}

-void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
- gfp_t flags)
+void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
+ void *object, gfp_t flags)
{
- return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
+ return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
}

-void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags)
+void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
+ size_t size, gfp_t flags)
{
- return __kasan_kmalloc(cache, object, size, flags, true);
+ return ____kasan_kmalloc(cache, object, size, flags, true);
}
-EXPORT_SYMBOL(kasan_kmalloc);
+EXPORT_SYMBOL(__kasan_kmalloc);

-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
gfp_t flags)
{
struct page *page;
@@ -413,7 +413,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
return (void *)ptr;
}

-void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
struct page *page;

@@ -423,13 +423,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
page = virt_to_head_page(object);

if (unlikely(!PageSlab(page)))
- return kasan_kmalloc_large(object, size, flags);
+ return __kasan_kmalloc_large(object, size, flags);
else
- return __kasan_kmalloc(page->slab_cache, object, size,
+ return ____kasan_kmalloc(page->slab_cache, object, size,
flags, true);
}

-void kasan_poison_kfree(void *ptr, unsigned long ip)
+void __kasan_poison_kfree(void *ptr, unsigned long ip)
{
struct page *page;

@@ -442,11 +442,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
}
poison_range(ptr, page_size(page), KASAN_FREE_PAGE);
} else {
- __kasan_slab_free(page->slab_cache, ptr, ip, false);
+ ____kasan_slab_free(page->slab_cache, ptr, ip, false);
}
}

-void kasan_kfree_large(void *ptr, unsigned long ip)
+void __kasan_kfree_large(void *ptr, unsigned long ip)
{
if (ptr != page_address(virt_to_head_page(ptr)))
kasan_report_invalid_free(ptr, ip);
--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:26:00

by Andrey Konovalov

Subject: [PATCH mm v3 18/19] kasan, mm: allow cache merging with no metadata

The reason cache merging is disabled with KASAN is that KASAN puts its
metadata right after the allocated object. When the merged caches have
slightly different sizes, the metadata ends up in different places, which
KASAN doesn't support.

It might be possible to adjust the metadata allocation algorithm and make
it friendly to the cache merging code. Instead this change takes a simpler
approach and allows merging caches when no metadata is present, which is
the case for hardware tag-based KASAN with kasan.mode=prod.
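
For illustration, a sketch with hypothetical caches, assuming
kasan.mode=prod (so no stack collection and no metadata):

	/* Neither cache carries KASAN metadata after its objects... */
	struct kmem_cache *a = kmem_cache_create("cache_a", 128, 0, 0, NULL);
	struct kmem_cache *b = kmem_cache_create("cache_b", 128, 0, 0, NULL);
	/*
	 * ...so with SLAB_KASAN no longer part of SLAB_NEVER_MERGE, the
	 * slab code is free to back both by a single merged cache.
	 */

With kasan.mode=full, kasan_never_merge() returns SLAB_KASAN and merging
stays disabled as before.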

Co-developed-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba
---
include/linux/kasan.h | 21 +++++++++++++++++++--
mm/kasan/common.c | 11 +++++++++++
mm/slab_common.c | 3 ++-
3 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 16cf53eac29b..173a8e81d001 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -81,17 +81,30 @@ struct kasan_cache {
};

#ifdef CONFIG_KASAN_HW_TAGS
+
DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
+
static __always_inline bool kasan_enabled(void)
{
return static_branch_likely(&kasan_flag_enabled);
}
-#else
+
+#else /* CONFIG_KASAN_HW_TAGS */
+
static inline bool kasan_enabled(void)
{
return true;
}
-#endif
+
+#endif /* CONFIG_KASAN_HW_TAGS */
+
+slab_flags_t __kasan_never_merge(void);
+static __always_inline slab_flags_t kasan_never_merge(void)
+{
+ if (kasan_enabled())
+ return __kasan_never_merge();
+ return 0;
+}

void __kasan_unpoison_range(const void *addr, size_t size);
static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
@@ -238,6 +251,10 @@ static inline bool kasan_enabled(void)
{
return false;
}
+static inline slab_flags_t kasan_never_merge(void)
+{
+ return 0;
+}
static inline void kasan_unpoison_range(const void *address, size_t size) {}
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index cf874243efab..a5a4dcb1254d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -87,6 +87,17 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
}
#endif /* CONFIG_KASAN_STACK */

+/*
+ * Only allow cache merging when stack collection is disabled and no metadata
+ * is present.
+ */
+slab_flags_t __kasan_never_merge(void)
+{
+ if (kasan_stack_collection_enabled())
+ return SLAB_KASAN;
+ return 0;
+}
+
void __kasan_alloc_pages(struct page *page, unsigned int order)
{
u8 tag;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0b5ae1819a8b..075b23ce94ec 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -19,6 +19,7 @@
#include <linux/seq_file.h>
#include <linux/proc_fs.h>
#include <linux/debugfs.h>
+#include <linux/kasan.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/page.h>
@@ -54,7 +55,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
*/
#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
- SLAB_FAILSLAB | SLAB_KASAN)
+ SLAB_FAILSLAB | kasan_never_merge())

#define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:26:10

by Andrey Konovalov

Subject: [PATCH mm v3 04/19] kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK

There's a config option CONFIG_KASAN_STACK that has to be enabled for
KASAN to use stack instrumentation and perform validity checks for
stack variables.

There's no need to unpoison the stack when CONFIG_KASAN_STACK is not enabled.
Only call kasan_unpoison_task_stack[_below]() when CONFIG_KASAN_STACK is
enabled.

Note that CONFIG_KASAN_STACK is an option that is currently always
defined (as 0 or 1) when CONFIG_KASAN is enabled, and therefore has to
be tested with #if instead of #ifdef.
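
A sketch of the distinction:

	#ifdef CONFIG_KASAN_STACK	/* true even when the option is 0 */
	#if CONFIG_KASAN_STACK		/* tests the value, as intended */

which is why the checks in this patch are spelled
"#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK".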

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Marco Elver <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3
---
arch/arm64/kernel/sleep.S | 2 +-
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
include/linux/kasan.h | 10 ++++++----
mm/kasan/common.c | 2 ++
4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ba40d57757d6..bdadfa56b40e 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -133,7 +133,7 @@ SYM_FUNC_START(_cpu_resume)
*/
bl cpu_do_resume

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
mov x0, sp
bl kasan_unpoison_task_stack_below
#endif
diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
index c8daa92f38dc..5d3a0b8fd379 100644
--- a/arch/x86/kernel/acpi/wakeup_64.S
+++ b/arch/x86/kernel/acpi/wakeup_64.S
@@ -112,7 +112,7 @@ SYM_FUNC_START(do_suspend_lowlevel)
movq pt_regs_r14(%rax), %r14
movq pt_regs_r15(%rax), %r15

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
/*
* The suspend path may have poisoned some areas deeper in the stack,
* which we now need to unpoison.
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 0c89e6fdd29e..f2109bf0c5f9 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -76,8 +76,6 @@ static inline void kasan_disable_current(void) {}

void kasan_unpoison_range(const void *address, size_t size);

-void kasan_unpoison_task_stack(struct task_struct *task);
-
void kasan_alloc_pages(struct page *page, unsigned int order);
void kasan_free_pages(struct page *page, unsigned int order);

@@ -122,8 +120,6 @@ void kasan_restore_multi_shot(bool enabled);

static inline void kasan_unpoison_range(const void *address, size_t size) {}

-static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
-
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}

@@ -175,6 +171,12 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }

#endif /* CONFIG_KASAN */

+#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
+void kasan_unpoison_task_stack(struct task_struct *task);
+#else
+static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
+#endif
+
#ifdef CONFIG_KASAN_GENERIC

void kasan_cache_shrink(struct kmem_cache *cache);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 0a420f1dbc54..7648a2452a01 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -64,6 +64,7 @@ void kasan_unpoison_range(const void *address, size_t size)
unpoison_range(address, size);
}

+#if CONFIG_KASAN_STACK
static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
{
void *base = task_stack_page(task);
@@ -90,6 +91,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)

unpoison_range(base, watermark - base);
}
+#endif /* CONFIG_KASAN_STACK */

void kasan_alloc_pages(struct page *page, unsigned int order)
{
--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:26:19

by Andrey Konovalov

[permalink] [raw]
Subject: [PATCH mm v3 19/19] kasan: update documentation

This change updates the KASAN documentation to reflect the addition of
boot parameters and also reworks and clarifies some of the existing
sections, in particular: defines what a memory granule is, mentions
quarantine, and makes the KUnit section more readable.

Signed-off-by: Andrey Konovalov <[email protected]>
Link: https://linux-review.googlesource.com/id/Ib1f83e91be273264b25f42b04448ac96b858849f
---
Documentation/dev-tools/kasan.rst | 186 +++++++++++++++++++-----------
1 file changed, 116 insertions(+), 70 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index ffbae8ce5748..0d5d77919b1a 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -4,8 +4,9 @@ The Kernel Address Sanitizer (KASAN)
Overview
--------

-KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
-find out-of-bound and use-after-free bugs. KASAN has three modes:
+KernelAddressSANitizer (KASAN) is a dynamic memory safety error detector
+designed to find out-of-bound and use-after-free bugs. KASAN has three modes:
+
1. generic KASAN (similar to userspace ASan),
2. software tag-based KASAN (similar to userspace HWASan),
3. hardware tag-based KASAN (based on hardware memory tagging).
@@ -39,23 +40,13 @@ CONFIG_KASAN_INLINE. Outline and inline are compiler instrumentation types.
The former produces smaller binary while the latter is 1.1 - 2 times faster.

Both software KASAN modes work with both SLUB and SLAB memory allocators,
-hardware tag-based KASAN currently only support SLUB.
-For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
+while hardware tag-based KASAN currently only supports SLUB.
+
+For better error reports that include stack traces, enable CONFIG_STACKTRACE.

To augment reports with last allocation and freeing stack of the physical page,
it is recommended to enable also CONFIG_PAGE_OWNER and boot with page_owner=on.

-To disable instrumentation for specific files or directories, add a line
-similar to the following to the respective kernel Makefile:
-
-- For a single file (e.g. main.o)::
-
- KASAN_SANITIZE_main.o := n
-
-- For all files in one directory::
-
- KASAN_SANITIZE := n
-
Error reports
~~~~~~~~~~~~~

@@ -140,22 +131,75 @@ freed (in case of a use-after-free bug report). Next comes a description of
the accessed slab object and information about the accessed memory page.

In the last section the report shows memory state around the accessed address.
-Reading this part requires some understanding of how KASAN works.
-
-The state of each 8 aligned bytes of memory is encoded in one shadow byte.
-Those 8 bytes can be accessible, partially accessible, freed or be a redzone.
-We use the following encoding for each shadow byte: 0 means that all 8 bytes
-of the corresponding memory region are accessible; number N (1 <= N <= 7) means
-that the first N bytes are accessible, and other (8 - N) bytes are not;
-any negative value indicates that the entire 8-byte word is inaccessible.
-We use different negative values to distinguish between different kinds of
-inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
+Internally KASAN tracks memory state separately for each memory granule, which
+is either 8 or 16 aligned bytes depending on KASAN mode. Each number in the
+memory state section of the report shows the state of one of the memory
+granules that surround the accessed address.
+
+For generic KASAN the size of each memory granule is 8. The state of each
+granule is encoded in one shadow byte. Those 8 bytes can be accessible,
+partially accessible, freed or be a part of a redzone. KASAN uses the following
+encoding for each shadow byte: 0 means that all 8 bytes of the corresponding
+memory region are accessible; number N (1 <= N <= 7) means that the first N
+bytes are accessible, and other (8 - N) bytes are not; any negative value
+indicates that the entire 8-byte word is inaccessible. KASAN uses different
+negative values to distinguish between different kinds of inaccessible memory
+like redzones or freed memory (see mm/kasan/kasan.h).

In the report above the arrows point to the shadow byte 03, which means that
the accessed address is partially accessible.

For tag-based KASAN this last report section shows the memory tags around the
-accessed address (see Implementation details section).
+accessed address (see `Implementation details`_ section).
+
+Boot parameters
+~~~~~~~~~~~~~~~
+
+Hardware tag-based KASAN mode (see the section about different modes below)
+is intended for use in production as a security mitigation. Therefore it
+supports boot parameters that allow disabling KASAN completely or otherwise
+controlling particular KASAN features.
+
+The things that can be controlled are:
+
+1. Whether KASAN is enabled at all.
+2. Whether KASAN collects and saves alloc/free stacks.
+3. Whether KASAN panics on a detected bug or not.
+
+The ``kasan.mode`` boot parameter allows choosing one of three main modes:
+
+- ``kasan.mode=off`` - KASAN is disabled, no tag checks are performed
+- ``kasan.mode=prod`` - only essential production features are enabled
+- ``kasan.mode=full`` - all KASAN features are enabled
+
+The chosen mode provides default control values for the features mentioned
+above. However, it's also possible to override the default values by providing:
+
+- ``kasan.stacktrace=off`` or ``=on`` - enable alloc/free stack collection
+ (default: ``on`` for ``mode=full``,
+ otherwise ``off``)
+- ``kasan.fault=report`` or ``=panic`` - only print KASAN report or also panic
+ (default: ``report``)
+
+If the ``kasan.mode`` parameter is not provided, it defaults to ``full`` when
+``CONFIG_DEBUG_KERNEL`` is enabled, and to ``prod`` otherwise.
+
+For developers
+~~~~~~~~~~~~~~
+
+Software KASAN modes use compiler instrumentation to insert validity checks.
+Such instrumentation might be incompatible with some parts of the kernel, and
+therefore needs to be disabled. To disable instrumentation for specific files
+or directories, add a line similar to the following to the respective kernel
+Makefile:
+
+- For a single file (e.g. main.o)::
+
+ KASAN_SANITIZE_main.o := n
+
+- For all files in one directory::
+
+ KASAN_SANITIZE := n


Implementation details
@@ -164,10 +208,10 @@ Implementation details
Generic KASAN
~~~~~~~~~~~~~

-From a high level, our approach to memory error detection is similar to that
-of kmemcheck: use shadow memory to record whether each byte of memory is safe
-to access, and use compile-time instrumentation to insert checks of shadow
-memory on each memory access.
+From a high level perspective, KASAN's approach to memory error detection is
+similar to that of kmemcheck: use shadow memory to record whether each byte of
+memory is safe to access, and use compile-time instrumentation to insert checks
+of shadow memory on each memory access.

Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
to cover 128TB on x86_64) and uses direct mapping with a scale and offset to
@@ -194,7 +238,10 @@ function calls GCC directly inserts the code to check the shadow memory.
This option significantly enlarges kernel but it gives x1.1-x2 performance
boost over outline instrumented kernel.

-Generic KASAN prints up to 2 call_rcu() call stacks in reports, the last one
+Generic KASAN is the only mode that delays the reuse of freed object via
+quarantine (see mm/kasan/quarantine.c for implementation).
+
+Generic KASAN prints up to two call_rcu() call stacks in reports, the last one
and the second to last.

Software tag-based KASAN
@@ -304,15 +351,15 @@ therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
``KASAN_GRANULE_SIZE * PAGE_SIZE``.

-Instead, we share backing space across multiple mappings. We allocate
+Instead, KASAN shares backing space across multiple mappings. It allocates
a backing page when a mapping in vmalloc space uses a particular page
of the shadow region. This page can be shared by other vmalloc
mappings later on.

-We hook in to the vmap infrastructure to lazily clean up unused shadow
+KASAN hooks into the vmap infrastructure to lazily clean up unused shadow
memory.

-To avoid the difficulties around swapping mappings around, we expect
+To avoid the difficulties around swapping mappings around, KASAN expects
that the part of the shadow region that covers the vmalloc space will
not be covered by the early shadow page, but will be left
unmapped. This will require changes in arch-specific code.
@@ -323,24 +370,31 @@ architectures that do not have a fixed module region.
CONFIG_KASAN_KUNIT_TEST & CONFIG_TEST_KASAN_MODULE
--------------------------------------------------

-``CONFIG_KASAN_KUNIT_TEST`` utilizes the KUnit Test Framework for testing.
-This means each test focuses on a small unit of functionality and
-there are a few ways these tests can be run.
+KASAN tests consist of two parts:
+
+1. Tests that are integrated with the KUnit Test Framework. Enabled with
+``CONFIG_KASAN_KUNIT_TEST``. These tests can be run and partially verified
+automatically in a few different ways, see the instructions below.

-Each test will print the KASAN report if an error is detected and then
-print the number of the test and the status of the test:
+2. Tests that are currently incompatible with KUnit. Enabled with
+``CONFIG_TEST_KASAN_MODULE`` and can only be run as a module. These tests can
+only be verified manually, by loading the kernel module and inspecting the
+kernel log for KASAN reports.

-pass::
+Each KUnit-compatible KASAN test prints a KASAN report if an error is detected.
+Then the test prints its number and status.
+
+When a test passes::

ok 28 - kmalloc_double_kzfree

-or, if kmalloc failed::
+When a test fails due to a failed ``kmalloc``::

# kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163
Expected ptr is not null, but is
not ok 4 - kmalloc_large_oob_right

-or, if a KASAN report was expected, but not found::
+When a test fails due to a missing KASAN report::

# kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629
Expected kasan_data->report_expected == kasan_data->report_found, but
@@ -348,46 +402,38 @@ or, if a KASAN report was expected, but not found::
kasan_data->report_found == 0
not ok 28 - kmalloc_double_kzfree

-All test statuses are tracked as they run and an overall status will
-be printed at the end::
+At the end, the cumulative status of all KASAN tests is printed. On success::

ok 1 - kasan

-or::
+Or, if one of the tests failed::

not ok 1 - kasan

-(1) Loadable Module
-~~~~~~~~~~~~~~~~~~~~
+
+There are a few ways to run KUnit-compatible KASAN tests.
+
+1. Loadable module
+~~~~~~~~~~~~~~~~~~

With ``CONFIG_KUNIT`` enabled, ``CONFIG_KASAN_KUNIT_TEST`` can be built as
-a loadable module and run on any architecture that supports KASAN
-using something like insmod or modprobe. The module is called ``test_kasan``.
+a loadable module and run on any architecture that supports KASAN by loading
+the module with insmod or modprobe. The module is called ``test_kasan``.
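
For example (assuming the module has been built), it can be loaded with
``modprobe test_kasan``, with the results appearing in the kernel log.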

-(2) Built-In
-~~~~~~~~~~~~~
+2. Built-In
+~~~~~~~~~~~

With ``CONFIG_KUNIT`` built-in, ``CONFIG_KASAN_KUNIT_TEST`` can be built-in
-on any architecure that supports KASAN. These and any other KUnit
-tests enabled will run and print the results at boot as a late-init
-call.
+on any architecture that supports KASAN. These and any other enabled KUnit tests
+will run and print the results at boot as a late-init call.

-(3) Using kunit_tool
-~~~~~~~~~~~~~~~~~~~~~
+3. Using kunit_tool
+~~~~~~~~~~~~~~~~~~~

-With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, we can also
-use kunit_tool to see the results of these along with other KUnit
-tests in a more readable way. This will not print the KASAN reports
-of tests that passed. Use `KUnit documentation <https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html>`_ for more up-to-date
-information on kunit_tool.
+With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, it's also
+possible to use ``kunit_tool`` to see the results of these and other KUnit tests
+in a more readable way. This will not print the KASAN reports of the tests that
+passed. Use `KUnit documentation <https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html>`_
+for more up-to-date information on ``kunit_tool``.
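
A typical invocation (see the linked documentation for the exact,
up-to-date syntax) is running ``./tools/testing/kunit/kunit.py run``
from the kernel source tree.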

.. _KUnit: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html
-
-``CONFIG_TEST_KASAN_MODULE`` is a set of KASAN tests that could not be
-converted to KUnit. These tests can be run only as a module with
-``CONFIG_TEST_KASAN_MODULE`` built as a loadable module and
-``CONFIG_KASAN`` built-in. The type of error expected and the
-function being run is printed before the expression expected to give
-an error. Then the error is printed, if found, and that test
-should be interpretted to pass only if the error was the one expected
-by the test.
--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:26:30

by Andrey Konovalov

[permalink] [raw]
Subject: [PATCH mm v3 02/19] kasan: rename get_alloc/free_info

Rename get_alloc_info() and get_free_info() to kasan_get_alloc_meta()
and kasan_get_free_meta() to better reflect what those do and avoid
confusion with kasan_set_free_info().

No functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Reviewed-by: Marco Elver <[email protected]>
Link: https://linux-review.googlesource.com/id/Ib6e4ba61c8b12112b403d3479a9799ac8fff8de1
---
mm/kasan/common.c | 16 ++++++++--------
mm/kasan/generic.c | 12 ++++++------
mm/kasan/hw_tags.c | 4 ++--
mm/kasan/kasan.h | 8 ++++----
mm/kasan/quarantine.c | 4 ++--
mm/kasan/report.c | 12 ++++++------
mm/kasan/report_sw_tags.c | 2 +-
mm/kasan/sw_tags.c | 4 ++--
8 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index e11fac2ee30c..8197399b0a1f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -181,14 +181,14 @@ size_t kasan_metadata_size(struct kmem_cache *cache)
sizeof(struct kasan_free_meta) : 0);
}

-struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
- const void *object)
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object)
{
return (void *)reset_tag(object) + cache->kasan_info.alloc_meta_offset;
}

-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
- const void *object)
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object)
{
BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
return (void *)reset_tag(object) + cache->kasan_info.free_meta_offset;
@@ -265,13 +265,13 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
{
- struct kasan_alloc_meta *alloc_info;
+ struct kasan_alloc_meta *alloc_meta;

if (!(cache->flags & SLAB_KASAN))
return (void *)object;

- alloc_info = get_alloc_info(cache, object);
- __memset(alloc_info, 0, sizeof(*alloc_info));
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));

if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
object = set_tag(object, assign_tag(cache, object, true, false));
@@ -357,7 +357,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);

if (cache->flags & SLAB_KASAN)
- kasan_set_track(&get_alloc_info(cache, object)->alloc_track, flags);
+ kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);

return set_tag(object, tag);
}
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index da3608187c25..9c6b77f8c4a4 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -329,7 +329,7 @@ void kasan_record_aux_stack(void *addr)
{
struct page *page = kasan_addr_to_page(addr);
struct kmem_cache *cache;
- struct kasan_alloc_meta *alloc_info;
+ struct kasan_alloc_meta *alloc_meta;
void *object;

if (is_kfence_address(addr) || !(page && PageSlab(page)))
@@ -337,13 +337,13 @@ void kasan_record_aux_stack(void *addr)

cache = page->slab_cache;
object = nearest_obj(cache, page, addr);
- alloc_info = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

/*
* record the last two call_rcu() call stacks.
*/
- alloc_info->aux_stack[1] = alloc_info->aux_stack[0];
- alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
+ alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0];
+ alloc_meta->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
}

void kasan_set_free_info(struct kmem_cache *cache,
@@ -351,7 +351,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
{
struct kasan_free_meta *free_meta;

- free_meta = get_free_info(cache, object);
+ free_meta = kasan_get_free_meta(cache, object);
kasan_set_track(&free_meta->free_track, GFP_NOWAIT);

/*
@@ -365,5 +365,5 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
{
if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_KMALLOC_FREETRACK)
return NULL;
- return &get_free_info(cache, object)->free_track;
+ return &kasan_get_free_meta(cache, object)->free_track;
}
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 3f9232464ed4..68e77363e58b 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -75,7 +75,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);
kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT);
}

@@ -84,6 +84,6 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
{
struct kasan_alloc_meta *alloc_meta;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);
return &alloc_meta->free_track[0];
}
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 13c511e85d5f..0eab7e4cecb8 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -149,10 +149,10 @@ struct kasan_free_meta {
#endif
};

-struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
- const void *object);
-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
- const void *object);
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object);
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object);

void poison_range(const void *address, size_t size, u8 value);
void unpoison_range(const void *address, size_t size);
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index a0792f0d6d0f..0da3d37e1589 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -166,7 +166,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
unsigned long flags;
struct qlist_head *q;
struct qlist_head temp = QLIST_INIT;
- struct kasan_free_meta *info = get_free_info(cache, object);
+ struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);

/*
* Note: irq must be disabled until after we move the batch to the
@@ -179,7 +179,7 @@ void quarantine_put(struct kmem_cache *cache, void *object)
local_irq_save(flags);

q = this_cpu_ptr(&cpu_quarantine);
- qlist_put(q, &info->quarantine_link, cache->size);
+ qlist_put(q, &meta->quarantine_link, cache->size);
if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
qlist_move_all(q, &temp);

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index a69c2827a125..df16bef0d810 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -164,12 +164,12 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
static void describe_object(struct kmem_cache *cache, void *object,
const void *addr, u8 tag)
{
- struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
+ struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);

if (cache->flags & SLAB_KASAN) {
struct kasan_track *free_track;

- print_track(&alloc_info->alloc_track, "Allocated");
+ print_track(&alloc_meta->alloc_track, "Allocated");
pr_err("\n");
free_track = kasan_get_free_track(cache, object, tag);
if (free_track) {
@@ -178,14 +178,14 @@ static void describe_object(struct kmem_cache *cache, void *object,
}

#ifdef CONFIG_KASAN_GENERIC
- if (alloc_info->aux_stack[0]) {
+ if (alloc_meta->aux_stack[0]) {
pr_err("Last call_rcu():\n");
- print_stack(alloc_info->aux_stack[0]);
+ print_stack(alloc_meta->aux_stack[0]);
pr_err("\n");
}
- if (alloc_info->aux_stack[1]) {
+ if (alloc_meta->aux_stack[1]) {
pr_err("Second to last call_rcu():\n");
- print_stack(alloc_info->aux_stack[1]);
+ print_stack(alloc_meta->aux_stack[1]);
pr_err("\n");
}
#endif
diff --git a/mm/kasan/report_sw_tags.c b/mm/kasan/report_sw_tags.c
index aebc44a29e83..317100fd95b9 100644
--- a/mm/kasan/report_sw_tags.c
+++ b/mm/kasan/report_sw_tags.c
@@ -46,7 +46,7 @@ const char *get_bug_type(struct kasan_access_info *info)
if (page && PageSlab(page)) {
cache = page->slab_cache;
object = nearest_obj(cache, page, (void *)addr);
- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

for (i = 0; i < KASAN_NR_FREE_STACKS; i++)
if (alloc_meta->free_pointer_tag[i] == tag)
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index a518483f3965..6d7648cc3b98 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -174,7 +174,7 @@ void kasan_set_free_info(struct kmem_cache *cache,
struct kasan_alloc_meta *alloc_meta;
u8 idx = 0;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
idx = alloc_meta->free_track_idx;
@@ -191,7 +191,7 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
struct kasan_alloc_meta *alloc_meta;
int i = 0;

- alloc_meta = get_alloc_info(cache, object);
+ alloc_meta = kasan_get_alloc_meta(cache, object);

#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:27:17

by Andrey Konovalov

[permalink] [raw]
Subject: [PATCH mm v3 03/19] kasan: introduce set_alloc_info

Add set_alloc_info() helper and move kasan_set_track() into it. This will
simplify the code for one of the upcoming changes.

No functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Reviewed-by: Marco Elver <[email protected]>
Link: https://linux-review.googlesource.com/id/I0316193cbb4ecc9b87b7c2eee0dd79f8ec908c1a
---
mm/kasan/common.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8197399b0a1f..0a420f1dbc54 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -327,6 +327,11 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return __kasan_slab_free(cache, object, ip, true);
}

+static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+ kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+}
+
static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags, bool keep_tag)
{
@@ -357,7 +362,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);

if (cache->flags & SLAB_KASAN)
- kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+ set_alloc_info(cache, (void *)object, flags);

return set_tag(object, tag);
}
--
2.29.2.299.gdc1121823c-goog

2020-11-13 22:27:24

by Andrey Konovalov

[permalink] [raw]
Subject: [PATCH mm v3 01/19] kasan: simplify quarantine_put call site

Move get_free_info() call into quarantine_put() to simplify the call site.

No functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Reviewed-by: Marco Elver <[email protected]>
Link: https://linux-review.googlesource.com/id/Iab0f04e7ebf8d83247024b7190c67c3c34c7940f
---
mm/kasan/common.c | 2 +-
mm/kasan/kasan.h | 5 ++---
mm/kasan/quarantine.c | 3 ++-
3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 998aede4d172..e11fac2ee30c 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -317,7 +317,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,

kasan_set_free_info(cache, object, tag);

- quarantine_put(get_free_info(cache, object), cache);
+ quarantine_put(cache, object);

return IS_ENABLED(CONFIG_KASAN_GENERIC);
}
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 64560cc71191..13c511e85d5f 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -216,12 +216,11 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,

#if defined(CONFIG_KASAN_GENERIC) && \
(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
+void quarantine_put(struct kmem_cache *cache, void *object);
void quarantine_reduce(void);
void quarantine_remove_cache(struct kmem_cache *cache);
#else
-static inline void quarantine_put(struct kasan_free_meta *info,
- struct kmem_cache *cache) { }
+static inline void quarantine_put(struct kmem_cache *cache, void *object) { }
static inline void quarantine_reduce(void) { }
static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
#endif
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 580ff5610fc1..a0792f0d6d0f 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -161,11 +161,12 @@ static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
qlist_init(q);
}

-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
+void quarantine_put(struct kmem_cache *cache, void *object)
{
unsigned long flags;
struct qlist_head *q;
struct qlist_head temp = QLIST_INIT;
+ struct kasan_free_meta *info = get_free_info(cache, object);

/*
* Note: irq must be disabled until after we move the batch to the
--
2.29.2.299.gdc1121823c-goog

2020-11-16 11:03:52

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH mm v3 04/19] kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK

On Fri, Nov 13, 2020 at 11:20 PM Andrey Konovalov <[email protected]> wrote:
>
> There's a config option CONFIG_KASAN_STACK that has to be enabled for
> KASAN to use stack instrumentation and perform validity checks for
> stack variables.
>
> There's no need to unpoison stack when CONFIG_KASAN_STACK is not enabled.
> Only call kasan_unpoison_task_stack[_below]() when CONFIG_KASAN_STACK is
> enabled.
>
> Note, that CONFIG_KASAN_STACK is an option that is currently always
> defined when CONFIG_KASAN is enabled, and therefore has to be tested
> with #if instead of #ifdef.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Reviewed-by: Marco Elver <[email protected]>

Reviewed-by: Dmitry Vyukov <[email protected]>

> Acked-by: Catalin Marinas <[email protected]>
> Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3
> ---
> arch/arm64/kernel/sleep.S | 2 +-
> arch/x86/kernel/acpi/wakeup_64.S | 2 +-
> include/linux/kasan.h | 10 ++++++----
> mm/kasan/common.c | 2 ++
> 4 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
> index ba40d57757d6..bdadfa56b40e 100644
> --- a/arch/arm64/kernel/sleep.S
> +++ b/arch/arm64/kernel/sleep.S
> @@ -133,7 +133,7 @@ SYM_FUNC_START(_cpu_resume)
> */
> bl cpu_do_resume
>
> -#ifdef CONFIG_KASAN
> +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
> mov x0, sp
> bl kasan_unpoison_task_stack_below
> #endif
> diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
> index c8daa92f38dc..5d3a0b8fd379 100644
> --- a/arch/x86/kernel/acpi/wakeup_64.S
> +++ b/arch/x86/kernel/acpi/wakeup_64.S
> @@ -112,7 +112,7 @@ SYM_FUNC_START(do_suspend_lowlevel)
> movq pt_regs_r14(%rax), %r14
> movq pt_regs_r15(%rax), %r15
>
> -#ifdef CONFIG_KASAN
> +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
> /*
> * The suspend path may have poisoned some areas deeper in the stack,
> * which we now need to unpoison.
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 0c89e6fdd29e..f2109bf0c5f9 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -76,8 +76,6 @@ static inline void kasan_disable_current(void) {}
>
> void kasan_unpoison_range(const void *address, size_t size);
>
> -void kasan_unpoison_task_stack(struct task_struct *task);
> -
> void kasan_alloc_pages(struct page *page, unsigned int order);
> void kasan_free_pages(struct page *page, unsigned int order);
>
> @@ -122,8 +120,6 @@ void kasan_restore_multi_shot(bool enabled);
>
> static inline void kasan_unpoison_range(const void *address, size_t size) {}
>
> -static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
> -
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>
> @@ -175,6 +171,12 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
>
> #endif /* CONFIG_KASAN */
>
> +#if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
> +void kasan_unpoison_task_stack(struct task_struct *task);
> +#else
> +static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
> +#endif
> +
> #ifdef CONFIG_KASAN_GENERIC
>
> void kasan_cache_shrink(struct kmem_cache *cache);
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 0a420f1dbc54..7648a2452a01 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -64,6 +64,7 @@ void kasan_unpoison_range(const void *address, size_t size)
> unpoison_range(address, size);
> }
>
> +#if CONFIG_KASAN_STACK
> static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
> {
> void *base = task_stack_page(task);
> @@ -90,6 +91,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
>
> unpoison_range(base, watermark - base);
> }
> +#endif /* CONFIG_KASAN_STACK */
>
> void kasan_alloc_pages(struct page *page, unsigned int order)
> {
> --
> 2.29.2.299.gdc1121823c-goog
>

2020-11-16 14:48:19

by Vincenzo Frascino

[permalink] [raw]
Subject: Re: [PATCH mm v3 00/19] kasan: boot parameters for hardware tag-based mode

On 11/13/20 10:19 PM, Andrey Konovalov wrote:
> === Overview
>
> Hardware tag-based KASAN mode [1] is intended to eventually be used in
> production as a security mitigation. Therefore there's a need for finer
> control over KASAN features and for an existence of a kill switch.
>
> This patchset adds a few boot parameters for hardware tag-based KASAN that
> allow to disable or otherwise control particular KASAN features, as well
> as provides some initial optimizations for running KASAN in production.
>
> There's another planned patchset what will further optimize hardware
> tag-based KASAN, provide proper benchmarking and tests, and will fully
> enable tag-based KASAN for production use.
>
> Hardware tag-based KASAN relies on arm64 Memory Tagging Extension (MTE)
> [2] to perform memory and pointer tagging. Please see [3] and [4] for
> detailed analysis of how MTE helps to fight memory safety problems.
>
> The features that can be controlled are:
>
> 1. Whether KASAN is enabled at all.
> 2. Whether KASAN collects and saves alloc/free stacks.
> 3. Whether KASAN panics on a detected bug or not.
>
> The patch titled "kasan: add and integrate kasan boot parameters" of this
> series adds a few new boot parameters.
>
> kasan.mode allows to choose one of three main modes:
>
> - kasan.mode=off - KASAN is disabled, no tag checks are performed
> - kasan.mode=prod - only essential production features are enabled
> - kasan.mode=full - all KASAN features are enabled
>
> The chosen mode provides default control values for the features mentioned
> above. However it's also possible to override the default values by
> providing:
>
> - kasan.stacktrace=off/on - enable stacks collection
> (default: on for mode=full, otherwise off)
> - kasan.fault=report/panic - only report tag fault or also panic
> (default: report)
>
> If kasan.mode parameter is not provided, it defaults to full when
> CONFIG_DEBUG_KERNEL is enabled, and to prod otherwise.
>
> It is essential that switching between these modes doesn't require
> rebuilding the kernel with different configs, as this is required by
> the Android GKI (Generic Kernel Image) initiative.
>

Tested-by: Vincenzo Frascino <[email protected]>

> === Benchmarks
>
> For now I've only performed a few simple benchmarks such as measuring
> kernel boot time and slab memory usage after boot. There's an upcoming
> patchset which will optimize KASAN further and include more detailed
> benchmarking results.
>
> The benchmarks were performed in QEMU and the results below exclude the
> slowdown caused by QEMU memory tagging emulation (as it's different from
> the slowdown that will be introduced by hardware and is therefore
> irrelevant).
>
> KASAN_HW_TAGS=y + kasan.mode=off introduces no performance or memory
> impact compared to KASAN_HW_TAGS=n.
>
> kasan.mode=prod (manually excluding tagging) introduces 3% of performance
> and no memory impact (except memory used by hardware to store tags)
> compared to kasan.mode=off.
>
> kasan.mode=full has about 40% performance and 30% memory impact over
> kasan.mode=prod. Both come from alloc/free stack collection.
>
> === Notes
>
> This patchset is available here:
>
> https://github.com/xairy/linux/tree/up-boot-mte-v3
>
> This patchset is based on v10 of "kasan: add hardware tag-based mode for
> arm64" patchset [1].
>
> For testing in QEMU hardware tag-based KASAN requires:
>
> 1. QEMU built from master [6] (use "-machine virt,mte=on -cpu max" arguments
> to run).
> 2. GCC version 10.
>
> [1] https://lkml.org/lkml/2020/11/13/1154
> [2] https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety
> [3] https://arxiv.org/pdf/1802.09517.pdf
> [4] https://github.com/microsoft/MSRC-Security-Research/blob/master/papers/2020/Security%20analysis%20of%20memory%20tagging.pdf
> [5] https://source.android.com/devices/architecture/kernel/generic-kernel-image
> [6] https://github.com/qemu/qemu
>
> === History
>
> Changes v2 -> v3:
> - Rebase onto v10 of the HW_TAGS series.
> - Add missing return type for kasan_enabled().
> - Always define random_tag() as a function.
> - Mark kasan wrappers as __always_inline.
> - Don't do "kasan: simplify kasan_poison_kfree" as it's based on a false
> assumption, add a comment instead.
> - Address documentation comments.
> - Use <linux/static_key.h> instead of <linux/jump_label.h>.
> - Rework switches in mm/kasan/hw_tags.c.
> - Don't init tag in ____kasan_kmalloc().
> - Correctly check SLAB_TYPESAFE_BY_RCU flag in mm/kasan/common.c.
> - Readability fixes for "kasan: clean up metadata allocation and usage".
> - Change kasan_never_merge() to return SLAB_KASAN instead of excluding it
> from flags.
> - (Vincenzo) Address concerns from checkpatch.pl (courtesy of Marco Elver).
>
> Changes v1 -> v2:
> - Rebased onto v9 of the HW_TAGS patchset.
> - Don't initialize static branches in kasan_init_hw_tags_cpu(), as
> cpu_enable_mte() can't sleep; do it in kasan_init_hw_tags() instead.
> - Rename kasan.stacks to kasan.stacktrace.
>
> Changes RFC v2 -> v1:
> - Rebrand the patchset from fully enabling production use to partially
> addressing that; another optimization and testing patchset will be
> required.
> - Rebase onto v8 of KASAN_HW_TAGS series.
> - Fix "ASYNC" -> "async" typo.
> - Rework depends condition for VMAP_STACK and update config text.
> - Remove unneeded reset_tag() macro, use kasan_reset_tag() instead.
> - Rename kasan.stack to kasan.stacks to avoid confusion with stack
> instrumentation.
> - Introduce kasan_stack_collection_enabled() and kasan_is_enabled()
> helpers.
> - Simplify kasan_stack_collection_enabled() usage.
> - Rework SLAB_KASAN flag and metadata allocation (see the corresponding
> patch for details).
> - Allow cache merging with KASAN_HW_TAGS when kasan.stacks is off.
> - Use sync mode by default for both prod and full KASAN modes.
> - Drop kasan.trap=sync/async boot parameter, as async mode isn't supported
> yet.
> - Choose prod or full mode depending on CONFIG_DEBUG_KERNEL when no
> kasan.mode boot parameter is provided.
> - Drop krealloc optimization changes, those will be included in a separate
> patchset.
> - Update KASAN documentation to mention boot parameters.
>
> Changes RFC v1 -> RFC v2:
> - Rework boot parameters.
> - Drop __init from empty kasan_init_tags() definition.
> - Add cpu_supports_mte() helper that can be used during early boot and use
> it in kasan_init_tags()
> - Lots of new KASAN optimization commits.
>
> Andrey Konovalov (19):
> kasan: simplify quarantine_put call site
> kasan: rename get_alloc/free_info
> kasan: introduce set_alloc_info
> kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK
> kasan: allow VMAP_STACK for HW_TAGS mode
> kasan: remove __kasan_unpoison_stack
> kasan: inline kasan_reset_tag for tag-based modes
> kasan: inline random_tag for HW_TAGS
> kasan: open-code kasan_unpoison_slab
> kasan: inline (un)poison_range and check_invalid_free
> kasan: add and integrate kasan boot parameters
> kasan, mm: check kasan_enabled in annotations
> kasan, mm: rename kasan_poison_kfree
> kasan: don't round_up too much
> kasan: simplify assign_tag and set_tag calls
> kasan: clarify comment in __kasan_kfree_large
> kasan: clean up metadata allocation and usage
> kasan, mm: allow cache merging with no metadata
> kasan: update documentation
>
> Documentation/dev-tools/kasan.rst | 186 ++++++++++++--------
> arch/Kconfig | 8 +-
> arch/arm64/kernel/sleep.S | 2 +-
> arch/x86/kernel/acpi/wakeup_64.S | 2 +-
> include/linux/kasan.h | 245 ++++++++++++++++++++------
> include/linux/mm.h | 22 ++-
> mm/kasan/common.c | 283 ++++++++++++++++++------------
> mm/kasan/generic.c | 27 +--
> mm/kasan/hw_tags.c | 185 +++++++++++++++----
> mm/kasan/kasan.h | 120 +++++++++----
> mm/kasan/quarantine.c | 13 +-
> mm/kasan/report.c | 61 ++++---
> mm/kasan/report_hw_tags.c | 2 +-
> mm/kasan/report_sw_tags.c | 15 +-
> mm/kasan/shadow.c | 5 +-
> mm/kasan/sw_tags.c | 17 +-
> mm/mempool.c | 4 +-
> mm/slab_common.c | 3 +-
> 18 files changed, 824 insertions(+), 376 deletions(-)
>

--
Regards,
Vincenzo

2020-11-16 15:14:02

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v3 10/19] kasan: inline (un)poison_range and check_invalid_free

On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> Using (un)poison_range() or check_invalid_free() currently results in
> function calls. Move their definitions to mm/kasan/kasan.h and turn them
> into static inline functions for hardware tag-based mode to avoid
> unneeded function calls.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/Ia9d8191024a12d1374675b3d27197f10193f50bb

Reviewed-by: Marco Elver <[email protected]>

> ---
> mm/kasan/hw_tags.c | 30 ------------------------------
> mm/kasan/kasan.h | 45 ++++++++++++++++++++++++++++++++++++++++-----
> 2 files changed, 40 insertions(+), 35 deletions(-)
>
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 3cdd87d189f6..863fed4edd3f 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -10,7 +10,6 @@
>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> -#include <linux/kfence.h>
> #include <linux/memory.h>
> #include <linux/mm.h>
> #include <linux/string.h>
> @@ -31,35 +30,6 @@ void __init kasan_init_hw_tags(void)
> pr_info("KernelAddressSanitizer initialized\n");
> }
>
> -void poison_range(const void *address, size_t size, u8 value)
> -{
> - /* Skip KFENCE memory if called explicitly outside of sl*b. */
> - if (is_kfence_address(address))
> - return;
> -
> - hw_set_mem_tag_range(kasan_reset_tag(address),
> - round_up(size, KASAN_GRANULE_SIZE), value);
> -}
> -
> -void unpoison_range(const void *address, size_t size)
> -{
> - /* Skip KFENCE memory if called explicitly outside of sl*b. */
> - if (is_kfence_address(address))
> - return;
> -
> - hw_set_mem_tag_range(kasan_reset_tag(address),
> - round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
> -}
> -
> -bool check_invalid_free(void *addr)
> -{
> - u8 ptr_tag = get_tag(addr);
> - u8 mem_tag = hw_get_mem_tag(addr);
> -
> - return (mem_tag == KASAN_TAG_INVALID) ||
> - (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
> -}
> -
> void kasan_set_free_info(struct kmem_cache *cache,
> void *object, u8 tag)
> {
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 7876a2547b7d..8aa83b7ad79e 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -3,6 +3,7 @@
> #define __MM_KASAN_KASAN_H
>
> #include <linux/kasan.h>
> +#include <linux/kfence.h>
> #include <linux/stackdepot.h>
>
> #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> @@ -154,9 +155,6 @@ struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> const void *object);
>
> -void poison_range(const void *address, size_t size, u8 value);
> -void unpoison_range(const void *address, size_t size);
> -
> #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>
> static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
> @@ -196,8 +194,6 @@ void print_tags(u8 addr_tag, const void *addr);
> static inline void print_tags(u8 addr_tag, const void *addr) { }
> #endif
>
> -bool check_invalid_free(void *addr);
> -
> void *find_first_bad_addr(void *addr, size_t size);
> const char *get_bug_type(struct kasan_access_info *info);
> void metadata_fetch_row(char *buffer, void *row);
> @@ -278,6 +274,45 @@ static inline u8 random_tag(void) { return hw_get_random_tag(); }
> static inline u8 random_tag(void) { return 0; }
> #endif
>
> +#ifdef CONFIG_KASAN_HW_TAGS
> +
> +static inline void poison_range(const void *address, size_t size, u8 value)
> +{
> + /* Skip KFENCE memory if called explicitly outside of sl*b. */
> + if (is_kfence_address(address))
> + return;
> +
> + hw_set_mem_tag_range(kasan_reset_tag(address),
> + round_up(size, KASAN_GRANULE_SIZE), value);
> +}
> +
> +static inline void unpoison_range(const void *address, size_t size)
> +{
> + /* Skip KFENCE memory if called explicitly outside of sl*b. */
> + if (is_kfence_address(address))
> + return;
> +
> + hw_set_mem_tag_range(kasan_reset_tag(address),
> + round_up(size, KASAN_GRANULE_SIZE), get_tag(address));
> +}
> +
> +static inline bool check_invalid_free(void *addr)
> +{
> + u8 ptr_tag = get_tag(addr);
> + u8 mem_tag = hw_get_mem_tag(addr);
> +
> + return (mem_tag == KASAN_TAG_INVALID) ||
> + (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag);
> +}
> +
> +#else /* CONFIG_KASAN_HW_TAGS */
> +
> +void poison_range(const void *address, size_t size, u8 value);
> +void unpoison_range(const void *address, size_t size);
> +bool check_invalid_free(void *addr);
> +
> +#endif /* CONFIG_KASAN_HW_TAGS */
> +
> /*
> * Exported functions for interfaces called from assembly or from generated
> * code. Declarations here to avoid warning about missing declarations.
> --
> 2.29.2.299.gdc1121823c-goog
>

2020-11-16 15:30:38

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v3 12/19] kasan, mm: check kasan_enabled in annotations

On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> Declare the kasan_enabled static key in include/linux/kasan.h and in
> include/linux/mm.h and check it in all kasan annotations. This allows to
> avoid any slowdown caused by function calls when kasan_enabled is
> disabled.
>
> Co-developed-by: Vincenzo Frascino <[email protected]>
> Signed-off-by: Vincenzo Frascino <[email protected]>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e

Reviewed-by: Marco Elver <[email protected]>

> ---
> include/linux/kasan.h | 213 ++++++++++++++++++++++++++++++++----------
> include/linux/mm.h | 22 +++--
> mm/kasan/common.c | 56 +++++------
> 3 files changed, 210 insertions(+), 81 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 872bf145ddde..6bd95243a583 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -2,6 +2,7 @@
> #ifndef _LINUX_KASAN_H
> #define _LINUX_KASAN_H
>
> +#include <linux/static_key.h>
> #include <linux/types.h>
>
> struct kmem_cache;
> @@ -74,54 +75,176 @@ static inline void kasan_disable_current(void) {}
>
> #ifdef CONFIG_KASAN
>
> -void kasan_unpoison_range(const void *address, size_t size);
> +struct kasan_cache {
> + int alloc_meta_offset;
> + int free_meta_offset;
> +};
>
> -void kasan_alloc_pages(struct page *page, unsigned int order);
> -void kasan_free_pages(struct page *page, unsigned int order);
> +#ifdef CONFIG_KASAN_HW_TAGS
> +DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> +static __always_inline bool kasan_enabled(void)
> +{
> + return static_branch_likely(&kasan_flag_enabled);
> +}
> +#else
> +static inline bool kasan_enabled(void)
> +{
> + return true;
> +}
> +#endif
>
> -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> - slab_flags_t *flags);
> +void __kasan_unpoison_range(const void *addr, size_t size);
> +static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
> +{
> + if (kasan_enabled())
> + __kasan_unpoison_range(addr, size);
> +}
>
> -void kasan_poison_slab(struct page *page);
> -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> -void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> - const void *object);
> +void __kasan_alloc_pages(struct page *page, unsigned int order);
> +static __always_inline void kasan_alloc_pages(struct page *page,
> + unsigned int order)
> +{
> + if (kasan_enabled())
> + __kasan_alloc_pages(page, order);
> +}
>
> -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> - gfp_t flags);
> -void kasan_kfree_large(void *ptr, unsigned long ip);
> -void kasan_poison_kfree(void *ptr, unsigned long ip);
> -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> - size_t size, gfp_t flags);
> -void * __must_check kasan_krealloc(const void *object, size_t new_size,
> - gfp_t flags);
> +void __kasan_free_pages(struct page *page, unsigned int order);
> +static __always_inline void kasan_free_pages(struct page *page,
> + unsigned int order)
> +{
> + if (kasan_enabled())
> + __kasan_free_pages(page, order);
> +}
>
> -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
> - gfp_t flags);
> -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags);
> +static __always_inline void kasan_cache_create(struct kmem_cache *cache,
> + unsigned int *size, slab_flags_t *flags)
> +{
> + if (kasan_enabled())
> + __kasan_cache_create(cache, size, flags);
> +}
>
> -struct kasan_cache {
> - int alloc_meta_offset;
> - int free_meta_offset;
> -};
> +size_t __kasan_metadata_size(struct kmem_cache *cache);
> +static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
> +{
> + if (kasan_enabled())
> + return __kasan_metadata_size(cache);
> + return 0;
> +}
> +
> +void __kasan_poison_slab(struct page *page);
> +static __always_inline void kasan_poison_slab(struct page *page)
> +{
> + if (kasan_enabled())
> + return __kasan_poison_slab(page);
> +}
> +
> +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> +static __always_inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> + void *object)
> +{
> + if (kasan_enabled())
> + return __kasan_unpoison_object_data(cache, object);
> +}
> +
> +void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
> +static __always_inline void kasan_poison_object_data(struct kmem_cache *cache,
> + void *object)
> +{
> + if (kasan_enabled())
> + __kasan_poison_object_data(cache, object);
> +}
> +
> +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> + const void *object);
> +static __always_inline void * __must_check kasan_init_slab_obj(
> + struct kmem_cache *cache, const void *object)
> +{
> + if (kasan_enabled())
> + return __kasan_init_slab_obj(cache, object);
> + return (void *)object;
> +}
> +
> +bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> +static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> + unsigned long ip)
> +{
> + if (kasan_enabled())
> + return __kasan_slab_free(s, object, ip);
> + return false;
> +}
> +
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> + void *object, gfp_t flags);
> +static __always_inline void * __must_check kasan_slab_alloc(
> + struct kmem_cache *s, void *object, gfp_t flags)
> +{
> + if (kasan_enabled())
> + return __kasan_slab_alloc(s, object, flags);
> + return object;
> +}
> +
> +void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags);
> +static __always_inline void * __must_check kasan_kmalloc(struct kmem_cache *s,
> + const void *object, size_t size, gfp_t flags)
> +{
> + if (kasan_enabled())
> + return __kasan_kmalloc(s, object, size, flags);
> + return (void *)object;
> +}
>
> -size_t kasan_metadata_size(struct kmem_cache *cache);
> +void * __must_check __kasan_kmalloc_large(const void *ptr,
> + size_t size, gfp_t flags);
> +static __always_inline void * __must_check kasan_kmalloc_large(const void *ptr,
> + size_t size, gfp_t flags)
> +{
> + if (kasan_enabled())
> + return __kasan_kmalloc_large(ptr, size, flags);
> + return (void *)ptr;
> +}
> +
> +void * __must_check __kasan_krealloc(const void *object,
> + size_t new_size, gfp_t flags);
> +static __always_inline void * __must_check kasan_krealloc(const void *object,
> + size_t new_size, gfp_t flags)
> +{
> + if (kasan_enabled())
> + return __kasan_krealloc(object, new_size, flags);
> + return (void *)object;
> +}
> +
> +void __kasan_poison_kfree(void *ptr, unsigned long ip);
> +static __always_inline void kasan_poison_kfree(void *ptr, unsigned long ip)
> +{
> + if (kasan_enabled())
> + __kasan_poison_kfree(ptr, ip);
> +}
> +
> +void __kasan_kfree_large(void *ptr, unsigned long ip);
> +static __always_inline void kasan_kfree_large(void *ptr, unsigned long ip)
> +{
> + if (kasan_enabled())
> + __kasan_kfree_large(ptr, ip);
> +}
>
> bool kasan_save_enable_multi_shot(void);
> void kasan_restore_multi_shot(bool enabled);
>
> #else /* CONFIG_KASAN */
>
> +static inline bool kasan_enabled(void)
> +{
> + return false;
> +}
> static inline void kasan_unpoison_range(const void *address, size_t size) {}
> -
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> -
> static inline void kasan_cache_create(struct kmem_cache *cache,
> unsigned int *size,
> slab_flags_t *flags) {}
> -
> +static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> static inline void kasan_poison_slab(struct page *page) {}
> static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> void *object) {}
> @@ -132,36 +255,32 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
> {
> return (void *)object;
> }
> -
> -static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
> +static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> + unsigned long ip)
> {
> - return ptr;
> + return false;
> +}
> +static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> + gfp_t flags)
> +{
> + return object;
> }
> -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> size_t size, gfp_t flags)
> {
> return (void *)object;
> }
> +static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
> +{
> + return (void *)ptr;
> +}
> static inline void *kasan_krealloc(const void *object, size_t new_size,
> gfp_t flags)
> {
> return (void *)object;
> }
> -
> -static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> - gfp_t flags)
> -{
> - return object;
> -}
> -static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> - unsigned long ip)
> -{
> - return false;
> -}
> -
> -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
>
> #endif /* CONFIG_KASAN */
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 947f4f1a6536..24f47e140a4c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -31,6 +31,7 @@
> #include <linux/sizes.h>
> #include <linux/sched.h>
> #include <linux/pgtable.h>
> +#include <linux/kasan.h>
>
> struct mempolicy;
> struct anon_vma;
> @@ -1415,22 +1416,30 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
> #endif /* CONFIG_NUMA_BALANCING */
>
> #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +
> static inline u8 page_kasan_tag(const struct page *page)
> {
> - return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> + if (kasan_enabled())
> + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> + return 0xff;
> }
>
> static inline void page_kasan_tag_set(struct page *page, u8 tag)
> {
> - page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> - page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> + if (kasan_enabled()) {
> + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> + }
> }
>
> static inline void page_kasan_tag_reset(struct page *page)
> {
> - page_kasan_tag_set(page, 0xff);
> + if (kasan_enabled())
> + page_kasan_tag_set(page, 0xff);
> }
> -#else
> +
> +#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> +
> static inline u8 page_kasan_tag(const struct page *page)
> {
> return 0xff;
> @@ -1438,7 +1447,8 @@ static inline u8 page_kasan_tag(const struct page *page)
>
> static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
> static inline void page_kasan_tag_reset(struct page *page) { }
> -#endif
> +
> +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
>
> static inline struct zone *page_zone(const struct page *page)
> {
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a11e3e75eb08..17918bd20ed9 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -59,7 +59,7 @@ void kasan_disable_current(void)
> }
> #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
>
> -void kasan_unpoison_range(const void *address, size_t size)
> +void __kasan_unpoison_range(const void *address, size_t size)
> {
> unpoison_range(address, size);
> }
> @@ -87,7 +87,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
> }
> #endif /* CONFIG_KASAN_STACK */
>
> -void kasan_alloc_pages(struct page *page, unsigned int order)
> +void __kasan_alloc_pages(struct page *page, unsigned int order)
> {
> u8 tag;
> unsigned long i;
> @@ -101,7 +101,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
> unpoison_range(page_address(page), PAGE_SIZE << order);
> }
>
> -void kasan_free_pages(struct page *page, unsigned int order)
> +void __kasan_free_pages(struct page *page, unsigned int order)
> {
> if (likely(!PageHighMem(page)))
> poison_range(page_address(page),
> @@ -128,8 +128,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
> object_size <= (1 << 16) - 1024 ? 1024 : 2048;
> }
>
> -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> - slab_flags_t *flags)
> +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags)
> {
> unsigned int orig_size = *size;
> unsigned int redzone_size;
> @@ -174,7 +174,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> *flags |= SLAB_KASAN;
> }
>
> -size_t kasan_metadata_size(struct kmem_cache *cache)
> +size_t __kasan_metadata_size(struct kmem_cache *cache)
> {
> if (!kasan_stack_collection_enabled())
> return 0;
> @@ -197,7 +197,7 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset;
> }
>
> -void kasan_poison_slab(struct page *page)
> +void __kasan_poison_slab(struct page *page)
> {
> unsigned long i;
>
> @@ -207,12 +207,12 @@ void kasan_poison_slab(struct page *page)
> KASAN_KMALLOC_REDZONE);
> }
>
> -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> {
> unpoison_range(object, cache->object_size);
> }
>
> -void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> +void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> {
> poison_range(object,
> round_up(cache->object_size, KASAN_GRANULE_SIZE),
> @@ -265,7 +265,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
> #endif
> }
>
> -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> const void *object)
> {
> struct kasan_alloc_meta *alloc_meta;
> @@ -284,7 +284,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> return (void *)object;
> }
>
> -static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> +static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> unsigned long ip, bool quarantine)
> {
> u8 tag;
> @@ -330,9 +330,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> return IS_ENABLED(CONFIG_KASAN_GENERIC);
> }
>
> -bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> +bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> {
> - return __kasan_slab_free(cache, object, ip, true);
> + return ____kasan_slab_free(cache, object, ip, true);
> }
>
> static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> @@ -340,7 +340,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> }
>
> -static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> +static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> size_t size, gfp_t flags, bool keep_tag)
> {
> unsigned long redzone_start;
> @@ -375,20 +375,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> return set_tag(object, tag);
> }
>
> -void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
> - gfp_t flags)
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> + void *object, gfp_t flags)
> {
> - return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
> + return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
> }
>
> -void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
> - size_t size, gfp_t flags)
> +void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
> + size_t size, gfp_t flags)
> {
> - return __kasan_kmalloc(cache, object, size, flags, true);
> + return ____kasan_kmalloc(cache, object, size, flags, true);
> }
> -EXPORT_SYMBOL(kasan_kmalloc);
> +EXPORT_SYMBOL(__kasan_kmalloc);
>
> -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> gfp_t flags)
> {
> struct page *page;
> @@ -413,7 +413,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> return (void *)ptr;
> }
>
> -void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> +void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> {
> struct page *page;
>
> @@ -423,13 +423,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> page = virt_to_head_page(object);
>
> if (unlikely(!PageSlab(page)))
> - return kasan_kmalloc_large(object, size, flags);
> + return __kasan_kmalloc_large(object, size, flags);
> else
> - return __kasan_kmalloc(page->slab_cache, object, size,
> + return ____kasan_kmalloc(page->slab_cache, object, size,
> flags, true);
> }
>
> -void kasan_poison_kfree(void *ptr, unsigned long ip)
> +void __kasan_poison_kfree(void *ptr, unsigned long ip)
> {
> struct page *page;
>
> @@ -442,11 +442,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
> }
> poison_range(ptr, page_size(page), KASAN_FREE_PAGE);
> } else {
> - __kasan_slab_free(page->slab_cache, ptr, ip, false);
> + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> }
> }
>
> -void kasan_kfree_large(void *ptr, unsigned long ip)
> +void __kasan_kfree_large(void *ptr, unsigned long ip)
> {
> if (ptr != page_address(virt_to_head_page(ptr)))
> kasan_report_invalid_free(ptr, ip);
> --
> 2.29.2.299.gdc1121823c-goog
>

2020-11-16 15:50:32

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v3 18/19] kasan, mm: allow cache merging with no metadata

On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> The reason cache merging is disabled with KASAN is because KASAN puts its
> metadata right after the allocated object. When the merged caches have
> slightly different sizes, the metadata ends up in different places, which
> KASAN doesn't support.
>
> It might be possible to adjust the metadata allocation algorithm and make
> it friendly to the cache merging code. Instead this change takes a simpler
> approach and allows merging caches when no metadata is present. Which is
> the case for hardware tag-based KASAN with kasan.mode=prod.
>
> Co-developed-by: Vincenzo Frascino <[email protected]>
> Signed-off-by: Vincenzo Frascino <[email protected]>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba

Reviewed-by: Marco Elver <[email protected]>

> ---
> include/linux/kasan.h | 21 +++++++++++++++++++--
> mm/kasan/common.c | 11 +++++++++++
> mm/slab_common.c | 3 ++-
> 3 files changed, 32 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 16cf53eac29b..173a8e81d001 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -81,17 +81,30 @@ struct kasan_cache {
> };
>
> #ifdef CONFIG_KASAN_HW_TAGS
> +
> DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> +
> static __always_inline bool kasan_enabled(void)
> {
> return static_branch_likely(&kasan_flag_enabled);
> }
> -#else
> +
> +#else /* CONFIG_KASAN_HW_TAGS */
> +
> static inline bool kasan_enabled(void)
> {
> return true;
> }
> -#endif
> +
> +#endif /* CONFIG_KASAN_HW_TAGS */
> +
> +slab_flags_t __kasan_never_merge(void);
> +static __always_inline slab_flags_t kasan_never_merge(void)
> +{
> + if (kasan_enabled())
> + return __kasan_never_merge();
> + return 0;
> +}
>
> void __kasan_unpoison_range(const void *addr, size_t size);
> static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
> @@ -238,6 +251,10 @@ static inline bool kasan_enabled(void)
> {
> return false;
> }
> +static inline slab_flags_t kasan_never_merge(void)
> +{
> + return 0;
> +}
> static inline void kasan_unpoison_range(const void *address, size_t size) {}
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index cf874243efab..a5a4dcb1254d 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -87,6 +87,17 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
> }
> #endif /* CONFIG_KASAN_STACK */
>
> +/*
> + * Only allow cache merging when stack collection is disabled and no metadata
> + * is present.
> + */
> +slab_flags_t __kasan_never_merge(void)
> +{
> + if (kasan_stack_collection_enabled())
> + return SLAB_KASAN;
> + return 0;
> +}
> +
> void __kasan_alloc_pages(struct page *page, unsigned int order)
> {
> u8 tag;
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 0b5ae1819a8b..075b23ce94ec 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -19,6 +19,7 @@
> #include <linux/seq_file.h>
> #include <linux/proc_fs.h>
> #include <linux/debugfs.h>
> +#include <linux/kasan.h>
> #include <asm/cacheflush.h>
> #include <asm/tlbflush.h>
> #include <asm/page.h>
> @@ -54,7 +55,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
> */
> #define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
> SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
> - SLAB_FAILSLAB | SLAB_KASAN)
> + SLAB_FAILSLAB | kasan_never_merge())
>
> #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> --
> 2.29.2.299.gdc1121823c-goog
>

2020-11-16 15:51:26

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v3 19/19] kasan: update documentation

On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> This change updates KASAN documentation to reflect the addition of boot
> parameters and also reworks and clarifies some of the existing sections,
> in particular: defines what a memory granule is, mentions quarantine,
> makes Kunit section more readable.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/Ib1f83e91be273264b25f42b04448ac96b858849f

Reviewed-by: Marco Elver <[email protected]>

> ---
> Documentation/dev-tools/kasan.rst | 186 +++++++++++++++++++-----------
> 1 file changed, 116 insertions(+), 70 deletions(-)
>
> diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> index ffbae8ce5748..0d5d77919b1a 100644
> --- a/Documentation/dev-tools/kasan.rst
> +++ b/Documentation/dev-tools/kasan.rst
> @@ -4,8 +4,9 @@ The Kernel Address Sanitizer (KASAN)
> Overview
> --------
>
> -KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
> -find out-of-bound and use-after-free bugs. KASAN has three modes:
> +KernelAddressSANitizer (KASAN) is a dynamic memory safety error detector
> +designed to find out-of-bounds and use-after-free bugs. KASAN has three modes:
> +
> 1. generic KASAN (similar to userspace ASan),
> 2. software tag-based KASAN (similar to userspace HWASan),
> 3. hardware tag-based KASAN (based on hardware memory tagging).
> @@ -39,23 +40,13 @@ CONFIG_KASAN_INLINE. Outline and inline are compiler instrumentation types.
> The former produces smaller binary while the latter is 1.1 - 2 times faster.
>
> Both software KASAN modes work with both SLUB and SLAB memory allocators,
> -hardware tag-based KASAN currently only support SLUB.
> -For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
> +while the hardware tag-based KASAN currently only supports SLUB.
> +
> +For better error reports that include stack traces, enable CONFIG_STACKTRACE.
>
> To augment reports with last allocation and freeing stack of the physical page,
> it is recommended to enable also CONFIG_PAGE_OWNER and boot with page_owner=on.
>
> -To disable instrumentation for specific files or directories, add a line
> -similar to the following to the respective kernel Makefile:
> -
> -- For a single file (e.g. main.o)::
> -
> - KASAN_SANITIZE_main.o := n
> -
> -- For all files in one directory::
> -
> - KASAN_SANITIZE := n
> -
> Error reports
> ~~~~~~~~~~~~~
>
> @@ -140,22 +131,75 @@ freed (in case of a use-after-free bug report). Next comes a description of
> the accessed slab object and information about the accessed memory page.
>
> In the last section the report shows memory state around the accessed address.
> -Reading this part requires some understanding of how KASAN works.
> -
> -The state of each 8 aligned bytes of memory is encoded in one shadow byte.
> -Those 8 bytes can be accessible, partially accessible, freed or be a redzone.
> -We use the following encoding for each shadow byte: 0 means that all 8 bytes
> -of the corresponding memory region are accessible; number N (1 <= N <= 7) means
> -that the first N bytes are accessible, and other (8 - N) bytes are not;
> -any negative value indicates that the entire 8-byte word is inaccessible.
> -We use different negative values to distinguish between different kinds of
> -inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
> +Internally KASAN tracks memory state separately for each memory granule, which
> +is either 8 or 16 aligned bytes depending on KASAN mode. Each number in the
> +memory state section of the report shows the state of one of the memory
> +granules that surround the accessed address.
> +
> +For generic KASAN the size of each memory granule is 8 bytes. The state of each
> +granule is encoded in one shadow byte. Those 8 bytes can be accessible,
> +partially accessible, freed or be a part of a redzone. KASAN uses the following
> +encoding for each shadow byte: 0 means that all 8 bytes of the corresponding
> +memory region are accessible; number N (1 <= N <= 7) means that the first N
> +bytes are accessible, and other (8 - N) bytes are not; any negative value
> +indicates that the entire 8-byte word is inaccessible. KASAN uses different
> +negative values to distinguish between different kinds of inaccessible memory
> +like redzones or freed memory (see mm/kasan/kasan.h).
>
> In the report above the arrows point to the shadow byte 03, which means that
> the accessed address is partially accessible.
>
> For tag-based KASAN this last report section shows the memory tags around the
> -accessed address (see Implementation details section).
> +accessed address (see `Implementation details`_ section).
> +
> +Boot parameters
> +~~~~~~~~~~~~~~~
> +
> +Hardware tag-based KASAN mode (see the section about different modes below) is
> +intended for use in production as a security mitigation. Therefore it supports
> +boot parameters that allow disabling KASAN completely or otherwise controlling
> +particular KASAN features.
> +
> +The things that can be controlled are:
> +
> +1. Whether KASAN is enabled at all.
> +2. Whether KASAN collects and saves alloc/free stacks.
> +3. Whether KASAN panics on a detected bug or not.
> +
> +The ``kasan.mode`` boot parameter allows choosing one of three main modes:
> +
> +- ``kasan.mode=off`` - KASAN is disabled, no tag checks are performed
> +- ``kasan.mode=prod`` - only essential production features are enabled
> +- ``kasan.mode=full`` - all KASAN features are enabled
> +
> +The chosen mode provides default control values for the features mentioned
> +above. However it's also possible to override the default values by providing:
> +
> +- ``kasan.stacktrace=off`` or ``=on`` - enable alloc/free stack collection
> + (default: ``on`` for ``mode=full``,
> + otherwise ``off``)
> +- ``kasan.fault=report`` or ``=panic`` - only print KASAN report or also panic
> + (default: ``report``)
> +
> +If the ``kasan.mode`` parameter is not provided, it defaults to ``full`` when
> +``CONFIG_DEBUG_KERNEL`` is enabled, and to ``prod`` otherwise.
> +
> +For developers
> +~~~~~~~~~~~~~~
> +
> +Software KASAN modes use compiler instrumentation to insert validity checks.
> +Such instrumentation might be incompatible with some parts of the kernel, and
> +therefore needs to be disabled. To disable instrumentation for specific files
> +or directories, add a line similar to the following to the respective kernel
> +Makefile:
> +
> +- For a single file (e.g. main.o)::
> +
> + KASAN_SANITIZE_main.o := n
> +
> +- For all files in one directory::
> +
> + KASAN_SANITIZE := n
>
>
> Implementation details
> @@ -164,10 +208,10 @@ Implementation details
> Generic KASAN
> ~~~~~~~~~~~~~
>
> -From a high level, our approach to memory error detection is similar to that
> -of kmemcheck: use shadow memory to record whether each byte of memory is safe
> -to access, and use compile-time instrumentation to insert checks of shadow
> -memory on each memory access.
> +From a high level perspective, KASAN's approach to memory error detection is
> +similar to that of kmemcheck: use shadow memory to record whether each byte of
> +memory is safe to access, and use compile-time instrumentation to insert checks
> +of shadow memory on each memory access.
>
> Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
> to cover 128TB on x86_64) and uses direct mapping with a scale and offset to
> @@ -194,7 +238,10 @@ function calls GCC directly inserts the code to check the shadow memory.
> This option significantly enlarges kernel but it gives x1.1-x2 performance
> boost over outline instrumented kernel.
>
> -Generic KASAN prints up to 2 call_rcu() call stacks in reports, the last one
> +Generic KASAN is the only mode that delays the reuse of freed objects via
> +quarantine (see mm/kasan/quarantine.c for implementation).
> +
> +Generic KASAN prints up to two call_rcu() call stacks in reports, the last one
> and the second to last.
>
> Software tag-based KASAN
> @@ -304,15 +351,15 @@ therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> ``KASAN_GRANULE_SIZE * PAGE_SIZE``.
>
> -Instead, we share backing space across multiple mappings. We allocate
> +Instead, KASAN shares backing space across multiple mappings. It allocates
> a backing page when a mapping in vmalloc space uses a particular page
> of the shadow region. This page can be shared by other vmalloc
> mappings later on.
>
> -We hook in to the vmap infrastructure to lazily clean up unused shadow
> +KASAN hooks into the vmap infrastructure to lazily clean up unused shadow
> memory.
>
> -To avoid the difficulties around swapping mappings around, we expect
> +To avoid the difficulties around swapping mappings around, KASAN expects
> that the part of the shadow region that covers the vmalloc space will
> not be covered by the early shadow page, but will be left
> unmapped. This will require changes in arch-specific code.
> @@ -323,24 +370,31 @@ architectures that do not have a fixed module region.
> CONFIG_KASAN_KUNIT_TEST & CONFIG_TEST_KASAN_MODULE
> --------------------------------------------------
>
> -``CONFIG_KASAN_KUNIT_TEST`` utilizes the KUnit Test Framework for testing.
> -This means each test focuses on a small unit of functionality and
> -there are a few ways these tests can be run.
> +KASAN tests consist of two parts:
> +
> +1. Tests that are integrated with the KUnit Test Framework. Enabled with
> +``CONFIG_KASAN_KUNIT_TEST``. These tests can be run and partially verified
> +automatically in a few different ways; see the instructions below.
>
> -Each test will print the KASAN report if an error is detected and then
> -print the number of the test and the status of the test:
> +2. Tests that are currently incompatible with KUnit. Enabled with
> +``CONFIG_TEST_KASAN_MODULE`` and can only be run as a module. These tests can
> +only be verified manually, by loading the kernel module and inspecting the
> +kernel log for KASAN reports.
>
> -pass::
> +Each KUnit-compatible KASAN test prints a KASAN report if an error is detected.
> +Then the test prints its number and status.
> +
> +When a test passes::
>
> ok 28 - kmalloc_double_kzfree
>
> -or, if kmalloc failed::
> +When a test fails due to a failed ``kmalloc``::
>
> # kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163
> Expected ptr is not null, but is
> not ok 4 - kmalloc_large_oob_right
>
> -or, if a KASAN report was expected, but not found::
> +When a test fails due to a missing KASAN report::
>
> # kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629
> Expected kasan_data->report_expected == kasan_data->report_found, but
> @@ -348,46 +402,38 @@ or, if a KASAN report was expected, but not found::
> kasan_data->report_found == 0
> not ok 28 - kmalloc_double_kzfree
>
> -All test statuses are tracked as they run and an overall status will
> -be printed at the end::
> +At the end the cumulative status of all KASAN tests is printed. On success::
>
> ok 1 - kasan
>
> -or::
> +Or, if one of the tests failed::
>
> not ok 1 - kasan
>
> -(1) Loadable Module
> -~~~~~~~~~~~~~~~~~~~~
> +
> +There are a few ways to run KUnit-compatible KASAN tests.
> +
> +1. Loadable module
> +~~~~~~~~~~~~~~~~~~
>
> With ``CONFIG_KUNIT`` enabled, ``CONFIG_KASAN_KUNIT_TEST`` can be built as
> -a loadable module and run on any architecture that supports KASAN
> -using something like insmod or modprobe. The module is called ``test_kasan``.
> +a loadable module and run on any architecture that supports KASAN by loading
> +the module with insmod or modprobe. The module is called ``test_kasan``.
>
> -(2) Built-In
> -~~~~~~~~~~~~~
> +2. Built-In
> +~~~~~~~~~~~
>
> With ``CONFIG_KUNIT`` built-in, ``CONFIG_KASAN_KUNIT_TEST`` can be built-in
> -on any architecure that supports KASAN. These and any other KUnit
> -tests enabled will run and print the results at boot as a late-init
> -call.
> +on any architecture that supports KASAN. These and any other KUnit tests enabled
> +will run and print the results at boot as a late-init call.
>
> -(3) Using kunit_tool
> -~~~~~~~~~~~~~~~~~~~~~
> +3. Using kunit_tool
> +~~~~~~~~~~~~~~~~~~~
>
> -With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, we can also
> -use kunit_tool to see the results of these along with other KUnit
> -tests in a more readable way. This will not print the KASAN reports
> -of tests that passed. Use `KUnit documentation <https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html>`_ for more up-to-date
> -information on kunit_tool.
> +With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, it's also
> +possible to use ``kunit_tool`` to see the results of these and other KUnit tests
> +in a more readable way. This will not print the KASAN reports of the tests that
> +passed. Use `KUnit documentation <https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html>`_
> +for more up-to-date information on ``kunit_tool``.
>
> .. _KUnit: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html
> -
> -``CONFIG_TEST_KASAN_MODULE`` is a set of KASAN tests that could not be
> -converted to KUnit. These tests can be run only as a module with
> -``CONFIG_TEST_KASAN_MODULE`` built as a loadable module and
> -``CONFIG_KASAN`` built-in. The type of error expected and the
> -function being run is printed before the expression expected to give
> -an error. Then the error is printed, if found, and that test
> -should be interpretted to pass only if the error was the one expected
> -by the test.
> --
> 2.29.2.299.gdc1121823c-goog
>
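
A quick usage illustration of the documented parameters (an assumed
command line, not an example from the patch): booting with

	kasan.mode=prod kasan.stacktrace=on kasan.fault=panic

keeps the production preset but overrides it to also collect alloc/free
stacks and to panic on the first detected bug, while booting with

	kasan.mode=off

turns off all tag checks without rebuilding the kernel.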

2020-11-16 20:28:33

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH mm v3 11/19] kasan: add and integrate kasan boot parameters

On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> Hardware tag-based KASAN mode is intended to eventually be used in
> production as a security mitigation. Therefore there's a need for finer
> control over KASAN features and for the existence of a kill switch.
>
> This change adds a few boot parameters for hardware tag-based KASAN that
> allow disabling or otherwise controlling particular KASAN features.
>
> The features that can be controlled are:
>
> 1. Whether KASAN is enabled at all.
> 2. Whether KASAN collects and saves alloc/free stacks.
> 3. Whether KASAN panics on a detected bug or not.
>
> With this change a new boot parameter kasan.mode allows choosing one of
> three main modes:
>
> - kasan.mode=off - KASAN is disabled, no tag checks are performed
> - kasan.mode=prod - only essential production features are enabled
> - kasan.mode=full - all KASAN features are enabled
>
> The chosen mode provides default control values for the features mentioned
> above. However it's also possible to override the default values by
> providing:
>
> - kasan.stacktrace=off/on - enable alloc/free stack collection
> (default: on for mode=full, otherwise off)
> - kasan.fault=report/panic - only report tag fault or also panic
> (default: report)
>
> If the kasan.mode parameter is not provided, it defaults to full when
> CONFIG_DEBUG_KERNEL is enabled, and to prod otherwise.
>
> It is essential that switching between these modes doesn't require
> rebuilding the kernel with different configs, as this is required by
> the Android GKI (Generic Kernel Image) initiative [1].
>
> [1] https://source.android.com/devices/architecture/kernel/generic-kernel-image
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4

Reviewed-by: Marco Elver <[email protected]>

> ---
> mm/kasan/common.c | 22 +++++--
> mm/kasan/hw_tags.c | 151 +++++++++++++++++++++++++++++++++++++++++++++
> mm/kasan/kasan.h | 16 +++++
> mm/kasan/report.c | 14 ++++-
> 4 files changed, 196 insertions(+), 7 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 1ac4f435c679..a11e3e75eb08 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -135,6 +135,11 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> unsigned int redzone_size;
> int redzone_adjust;
>
> + if (!kasan_stack_collection_enabled()) {
> + *flags |= SLAB_KASAN;
> + return;
> + }
> +
> /* Add alloc meta. */
> cache->kasan_info.alloc_meta_offset = *size;
> *size += sizeof(struct kasan_alloc_meta);
> @@ -171,6 +176,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>
> size_t kasan_metadata_size(struct kmem_cache *cache)
> {
> + if (!kasan_stack_collection_enabled())
> + return 0;
> return (cache->kasan_info.alloc_meta_offset ?
> sizeof(struct kasan_alloc_meta) : 0) +
> (cache->kasan_info.free_meta_offset ?
> @@ -263,11 +270,13 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> {
> struct kasan_alloc_meta *alloc_meta;
>
> - if (!(cache->flags & SLAB_KASAN))
> - return (void *)object;
> + if (kasan_stack_collection_enabled()) {
> + if (!(cache->flags & SLAB_KASAN))
> + return (void *)object;
>
> - alloc_meta = kasan_get_alloc_meta(cache, object);
> - __memset(alloc_meta, 0, sizeof(*alloc_meta));
> + alloc_meta = kasan_get_alloc_meta(cache, object);
> + __memset(alloc_meta, 0, sizeof(*alloc_meta));
> + }
>
> if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> object = set_tag(object, assign_tag(cache, object, true, false));
> @@ -307,6 +316,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
> poison_range(object, rounded_up_size, KASAN_KMALLOC_FREE);
>
> + if (!kasan_stack_collection_enabled())
> + return false;
> +
> if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> unlikely(!(cache->flags & SLAB_KASAN)))
> return false;
> @@ -357,7 +369,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> poison_range((void *)redzone_start, redzone_end - redzone_start,
> KASAN_KMALLOC_REDZONE);
>
> - if (cache->flags & SLAB_KASAN)
> + if (kasan_stack_collection_enabled() && (cache->flags & SLAB_KASAN))
> set_alloc_info(cache, (void *)object, flags);
>
> return set_tag(object, tag);
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 863fed4edd3f..30ce88935e9d 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -8,18 +8,115 @@
>
> #define pr_fmt(fmt) "kasan: " fmt
>
> +#include <linux/init.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> #include <linux/memory.h>
> #include <linux/mm.h>
> +#include <linux/static_key.h>
> #include <linux/string.h>
> #include <linux/types.h>
>
> #include "kasan.h"
>
> +enum kasan_arg_mode {
> + KASAN_ARG_MODE_DEFAULT,
> + KASAN_ARG_MODE_OFF,
> + KASAN_ARG_MODE_PROD,
> + KASAN_ARG_MODE_FULL,
> +};
> +
> +enum kasan_arg_stacktrace {
> + KASAN_ARG_STACKTRACE_DEFAULT,
> + KASAN_ARG_STACKTRACE_OFF,
> + KASAN_ARG_STACKTRACE_ON,
> +};
> +
> +enum kasan_arg_fault {
> + KASAN_ARG_FAULT_DEFAULT,
> + KASAN_ARG_FAULT_REPORT,
> + KASAN_ARG_FAULT_PANIC,
> +};
> +
> +static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> +static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;
> +static enum kasan_arg_fault kasan_arg_fault __ro_after_init;
> +
> +/* Whether KASAN is enabled at all. */
> +DEFINE_STATIC_KEY_FALSE_RO(kasan_flag_enabled);
> +EXPORT_SYMBOL(kasan_flag_enabled);
> +
> +/* Whether to collect alloc/free stack traces. */
> +DEFINE_STATIC_KEY_FALSE_RO(kasan_flag_stacktrace);
> +
> +/* Whether to panic or disable tag checking on fault. */
> +bool kasan_flag_panic __ro_after_init;
> +
> +/* kasan.mode=off/prod/full */
> +static int __init early_kasan_mode(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_mode = KASAN_ARG_MODE_OFF;
> + else if (!strcmp(arg, "prod"))
> + kasan_arg_mode = KASAN_ARG_MODE_PROD;
> + else if (!strcmp(arg, "full"))
> + kasan_arg_mode = KASAN_ARG_MODE_FULL;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.mode", early_kasan_mode);
> +
> +/* kasan.stacktrace=off/on */
> +static int __init early_kasan_flag_stacktrace(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_OFF;
> + else if (!strcmp(arg, "on"))
> + kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_ON;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.stacktrace", early_kasan_flag_stacktrace);
> +
> +/* kasan.fault=report/panic */
> +static int __init early_kasan_fault(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "report"))
> + kasan_arg_fault = KASAN_ARG_FAULT_REPORT;
> + else if (!strcmp(arg, "panic"))
> + kasan_arg_fault = KASAN_ARG_FAULT_PANIC;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.fault", early_kasan_fault);
> +
> /* kasan_init_hw_tags_cpu() is called for each CPU. */
> void kasan_init_hw_tags_cpu(void)
> {
> + /*
> + * There's no need to check that the hardware is MTE-capable here,
> + * as this function is only called for MTE-capable hardware.
> + */
> +
> + /* If KASAN is disabled, do nothing. */
> + if (kasan_arg_mode == KASAN_ARG_MODE_OFF)
> + return;
> +
> hw_init_tags(KASAN_TAG_MAX);
> hw_enable_tagging();
> }
> @@ -27,6 +124,60 @@ void kasan_init_hw_tags_cpu(void)
> /* kasan_init_hw_tags() is called once on boot CPU. */
> void __init kasan_init_hw_tags(void)
> {
> + /* If hardware doesn't support MTE, do nothing. */
> + if (!system_supports_mte())
> + return;
> +
> +	/* Choose KASAN mode if the kasan.mode boot parameter is not provided. */
> + if (kasan_arg_mode == KASAN_ARG_MODE_DEFAULT) {
> + if (IS_ENABLED(CONFIG_DEBUG_KERNEL))
> + kasan_arg_mode = KASAN_ARG_MODE_FULL;
> + else
> + kasan_arg_mode = KASAN_ARG_MODE_PROD;
> + }
> +
> + /* Preset parameter values based on the mode. */
> + switch (kasan_arg_mode) {
> + case KASAN_ARG_MODE_DEFAULT:
> + /* Shouldn't happen as per the check above. */
> + WARN_ON(1);
> + return;
> + case KASAN_ARG_MODE_OFF:
> + /* If KASAN is disabled, do nothing. */
> + return;
> + case KASAN_ARG_MODE_PROD:
> + static_branch_enable(&kasan_flag_enabled);
> + break;
> + case KASAN_ARG_MODE_FULL:
> + static_branch_enable(&kasan_flag_enabled);
> + static_branch_enable(&kasan_flag_stacktrace);
> + break;
> + }
> +
> + /* Now, optionally override the presets. */
> +
> + switch (kasan_arg_stacktrace) {
> + case KASAN_ARG_STACKTRACE_DEFAULT:
> + break;
> + case KASAN_ARG_STACKTRACE_OFF:
> + static_branch_disable(&kasan_flag_stacktrace);
> + break;
> + case KASAN_ARG_STACKTRACE_ON:
> + static_branch_enable(&kasan_flag_stacktrace);
> + break;
> + }
> +
> + switch (kasan_arg_fault) {
> + case KASAN_ARG_FAULT_DEFAULT:
> + break;
> + case KASAN_ARG_FAULT_REPORT:
> + kasan_flag_panic = false;
> + break;
> + case KASAN_ARG_FAULT_PANIC:
> + kasan_flag_panic = true;
> + break;
> + }
> +
> pr_info("KernelAddressSanitizer initialized\n");
> }
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8aa83b7ad79e..d01a5ac34f70 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -6,6 +6,22 @@
> #include <linux/kfence.h>
> #include <linux/stackdepot.h>
>
> +#ifdef CONFIG_KASAN_HW_TAGS
> +#include <linux/static_key.h>
> +DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);
> +static inline bool kasan_stack_collection_enabled(void)
> +{
> + return static_branch_unlikely(&kasan_flag_stacktrace);
> +}
> +#else
> +static inline bool kasan_stack_collection_enabled(void)
> +{
> + return true;
> +}
> +#endif
> +
> +extern bool kasan_flag_panic __ro_after_init;
> +
> #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> #define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> #else
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 76a0e3ae2049..ffa6076b1710 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -99,6 +99,10 @@ static void end_report(unsigned long *flags)
> panic_on_warn = 0;
> panic("panic_on_warn set ...\n");
> }
> +#ifdef CONFIG_KASAN_HW_TAGS
> + if (kasan_flag_panic)
> + panic("kasan.fault=panic set ...\n");
> +#endif
> kasan_enable_current();
> }
>
> @@ -161,8 +165,8 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
> (void *)(object_addr + cache->object_size));
> }
>
> -static void describe_object(struct kmem_cache *cache, void *object,
> - const void *addr, u8 tag)
> +static void describe_object_stacks(struct kmem_cache *cache, void *object,
> + const void *addr, u8 tag)
> {
> struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);
>
> @@ -190,7 +194,13 @@ static void describe_object(struct kmem_cache *cache, void *object,
> }
> #endif
> }
> +}
>
> +static void describe_object(struct kmem_cache *cache, void *object,
> + const void *addr, u8 tag)
> +{
> + if (kasan_stack_collection_enabled())
> + describe_object_stacks(cache, object, addr, tag);
> describe_object_addr(cache, object, addr);
> }
>
> --
> 2.29.2.299.gdc1121823c-goog
>
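
The parsing/enabling split above generalizes to any boot-time kill
switch. A minimal self-contained sketch of the same pattern, with
invented "foo" names (not proposed kernel code): the early_param handler
only records the request, and the static key is flipped later during
init, mirroring how the patch defers enabling to kasan_init_hw_tags().

	#include <linux/init.h>
	#include <linux/static_key.h>
	#include <linux/string.h>

	static bool foo_requested __initdata;

	/* foo=on */
	static int __init early_foo(char *arg)
	{
		if (!arg)
			return -EINVAL;
		if (!strcmp(arg, "on"))
			foo_requested = true;
		return 0;
	}
	early_param("foo", early_foo);

	DEFINE_STATIC_KEY_FALSE_RO(foo_flag_enabled);

	void __init foo_init(void)
	{
		/* Flip the key here rather than in the handler, since
		 * early_param callbacks may run before jump labels are
		 * ready to be patched. */
		if (foo_requested)
			static_branch_enable(&foo_flag_enabled);
	}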

2020-11-17 11:14:56

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH mm v3 11/19] kasan: add and integrate kasan boot parameters

On Mon, Nov 16, 2020 at 4:15 PM Marco Elver <[email protected]> wrote:
>
> On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> > Hardware tag-based KASAN mode is intended to eventually be used in
> > production as a security mitigation. Therefore there's a need for finer
> > control over KASAN features and for the existence of a kill switch.
> >
> > This change adds a few boot parameters for hardware tag-based KASAN that
> > allow disabling or otherwise controlling particular KASAN features.
> >
> > The features that can be controlled are:
> >
> > 1. Whether KASAN is enabled at all.
> > 2. Whether KASAN collects and saves alloc/free stacks.
> > 3. Whether KASAN panics on a detected bug or not.
> >
> > With this change a new boot parameter kasan.mode allows choosing one of
> > three main modes:
> >
> > - kasan.mode=off - KASAN is disabled, no tag checks are performed
> > - kasan.mode=prod - only essential production features are enabled
> > - kasan.mode=full - all KASAN features are enabled
> >
> > The chosen mode provides default control values for the features mentioned
> > above. However it's also possible to override the default values by
> > providing:
> >
> > - kasan.stacktrace=off/on - enable alloc/free stack collection
> > (default: on for mode=full, otherwise off)
> > - kasan.fault=report/panic - only report tag fault or also panic
> > (default: report)
> >
> > If the kasan.mode parameter is not provided, it defaults to full when
> > CONFIG_DEBUG_KERNEL is enabled, and to prod otherwise.
> >
> > It is essential that switching between these modes doesn't require
> > rebuilding the kernel with different configs, as this is required by
> > the Android GKI (Generic Kernel Image) initiative [1].
> >
> > [1] https://source.android.com/devices/architecture/kernel/generic-kernel-image
> >
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/If7d37003875b2ed3e0935702c8015c223d6416a4
>
> Reviewed-by: Marco Elver <[email protected]>

Much nicer with the wrappers now.

Reviewed-by: Dmitry Vyukov <[email protected]>

> > [...]

2020-11-17 11:17:08

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH mm v3 12/19] kasan, mm: check kasan_enabled in annotations

On Mon, Nov 16, 2020 at 4:26 PM Marco Elver <[email protected]> wrote:
>
> On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> > Declare the kasan_enabled static key in include/linux/kasan.h and in
> > include/linux/mm.h and check it in all kasan annotations. This allows to
> > avoid any slowdown caused by function calls when kasan_enabled is
> > disabled.
> >
> > Co-developed-by: Vincenzo Frascino <[email protected]>
> > Signed-off-by: Vincenzo Frascino <[email protected]>
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/I2589451d3c96c97abbcbf714baabe6161c6f153e
>
> Reviewed-by: Marco Elver <[email protected]>

Also much nicer with kasan_enabled() now.

Reviewed-by: Dmitry Vyukov <[email protected]>

> > ---
> > include/linux/kasan.h | 213 ++++++++++++++++++++++++++++++++----------
> > include/linux/mm.h | 22 +++--
> > mm/kasan/common.c | 56 +++++------
> > 3 files changed, 210 insertions(+), 81 deletions(-)
> >
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index 872bf145ddde..6bd95243a583 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -2,6 +2,7 @@
> > #ifndef _LINUX_KASAN_H
> > #define _LINUX_KASAN_H
> >
> > +#include <linux/static_key.h>
> > #include <linux/types.h>
> >
> > struct kmem_cache;
> > @@ -74,54 +75,176 @@ static inline void kasan_disable_current(void) {}
> >
> > #ifdef CONFIG_KASAN
> >
> > -void kasan_unpoison_range(const void *address, size_t size);
> > +struct kasan_cache {
> > + int alloc_meta_offset;
> > + int free_meta_offset;
> > +};
> >
> > -void kasan_alloc_pages(struct page *page, unsigned int order);
> > -void kasan_free_pages(struct page *page, unsigned int order);
> > +#ifdef CONFIG_KASAN_HW_TAGS
> > +DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> > +static __always_inline bool kasan_enabled(void)
> > +{
> > + return static_branch_likely(&kasan_flag_enabled);
> > +}
> > +#else
> > +static inline bool kasan_enabled(void)
> > +{
> > + return true;
> > +}
> > +#endif
> >
> > -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > - slab_flags_t *flags);
> > +void __kasan_unpoison_range(const void *addr, size_t size);
> > +static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
> > +{
> > + if (kasan_enabled())
> > + __kasan_unpoison_range(addr, size);
> > +}
> >
> > -void kasan_poison_slab(struct page *page);
> > -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> > -void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> > -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > - const void *object);
> > +void __kasan_alloc_pages(struct page *page, unsigned int order);
> > +static __always_inline void kasan_alloc_pages(struct page *page,
> > + unsigned int order)
> > +{
> > + if (kasan_enabled())
> > + __kasan_alloc_pages(page, order);
> > +}
> >
> > -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> > - gfp_t flags);
> > -void kasan_kfree_large(void *ptr, unsigned long ip);
> > -void kasan_poison_kfree(void *ptr, unsigned long ip);
> > -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> > - size_t size, gfp_t flags);
> > -void * __must_check kasan_krealloc(const void *object, size_t new_size,
> > - gfp_t flags);
> > +void __kasan_free_pages(struct page *page, unsigned int order);
> > +static __always_inline void kasan_free_pages(struct page *page,
> > + unsigned int order)
> > +{
> > + if (kasan_enabled())
> > + __kasan_free_pages(page, order);
> > +}
> >
> > -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
> > - gfp_t flags);
> > -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> > +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > + slab_flags_t *flags);
> > +static __always_inline void kasan_cache_create(struct kmem_cache *cache,
> > + unsigned int *size, slab_flags_t *flags)
> > +{
> > + if (kasan_enabled())
> > + __kasan_cache_create(cache, size, flags);
> > +}
> >
> > -struct kasan_cache {
> > - int alloc_meta_offset;
> > - int free_meta_offset;
> > -};
> > +size_t __kasan_metadata_size(struct kmem_cache *cache);
> > +static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_metadata_size(cache);
> > + return 0;
> > +}
> > +
> > +void __kasan_poison_slab(struct page *page);
> > +static __always_inline void kasan_poison_slab(struct page *page)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_poison_slab(page);
> > +}
> > +
> > +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> > +static __always_inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> > + void *object)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_unpoison_object_data(cache, object);
> > +}
> > +
> > +void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
> > +static __always_inline void kasan_poison_object_data(struct kmem_cache *cache,
> > + void *object)
> > +{
> > + if (kasan_enabled())
> > + __kasan_poison_object_data(cache, object);
> > +}
> > +
> > +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> > + const void *object);
> > +static __always_inline void * __must_check kasan_init_slab_obj(
> > + struct kmem_cache *cache, const void *object)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_init_slab_obj(cache, object);
> > + return (void *)object;
> > +}
> > +
> > +bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> > +static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> > + unsigned long ip)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_slab_free(s, object, ip);
> > + return false;
> > +}
> > +
> > +void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> > + void *object, gfp_t flags);
> > +static __always_inline void * __must_check kasan_slab_alloc(
> > + struct kmem_cache *s, void *object, gfp_t flags)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_slab_alloc(s, object, flags);
> > + return object;
> > +}
> > +
> > +void * __must_check __kasan_kmalloc(struct kmem_cache *s, const void *object,
> > + size_t size, gfp_t flags);
> > +static __always_inline void * __must_check kasan_kmalloc(struct kmem_cache *s,
> > + const void *object, size_t size, gfp_t flags)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_kmalloc(s, object, size, flags);
> > + return (void *)object;
> > +}
> >
> > -size_t kasan_metadata_size(struct kmem_cache *cache);
> > +void * __must_check __kasan_kmalloc_large(const void *ptr,
> > + size_t size, gfp_t flags);
> > +static __always_inline void * __must_check kasan_kmalloc_large(const void *ptr,
> > + size_t size, gfp_t flags)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_kmalloc_large(ptr, size, flags);
> > + return (void *)ptr;
> > +}
> > +
> > +void * __must_check __kasan_krealloc(const void *object,
> > + size_t new_size, gfp_t flags);
> > +static __always_inline void * __must_check kasan_krealloc(const void *object,
> > + size_t new_size, gfp_t flags)
> > +{
> > + if (kasan_enabled())
> > + return __kasan_krealloc(object, new_size, flags);
> > + return (void *)object;
> > +}
> > +
> > +void __kasan_poison_kfree(void *ptr, unsigned long ip);
> > +static __always_inline void kasan_poison_kfree(void *ptr, unsigned long ip)
> > +{
> > + if (kasan_enabled())
> > + __kasan_poison_kfree(ptr, ip);
> > +}
> > +
> > +void __kasan_kfree_large(void *ptr, unsigned long ip);
> > +static __always_inline void kasan_kfree_large(void *ptr, unsigned long ip)
> > +{
> > + if (kasan_enabled())
> > + __kasan_kfree_large(ptr, ip);
> > +}
> >
> > bool kasan_save_enable_multi_shot(void);
> > void kasan_restore_multi_shot(bool enabled);
> >
> > #else /* CONFIG_KASAN */
> >
> > +static inline bool kasan_enabled(void)
> > +{
> > + return false;
> > +}
> > static inline void kasan_unpoison_range(const void *address, size_t size) {}
> > -
> > static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> > static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> > -
> > static inline void kasan_cache_create(struct kmem_cache *cache,
> > unsigned int *size,
> > slab_flags_t *flags) {}
> > -
> > +static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> > static inline void kasan_poison_slab(struct page *page) {}
> > static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> > void *object) {}
> > @@ -132,36 +255,32 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
> > {
> > return (void *)object;
> > }
> > -
> > -static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
> > +static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> > + unsigned long ip)
> > {
> > - return ptr;
> > + return false;
> > +}
> > +static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> > + gfp_t flags)
> > +{
> > + return object;
> > }
> > -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> > -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> > static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> > size_t size, gfp_t flags)
> > {
> > return (void *)object;
> > }
> > +static inline void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
> > +{
> > + return (void *)ptr;
> > +}
> > static inline void *kasan_krealloc(const void *object, size_t new_size,
> > gfp_t flags)
> > {
> > return (void *)object;
> > }
> > -
> > -static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> > - gfp_t flags)
> > -{
> > - return object;
> > -}
> > -static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> > - unsigned long ip)
> > -{
> > - return false;
> > -}
> > -
> > -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> > +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> > +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> >
> > #endif /* CONFIG_KASAN */
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 947f4f1a6536..24f47e140a4c 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -31,6 +31,7 @@
> > #include <linux/sizes.h>
> > #include <linux/sched.h>
> > #include <linux/pgtable.h>
> > +#include <linux/kasan.h>
> >
> > struct mempolicy;
> > struct anon_vma;
> > @@ -1415,22 +1416,30 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
> > #endif /* CONFIG_NUMA_BALANCING */
> >
> > #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> > +
> > static inline u8 page_kasan_tag(const struct page *page)
> > {
> > - return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> > + if (kasan_enabled())
> > + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> > + return 0xff;
> > }
> >
> > static inline void page_kasan_tag_set(struct page *page, u8 tag)
> > {
> > - page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> > - page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> > + if (kasan_enabled()) {
> > + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
> > + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> > + }
> > }
> >
> > static inline void page_kasan_tag_reset(struct page *page)
> > {
> > - page_kasan_tag_set(page, 0xff);
> > + if (kasan_enabled())
> > + page_kasan_tag_set(page, 0xff);
> > }
> > -#else
> > +
> > +#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> > +
> > static inline u8 page_kasan_tag(const struct page *page)
> > {
> > return 0xff;
> > @@ -1438,7 +1447,8 @@ static inline u8 page_kasan_tag(const struct page *page)
> >
> > static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
> > static inline void page_kasan_tag_reset(struct page *page) { }
> > -#endif
> > +
> > +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> >
> > static inline struct zone *page_zone(const struct page *page)
> > {
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index a11e3e75eb08..17918bd20ed9 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -59,7 +59,7 @@ void kasan_disable_current(void)
> > }
> > #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
> >
> > -void kasan_unpoison_range(const void *address, size_t size)
> > +void __kasan_unpoison_range(const void *address, size_t size)
> > {
> > unpoison_range(address, size);
> > }
> > @@ -87,7 +87,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
> > }
> > #endif /* CONFIG_KASAN_STACK */
> >
> > -void kasan_alloc_pages(struct page *page, unsigned int order)
> > +void __kasan_alloc_pages(struct page *page, unsigned int order)
> > {
> > u8 tag;
> > unsigned long i;
> > @@ -101,7 +101,7 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
> > unpoison_range(page_address(page), PAGE_SIZE << order);
> > }
> >
> > -void kasan_free_pages(struct page *page, unsigned int order)
> > +void __kasan_free_pages(struct page *page, unsigned int order)
> > {
> > if (likely(!PageHighMem(page)))
> > poison_range(page_address(page),
> > @@ -128,8 +128,8 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
> > object_size <= (1 << 16) - 1024 ? 1024 : 2048;
> > }
> >
> > -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > - slab_flags_t *flags)
> > +void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > + slab_flags_t *flags)
> > {
> > unsigned int orig_size = *size;
> > unsigned int redzone_size;
> > @@ -174,7 +174,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > *flags |= SLAB_KASAN;
> > }
> >
> > -size_t kasan_metadata_size(struct kmem_cache *cache)
> > +size_t __kasan_metadata_size(struct kmem_cache *cache)
> > {
> > if (!kasan_stack_collection_enabled())
> > return 0;
> > @@ -197,7 +197,7 @@ struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
> > return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset;
> > }
> >
> > -void kasan_poison_slab(struct page *page)
> > +void __kasan_poison_slab(struct page *page)
> > {
> > unsigned long i;
> >
> > @@ -207,12 +207,12 @@ void kasan_poison_slab(struct page *page)
> > KASAN_KMALLOC_REDZONE);
> > }
> >
> > -void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> > +void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
> > {
> > unpoison_range(object, cache->object_size);
> > }
> >
> > -void kasan_poison_object_data(struct kmem_cache *cache, void *object)
> > +void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> > {
> > poison_range(object,
> > round_up(cache->object_size, KASAN_GRANULE_SIZE),
> > @@ -265,7 +265,7 @@ static u8 assign_tag(struct kmem_cache *cache, const void *object,
> > #endif
> > }
> >
> > -void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > +void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> > const void *object)
> > {
> > struct kasan_alloc_meta *alloc_meta;
> > @@ -284,7 +284,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> > return (void *)object;
> > }
> >
> > -static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> > +static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> > unsigned long ip, bool quarantine)
> > {
> > u8 tag;
> > @@ -330,9 +330,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> > return IS_ENABLED(CONFIG_KASAN_GENERIC);
> > }
> >
> > -bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> > +bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> > {
> > - return __kasan_slab_free(cache, object, ip, true);
> > + return ____kasan_slab_free(cache, object, ip, true);
> > }
> >
> > static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> > @@ -340,7 +340,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> > kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
> > }
> >
> > -static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > +static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > size_t size, gfp_t flags, bool keep_tag)
> > {
> > unsigned long redzone_start;
> > @@ -375,20 +375,20 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > return set_tag(object, tag);
> > }
> >
> > -void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
> > - gfp_t flags)
> > +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> > + void *object, gfp_t flags)
> > {
> > - return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
> > + return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
> > }
> >
> > -void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > - size_t size, gfp_t flags)
> > +void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > + size_t size, gfp_t flags)
> > {
> > - return __kasan_kmalloc(cache, object, size, flags, true);
> > + return ____kasan_kmalloc(cache, object, size, flags, true);
> > }
> > -EXPORT_SYMBOL(kasan_kmalloc);
> > +EXPORT_SYMBOL(__kasan_kmalloc);
> >
> > -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> > +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> > gfp_t flags)
> > {
> > struct page *page;
> > @@ -413,7 +413,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> > return (void *)ptr;
> > }
> >
> > -void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> > +void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
> > {
> > struct page *page;
> >
> > @@ -423,13 +423,13 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
> > page = virt_to_head_page(object);
> >
> > if (unlikely(!PageSlab(page)))
> > - return kasan_kmalloc_large(object, size, flags);
> > + return __kasan_kmalloc_large(object, size, flags);
> > else
> > - return __kasan_kmalloc(page->slab_cache, object, size,
> > + return ____kasan_kmalloc(page->slab_cache, object, size,
> > flags, true);
> > }
> >
> > -void kasan_poison_kfree(void *ptr, unsigned long ip)
> > +void __kasan_poison_kfree(void *ptr, unsigned long ip)
> > {
> > struct page *page;
> >
> > @@ -442,11 +442,11 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
> > }
> > poison_range(ptr, page_size(page), KASAN_FREE_PAGE);
> > } else {
> > - __kasan_slab_free(page->slab_cache, ptr, ip, false);
> > + ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> > }
> > }
> >
> > -void kasan_kfree_large(void *ptr, unsigned long ip)
> > +void __kasan_kfree_large(void *ptr, unsigned long ip)
> > {
> > if (ptr != page_address(virt_to_head_page(ptr)))
> > kasan_report_invalid_free(ptr, ip);
> > --
> > 2.29.2.299.gdc1121823c-goog
> >
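
As a standalone illustration of the wrapper pattern this patch applies
to every annotation (a sketch with made-up "feat" names, not kernel
API): the guard lives in an __always_inline header stub, so when the
static key is off the call site costs one patched NOP/jump instead of a
load, a conditional branch and a function call.

	#include <linux/static_key.h>

	DECLARE_STATIC_KEY_FALSE(feat_enabled);
	void __feat_hook(void *p);

	static __always_inline void feat_hook(void *p)
	{
		/* The out-of-line __feat_hook() is only reachable once
		 * the branch is patched in by static_branch_enable(). */
		if (static_branch_likely(&feat_enabled))
			__feat_hook(p);
	}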

2020-11-17 13:55:50

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH mm v3 19/19] kasan: update documentation

On Mon, Nov 16, 2020 at 4:47 PM Marco Elver <[email protected]> wrote:
>
> On Fri, Nov 13, 2020 at 11:20PM +0100, Andrey Konovalov wrote:
> > This change updates KASAN documentation to reflect the addition of boot
> > parameters and also reworks and clarifies some of the existing sections,
> > in particular: defines what a memory granule is, mentions quarantine,
> > makes the KUnit section more readable.
> >
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/Ib1f83e91be273264b25f42b04448ac96b858849f
>
> Reviewed-by: Marco Elver <[email protected]>

Reviewed-by: Dmitry Vyukov <[email protected]>

> > ---
> > Documentation/dev-tools/kasan.rst | 186 +++++++++++++++++++-----------
> > 1 file changed, 116 insertions(+), 70 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> > index ffbae8ce5748..0d5d77919b1a 100644
> > --- a/Documentation/dev-tools/kasan.rst
> > +++ b/Documentation/dev-tools/kasan.rst
> > @@ -4,8 +4,9 @@ The Kernel Address Sanitizer (KASAN)
> > Overview
> > --------
> >
> > -KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
> > -find out-of-bound and use-after-free bugs. KASAN has three modes:
> > +KernelAddressSANitizer (KASAN) is a dynamic memory safety error detector
> > +designed to find out-of-bounds and use-after-free bugs. KASAN has three modes:
> > +
> > 1. generic KASAN (similar to userspace ASan),
> > 2. software tag-based KASAN (similar to userspace HWASan),
> > 3. hardware tag-based KASAN (based on hardware memory tagging).
> > @@ -39,23 +40,13 @@ CONFIG_KASAN_INLINE. Outline and inline are compiler instrumentation types.
> > The former produces smaller binary while the latter is 1.1 - 2 times faster.
> >
> > Both software KASAN modes work with both SLUB and SLAB memory allocators,
> > -hardware tag-based KASAN currently only support SLUB.
> > -For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
> > +while hardware tag-based KASAN currently only supports SLUB.
> > +
> > +For better error reports that include stack traces, enable CONFIG_STACKTRACE.
> >
> > To augment reports with last allocation and freeing stack of the physical page,
> > it is recommended to enable also CONFIG_PAGE_OWNER and boot with page_owner=on.
> >
> > -To disable instrumentation for specific files or directories, add a line
> > -similar to the following to the respective kernel Makefile:
> > -
> > -- For a single file (e.g. main.o)::
> > -
> > - KASAN_SANITIZE_main.o := n
> > -
> > -- For all files in one directory::
> > -
> > - KASAN_SANITIZE := n
> > -
> > Error reports
> > ~~~~~~~~~~~~~
> >
> > @@ -140,22 +131,75 @@ freed (in case of a use-after-free bug report). Next comes a description of
> > the accessed slab object and information about the accessed memory page.
> >
> > In the last section the report shows memory state around the accessed address.
> > -Reading this part requires some understanding of how KASAN works.
> > -
> > -The state of each 8 aligned bytes of memory is encoded in one shadow byte.
> > -Those 8 bytes can be accessible, partially accessible, freed or be a redzone.
> > -We use the following encoding for each shadow byte: 0 means that all 8 bytes
> > -of the corresponding memory region are accessible; number N (1 <= N <= 7) means
> > -that the first N bytes are accessible, and other (8 - N) bytes are not;
> > -any negative value indicates that the entire 8-byte word is inaccessible.
> > -We use different negative values to distinguish between different kinds of
> > -inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
> > +Internally KASAN tracks memory state separately for each memory granule, which
> > +is either 8 or 16 aligned bytes depending on the KASAN mode. Each number in the
> > +memory state section of the report shows the state of one of the memory
> > +granules that surround the accessed address.
> > +
> > +For generic KASAN the size of each memory granule is 8 bytes. The state of each
> > +granule is encoded in one shadow byte. Those 8 bytes can be accessible,
> > +partially accessible, freed, or be a part of a redzone. KASAN uses the following
> > +encoding for each shadow byte: 0 means that all 8 bytes of the corresponding
> > +memory region are accessible; number N (1 <= N <= 7) means that the first N
> > +bytes are accessible, and other (8 - N) bytes are not; any negative value
> > +indicates that the entire 8-byte word is inaccessible. KASAN uses different
> > +negative values to distinguish between different kinds of inaccessible memory
> > +like redzones or freed memory (see mm/kasan/kasan.h).
> >
> > In the report above the arrows point to the shadow byte 03, which means that
> > the accessed address is partially accessible.
> >
> > For tag-based KASAN this last report section shows the memory tags around the
> > -accessed address (see Implementation details section).
> > +accessed address (see `Implementation details`_ section).
> > +
> > +Boot parameters
> > +~~~~~~~~~~~~~~~
> > +
> > +Hardware tag-based KASAN mode (see the section about different modes below) is
> > +intended for use in production as a security mitigation. Therefore it supports
> > +boot parameters that allow disabling KASAN completely or otherwise controlling
> > +particular KASAN features.
> > +
> > +The things that can be controlled are:
> > +
> > +1. Whether KASAN is enabled at all.
> > +2. Whether KASAN collects and saves alloc/free stacks.
> > +3. Whether KASAN panics on a detected bug or not.
> > +
> > +The ``kasan.mode`` boot parameter allows choosing one of three main modes:
> > +
> > +- ``kasan.mode=off`` - KASAN is disabled, no tag checks are performed
> > +- ``kasan.mode=prod`` - only essential production features are enabled
> > +- ``kasan.mode=full`` - all KASAN features are enabled
> > +
> > +The chosen mode provides default control values for the features mentioned
> > +above. However, it's also possible to override the default values by providing:
> > +
> > +- ``kasan.stacktrace=off`` or ``=on`` - enable alloc/free stack collection
> > + (default: ``on`` for ``mode=full``,
> > + otherwise ``off``)
> > +- ``kasan.fault=report`` or ``=panic`` - only print KASAN report or also panic
> > + (default: ``report``)
> > +
> > +If the ``kasan.mode`` parameter is not provided, it defaults to ``full`` when
> > +``CONFIG_DEBUG_KERNEL`` is enabled, and to ``prod`` otherwise.
> > +
> > +For developers
> > +~~~~~~~~~~~~~~
> > +
> > +Software KASAN modes use compiler instrumentation to insert validity checks.
> > +Such instrumentation might be incompatible with some parts of the kernel, and
> > +therefore needs to be disabled. To disable instrumentation for specific files
> > +or directories, add a line similar to the following to the respective kernel
> > +Makefile:
> > +
> > +- For a single file (e.g. main.o)::
> > +
> > + KASAN_SANITIZE_main.o := n
> > +
> > +- For all files in one directory::
> > +
> > + KASAN_SANITIZE := n
> >
> >
> > Implementation details
> > @@ -164,10 +208,10 @@ Implementation details
> > Generic KASAN
> > ~~~~~~~~~~~~~
> >
> > -From a high level, our approach to memory error detection is similar to that
> > -of kmemcheck: use shadow memory to record whether each byte of memory is safe
> > -to access, and use compile-time instrumentation to insert checks of shadow
> > -memory on each memory access.
> > +From a high-level perspective, KASAN's approach to memory error detection is
> > +similar to that of kmemcheck: use shadow memory to record whether each byte of
> > +memory is safe to access, and use compile-time instrumentation to insert checks
> > +of shadow memory on each memory access.
> >
> > Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
> > to cover 128TB on x86_64) and uses direct mapping with a scale and offset to
> > @@ -194,7 +238,10 @@ function calls GCC directly inserts the code to check the shadow memory.
> > This option significantly enlarges kernel but it gives x1.1-x2 performance
> > boost over outline instrumented kernel.
> >
> > -Generic KASAN prints up to 2 call_rcu() call stacks in reports, the last one
> > +Generic KASAN is the only mode that delays the reuse of freed objects via
> > +quarantine (see mm/kasan/quarantine.c for implementation).
> > +
> > +Generic KASAN prints up to two call_rcu() call stacks in reports, the last one
> > and the second to last.
> >
> > Software tag-based KASAN
> > @@ -304,15 +351,15 @@ therefore be wasteful. Furthermore, to ensure that different mappings
> > use different shadow pages, mappings would have to be aligned to
> > ``KASAN_GRANULE_SIZE * PAGE_SIZE``.
> >
> > -Instead, we share backing space across multiple mappings. We allocate
> > +Instead, KASAN shares backing space across multiple mappings. It allocates
> > a backing page when a mapping in vmalloc space uses a particular page
> > of the shadow region. This page can be shared by other vmalloc
> > mappings later on.
> >
> > -We hook in to the vmap infrastructure to lazily clean up unused shadow
> > +KASAN hooks into the vmap infrastructure to lazily clean up unused shadow
> > memory.
> >
> > -To avoid the difficulties around swapping mappings around, we expect
> > +To avoid the difficulties of swapping mappings around, KASAN expects
> > that the part of the shadow region that covers the vmalloc space will
> > not be covered by the early shadow page, but will be left
> > unmapped. This will require changes in arch-specific code.
> > @@ -323,24 +370,31 @@ architectures that do not have a fixed module region.
> > CONFIG_KASAN_KUNIT_TEST & CONFIG_TEST_KASAN_MODULE
> > --------------------------------------------------
> >
> > -``CONFIG_KASAN_KUNIT_TEST`` utilizes the KUnit Test Framework for testing.
> > -This means each test focuses on a small unit of functionality and
> > -there are a few ways these tests can be run.
> > +KASAN tests consist of two parts:
> > +
> > +1. Tests that are integrated with the KUnit Test Framework. Enabled with
> > +``CONFIG_KASAN_KUNIT_TEST``. These tests can be run and partially verified
> > +automatically in a few different ways; see the instructions below.
> >
> > -Each test will print the KASAN report if an error is detected and then
> > -print the number of the test and the status of the test:
> > +2. Tests that are currently incompatible with KUnit. Enabled with
> > +``CONFIG_TEST_KASAN_MODULE`` and can only be run as a module. These tests can
> > +only be verified manually, by loading the kernel module and inspecting the
> > +kernel log for KASAN reports.
> >
> > -pass::
> > +Each KUnit-compatible KASAN test prints a KASAN report if an error is detected.
> > +Then the test prints its number and status.
> > +
> > +When a test passes::
> >
> > ok 28 - kmalloc_double_kzfree
> >
> > -or, if kmalloc failed::
> > +When a test fails due to a failed ``kmalloc``::
> >
> > # kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163
> > Expected ptr is not null, but is
> > not ok 4 - kmalloc_large_oob_right
> >
> > -or, if a KASAN report was expected, but not found::
> > +When a test fails due to a missing KASAN report::
> >
> > # kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629
> > Expected kasan_data->report_expected == kasan_data->report_found, but
> > @@ -348,46 +402,38 @@ or, if a KASAN report was expected, but not found::
> > kasan_data->report_found == 0
> > not ok 28 - kmalloc_double_kzfree
> >
> > -All test statuses are tracked as they run and an overall status will
> > -be printed at the end::
> > +At the end the cumulative status of all KASAN tests is printed. On success::
> >
> > ok 1 - kasan
> >
> > -or::
> > +Or, if one of the tests failed::
> >
> > not ok 1 - kasan
> >
> > -(1) Loadable Module
> > -~~~~~~~~~~~~~~~~~~~~
> > +
> > +There are a few ways to run KUnit-compatible KASAN tests.
> > +
> > +1. Loadable module
> > +~~~~~~~~~~~~~~~~~~
> >
> > With ``CONFIG_KUNIT`` enabled, ``CONFIG_KASAN_KUNIT_TEST`` can be built as
> > -a loadable module and run on any architecture that supports KASAN
> > -using something like insmod or modprobe. The module is called ``test_kasan``.
> > +a loadable module and run on any architecture that supports KASAN by loading
> > +the module with insmod or modprobe. The module is called ``test_kasan``.
> >
> > -(2) Built-In
> > -~~~~~~~~~~~~~
> > +2. Built-In
> > +~~~~~~~~~~~
> >
> > With ``CONFIG_KUNIT`` built-in, ``CONFIG_KASAN_KUNIT_TEST`` can be built-in
> > -on any architecure that supports KASAN. These and any other KUnit
> > -tests enabled will run and print the results at boot as a late-init
> > -call.
> > +on any architecture that supports KASAN. These and any other KUnit tests enabled
> > +will run and print the results at boot as a late-init call.
> >
> > -(3) Using kunit_tool
> > -~~~~~~~~~~~~~~~~~~~~~
> > +3. Using kunit_tool
> > +~~~~~~~~~~~~~~~~~~~
> >
> > -With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, we can also
> > -use kunit_tool to see the results of these along with other KUnit
> > -tests in a more readable way. This will not print the KASAN reports
> > -of tests that passed. Use `KUnit documentation <https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html>`_ for more up-to-date
> > -information on kunit_tool.
> > +With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, it's also
> > +possible to use ``kunit_tool`` to see the results of these and other KUnit tests
> > +in a more readable way. This will not print the KASAN reports of the tests that
> > +passed. Use `KUnit documentation <https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html>`_
> > +for more up-to-date information on ``kunit_tool``.
> >
> > .. _KUnit: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html
> > -
> > -``CONFIG_TEST_KASAN_MODULE`` is a set of KASAN tests that could not be
> > -converted to KUnit. These tests can be run only as a module with
> > -``CONFIG_TEST_KASAN_MODULE`` built as a loadable module and
> > -``CONFIG_KASAN`` built-in. The type of error expected and the
> > -function being run is printed before the expression expected to give
> > -an error. Then the error is printed, if found, and that test
> > -should be interpretted to pass only if the error was the one expected
> > -by the test.
> > --
> > 2.29.2.299.gdc1121823c-goog
> >
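
To make the boot parameters above concrete, here are example kernel command
lines (the combinations are illustrative; the values are the ones documented
in the patch):

	# Production: tag checks on, no stack collection, faults only reported.
	kasan.mode=prod

	# Debugging: all KASAN features on, panic on the first fault.
	kasan.mode=full kasan.fault=panic

	# Kill switch: KASAN disabled, no tag checks performed.
	kasan.mode=off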

2020-11-17 13:56:30

by Dmitry Vyukov

[permalink] [raw]
Subject: Re: [PATCH mm v3 18/19] kasan, mm: allow cache merging with no metadata

On Fri, Nov 13, 2020 at 11:20 PM Andrey Konovalov <[email protected]> wrote:
>
> The reason cache merging is disabled with KASAN is that KASAN puts its
> metadata right after the allocated object. When the merged caches have
> slightly different sizes, the metadata ends up in different places, which
> KASAN doesn't support.
>
> It might be possible to adjust the metadata allocation algorithm and make
> it friendly to the cache merging code. Instead, this change takes a simpler
> approach and allows merging caches when no metadata is present, which is
> the case for hardware tag-based KASAN with kasan.mode=prod.
>
> Co-developed-by: Vincenzo Frascino <[email protected]>
> Signed-off-by: Vincenzo Frascino <[email protected]>
> Signed-off-by: Andrey Konovalov <[email protected]>
> Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba

Somehow Gerrit contains an old version... so I was going to
independently propose what Marco already proposed as a simplification...
until I looked at the patch in the email :)

Reviewed-by: Dmitry Vyukov <[email protected]>

> ---
> include/linux/kasan.h | 21 +++++++++++++++++++--
> mm/kasan/common.c | 11 +++++++++++
> mm/slab_common.c | 3 ++-
> 3 files changed, 32 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 16cf53eac29b..173a8e81d001 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -81,17 +81,30 @@ struct kasan_cache {
> };
>
> #ifdef CONFIG_KASAN_HW_TAGS
> +
> DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> +
> static __always_inline bool kasan_enabled(void)
> {
> return static_branch_likely(&kasan_flag_enabled);
> }
> -#else
> +
> +#else /* CONFIG_KASAN_HW_TAGS */
> +
> static inline bool kasan_enabled(void)
> {
> return true;
> }
> -#endif
> +
> +#endif /* CONFIG_KASAN_HW_TAGS */
> +
> +slab_flags_t __kasan_never_merge(void);
> +static __always_inline slab_flags_t kasan_never_merge(void)
> +{
> + if (kasan_enabled())
> + return __kasan_never_merge();
> + return 0;
> +}
>
> void __kasan_unpoison_range(const void *addr, size_t size);
> static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
> @@ -238,6 +251,10 @@ static inline bool kasan_enabled(void)
> {
> return false;
> }
> +static inline slab_flags_t kasan_never_merge(void)
> +{
> + return 0;
> +}
> static inline void kasan_unpoison_range(const void *address, size_t size) {}
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index cf874243efab..a5a4dcb1254d 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -87,6 +87,17 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
> }
> #endif /* CONFIG_KASAN_STACK */
>
> +/*
> + * Only allow cache merging when stack collection is disabled and no metadata
> + * is present.
> + */
> +slab_flags_t __kasan_never_merge(void)
> +{
> + if (kasan_stack_collection_enabled())
> + return SLAB_KASAN;
> + return 0;
> +}
> +
> void __kasan_alloc_pages(struct page *page, unsigned int order)
> {
> u8 tag;
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 0b5ae1819a8b..075b23ce94ec 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -19,6 +19,7 @@
> #include <linux/seq_file.h>
> #include <linux/proc_fs.h>
> #include <linux/debugfs.h>
> +#include <linux/kasan.h>
> #include <asm/cacheflush.h>
> #include <asm/tlbflush.h>
> #include <asm/page.h>
> @@ -54,7 +55,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
> */
> #define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
> SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
> - SLAB_FAILSLAB | SLAB_KASAN)
> + SLAB_FAILSLAB | kasan_never_merge())
>
> #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> --
> 2.29.2.299.gdc1121823c-goog
>
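
The net effect is that SLAB_NEVER_MERGE stops being a compile-time constant:
kasan_never_merge() is evaluated when the mask is used, contributing
SLAB_KASAN only while stack collection is enabled. A minimal sketch of the
resulting check (the helper name here is illustrative, not from the patch):

	/* With kasan.mode=full, kasan_never_merge() == SLAB_KASAN and caches
	 * carrying KASAN metadata never merge; with kasan.mode=prod or =off
	 * it returns 0 and SLAB_KASAN drops out of the mask. */
	static bool cache_never_merges(struct kmem_cache *s)
	{
		return s->flags & SLAB_NEVER_MERGE;
	}

So under kasan.mode=prod, slab caches of similar size merge just as they do
with KASAN compiled out.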

2020-11-23 13:58:00

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH mm v3 18/19] kasan, mm: allow cache merging with no metadata

On Tue, Nov 17, 2020 at 2:25 PM Dmitry Vyukov <[email protected]> wrote:
>
> On Fri, Nov 13, 2020 at 11:20 PM Andrey Konovalov <[email protected]> wrote:
> >
> > The reason cache merging is disabled with KASAN is that KASAN puts its
> > metadata right after the allocated object. When the merged caches have
> > slightly different sizes, the metadata ends up in different places, which
> > KASAN doesn't support.
> >
> > It might be possible to adjust the metadata allocation algorithm and make
> > it friendly to the cache merging code. Instead, this change takes a simpler
> > approach and allows merging caches when no metadata is present, which is
> > the case for hardware tag-based KASAN with kasan.mode=prod.
> >
> > Co-developed-by: Vincenzo Frascino <[email protected]>
> > Signed-off-by: Vincenzo Frascino <[email protected]>
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba
>
> Somehow Gerrit contains an old version... so I was going to
> independently propose what Marco already proposed as a simplification...
> until I looked at the patch in the email :)

Ah, this is because I couldn't push next/mm-based changes into Gerrit
without manually adding tags to all of the yet-out-of-tree patches. So
Gerrit doesn't have the latest version of the patchset.

> Reviewed-by: Dmitry Vyukov <[email protected]>

Thanks!