Currently, KASAN_SW_TAGS uses 0xFF as the default tag value for
unallocated memory. The underlying idea is that since that memory
hasn't been allocated yet, it's only supposed to be dereferenced
through a pointer with the native 0xFF tag.
While this is a good idea in terms of consistency, in practice it
doesn't bring any benefit. Since the 0xFF pointer tag is a match-all
tag, it doesn't matter what tag the accessed memory has. No accesses
through 0xFF-tagged pointers are considered buggy by KASAN.
This patch changes the default tag value for unallocated memory to 0xFE,
which is the tag KASAN uses for inaccessible memory. This doesn't affect
accesses through 0xFF-tagged pointers to this memory, but it allows
KASAN to detect wild and large out-of-bounds invalid memory accesses
through otherwise-tagged pointers.
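For illustration, here is a simplified sketch of how a software tag
check behaves (check_access() is a made-up helper, not the actual
mm/kasan code; the tag values match the kernel's KASAN_TAG_*
definitions):

#include <linux/types.h>	/* u8, bool */

#define KASAN_TAG_KERNEL	0xFF	/* native kernel pointer tag, match-all */
#define KASAN_TAG_INVALID	0xFE	/* tag of inaccessible memory */
#define KASAN_TAG_MAX		0xFD	/* largest tag assigned to objects */

/*
 * Does an access through a ptr_tag-tagged pointer to memory tagged
 * with mem_tag pass the tag check?
 */
static bool check_access(u8 ptr_tag, u8 mem_tag)
{
	/* Accesses through 0xFF-tagged pointers always pass... */
	if (ptr_tag == KASAN_TAG_KERNEL)
		return true;
	/* ...otherwise the pointer tag must match the memory tag. */
	return ptr_tag == mem_tag;
}

Since tags assigned to objects never exceed KASAN_TAG_MAX, memory
tagged with KASAN_TAG_INVALID mismatches every pointer tag except the
match-all one, which is what makes 0xFE a useful default.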
This is a preparatory patch for the next one, which changes the tag-based
KASAN modes to not poison the boot memory.
Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 14f72ec96492..44c147dae7e3 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,7 +30,8 @@ struct kunit_kasan_expectation {
/* Software KASAN implementations use shadow memory. */
#ifdef CONFIG_KASAN_SW_TAGS
-#define KASAN_SHADOW_INIT 0xFF
+/* This matches KASAN_TAG_INVALID. */
+#define KASAN_SHADOW_INIT 0xFE
#else
#define KASAN_SHADOW_INIT 0
#endif
--
2.30.0.617.g56c4b15f3c-goog
During boot, all non-reserved memblock memory is exposed to page_alloc
via memblock_free_pages->__free_pages_core(). This results in
kasan_free_pages() being called, which poisons that memory.
Poisoning all that memory lengthens boot time. The most noticeable effect
is observed with the HW_TAGS mode. The boot-time impact may also be
noticeable on systems with large amounts of RAM.
This patch changes the tag-based modes to not poison the memory during
the memblock->page_alloc transition.
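For reference, the boot-time path in question looks roughly like this
(simplified from mm/memblock.c and mm/page_alloc.c; signatures as of
this series, before the change below):

	memblock_free_pages(page, pfn, order)
	  __free_pages_core(page, order)
	    __free_pages_ok(page, order, FPI_TO_TAIL)
	      free_pages_prepare(page, order, true)
	        kasan_free_nondeferred_pages(page, order)	/* poisons */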
The generic KASAN mode is excluded from this change: since it marks all
new memory as accessible by default, not poisoning the memory released
from memblock would lead to KASAN missing invalid boot-time accesses to
that memory.
With KASAN_SW_TAGS, as it uses the invalid 0xFE tag as the default tag
for all memory, it won't miss bad boot-time accesses even if the poisoning
of memblock memory is removed.
With KASAN_HW_TAGS, the default memory tag values are unspecified.
Therefore, if memblock poisoning is removed, this KASAN mode will miss
the mentioned type of boot-time bugs with a 1/16 probability (MTE tags
are 4 bits wide, so an arbitrary pointer tag matches an unspecified
memory tag in 1 out of 16 cases). This is taken as an acceptable
trade-off.
Internally, the poisoning is removed as follows. __free_pages_core() is
used when exposing fresh memory during system boot and when onlining
memory during hotplug. This patch adds a new FPI_SKIP_KASAN_POISON
flag, which __free_pages_core() passes to __free_pages_ok(), and which
free_pages_prepare() then forwards to kasan_free_nondeferred_pages().
If FPI_SKIP_KASAN_POISON is set, kasan_free_pages() is not called.
All memory allocated normally when the boot is over keeps getting
poisoned as usual.
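Condensed, the resulting check on the free path is equivalent to the
following (a sketch summarizing the hunks below, leaving aside the
deferred-init early return; not literal kernel code):

	/*
	 * Skip poisoning only when the caller passed
	 * FPI_SKIP_KASAN_POISON and the mode is tag-based.
	 */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
	    !(fpi_flags & FPI_SKIP_KASAN_POISON))
		kasan_free_pages(page, order);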
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
Changes v1->v2:
- Only drop memblock poisoning for tag-based KASAN modes.
---
mm/page_alloc.c | 45 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 34 insertions(+), 11 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b55c9c95364..c89e7b107514 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -108,6 +108,17 @@ typedef int __bitwise fpi_t;
*/
#define FPI_TO_TAIL ((__force fpi_t)BIT(1))
+/*
+ * Don't poison memory with KASAN (only for the tag-based modes).
+ * During boot, all non-reserved memblock memory is exposed to page_alloc.
+ * Poisoning all that memory lengthens boot time, especially on systems with
+ * large amount of RAM. This flag is used to skip that poisoning.
+ * This is only done for the tag-based KASAN modes, as those are able to
+ * detect memory corruptions with the memory tags assigned by default.
+ * All memory allocated normally after boot gets poisoned as usual.
+ */
+#define FPI_SKIP_KASAN_POISON ((__force fpi_t)BIT(2))
+
/* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
static DEFINE_MUTEX(pcp_batch_high_lock);
#define MIN_PERCPU_PAGELIST_FRACTION (8)
@@ -384,10 +395,15 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
* on-demand allocation and then freed again before the deferred pages
* initialization is done, but this is not likely to happen.
*/
-static inline void kasan_free_nondeferred_pages(struct page *page, int order)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+ fpi_t fpi_flags)
{
- if (!static_branch_unlikely(&deferred_pages))
- kasan_free_pages(page, order);
+ if (static_branch_unlikely(&deferred_pages))
+ return;
+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (fpi_flags & FPI_SKIP_KASAN_POISON))
+ return;
+ kasan_free_pages(page, order);
}
/* Returns true if the struct page for the pfn is uninitialised */
@@ -438,7 +454,14 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
return false;
}
#else
-#define kasan_free_nondeferred_pages(p, o) kasan_free_pages(p, o)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+ fpi_t fpi_flags)
+{
+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ (fpi_flags & FPI_SKIP_KASAN_POISON))
+ return;
+ kasan_free_pages(page, order);
+}
static inline bool early_page_uninitialised(unsigned long pfn)
{
@@ -1216,7 +1239,7 @@ static void kernel_init_free_pages(struct page *page, int numpages)
}
static __always_inline bool free_pages_prepare(struct page *page,
- unsigned int order, bool check_free)
+ unsigned int order, bool check_free, fpi_t fpi_flags)
{
int bad = 0;
@@ -1290,7 +1313,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
debug_pagealloc_unmap_pages(page, 1 << order);
- kasan_free_nondeferred_pages(page, order);
+ kasan_free_nondeferred_pages(page, order, fpi_flags);
return true;
}
@@ -1303,7 +1326,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
*/
static bool free_pcp_prepare(struct page *page)
{
- return free_pages_prepare(page, 0, true);
+ return free_pages_prepare(page, 0, true, FPI_NONE);
}
static bool bulkfree_pcp_prepare(struct page *page)
@@ -1323,9 +1346,9 @@ static bool bulkfree_pcp_prepare(struct page *page)
static bool free_pcp_prepare(struct page *page)
{
if (debug_pagealloc_enabled_static())
- return free_pages_prepare(page, 0, true);
+ return free_pages_prepare(page, 0, true, FPI_NONE);
else
- return free_pages_prepare(page, 0, false);
+ return free_pages_prepare(page, 0, false, FPI_NONE);
}
static bool bulkfree_pcp_prepare(struct page *page)
@@ -1533,7 +1556,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
int migratetype;
unsigned long pfn = page_to_pfn(page);
- if (!free_pages_prepare(page, order, true))
+ if (!free_pages_prepare(page, order, true, fpi_flags))
return;
migratetype = get_pfnblock_migratetype(page, pfn);
@@ -1570,7 +1593,7 @@ void __free_pages_core(struct page *page, unsigned int order)
* Bypass PCP and place fresh pages right to the tail, primarily
* relevant for memory onlining.
*/
- __free_pages_ok(page, order, FPI_TO_TAIL);
+ __free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);
}
#ifdef CONFIG_NEED_MULTIPLE_NODES
--
2.30.0.617.g56c4b15f3c-goog
On Fri, Feb 19, 2021 at 1:22 AM Andrey Konovalov <[email protected]> wrote:
> [...]
Hi Andrew,
Could you pick up this series into mm?
The discussion on v1 of this series got sidetracked by an unrelated issue.
Thanks!