2023-08-23 15:24:04

by Alexandru Elisei

Subject: [PATCH RFC 17/37] arm64: mte: Disable dynamic tag storage management if HW KASAN is enabled

Reserving the tag storage associated with a tagged page requires the
ability to migrate existing data if the tag storage is in use for data.

With HW KASAN enabled, kernel allocations return tagged pages, and some
of those allocations happen in non-preemptible contexts, which can make
reserving the associated tag storage impossible.

Don't expose the tag storage pages to the memory allocator if HW KASAN is
enabled.
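
As an illustration only (hypothetical code, not taken from this series;
the lock and variable names are made up), the kind of allocation site
the paragraph above has in mind looks roughly like this:

	spin_lock(&example_lock);		/* preemption disabled */
	page = alloc_pages(GFP_ATOMIC, 0);	/* tagged page with HW KASAN */
	...
	spin_unlock(&example_lock);

Reserving the tag storage for 'page' could require migrating whatever
data currently occupies that storage, and migration can sleep, which is
not possible with preemption disabled.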

Signed-off-by: Alexandru Elisei <[email protected]>
---
arch/arm64/kernel/mte_tag_storage.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c
index 4a6bfdf88458..f45128d0244e 100644
--- a/arch/arm64/kernel/mte_tag_storage.c
+++ b/arch/arm64/kernel/mte_tag_storage.c
@@ -314,6 +314,18 @@ static int __init mte_tag_storage_activate_regions(void)
 		return 0;
 	}
 
+	/*
+	 * The kernel allocates memory in non-preemptible contexts, which makes
+	 * migration impossible when reserving the associated tag storage.
+	 *
+	 * The check is safe to make because KASAN HW tags are enabled before
+	 * the rest of the init functions are called, in smp_prepare_boot_cpu().
+	 */
+	if (kasan_hw_tags_enabled()) {
+		pr_info("KASAN HW tags enabled, disabling tag storage");
+		return 0;
+	}
+
 	for (i = 0; i < num_tag_regions; i++) {
 		tag_range = &tag_regions[i].tag_range;
 		for (pfn = tag_range->start; pfn <= tag_range->end; pfn += pageblock_nr_pages) {
--
2.41.0



2023-10-16 12:42:03

by Alexandru Elisei

Subject: Re: [PATCH RFC 17/37] arm64: mte: Disable dynamic tag storage management if HW KASAN is enabled

Hi,

On Thu, Oct 12, 2023 at 10:35:05AM +0900, Hyesoo Yu wrote:
> On Wed, Aug 23, 2023 at 02:13:30PM +0100, Alexandru Elisei wrote:
> > Reserving the tag storage associated with a tagged page requires the
> > ability to migrate existing data if the tag storage is in use for data.
> >
> > With HW KASAN enabled, kernel allocations return tagged pages, and some
> > of those allocations happen in non-preemptible contexts, which can make
> > reserving the associated tag storage impossible.
> >
> > Don't expose the tag storage pages to the memory allocator if HW KASAN is
> > enabled.
> >
> > Signed-off-by: Alexandru Elisei <[email protected]>
> > ---
> > arch/arm64/kernel/mte_tag_storage.c | 12 ++++++++++++
> > 1 file changed, 12 insertions(+)
> >
> > diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c
> > index 4a6bfdf88458..f45128d0244e 100644
> > --- a/arch/arm64/kernel/mte_tag_storage.c
> > +++ b/arch/arm64/kernel/mte_tag_storage.c
> > @@ -314,6 +314,18 @@ static int __init mte_tag_storage_activate_regions(void)
> >  		return 0;
> >  	}
> >  
> > +	/*
> > +	 * The kernel allocates memory in non-preemptible contexts, which makes
> > +	 * migration impossible when reserving the associated tag storage.
> > +	 *
> > +	 * The check is safe to make because KASAN HW tags are enabled before
> > +	 * the rest of the init functions are called, in smp_prepare_boot_cpu().
> > +	 */
> > +	if (kasan_hw_tags_enabled()) {
> > +		pr_info("KASAN HW tags enabled, disabling tag storage");
> > +		return 0;
> > +	}
> > +
>
> Hi.
>
> Is there no plan to enable HW KASAN in the current design?
> I wonder if dynamic MTE is only meant for userspace?

The tag storage pages are exposed to the page allocator if and only if HW KASAN
is disabled:

static int __init mte_tag_storage_activate_regions(void)
[..]
	/*
	 * The kernel allocates memory in non-preemptible contexts, which makes
	 * migration impossible when reserving the associated tag storage.
	 *
	 * The check is safe to make because KASAN HW tags are enabled before
	 * the rest of the init functions are called, in smp_prepare_boot_cpu().
	 */
	if (kasan_hw_tags_enabled()) {
		pr_info("KASAN HW tags enabled, disabling tag storage");
		return 0;
	}
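
For reference, when CONFIG_KASAN_HW_TAGS is selected,
kasan_hw_tags_enabled() reduces to a static-branch check, roughly the
following (simplified sketch of include/linux/kasan.h, details may vary
between kernel versions):

	DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);

	/* True once MTE-based KASAN has been enabled during early boot. */
	static __always_inline bool kasan_enabled(void)
	{
		return static_branch_likely(&kasan_flag_enabled);
	}

	static inline bool kasan_hw_tags_enabled(void)
	{
		return kasan_enabled();
	}

The static key is flipped when KASAN HW tags are initialized from
smp_prepare_boot_cpu(), before the initcall above runs, which is why the
check is safe to make here.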

No plans at the moment to have this series compatible with HW KASAN. I will
revisit this if/when the series gets merged.

Thanks,
Alex

>
> Thanks,
> Hyesoo Yu.
>
>
> >  	for (i = 0; i < num_tag_regions; i++) {
> >  		tag_range = &tag_regions[i].tag_range;
> >  		for (pfn = tag_range->start; pfn <= tag_range->end; pfn += pageblock_nr_pages) {
> > --
> > 2.41.0
> >
> >