2023-11-21 22:05:46

by Ilya Leoshkevich

Subject: [PATCH v2 32/33] s390: Implement the architecture-specific kmsan functions

arch_kmsan_get_meta_or_null() finds the lowcore shadow by querying the
prefix and calling kmsan_get_metadata() again.

kmsan_virt_addr_valid() delegates to virt_addr_valid().

Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/include/asm/kmsan.h | 36 +++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
create mode 100644 arch/s390/include/asm/kmsan.h

diff --git a/arch/s390/include/asm/kmsan.h b/arch/s390/include/asm/kmsan.h
new file mode 100644
index 000000000000..afec71e9e9ac
--- /dev/null
+++ b/arch/s390/include/asm/kmsan.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_S390_KMSAN_H
+#define _ASM_S390_KMSAN_H
+
+#include <asm/lowcore.h>
+#include <asm/page.h>
+#include <linux/kmsan.h>
+#include <linux/mmzone.h>
+#include <linux/stddef.h>
+
+#ifndef MODULE
+
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+	if (addr >= (void *)&S390_lowcore &&
+	    addr < (void *)(&S390_lowcore + 1)) {
+		/*
+		 * Different lowcores accessed via S390_lowcore are described
+		 * by the same struct page. Resolve the prefix manually in
+		 * order to get a distinct struct page.
+		 */
+		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
+			(void *)&S390_lowcore;
+		return kmsan_get_metadata(addr, is_origin);
+	}
+	return NULL;
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+	return virt_addr_valid(addr);
+}
+
+#endif /* !MODULE */
+
+#endif /* _ASM_S390_KMSAN_H */
--
2.41.0


2023-12-11 10:27:23

by Alexander Potapenko

Subject: Re: [PATCH v2 32/33] s390: Implement the architecture-specific kmsan functions

> +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> +{
> +	if (addr >= (void *)&S390_lowcore &&
> +	    addr < (void *)(&S390_lowcore + 1)) {
> +		/*
> +		 * Different lowcores accessed via S390_lowcore are described
> +		 * by the same struct page. Resolve the prefix manually in
> +		 * order to get a distinct struct page.
> +		 */
> +		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> +			(void *)&S390_lowcore;
> +		return kmsan_get_metadata(addr, is_origin);
> +	}
> +	return NULL;
> +}

Is there a possibility for infinite recursion here? E.g. can
`lowcore_ptr[raw_smp_processor_id()]` point somewhere in between
`(void *)&S390_lowcore` and `(void *)(&S390_lowcore + 1)`?

2023-12-11 10:40:22

by Ilya Leoshkevich

Subject: Re: [PATCH v2 32/33] s390: Implement the architecture-specific kmsan functions

On Mon, 2023-12-11 at 11:26 +0100, Alexander Potapenko wrote:
> > +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> > +{
> > +	if (addr >= (void *)&S390_lowcore &&
> > +	    addr < (void *)(&S390_lowcore + 1)) {
> > +		/*
> > +		 * Different lowcores accessed via S390_lowcore are described
> > +		 * by the same struct page. Resolve the prefix manually in
> > +		 * order to get a distinct struct page.
> > +		 */
> > +		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> > +			(void *)&S390_lowcore;
> > +		return kmsan_get_metadata(addr, is_origin);
> > +	}
> > +	return NULL;
> > +}
>
> Is there a possibility for infinite recursion here? E.g. can
> `lowcore_ptr[raw_smp_processor_id()]` point somewhere in between
> `(void *)&S390_lowcore` and `(void *)(&S390_lowcore + 1)`?

No, it's allocated with __get_free_pages() or memblock_alloc_low().
But since this question came up, I should probably add a check and
a WARN_ON_ONCE() here.

2023-12-11 10:46:33

by Alexander Potapenko

Subject: Re: [PATCH v2 32/33] s390: Implement the architecture-specific kmsan functions

> > Is there a possibility for infinite recursion here? E.g. can
> > `lowcore_ptr[raw_smp_processor_id()]` point somewhere in between
> > `(void *)&S390_lowcore` and `(void *)(&S390_lowcore + 1)`?
>
> No, it's allocated with __get_free_pages() or memblock_alloc_low().
> But since this question came up, I should probably add a check and
> a WARN_ON_ONCE() here.

Yes, please.


--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing Directors: Paul Manicle, Liana Sebastian
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg