2022-03-30 18:25:46

by Alistair Francis

Subject: [PATCH] riscv: Ensure only ASIDLEN is used for sfence.vma

From: Alistair Francis <[email protected]>

When we set context.id in __new_context(), we encode both the asid and
the current_version in the returned value:

return asid | ver;

This means that when local_flush_tlb_all_asid() is called with an asid
taken directly from context.id, we can write an incorrect value that
still contains the version bits.

We get away with this as hardware ignores the extra bits, as the RISC-V
specification states:

"bits SXLEN-1:ASIDMAX of the value held in rs2 are reserved for future
standard use. Until their use is defined by a standard extension, they
should be zeroed by software and ignored by current implementations."

but it is still a bug worth addressing, as we are incorrectly setting
the extra bits.

This patch applies asid_mask when calling sfence.vma to ensure the asid
is always the correct length (ASIDLEN). This is similar to what we do in
arch/riscv/mm/context.c.

Signed-off-by: Alistair Francis <[email protected]>
---
arch/riscv/mm/context.c | 2 +-
arch/riscv/mm/tlbflush.c | 4 ++--
include/linux/mm_types.h | 2 ++
3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 7acbfbd14557..4329fe54176b 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -22,7 +22,7 @@ DEFINE_STATIC_KEY_FALSE(use_asid_allocator);

static unsigned long asid_bits;
static unsigned long num_asids;
-static unsigned long asid_mask;
+unsigned long asid_mask;

static atomic_long_t current_version;

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..4469615aa07f 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -10,7 +10,7 @@ static inline void local_flush_tlb_all_asid(unsigned long asid)
{
__asm__ __volatile__ ("sfence.vma x0, %0"
:
- : "r" (asid)
+ : "r" (asid & asid_mask)
: "memory");
}

@@ -19,7 +19,7 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
{
__asm__ __volatile__ ("sfence.vma %0, %1"
:
- : "r" (addr), "r" (asid)
+ : "r" (addr), "r" (asid & asid_mask)
: "memory");
}

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8834e38c06a4..5fa7cc0af853 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -666,6 +666,8 @@ struct mm_struct {

extern struct mm_struct init_mm;

+extern unsigned long asid_mask;
+
/* Pointer magic because the dynamic array size confuses some compilers. */
static inline void mm_init_cpumask(struct mm_struct *mm)
{
--
2.35.1


2022-03-30 21:14:20

by Anup Patel

Subject: Re: [PATCH] riscv: Ensure only ASIDLEN is used for sfence.vma

On Wed, Mar 30, 2022 at 1:04 PM Alistair Francis
<[email protected]> wrote:
>
> From: Alistair Francis <[email protected]>
>
> When we set context.id in __new_context(), we encode both the asid and
> the current_version in the returned value:
>
> return asid | ver;
>
> This means that when local_flush_tlb_all_asid() is called with an asid
> taken directly from context.id, we can write an incorrect value that
> still contains the version bits.
>
> We get away with this as hardware ignores the extra bits, as the RISC-V
> specification states:
>
> "bits SXLEN-1:ASIDMAX of the value held in rs2 are reserved for future
> standard use. Until their use is defined by a standard extension, they
> should be zeroed by software and ignored by current implementations."
>
> but it is still a bug worth addressing, as we are incorrectly setting
> the extra bits.
>
> This patch applies asid_mask when calling sfence.vma to ensure the asid
> is always the correct length (ASIDLEN). This is similar to what we do in
> arch/riscv/mm/context.c.
>
> Signed-off-by: Alistair Francis <[email protected]>

Instead of fixing the various local_flush_tlb_xyz() functions, I suggest
fixing __sbi_tlb_flush_range() in tlbflush.c, which passes the incorrect
ASID in the first place.
(Refer line 45, "unsigned long asid = atomic_long_read(&mm->context.id);")

Also, please add the "Fixes: " tag.

Regards,
Anup

> ---
> arch/riscv/mm/context.c | 2 +-
> arch/riscv/mm/tlbflush.c | 4 ++--
> include/linux/mm_types.h | 2 ++
> 3 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> index 7acbfbd14557..4329fe54176b 100644
> --- a/arch/riscv/mm/context.c
> +++ b/arch/riscv/mm/context.c
> @@ -22,7 +22,7 @@ DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
>
> static unsigned long asid_bits;
> static unsigned long num_asids;
> -static unsigned long asid_mask;
> +unsigned long asid_mask;
>
> static atomic_long_t current_version;
>
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index 37ed760d007c..4469615aa07f 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -10,7 +10,7 @@ static inline void local_flush_tlb_all_asid(unsigned long asid)
> {
> __asm__ __volatile__ ("sfence.vma x0, %0"
> :
> - : "r" (asid)
> + : "r" (asid & asid_mask)
> : "memory");
> }
>
> @@ -19,7 +19,7 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
> {
> __asm__ __volatile__ ("sfence.vma %0, %1"
> :
> - : "r" (addr), "r" (asid)
> + : "r" (addr), "r" (asid & asid_mask)
> : "memory");
> }
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 8834e38c06a4..5fa7cc0af853 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -666,6 +666,8 @@ struct mm_struct {
>
> extern struct mm_struct init_mm;
>
> +extern unsigned long asid_mask;
> +
> /* Pointer magic because the dynamic array size confuses some compilers. */
> static inline void mm_init_cpumask(struct mm_struct *mm)
> {
> --
> 2.35.1
>