2024-04-08 02:40:34

by Seongsu Park

Subject: [PATCH v3] arm64: Cleanup __cpu_set_tcr_t0sz()

In cpu_set_default_tcr_t0sz(), the value is shifted by TCR_T0SZ_OFFSET
twice, once from TCR_T0SZ() and once from __cpu_set_tcr_t0sz().
Since TCR_T0SZ_OFFSET is 0, no actual error occurs.
We need to clarify whether the parameter of __cpu_set_tcr_t0sz() is a
shifted value or an unshifted value.

We have already shifted the value of t0sz by TCR_T0SZ_OFFSET in TCR_T0SZ().
This is necessary for consistency with TCR_T1SZ.
Therefore, the parameter of __cpu_set_tcr_t0sz() is defined to be a
shifted value.
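
To make the double shift concrete, the current call chain is:

	cpu_set_default_tcr_t0sz()
	  -> __cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual))	/* shifted once in TCR_T0SZ() */
	       tcr |= t0sz << TCR_T0SZ_OFFSET;			/* shifted again here */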

Co-developed-by: Leem ChaeHoon <[email protected]>
Signed-off-by: Leem ChaeHoon <[email protected]>
Co-developed-by: Gyeonggeon Choi <[email protected]>
Signed-off-by: Gyeonggeon Choi <[email protected]>
Co-developed-by: Soomin Cho <[email protected]>
Signed-off-by: Soomin Cho <[email protected]>
Co-developed-by: DaeRo Lee <[email protected]>
Signed-off-by: DaeRo Lee <[email protected]>
Co-developed-by: kmasta <[email protected]>
Signed-off-by: kmasta <[email protected]>
Signed-off-by: Seongsu Park <[email protected]>
---

v2:
- Condition is updated
v3:
- Commit message is updated
- cpu_set_tcr_t0sz macro is added

---
arch/arm64/include/asm/mmu_context.h | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index c768d16b81a4..fb603ec7f61f 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -72,15 +72,16 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 {
 	unsigned long tcr = read_sysreg(tcr_el1);
 
-	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
+	if ((tcr & TCR_T0SZ_MASK) == t0sz)
 		return;
 
 	tcr &= ~TCR_T0SZ_MASK;
-	tcr |= t0sz << TCR_T0SZ_OFFSET;
+	tcr |= t0sz;
 	write_sysreg(tcr, tcr_el1);
 	isb();
 }
 
+#define cpu_set_tcr_t0sz(t0sz)		__cpu_set_tcr_t0sz(TCR_T0SZ(t0sz))
 #define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
@@ -134,7 +135,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
 {
 	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
-	__cpu_set_tcr_t0sz(t0sz);
+	cpu_set_tcr_t0sz(t0sz);
 
 	/* avoid cpu_switch_mm() and its SW-PAN and CNP interactions */
 	write_sysreg(ttbr0, ttbr0_el1);
--
2.34.1



2024-04-10 16:12:29

by Will Deacon

Subject: Re: [PATCH v3] arm64: Cleanup __cpu_set_tcr_t0sz()

On Mon, Apr 08, 2024 at 11:40:16AM +0900, Seongsu Park wrote:
> In cpu_set_default_tcr_t0sz(), the value is shifted by TCR_T0SZ_OFFSET
> twice, once from TCR_T0SZ() and once from __cpu_set_tcr_t0sz().
> Since TCR_T0SZ_OFFSET is 0, no actual error occurs.
> We need to clarify whether the parameter of __cpu_set_tcr_t0sz() is a
> shifted value or an unshifted value.
>
> We have already shifted the value of t0sz by TCR_T0SZ_OFFSET in TCR_T0SZ().
> This is necessary for consistency with TCR_T1SZ.
> Therefore, the parameter of __cpu_set_tcr_t0sz() is defined to be a
> shifted value.

This commit message needs reworking. I would suggest something like:

The T0SZ field of TCR_EL1 occupies bits 0-5 of the register and
encodes the virtual address space translated by TTBR0_EL1. When
updating the field (for example, because we are switching to/from
the idmap page-table), __cpu_set_tcr_t0sz() erroneously treats its
't0sz' argument as unshifted, resulting in harmless but confusing
double shifts by 0 in the code.

Remove the unnecessary shifts.
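
For reference, the relevant definitions in arch/arm64/include/asm/pgtable-hwdef.h
are roughly the following (quoting from memory, so double-check against the tree):

	#define TCR_T0SZ_OFFSET		0
	#define TCR_T0SZ(x)		((UL(64) - (x)) << TCR_T0SZ_OFFSET)
	#define TCR_T0SZ_MASK		(((UL(1) << 6) - 1) << TCR_T0SZ_OFFSET)

which is why both shifts end up being shifts by zero and the bug is harmless.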

> Co-developed-by: Leem ChaeHoon <[email protected]>
> Signed-off-by: Leem ChaeHoon <[email protected]>
> Co-developed-by: Gyeonggeon Choi <[email protected]>
> Signed-off-by: Gyeonggeon Choi <[email protected]>
> Co-developed-by: Soomin Cho <[email protected]>
> Signed-off-by: Soomin Cho <[email protected]>
> Co-developed-by: DaeRo Lee <[email protected]>
> Signed-off-by: DaeRo Lee <[email protected]>
> Co-developed-by: kmasta <[email protected]>
> Signed-off-by: kmasta <[email protected]>
> Signed-off-by: Seongsu Park <[email protected]>

Honestly, although it's great that you all meet up to look at the kernel,
this long list of credits is a little absurd for a trivial patch like
this. Please can you decide who did the most work and give them the
credit? Hopefully there will be future opportunities for you all to
contribute!

> ---
>
> v2:
> - Condition is updated
> v3:
> - Commit message is updated
> - cpu_set_tcr_t0sz macro is added
>
> ---
> arch/arm64/include/asm/mmu_context.h | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index c768d16b81a4..fb603ec7f61f 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -72,15 +72,16 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
> {
> unsigned long tcr = read_sysreg(tcr_el1);
>
> - if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
> + if ((tcr & TCR_T0SZ_MASK) == t0sz)
> return;
>
> tcr &= ~TCR_T0SZ_MASK;
> - tcr |= t0sz << TCR_T0SZ_OFFSET;
> + tcr |= t0sz;
> write_sysreg(tcr, tcr_el1);
> isb();
> }
>
> +#define cpu_set_tcr_t0sz(t0sz) __cpu_set_tcr_t0sz(TCR_T0SZ(t0sz))
> #define cpu_set_default_tcr_t0sz() __cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual))
> #define cpu_set_idmap_tcr_t0sz() __cpu_set_tcr_t0sz(idmap_t0sz)
>
> @@ -134,7 +135,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
> {
> cpu_set_reserved_ttbr0();
> local_flush_tlb_all();
> - __cpu_set_tcr_t0sz(t0sz);
> + cpu_set_tcr_t0sz(t0sz);

Sorry, but this is wrong. Please have a look at how cpu_install_ttbr0()
is called; specifically how trans_pgd_idmap_page() sets up 't0sz'.

So I don't think you should change cpu_install_ttbr0() at all and adding
a cpu_set_tcr_t0sz() macro which calls TCR_T0SZ on the 't0sz' argument
is a mistake.
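
To spell it out with a sketch (from memory of trans_pgd_idmap_page(), so
please check the real code), the 't0sz' handed to cpu_install_ttbr0() is
already an encoded field value:

	/* trans_pgd_idmap_page() */
	*t0sz = TCR_T0SZ(...);			/* already "64 - VA bits" */

	/* cpu_install_ttbr0() with your new macro */
	cpu_set_tcr_t0sz(t0sz);			/* __cpu_set_tcr_t0sz(TCR_T0SZ(t0sz)) */

so TCR_T0SZ() would be applied twice to the same value.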

Will

2024-04-11 12:34:41

by Seongsu Park

Subject: RE: [PATCH v3] arm64: Cleanup __cpu_set_tcr_t0sz()



>
> On Mon, Apr 08, 2024 at 11:40:16AM +0900, Seongsu Park wrote:
> > In cpu_set_default_tcr_t0sz(), the value is shifted by TCR_T0SZ_OFFSET
> > twice, once from TCR_T0SZ() and once from __cpu_set_tcr_t0sz().
> > Since TCR_T0SZ_OFFSET is 0, no actual error occurs.
> > We need to clarify whether the parameter of __cpu_set_tcr_t0sz() is a
> > shifted value or an unshifted value.
> >
> > We have already shifted the value of t0sz by TCR_T0SZ_OFFSET in
> > TCR_T0SZ().
> > This is necessary for consistency with TCR_T1SZ.
> > Therefore, the parameter of __cpu_set_tcr_t0sz() is defined to be a
> > shifted value.
>
> This commit message needs reworking. I would suggest something like:
>
> The T0SZ field of TCR_EL1 occupies bits 0-5 of the register and
> encodes the virtual address space translated by TTBR0_EL1. When
> updating the field (for example, because we are switching to/from
> the idmap page-table), __cpu_set_tcr_t0sz() erroneously treats its
> 't0sz' argument as unshifted, resulting in harmless but confusing
> double shifts by 0 in the code.
>
> Remove the unnecessary shifts.
>

Thank you for the great feedback.
Please check the title and description below.
If these are appropriate, I will use them as-is in v4.

[Title]
arm64: Cleanup __cpu_set_tcr_t0sz()

[Description]
The T0SZ field of TCR_EL1 occupies bits 0-5 of the register and
encodes the virtual address space translated by TTBR0_EL1. When
updating the field, for example because we are switching to/from
the idmap page-table, __cpu_set_tcr_t0sz() erroneously treats its
't0sz' argument as unshifted, resulting in harmless but confusing
double shifts by 0 in the code. Therefore, remove the unnecessary
shifts.

> > Co-developed-by: Leem ChaeHoon <[email protected]>
> > Signed-off-by: Leem ChaeHoon <[email protected]>
> > Co-developed-by: Gyeonggeon Choi <[email protected]>
> > Signed-off-by: Gyeonggeon Choi <[email protected]>
> > Co-developed-by: Soomin Cho <[email protected]>
> > Signed-off-by: Soomin Cho <[email protected]>
> > Co-developed-by: DaeRo Lee <[email protected]>
> > Signed-off-by: DaeRo Lee <[email protected]>
> > Co-developed-by: kmasta <[email protected]>
> > Signed-off-by: kmasta <[email protected]>
> > Signed-off-by: Seongsu Park <[email protected]>
>
> Honestly, although it's great that you all meet up to look at the kernel,
> this long list of credits is a little absurd for a trivial patch like
> this.
> Please can you decide who did the most work and give them the credit?
> Hopefully there will be future opportunities for you all to contribute!
>

Okay, I got it.
In v4, I'll keep only Leem ChaeHoon and myself.

> > ---
> >
> > v2:
> > - Condition is updated
> > v3:
> > - Commit message is updated
> > - cpu_set_tcr_t0sz macro is added
> >
> > ---
> > arch/arm64/include/asm/mmu_context.h | 7 ++++---
> > 1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> > index c768d16b81a4..fb603ec7f61f 100644
> > --- a/arch/arm64/include/asm/mmu_context.h
> > +++ b/arch/arm64/include/asm/mmu_context.h
> > @@ -72,15 +72,16 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
> > {
> > unsigned long tcr = read_sysreg(tcr_el1);
> >
> > - if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
> > + if ((tcr & TCR_T0SZ_MASK) == t0sz)
> > return;
> >
> > tcr &= ~TCR_T0SZ_MASK;
> > - tcr |= t0sz << TCR_T0SZ_OFFSET;
> > + tcr |= t0sz;
> > write_sysreg(tcr, tcr_el1);
> > isb();
> > }
> >
> > +#define cpu_set_tcr_t0sz(t0sz) __cpu_set_tcr_t0sz(TCR_T0SZ(t0sz))
> > #define cpu_set_default_tcr_t0sz() __cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual))
> > #define cpu_set_idmap_tcr_t0sz() __cpu_set_tcr_t0sz(idmap_t0sz)
> >
> > @@ -134,7 +135,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
> > {
> > cpu_set_reserved_ttbr0();
> > local_flush_tlb_all();
> > - __cpu_set_tcr_t0sz(t0sz);
> > + cpu_set_tcr_t0sz(t0sz);
>
> Sorry, but this is wrong. Please have a look at how cpu_install_ttbr0() is
> called; specifically how trans_pgd_idmap_page() sets up 't0sz'.
>
> So I don't think you should change cpu_install_ttbr0() at all and adding a
> cpu_set_tcr_t0sz() macro which calls TCR_T0SZ on the 't0sz' argument is a
> mistake.
>
> Will

Oops. You're right. My mistake.
In v4, I'll remove this.