2024-04-03 02:42:55

by Seongsu Park

Subject: [PATCH v2] arm64: Fix double TCR_T0SZ_OFFSET shift

We have already shifted the value of t0sz in TCR_T0SZ by TCR_T0SZ_OFFSET.
So, the TCR_T0SZ_OFFSET shift here should be removed.

Co-developed-by: Leem ChaeHoon <[email protected]>
Signed-off-by: Leem ChaeHoon <[email protected]>
Co-developed-by: Gyeonggeon Choi <[email protected]>
Signed-off-by: Gyeonggeon Choi <[email protected]>
Co-developed-by: Soomin Cho <[email protected]>
Signed-off-by: Soomin Cho <[email protected]>
Co-developed-by: DaeRo Lee <[email protected]>
Signed-off-by: DaeRo Lee <[email protected]>
Co-developed-by: kmasta <[email protected]>
Signed-off-by: kmasta <[email protected]>
Signed-off-by: Seongsu Park <[email protected]>
---

Changes in v2:
- Condition is updated

---
arch/arm64/include/asm/mmu_context.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index c768d16b81a4..bd19f4c758b7 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -72,11 +72,11 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 {
 	unsigned long tcr = read_sysreg(tcr_el1);
 
-	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
+	if ((tcr & TCR_T0SZ_MASK) == t0sz)
 		return;
 
 	tcr &= ~TCR_T0SZ_MASK;
-	tcr |= t0sz << TCR_T0SZ_OFFSET;
+	tcr |= t0sz;
 	write_sysreg(tcr, tcr_el1);
 	isb();
 }
--
2.34.1



2024-04-03 10:30:32

by Mark Rutland

Subject: Re: [PATCH v2] arm64: Fix double TCR_T0SZ_OFFSET shift

On Wed, Apr 03, 2024 at 11:42:36AM +0900, Seongsu Park wrote:
> We have already shifted the value of t0sz in TCR_T0SZ by TCR_T0SZ_OFFSET.
> So, the TCR_T0SZ_OFFSET shift here should be removed.

Can we please write a better commit message?

This doesn't explain:

* Where we have already shifted the value of t0sz, nor why it makes sense to do
that there.

* That the value of TCR_T0SZ_OFFSET is 0, and hence shifting this repeatedly is
  benign, and this patch is a cleanup rather than a fix.
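
For reference, the relevant definitions live in
arch/arm64/include/asm/pgtable-hwdef.h and (quoting from memory, so please
double-check against the tree) look roughly like:

	/* TCR field encodings, arch/arm64/include/asm/pgtable-hwdef.h (from memory) */
	#define TCR_T0SZ_OFFSET		0
	#define TCR_T1SZ_OFFSET		16
	#define TCR_T0SZ(x)		((UL(64) - (x)) << TCR_T0SZ_OFFSET)
	#define TCR_T1SZ(x)		((UL(64) - (x)) << TCR_T1SZ_OFFSET)
	#define TCR_TxSZ_WIDTH		6
	#define TCR_T0SZ_MASK		(((UL(1) << TCR_TxSZ_WIDTH) - 1) << TCR_T0SZ_OFFSET)

i.e. whatever a caller builds with TCR_T0SZ() is already in its final bit
position, so the extra '<< TCR_T0SZ_OFFSET' in __cpu_set_tcr_t0sz() is a
no-op today rather than a bug.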

Mark.

> Co-developed-by: Leem ChaeHoon <[email protected]>
> Signed-off-by: Leem ChaeHoon <[email protected]>
> Co-developed-by: Gyeonggeon Choi <[email protected]>
> Signed-off-by: Gyeonggeon Choi <[email protected]>
> Co-developed-by: Soomin Cho <[email protected]>
> Signed-off-by: Soomin Cho <[email protected]>
> Co-developed-by: DaeRo Lee <[email protected]>
> Signed-off-by: DaeRo Lee <[email protected]>
> Co-developed-by: kmasta <[email protected]>
> Signed-off-by: kmasta <[email protected]>
> Signed-off-by: Seongsu Park <[email protected]>
> ---
>
> Changes in v2:
> - Condition is updated
>
> ---
> arch/arm64/include/asm/mmu_context.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index c768d16b81a4..bd19f4c758b7 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -72,11 +72,11 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
>  {
>  	unsigned long tcr = read_sysreg(tcr_el1);
>  
> -	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
> +	if ((tcr & TCR_T0SZ_MASK) == t0sz)
>  		return;
>  
>  	tcr &= ~TCR_T0SZ_MASK;
> -	tcr |= t0sz << TCR_T0SZ_OFFSET;
> +	tcr |= t0sz;
>  	write_sysreg(tcr, tcr_el1);
>  	isb();
>  }
> --
> 2.34.1
>

2024-04-08 02:19:45

by Seongsu Park

Subject: RE: [PATCH v2] arm64: Fix double TCR_T0SZ_OFFSET shift



> On Wed, Apr 03, 2024 at 11:42:36AM +0900, Seongsu Park wrote:
> > We have already shifted the value of t0sz in TCR_T0SZ by TCR_T0SZ_OFFSET.
> > So, the TCR_T0SZ_OFFSET shift here should be removed.
>
> Can we please write a better commit message?
>
> This doesn't explain:
>
> * Where we have already shifted the value of t0sz, nor why it makes sense to do
>   that there.
>
> * That the value of TCR_T0SZ_OFFSET is 0, and hence shifting this repeatedly is
>   benign, and this patch is a cleanup rather than a fix.
>
> Mark.
Thank you for the feedback. I'll send a v3 patch.
In v3, we will improve the commit message and add a cpu_set_tcr_t0sz macro.
Please check v3!
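
Roughly, the idea (only a sketch here, not the final v3 code) is to keep the
field encoding in one wrapper macro instead of inside __cpu_set_tcr_t0sz():

	/* hypothetical sketch only; the actual v3 helper may differ */
	#define cpu_set_tcr_t0sz(t0sz)	__cpu_set_tcr_t0sz((t0sz) << TCR_T0SZ_OFFSET)

so that __cpu_set_tcr_t0sz() always receives an already-encoded value.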
>
> > Co-developed-by: Leem ChaeHoon <[email protected]>
> > Signed-off-by: Leem ChaeHoon <[email protected]>
> > Co-developed-by: Gyeonggeon Choi <[email protected]>
> > Signed-off-by: Gyeonggeon Choi <[email protected]>
> > Co-developed-by: Soomin Cho <[email protected]>
> > Signed-off-by: Soomin Cho <[email protected]>
> > Co-developed-by: DaeRo Lee <[email protected]>
> > Signed-off-by: DaeRo Lee <[email protected]>
> > Co-developed-by: kmasta <[email protected]>
> > Signed-off-by: kmasta <[email protected]>
> > Signed-off-by: Seongsu Park <[email protected]>
> > ---
> >
> > Changes in v2:
> > - Condition is updated
> >
> > ---
> > arch/arm64/include/asm/mmu_context.h | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> > index c768d16b81a4..bd19f4c758b7 100644
> > --- a/arch/arm64/include/asm/mmu_context.h
> > +++ b/arch/arm64/include/asm/mmu_context.h
> > @@ -72,11 +72,11 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
> >  {
> >  	unsigned long tcr = read_sysreg(tcr_el1);
> >  
> > -	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
> > +	if ((tcr & TCR_T0SZ_MASK) == t0sz)
> >  		return;
> >  
> >  	tcr &= ~TCR_T0SZ_MASK;
> > -	tcr |= t0sz << TCR_T0SZ_OFFSET;
> > +	tcr |= t0sz;
> >  	write_sysreg(tcr, tcr_el1);
> >  	isb();
> >  }
> > --
> > 2.34.1
> >